Statistica Sinica: Volume 28, Number 2, April 2018
http://www3.stat.sinica.edu.tw/statistica/
/statistica/J28N2/J28N21/J28N21.html
UNCERTAINTY QUANTIFICATION WITH α-STABLE-PROCESS MODELS Rui Tuo 553-576<span style='font-size=12pt;'><center>Abstract</center> In this article we consider using a class of α-stable processes, which can be regarded as generalizations of Gaussian processes, as surrogate models for uncertainty quantification. We introduce a class of α-stable processes whose finite-dimensional distributions can be represented using independent stable random variables. This representation allows for Bayesian inference for the proposed statistical model. We can obtain the posterior distributions for the untried points as well as the model parameters through an MCMC algorithm. The computation for the representation requires some geometrical information given by the design points. We propose an efficient algorithm to solve this computational geometry problem. Two examples are given to illustrate the proposed method and its potential advantages.<p>Key words and phrases: Computer experiments, kriging, Lévy processes, stable distributions.</span>
/statistica/J28N2/J28N210/J28N210.html
SURROGATE-ASSISTED TUNING FOR COMPUTER EXPERIMENTS WITH QUALITATIVE AND QUANTITATIVE PARAMETERS Jiahong K. Chen, Ray-Bing Chen, Akihiro Fujii, Reiji Suda and Weichung Wang 761-789<span style='font-size=12pt;'><center>Abstract</center> Performance tuning of computer codes is an essential issue in computer experiments. By suitably choosing the values of the tuning parameters, we can optimize the codes in terms of timing, accuracy, robustness, or other performance objectives. As computer software and hardware become increasingly complicated, such a tuning process is not an easy task, and there is a strong need to develop efficient and automatic tuning methods. In this article, we consider software auto-tuning problems that involve qualitative and quantitative tuning parameters by solving the resulting optimization problems. Because the performance objective functions in the target optimization problems are usually not explicitly defined, we build surrogates from the response data that attempt to mimic the true, yet unknown, performance response surfaces. The proposed surrogate-assisted tuning process is an iterative procedure. At each iteration, surrogates are updated and new experimental points are chosen based on the prediction uncertainties provided by the surrogate models until a satisfactory solution is obtained. We propose two surrogate construction methods that adopt two infill criteria for tuning problems containing qualitative and quantitative parameters. The four variants of the proposed algorithm are used to optimize computational fluid dynamics simulation codes and artificial problems to illustrate the usefulness and strengths of the proposed algorithms. <p>Key words and phrases: Computer experiments, expected improvement, Gaussian process, infill criteria, performance tuning, qualitative and quantitative parameters, surrogate modeling.</span>
/statistica/J28N2/J28N211/J28N211.html
SENSITIVITY ANALYSIS AND EMULATION FOR FUNCTIONAL DATA USING BAYESIAN ADAPTIVE SPLINES Devin Francom, Bruno Sansó, Ana Kupresanin and Gardar Johannesson 791-816<span style='font-size=12pt;'><center>Abstract</center> When a computer code is used to simulate a complex system, one of the fundamental tasks is to assess the sensitivity of the simulator to the different input parameters. In the case of computationally expensive simulators, this is often accomplished via a surrogate statistical model, a statistical output emulator. An effective emulator is one that provides good approximations to the computer code output for wide ranges of input values. In addition, an emulator should be able to handle large dimensional simulation output for a relevant number of inputs; it should flexibly capture heterogeneities in the variability of the response surface; it should be fast to evaluate for arbitrary combinations of input parameters, and it should provide an accurate quantification of the emulation uncertainty. In this paper we discuss the Bayesian approach to multivariate adaptive regression splines (BMARS) as an emulator for a computer model that outputs curves. We introduce modifications to traditional BMARS approaches that allow for fitting large amounts of data and allow for more efficient MCMC sampling. We emphasize the ease with which sensitivity analysis can be performed in this situation. We present a sensitivity analysis of a computer model of the deformation of a protective plate used in pressure-driven experiments. Our example serves as an illustration of the ability of BMARS emulators to fulfill all the necessities of computability, flexibility and reliable calculation on relevant measures of sensitivity. <p>Key words and phrases: Functional data analysis, global sensitivity analysis, multivariate adaptive regression splines, nonlinear regression, parallel tempering.</span>
/statistica/J28N2/J28N212/J28N212.html
SENSITIVITY ANALYSIS USING PERMUTATIONS Shifeng Xiong, Xu He, Yuanzhen He and Weiyan Mu 817-837<span style='font-size=12pt;'><center>Abstract</center> Sensitivity analysis quantifies the uncertainty in an input-output system by measuring the influence of the inputs on the output. This article presents a new sensitivity index by permuting the observations of an input. The proposed index is related to a statistical problem of testing the significance of the input, and thus possesses some frequentist properties that the current sensitivity analysis methods do not have. Numerical simulations and an application are presented to illustrate the proposed method. <p>Key words and phrases: Kriging, permutation test, significance, Sobol' index, uncertainty quantification.</span>
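The permutation idea can be sketched in a few lines: shuffling the observed values of an influential input should perturb the input-output relationship substantially, while shuffling an inert input should not. The sketch below is a generic permutation-importance score, not the index or the significance test defined in the paper; the black-box function `f` and the data are hypothetical.

```python
import numpy as np

def permutation_sensitivity(f, X, n_perm=200, seed=0):
    """Crude permutation-importance score for each input of f.

    Permuting the column of an influential input changes f's output a lot;
    permuting an inert input's column changes little or nothing.
    (Illustrative only -- not the index defined in the paper.)
    """
    rng = np.random.default_rng(seed)
    y = f(X)
    scores = []
    for j in range(X.shape[1]):
        diffs = []
        for _ in range(n_perm):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            # mean squared change in output caused by shuffling input j
            diffs.append(np.mean((f(Xp) - y) ** 2))
        scores.append(np.mean(diffs))
    return np.array(scores)

# Hypothetical black box: depends strongly on x0, not at all on x1.
rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 2))
f = lambda X: 5.0 * X[:, 0]
scores = permutation_sensitivity(f, X)
```

Repeating the shuffles under the null of "no influence" is what connects this score to a permutation test of the input's significance.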
/statistica/J28N2/J28N213/J28N213.html
CONTROLLING CORRELATIONS IN SLICED LATIN HYPERCUBE DESIGNS Jiajie Chen and Peter Qian 839-851<span style='font-size=12pt;'><center>Abstract</center> A sliced Latin hypercube design is a special Latin hypercube design that can be partitioned into smaller Latin hypercube designs. We propose an algorithm to construct sliced Latin hypercube designs with controlled column-wise correlations for each slice and the entire design. The proposed algorithm can significantly decrease the column-wise correlations in each slice as the number of slices increases even if the number of runs in each slice is fixed. The algorithm is flexible in sample size and can be extended to control the quadratic canonical correlations of the larger design. The convergence behavior of the algorithm is studied and the effectiveness of the algorithm is illustrated by several examples.<p>Key words and phrases: Computer experiments, design of experiments, numerical integration, space-filling design, uncertainty quantification.</span>
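The quantity being controlled can be made concrete with a small sketch: a basic (unoptimized) random Latin hypercube and its maximum absolute column-wise correlation, which is the criterion the proposed algorithm drives down slice by slice. This illustrates the criterion only, not the paper's sliced construction.

```python
import numpy as np

def random_lhd(n, d, rng):
    # Each column is an independent random permutation of 0..n-1,
    # jittered to the unit cube -- a basic (unoptimized) Latin hypercube.
    return (np.column_stack([rng.permutation(n) for _ in range(d)])
            + rng.uniform(size=(n, d))) / n

def max_abs_corr(D):
    # Largest off-diagonal |correlation| between columns -- the
    # quantity a correlation-controlled design keeps small.
    C = np.corrcoef(D, rowvar=False)
    return np.max(np.abs(C - np.eye(D.shape[1])))

rng = np.random.default_rng(0)
D = random_lhd(50, 3, rng)
rho = max_abs_corr(D)
```

A correlation-controlling algorithm would swap levels within columns to reduce `rho` while preserving the one-point-per-bin Latin hypercube structure in every slice.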
/statistica/J28N2/J28N214/J28N214.html
SEQUENTIAL DESIGN OF EXPERIMENTS FOR ESTIMATING QUANTILES OF BLACK-BOX FUNCTIONS T. Labopin-Richard and V. Picheny 853-877<span style='font-size=12pt;'><center>Abstract</center> Estimating quantiles of black-box deterministic functions with random inputs is a challenging task when the number of function evaluations is severely restricted, which is typical for computer experiments. This article proposes two new sequential Bayesian methods for quantile estimation based on the Gaussian process metamodel. Both rely on the Stepwise Uncertainty Reduction paradigm, hence aim at providing a sequence of function evaluations that reduces an uncertainty measure associated with the quantile estimator. The proposed strategies are tested on several numerical examples, showing that accurate estimators can be obtained using only a small number of function evaluations. <p>Key words and phrases: Gaussian processes, risk assessment, stepwise uncertainty reduction.</span>
/statistica/J28N2/J28N215/J28N215.html
A SEQUENTIAL MAXIMUM PROJECTION DESIGN FRAMEWORK FOR COMPUTER EXPERIMENTS WITH INERT FACTORS Shan Ba, William R. Myers and Dianpeng Wang 879-897<span style='font-size=12pt;'><center>Abstract</center> Many computer experiments involve a large number of input factors, but many of them are inert and only a subset are important. This paper develops a new sequential design framework that can accommodate multiple responses and quickly screen out inert factors so that the final design is space-filling with respect to the active factors. By folding over Latin hypercube designs with sliced structure, this sequential design can have a flexible sample size at each stage and also ensure that each stage, as well as the whole combined design, is approximately a Latin hypercube design. The sequential framework does not require prescribing the total sample size and, in the presence of inert factors, can lead to substantial savings in simulation resources. Even if all factors are important, the proposed sequential design can still achieve a similar overall space-filling property compared to a maximin Latin hypercube design optimized in a single stage.<p>Key words and phrases: Effect sparsity, foldover design, sample size determination, sliced Latin hypercube design, space-filling criterion.</span>
/statistica/J28N2/J28N216/J28N216.html
COMPUTER EXPERIMENTS: PREDICTION ACCURACY, SAMPLE SIZE AND MODEL COMPLEXITY REVISITED Ofir Harari, Derek Bingham, Angela Dean and Dave Higdon 899-919<span style='font-size=12pt;'><center>Abstract</center> We revisit the problem of determining the sample size for a Gaussian process emulator and provide a data analytic tool for exact sample size calculations that goes beyond the n = 10d rule of thumb and is based on an IMSPE-related criterion. This allows us to tie sample size and prediction accuracy to the anticipated roughness of the simulated data, and to propose an experimental process for computer experiments, with extension to a robust scheme. <p>Key words and phrases: Computer experiments, Gaussian processes, sample size calculation.</span>
/statistica/J28N2/J28N217/J28N217.html
STATISTICAL-PHYSICAL ESTIMATION OF POLLUTION EMISSION Youngdeok Hwang, Emre Barut and Kyongmin Yeo 921-940<span style='font-size=12pt;'><center>Abstract</center> Air pollution is driven by non-local dynamics, in which air quality at a site is determined by transport of pollutants from distant pollution emission sources to the site by atmospheric processes. To understand the underlying nature of pollution generation, it is crucial to employ physical knowledge to account for pollution transport by wind. However, in most cases, it is not possible to utilize physics models to obtain useful information; this would require massive calibration and computation. In this paper, we propose a method to estimate the pollution emission from the domain of interest by using the physical knowledge and observed data. The proposed method uses an efficient optimization algorithm to estimate the emission from each of the spatial locations, while incorporating physics knowledge. We demonstrate the effectiveness of the new method through a simulation study. <p>Key words and phrases: Alternating direction method of multipliers, dispersion, inverse model, penalized regression.</span>
/statistica/J28N2/J28N218/J28N218.html
GENERALIZED SPARSE PRECISION MATRIX SELECTION FOR FITTING MULTIVARIATE GAUSSIAN RANDOM FIELDS TO LARGE DATA SETS Sam Davanloo Tajbakhsh, Necdet Serhat Aybat and Enrique del Castillo 941-962<span style='font-size=12pt;'><center>Abstract</center> We present a new method for estimating multivariate, second-order stationary Gaussian Random Field (GRF) models based on the Sparse Precision matrix Selection (SPS) algorithm, proposed by Davanloo Tajbakhsh, Aybat and Del Castillo (2015) for estimating scalar GRF models. Theoretical convergence rates for the estimated between-response covariance matrix and for the estimated parameters of the underlying spatial correlation function are established. Numerical tests using simulations and datasets validate our theoretical findings. Data segmentation is used to handle large data sets. <p>Key words and phrases: Convex optimization, covariance selection, Gaussian Markov random fields, multivariate Gaussian processes, spatial statistics.</span>
/statistica/J28N2/J28N219/J28N219.html
HIGH-DIMENSIONAL GAUSSIAN COPULA REGRESSION: ADAPTIVE ESTIMATION AND STATISTICAL INFERENCE T. Tony Cai and Linjun Zhang 963-993
/statistica/J28N2/J28N22/J28N22.html
EXPLOITING VARIANCE REDUCTION POTENTIAL IN LOCAL GAUSSIAN PROCESS SEARCH Chih-Li Sung, Robert B. Gramacy and Benjamin Haaland 577-600<span style='font-size=12pt;'><center>Abstract</center> Gaussian process models are commonly used as emulators for computer experiments. However, developing a Gaussian process emulator can be computationally prohibitive when the number of experimental samples is even moderately large. Local Gaussian process approximation (Gramacy and Apley (2015)) was proposed as an accurate and computationally feasible emulation alternative. Constructing local sub-designs specific to predictions at a particular location of interest remains a substantial computational bottleneck to the technique. In this paper, two computationally efficient neighborhood search limiting techniques are proposed, a maximum distance method and a feature approximation method. Two examples demonstrate that the proposed methods indeed save substantial computation while retaining emulation accuracy.<p>Key words and phrases: Emulation, feature approximation, large-scale data, local Gaussian process, locality sensitive hashing.</span>
/statistica/J28N2/J28N220/J28N220.html
MULTI-ASSET EMPIRICAL MARTINGALE PRICE ESTIMATORS FOR FINANCIAL DERIVATIVES Shih-Feng Huang and Guan-Chih Ciou 995-1008<span style='font-size=12pt;'><center>Abstract</center> This study proposes an empirical martingale simulation (EMS) and an empirical P-martingale simulation (EPMS) as price estimators for multi-asset financial derivatives. Under mild assumptions on the payoff functions, strong consistency and asymptotic normality of the proposed estimators are established. Several simulation scenarios are conducted to investigate the performance of the proposed price estimators under multivariate geometric Brownian motion, multivariate GARCH models, multivariate jump-diffusion models, and multivariate stochastic volatility models. Numerical results indicate that the multi-asset EMS and EPMS price estimators are capable of improving the efficiency of their Monte Carlo counterparts. In addition, the asymptotic distribution serves as a persuasive approximation to the finite-sample distribution of the EPMS price estimator, which helps to reduce the computation time of finding confidence intervals for the prices of multi-asset derivatives.<p>Key words and phrases: Empirical martingale simulation, Esscher transform, multi-asset derivatives pricing.</span>
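The empirical martingale adjustment can be sketched in its simplest setting, a single asset under geometric Brownian motion (the single-asset EMS of Duan and Simonato (1998), which the paper extends to multiple assets): simulated terminal prices are rescaled so that their discounted sample mean equals the initial price exactly, enforcing the martingale property in-sample. The parameter values below are hypothetical.

```python
import numpy as np

def ems_call_price(S0, K, r, sigma, T, n_paths=100000, seed=0):
    """European call under GBM, priced by plain Monte Carlo and by the
    single-asset empirical martingale simulation (EMS): terminal prices
    are rescaled so their discounted sample mean equals S0 exactly.
    (Sketch of the idea only; the paper's multi-asset estimators differ.)
    """
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(n_paths)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    disc = np.exp(-r * T)
    ST_ems = ST * S0 / (disc * ST.mean())   # EMS rescaling step
    plain = disc * np.maximum(ST - K, 0).mean()
    ems = disc * np.maximum(ST_ems - K, 0).mean()
    return plain, ems

plain, ems = ems_call_price(100.0, 100.0, 0.05, 0.2, 1.0)
```

Because the rescaled sample prices the underlying asset without simulation error, payoff estimates built from them typically have lower variance than the plain Monte Carlo estimate.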
/statistica/J28N2/J28N221/J28N221.html
FLEXIBLE DIMENSION REDUCTION IN REGRESSION Tao Wang and Lixing Zhu 1009-1029<span style='font-size=12pt;'><center>Abstract</center> Sliced inverse regression is a valuable tool for dimension reduction: one can replace the predictor vector with a few linear combinations of its components without loss of information on the regression. This paper is about richer nonlinear dimension reduction. Each direction of sliced inverse regression is simply a slope vector of multiple linear regression applied to an optimally transformed response. Using this connection, we propose a nonlinear version of sliced inverse regression by replacing the linear function with an additive function of the predictors. Our procedure has a clear interpretation as sliced inverse regression on a set of adaptively chosen transformations of the predictors. The flexibility of our method is illustrated via a simulation study and a data application. <p>Key words and phrases: Canonical correlation, optimal scoring, sufficient dimension reduction.</span>
/statistica/J28N2/J28N222/J28N222.html
FULLY EFFICIENT ROBUST ESTIMATION, OUTLIER DETECTION AND VARIABLE SELECTION VIA PENALIZED REGRESSION Dehan Kong, Howard D. Bondell and Yichao Wu 1031-1052<span style='font-size=12pt;'><center>Abstract</center> This paper studies the outlier detection and variable selection problem in linear regression. A mean shift parameter is added to the linear model to reflect the effect of outliers, where an outlier has a nonzero shift parameter. We then apply an adaptive regularization to these shift parameters to shrink most of them to zero. Those observations with nonzero mean shift parameter estimates are regarded as outliers. An L1 penalty is added to the regression parameters to select important predictors. We propose an efficient algorithm to solve this jointly penalized optimization problem and use the extended Bayesian information criteria tuning method to select the regularization parameters, since the number of parameters exceeds the sample size. Theoretical results are provided in terms of high breakdown point, full efficiency, as well as outlier detection consistency. We illustrate our method with simulations and data. Our method is extended to high-dimensional problems with dimension much larger than the sample size. <p>Key words and phrases: Adaptive, breakdown point, least trimmed squares, outliers, penalized regression, robust regression, variable selection.</span>
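The mean-shift formulation can be sketched directly. A simplified version of the objective is: minimize 0.5*||y - X*beta - gamma||^2 + lam*||gamma||_1 over (beta, gamma), and flag observations with nonzero estimated shift gamma_i as outliers. The sketch below drops the paper's adaptive weights and the L1 penalty on beta, and uses plain alternating minimization rather than the authors' algorithm; the data are simulated.

```python
import numpy as np

def soft(z, t):
    # Soft-thresholding operator, the proximal map of the L1 penalty.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def mean_shift_outliers(X, y, lam, n_iter=100):
    """Alternating minimization of 0.5*||y - X b - g||^2 + lam*||g||_1.

    Observations with nonzero shift g_i are flagged as outliers.
    (Simplified sketch: unlike the paper, beta is not penalized and
    the adaptive weights on the shifts are omitted.)
    """
    n, p = X.shape
    g = np.zeros(n)
    X_pinv = np.linalg.pinv(X)
    for _ in range(n_iter):
        b = X_pinv @ (y - g)          # OLS on the shift-corrected response
        g = soft(y - X @ b, lam)      # shrink most shifts exactly to zero
    return b, g

rng = np.random.default_rng(0)
n = 100
X = rng.normal(size=(n, 2))
beta = np.array([1.0, -2.0])
y = X @ beta + 0.1 * rng.normal(size=n)
y[:3] += 8.0                          # three gross outliers
b, g = mean_shift_outliers(X, y, lam=2.0)
outliers = np.nonzero(g)[0]
```

With the outliers absorbed into the shift parameters, the regression coefficients are estimated essentially from the clean observations.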
/statistica/J28N2/J28N223/J28N223.html
SCALABLE BAYESIAN VARIABLE SELECTION USING NONLOCAL PRIOR DENSITIES IN ULTRAHIGH-DIMENSIONAL SETTINGS Minsuk Shin, Anirban Bhattacharya and Valen E. Johnson 1053-1078<span style='font-size=12pt;'><center>Abstract</center> Bayesian model selection procedures based on nonlocal alternative prior densities are extended to ultrahigh-dimensional settings and compared to other variable selection procedures using precision-recall curves. The variable selection procedures included in these comparisons are methods based on ℊ-priors, the reciprocal lasso, the adaptive lasso, SCAD, and the minimax concave penalty. The use of precision-recall curves eliminates the sensitivity of our conclusions to the choice of tuning parameters. We find that Bayesian variable selection procedures based on nonlocal priors are competitive with all other procedures in a range of simulation scenarios, and we subsequently explain this favorable performance through a theoretical examination of their consistency properties. When certain regularity conditions apply, we demonstrate that the nonlocal procedures are consistent for linear models even when the number of covariates p increases sub-exponentially with the sample size n. A model selection procedure based on Zellner's ℊ-prior is also found to be competitive with penalized likelihood methods in identifying the true model, but the posterior distribution on the model space induced by this method is much more dispersed than that induced by the nonlocal prior methods. We investigate the asymptotic form of the marginal likelihood based on the nonlocal priors and show that it attains a unique term that cannot be derived from the other Bayesian model selection procedures.
We also propose a scalable and efficient algorithm called Simplified Shotgun Stochastic Search with Screening (S5) to explore the enormous model space, and we show that S5 dramatically reduces computing time without losing the capacity to search the interesting region of the model space, at least in the simulation settings considered. The S5 algorithm is available in the R package BayesS5 on CRAN. <p>Key words and phrases: Bayesian variable selection, nonlocal prior, precision-recall curve, strong model consistency, ultrahigh-dimensional data.</span>
/statistica/J28N2/J28N224/J28N224.html
ESTIMATING STANDARD ERRORS FOR IMPORTANCE SAMPLING ESTIMATORS WITH MULTIPLE MARKOV CHAINS Vivekananda Roy, Aixin Tan and James M. Flegal 1079-1101<span style='font-size=12pt;'><center>Abstract</center> The naive importance sampling estimator, based on samples from a single importance density, can be numerically unstable. We consider generalized importance sampling estimators where samples from more than one probability distribution are combined. We study this problem in the Markov chain Monte Carlo context, where independent samples are replaced with Markov chain samples. If the chains converge to their respective target distributions at a polynomial rate, then under two finite moment conditions, we show a central limit theorem holds for the generalized estimators. We develop an easy-to-implement method to calculate valid asymptotic standard errors based on batch means. We provide a batch means estimator for calculating asymptotically valid standard errors of Geyer's (1994) reverse logistic estimator. We illustrate the method via three examples. In particular, the generalized importance sampling estimator is used for Bayesian spatial modeling of binary data and to perform empirical Bayes variable selection where the batch means estimator enables standard error calculations in high-dimensional settings.<p>Key words and phrases: Bayes factors, Markov chain Monte Carlo, polynomial ergodicity, ratios of normalizing constants, reverse logistic estimator.</span>
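The batch means idea is simple enough to sketch: split the correlated chain into contiguous batches and estimate the Monte Carlo standard error of the sample mean from the variability of the batch means. The scalar sketch below (an AR(1) chain standing in for MCMC output) is the textbook single-chain version, not the paper's generalized multi-chain estimator.

```python
import numpy as np

def batch_means_se(x, n_batches=20):
    """Batch-means estimate of the Monte Carlo standard error of mean(x).

    Splits the (possibly autocorrelated) chain into contiguous batches;
    if batches are long relative to the correlation time, their means are
    nearly independent, so their spread gives a valid standard error.
    """
    n = len(x) // n_batches * n_batches
    batch_means = np.asarray(x[:n]).reshape(n_batches, -1).mean(axis=1)
    return np.std(batch_means, ddof=1) / np.sqrt(n_batches)

rng = np.random.default_rng(0)
# AR(1) chain: positive autocorrelation inflates the true standard error,
# which the naive iid formula ignores.
x = np.empty(10000)
x[0] = 0.0
for t in range(1, len(x)):
    x[t] = 0.8 * x[t - 1] + rng.normal()
se_bm = batch_means_se(x)
se_naive = np.std(x, ddof=1) / np.sqrt(len(x))
```

For this chain the asymptotic variance is (1+0.8)/(1-0.8) = 9 times the iid variance, so the batch-means standard error should be roughly three times the naive one.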
/statistica/J28N2/J28N23/J28N23.html
ORTHOGONAL GAUSSIAN PROCESS MODELS Matthew Plumlee and V. Roshan Joseph 601-619<span style='font-size=12pt;'><center>Abstract</center> Gaussian process models are widely adopted for nonparametric/semiparametric modeling. Identifiability issues occur when the mean model contains polynomials with unknown coefficients. Though the resulting prediction is unaffected, this leads to poor estimation of the coefficients in the mean model, and thus the estimated mean model loses interpretability. This paper introduces a new Gaussian process model whose stochastic part is orthogonal to the mean part to address this issue. This paper also discusses applications to multi-fidelity simulations using data examples. <p>Key words and phrases: Computer experiments, identifiability, kriging, multifidelity simulations, universal kriging.</span>
/statistica/J28N2/J28N24/J28N24.html
GAUSSIAN PROCESS MODELING WITH BOUNDARY INFORMATION Matthias Hwai Yong Tan 621-648<span style='font-size=12pt;'><center>Abstract</center> Gaussian process (GP) models are widely used to approximate time-consuming deterministic computer codes, which are often models of physical systems based on partial differential equations (PDEs). Limiting or boundary behavior of the PDE solutions (e.g., behavior when an input tends to infinity) is often known based on physical considerations or mathematical analysis. However, widely used stationary GP priors do not take this information into account. It should be expected that if the GP prior is forced to reproduce the known limiting behavior, it will give better prediction accuracy and extrapolation capability. This paper shows how a GP prior that reproduces known boundary behavior of the computer model can be constructed. Real examples are given to demonstrate the improved prediction performance of the proposed approach. <p>Key words and phrases: Computer experiments, constrained Gaussian process emulator, extrapolation in finite element simulations.</span>
/statistica/J28N2/J28N25/J28N25.html
SINGLE NUGGET KRIGING Minyong R. Lee and Art B. Owen 649-669<span style='font-size=12pt;'><center>Abstract</center> We propose a method that gives better predictions at extreme values than standard kriging. We construct our predictor in two ways: by penalizing the mean squared error through the conditional bias and by penalizing the conditional likelihood at the target function value. Our prediction exhibits robustness to model mismatch in the covariance parameters, a desirable feature for computer simulations with a restricted number of data points. Applications to several functions show that our predictor is robust to non-Gaussianity of the function. <p>Key words and phrases: Computer experiments, conditional bias, Gaussian process regression.</span>
/statistica/J28N2/J28N26/J28N26.html
SEQUENTIAL PARETO MINIMIZATION OF PHYSICAL SYSTEMS USING CALIBRATED COMPUTER SIMULATORS Po-Hsu Allen Chen, Thomas J. Santner and Angela M. Dean 671-692<span style='font-size=12pt;'><center>Abstract</center> This paper proposes a sequential design methodology for a combined physical system and computer simulator experiment having multiple outputs, in the setting where the goal is to find the Pareto Front and Set of the means of the physical system outputs. The methodology is based on a statistically-calibrated simulator. In this paper, the simulator is a computer implementation of a deterministic mathematical model of the physical system; it contains the same set of control(able) inputs as those used to represent the physical system, plus additional calibration inputs for adjusting the simulator output to better mimic the mean of the physical system. A minimax fitness function is proposed for guiding the sequential search for new vectors of control input settings when additional observations on the physical system are to be taken. Based on a Bayesian calibrated model, the update step maximizes the posterior expected minimax fitness function over untried control inputs. When additional runs of the simulator are to be taken, the control input settings are chosen as above; then calibration input settings are selected to minimize the sum, over the set of predicted output means, of the posterior mean squared prediction errors. Using the Hypervolume Indicator function to assess Pareto Front accuracy, the performance of the sequential procedure is evaluated using analytic test functions from the multiple-objective optimization literature. <p>Key words and phrases: Combined physical and simulator experiment, computer experiment, multiobjective optimization.</span>
/statistica/J28N2/J28N27/J28N27.html
BAYESIAN CALIBRATION OF MULTISTATE STOCHASTIC SIMULATORS Mathew T. Pratola and Oksana Chkrebtii 693-719<span style='font-size=12pt;'><center>Abstract</center> Inference on large-scale models is of great interest in modern science. Examples include deterministic simulators of fluid dynamics to recover the source of a pollutant, and stochastic agent-based simulators to infer features of consumer behaviour. When computational constraints prohibit model evaluation at all but a small ensemble of parameter settings, exact inference is infeasible. In such cases, emulation of the simulator enables the interrogation of a surrogate model at arbitrary parameter values. Combining emulators with observational data to estimate parameters and predict a real-world process is known as computer model calibration. The choice of the emulator model is a critical aspect of calibration. Existing approaches treat the mathematical model, as implemented on computer, as an unknown but deterministic response surface. In many cases the underlying mathematical model, or the simulator approximating the mathematical model, is not deterministic and in fact has some uncertainty associated with its output. In this paper, we propose a Bayesian statistical calibration model for stochastic simulators. The approach is motivated by two applied problems: a deterministic mathematical model of intra-cellular signalling whose implementation on computer nonetheless has discretization uncertainty, and a stochastic model of river water temperature commonly used in hydrology. We show the proposed approach is able to map the uncertainties of such non-deterministic simulators through to the resulting inference while retaining computational feasibility. Supplementary computer code and datasets are provided online. <p>Key words and phrases: Computer experiments, differential equation models, physical statistical models, stochastic simulation, uncertainty quantification.</span>
/statistica/J28N2/J28N28/J28N28.html
NONPARAMETRIC FUNCTIONAL CALIBRATION OF COMPUTER MODELS D. Andrew Brown and Sez Atamturktur 721-742<span style='font-size=12pt;'><center>Abstract</center> Standard methods in computer model calibration treat the calibration parameters as constant throughout the domain of control inputs. In many applications, systematic variation may cause the best values for the calibration parameters to change across different settings. When not accounted for in the code, this variation can make the computer model inadequate. We propose a framework for modeling the calibration parameters as functions of the control inputs to account for a computer model's incomplete system representation in this regard, while simultaneously allowing for possible constraints imposed by prior expert opinion. We demonstrate how inappropriate modeling assumptions can mislead a researcher into thinking a calibrated model is in need of an empirical discrepancy term when it is only needed to allow for a functional dependence of the calibration parameters on the inputs. We apply our approach to plastic deformation of a visco-plastic self-consistent material in which the critical resolved shear stress is known to vary with temperature. <p>Key words and phrases: Bayesian statistics, Gaussian process, identifiability, model validation, uncertainty quantification, visco-plastic self-consistent material.</span>
/statistica/J28N2/J28N29/J28N29.html
PREDICTION BASED ON THE KENNEDY-O'HAGAN CALIBRATION MODEL: ASYMPTOTIC CONSISTENCY AND OTHER PROPERTIES Rui Tuo and C. F. Jeff Wu 743-759<span style='font-size=12pt;'><center>Abstract</center> Kennedy and O'Hagan (2001) propose a model for calibrating some unknown parameters in a computer model and estimating the discrepancy between the computer output and the physical response. This model is known to have certain identifiability issues. Tuo and Wu (2016) show that there are examples for which the Kennedy-O'Hagan method renders unreasonable results in calibration. In spite of its unstable performance in calibration, the Kennedy-O'Hagan approach has a more robust behavior in predicting the physical response. In this work, we present some theoretical analysis to show the consistency of the predictor based on their calibration model in the context of radial basis functions. <p>Key words and phrases: Bayesian inference, computer experiments, kriging.</span>