
Gaussian Processes for Machine Learning


Adaptive Computation and Machine Learning
Thomas Dietterich, Editor
Christopher Bishop, David Heckerman, Michael Jordan, and Michael Kearns, Associate Editors

Bioinformatics: The Machine Learning Approach, Pierre Baldi and Søren Brunak

Reinforcement Learning: An Introduction, Richard S. Sutton and Andrew G. Barto

Graphical Models for Machine Learning and Digital Communication, Brendan J. Frey

Learning in Graphical Models, Michael I. Jordan

Causation, Prediction, and Search, second edition, Peter Spirtes, Clark Glymour, and Richard Scheines

Principles of Data Mining, David Hand, Heikki Mannila, and Padhraic Smyth

Bioinformatics: The Machine Learning Approach, second edition, Pierre Baldi and Søren Brunak

Learning Kernel Classifiers: Theory and Algorithms, Ralf Herbrich

Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, Bernhard Schölkopf and Alexander J. Smola

Introduction to Machine Learning, Ethem Alpaydin

Gaussian Processes for Machine Learning, Carl Edward Rasmussen and Christopher K. I. Williams


Gaussian Processes for Machine Learning

Carl Edward Rasmussen
Christopher K. I. Williams

The MIT Press
Cambridge, Massachusetts
London, England


© 2006 Massachusetts Institute of Technology

All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.

MIT Press books may be purchased at special quantity discounts for business or sales promotional use. For information, please email special_sales@mitpress.mit.edu or write to Special Sales Department, The MIT Press, 55 Hayward Street, Cambridge, MA 02142.

Typeset by the authors using LaTeX 2ε. This book was printed and bound in the United States of America.

Library of Congress Cataloging-in-Publication Data

Rasmussen, Carl Edward. Gaussian processes for machine learning / Carl Edward Rasmussen, Christopher K. I. Williams.

p. cm. —(Adaptive computation and machine learning) Includes bibliographical references and indexes. ISBN 0-262-18253-X 1. Gaussian processes—Data processing. 2. Machine learning—Mathematical models. I. Williams, Christopher K. I. II. Title. III. Series.

QA274.4.R37 2006 519.2'3—dc22

2005053433

10 9 8 7 6 5 4 3 2


The actual science of logic is conversant at present only with things either certain, impossible, or entirely doubtful, none of which (fortunately) we have to reason on. Therefore the true logic for this world is the calculus of Probabilities, which takes account of the magnitude of the probability which is, or ought to be, in a reasonable man’s mind.

— James Clerk Maxwell [1850]


Contents

Series Foreword
Preface
Symbols and Notation

1  Introduction
   1.1  A Pictorial Introduction to Bayesian Modelling
   1.2  Roadmap

2  Regression
   2.1  Weight-space View
      2.1.1  The Standard Linear Model
      2.1.2  Projections of Inputs into Feature Space
   2.2  Function-space View
   2.3  Varying the Hyperparameters
   2.4  Decision Theory for Regression
   2.5  An Example Application
   2.6  Smoothing, Weight Functions and Equivalent Kernels
  *2.7  Incorporating Explicit Basis Functions
      2.7.1  Marginal Likelihood
   2.8  History and Related Work
   2.9  Exercises

3  Classification
   3.1  Classification Problems
      3.1.1  Decision Theory for Classification
   3.2  Linear Models for Classification
   3.3  Gaussian Process Classification
   3.4  The Laplace Approximation for the Binary GP Classifier
      3.4.1  Posterior
      3.4.2  Predictions
      3.4.3  Implementation
      3.4.4  Marginal Likelihood
  *3.5  Multi-class Laplace Approximation
      3.5.1  Implementation
   3.6  Expectation Propagation
      3.6.1  Predictions
      3.6.2  Marginal Likelihood
      3.6.3  Implementation
   3.7  Experiments
      3.7.1  A Toy Problem
      3.7.2  One-dimensional Example
      3.7.3  Binary Handwritten Digit Classification Example
      3.7.4  10-class Handwritten Digit Classification Example
   3.8  Discussion
  *3.9  Appendix: Moment Derivations
   3.10 Exercises

4  Covariance Functions
   4.1  Preliminaries
     *4.1.1  Mean Square Continuity and Differentiability
   4.2  Examples of Covariance Functions
      4.2.1  Stationary Covariance Functions
      4.2.2  Dot Product Covariance Functions
      4.2.3  Other Non-stationary Covariance Functions
      4.2.4  Making New Kernels from Old
   4.3  Eigenfunction Analysis of Kernels
     *4.3.1  An Analytic Example
      4.3.2  Numerical Approximation of Eigenfunctions
   4.4  Kernels for Non-vectorial Inputs
      4.4.1  String Kernels
      4.4.2  Fisher Kernels
   4.5  Exercises

5  Model Selection and Adaptation of Hyperparameters
   5.1  The Model Selection Problem
   5.2  Bayesian Model Selection
   5.3  Cross-validation
   5.4  Model Selection for GP Regression
      5.4.1  Marginal Likelihood
      5.4.2  Cross-validation
      5.4.3  Examples and Discussion
   5.5  Model Selection for GP Classification
     *5.5.1  Derivatives of the Marginal Likelihood for Laplace's Approximation
     *5.5.2  Derivatives of the Marginal Likelihood for EP
      5.5.3  Cross-validation
      5.5.4  Example
   5.6  Exercises

6  Relationships between GPs and Other Models
   6.1  Reproducing Kernel Hilbert Spaces
   6.2  Regularization
     *6.2.1  Regularization Defined by Differential Operators
      6.2.2  Obtaining the Regularized Solution
      6.2.3  The Relationship of the Regularization View to Gaussian Process Prediction
   6.3  Spline Models
     *6.3.1  A 1-d Gaussian Process Spline Construction
  *6.4  Support Vector Machines
      6.4.1  Support Vector Classification
      6.4.2  Support Vector Regression
  *6.5  Least-squares Classification
      6.5.1  Probabilistic Least-squares Classification
  *6.6  Relevance Vector Machines
   6.7  Exercises

7  Theoretical Perspectives
   7.1  The Equivalent Kernel
      7.1.1  Some Specific Examples of Equivalent Kernels
  *7.2  Asymptotic Analysis
      7.2.1  Consistency
      7.2.2  Equivalence and Orthogonality
  *7.3  Average-case Learning Curves
  *7.4  PAC-Bayesian Analysis
      7.4.1  The PAC Framework
      7.4.2  PAC-Bayesian Analysis
      7.4.3  PAC-Bayesian Analysis of GP Classification
   7.5  Comparison with Other Supervised Learning Methods
  *7.6  Appendix: Learning Curve for the Ornstein-Uhlenbeck Process
   7.7  Exercises

8  Approximation Methods for Large Datasets
   8.1  Reduced-rank Approximations of the Gram Matrix
   8.2  Greedy Approximation
   8.3  Approximations for GPR with Fixed Hyperparameters
      8.3.1  Subset of Regressors
      8.3.2  The Nyström Method
      8.3.3  Subset of Datapoints
      8.3.4  Projected Process Approximation
      8.3.5  Bayesian Committee Machine
      8.3.6  Iterative Solution of Linear Systems
      8.3.7  Comparison of Approximate GPR Methods
   8.4  Approximations for GPC with Fixed Hyperparameters
  *8.5  Approximating the Marginal Likelihood and its Derivatives
  *8.6  Appendix: Equivalence of SR and GPR Using the Nyström Approximate Kernel
   8.7  Exercises

9  Further Issues and Conclusions
   9.1  Multiple Outputs
   9.2  Noise Models with Dependencies
   9.3  Non-Gaussian Likelihoods
   9.4  Derivative Observations
   9.5  Prediction with Uncertain Inputs
   9.6  Mixtures of Gaussian Processes
   9.7  Global Optimization
   9.8  Evaluation of Integrals
   9.9  Student's t Process
   9.10 Invariances
   9.11 Latent Variable Models
   9.12 Conclusions and Future Directions

Appendix A  Mathematical Background
   A.1  Joint, Marginal and Conditional Probability
   A.2  Gaussian Identities
   A.3  Matrix Identities
      A.3.1  Matrix Derivatives
      A.3.2  Matrix Norms
   A.4  Cholesky Decomposition
   A.5  Entropy and Kullback-Leibler Divergence
   A.6  Limits
   A.7  Measure and Integration
      A.7.1  Lp Spaces
   A.8  Fourier Transforms
   A.9  Convexity

Appendix B  Gaussian Markov Processes
   B.1  Fourier Analysis
      B.1.1  Sampling and Periodization
   B.2  Continuous-time Gaussian Markov Processes
      B.2.1  Continuous-time GMPs on R
      B.2.2  The Solution of the Corresponding SDE on the Circle
   B.3  Discrete-time Gaussian Markov Processes
      B.3.1  Discrete-time GMPs on Z
      B.3.2  The Solution of the Corresponding Difference Equation on PN
   B.4  The Relationship Between Discrete-time and Sampled Continuous-time GMPs
   B.5  Markov Processes in Higher Dimensions

Appendix C  Datasets and Code

Bibliography

Author Index

Subject Index

Sections marked by an asterisk (*) contain advanced material that may be omitted on a first reading.


Series Foreword

The goal of building systems that can adapt to their environments and learn from their experience has attracted researchers from many fields, including computer science, engineering, mathematics, physics, neuroscience, and cognitive science. Out of this research has come a wide variety of learning techniques that have the potential to transform many scientific and industrial fields. Recently, several research communities have converged on a common set of issues surrounding supervised, unsupervised, and reinforcement learning problems. The MIT Press series on Adaptive Computation and Machine Learning seeks to unify the many diverse strands of machine learning research and to foster high quality research and innovative applications.

One of the most active directions in machine learning has been the development of practical Bayesian methods for challenging learning problems. Gaussian Processes for Machine Learning presents one of the most important Bayesian machine learning approaches based on a particularly effective method for placing a prior distribution over the space of functions. Carl Edward Rasmussen and Chris Williams are two of the pioneers in this area, and their book describes the mathematical foundations and practical application of Gaussian processes in regression and classification tasks. They also show how Gaussian processes can be interpreted as a Bayesian version of the well-known support vector machine methods. Students and researchers who study this book will be able to apply Gaussian process methods in creative ways to solve a wide range of problems in science and engineering.

Thomas Dietterich


Preface

Over the last decade there has been an explosion of work in the "kernel machines" area of machine learning. Probably the best known example of this is work on support vector machines, but during this period there has also been much activity concerning the application of Gaussian process models to machine learning tasks. The goal of this book is to provide a systematic and unified treatment of this area. Gaussian processes provide a principled, practical, probabilistic approach to learning in kernel machines. This gives advantages with respect to the interpretation of model predictions and provides a well-founded framework for learning and model selection. Theoretical and practical developments over the last decade have made Gaussian processes a serious competitor for real supervised learning applications.

Roughly speaking a stochastic process is a generalization of a probability distribution (which describes a finite-dimensional random variable) to functions. By focussing on processes which are Gaussian, it turns out that the computations required for inference and learning become relatively easy. Thus, the supervised learning problems in machine learning which can be thought of as learning a function from examples can be cast directly into the Gaussian process framework.

Our interest in Gaussian process (GP) models in the context of machine learning was aroused in 1994, while we were both graduate students in Geoff Hinton's Neural Networks lab at the University of Toronto. This was a time when the field of neural networks was becoming mature and the many connections to statistical physics, probabilistic models and statistics became well known, and the first kernel-based learning algorithms were becoming popular. In retrospect it is clear that the time was ripe for the application of Gaussian processes to machine learning problems.

Many researchers were realizing that neural networks were not so easy to apply in practice, due to the many decisions which needed to be made: what architecture, what activation functions, what learning rate, etc., and the lack of a principled framework to answer these questions. The probabilistic framework was pursued using approximations by MacKay [1992b] and using Markov chain Monte Carlo (MCMC) methods by Neal [1996]. Neal was also a graduate student in the same lab, and in his thesis he sought to demonstrate that using the Bayesian formalism, one does not necessarily have problems with "overfitting" when the models get large, and one should pursue the limit of large models. While his own work was focused on sophisticated Markov chain methods for inference in large finite networks, he did point out that some of his networks became Gaussian processes in the limit of infinite size, and "there may be simpler ways to do inference in this case."

It is perhaps interesting to mention a slightly wider historical perspective. The main reason why neural networks became popular was that they allowed the use of adaptive basis functions, as opposed to the well known linear models. The adaptive basis functions, or hidden units, could "learn" hidden features useful for the modelling problem at hand. However, this adaptivity came at the cost of a lot of practical problems. Later, with the advancement of the "kernel era", it was realized that the limitation of fixed basis functions is not a big restriction if only one has enough of them, i.e. typically infinitely many, and one is careful to control problems of overfitting by using priors or regularization. The resulting models are much easier to handle than the adaptive basis function models, but have similar expressive power.

Thus, one could claim that (as far as machine learning is concerned) the adaptive basis functions were merely a decade-long digression, and we are now back to where we came from. This view is perhaps reasonable if we think of models for solving practical learning problems, although MacKay [2003, ch. 45], for example, raises concerns by asking "did we throw out the baby with the bath water?", as the kernel view does not give us any hidden representations, telling us what the useful features are for solving a particular problem. As we will argue in the book, one answer may be to learn more sophisticated covariance functions, and the "hidden" properties of the problem are to be found here. An important area of future developments for GP models is the use of more expressive covariance functions.

Supervised learning problems have been studied for more than a century in statistics, and a large body of well-established theory has been developed. More recently, with the advance of affordable, fast computation, the machine learning community has addressed increasingly large and complex problems.

Much of the basic theory and many algorithms are shared between the statistics and machine learning communities. The primary differences are perhaps the types of the problems attacked, and the goal of learning. At the risk of oversimplification, one could say that in statistics a prime focus is often in understanding the data and relationships in terms of models giving approximate summaries such as linear relations or independencies. In contrast, the goals in machine learning are primarily to make predictions as accurately as possible and to understand the behaviour of learning algorithms. These differing objectives have led to different developments in the two fields: for example, neural network algorithms have been used extensively as black-box function approximators in machine learning, but to many statisticians they are less than satisfactory, because of the difficulties in interpreting such models.

Gaussian process models in some sense bring together work in the two communities. As we will see, Gaussian processes are mathematically equivalent to many well known models, including Bayesian linear models, spline models, large neural networks (under suitable conditions), and are closely related to others, such as support vector machines. Under the Gaussian process viewpoint, the models may be easier to handle and interpret than their conventional counterparts, such as e.g. neural networks. In the statistics community Gaussian processes have also been discussed many times, although it would probably be excessive to claim that their use is widespread except for certain specific applications such as spatial models in meteorology and geology, and the analysis of computer experiments. A rich theory also exists for Gaussian process models in the time series analysis literature; some pointers to this literature are given in Appendix B.

The book is primarily intended for graduate students and researchers in machine learning at departments of Computer Science, Statistics and Applied Mathematics. As prerequisites we require a good basic grounding in calculus, linear algebra and probability theory as would be obtained by graduates in numerate disciplines such as electrical engineering, physics and computer science. For preparation in calculus and linear algebra any good university-level textbook on mathematics for physics or engineering such as Arfken [1985] would be fine. For probability theory some familiarity with multivariate distributions (especially the Gaussian) and conditional probability is required. Some background mathematical material is also provided in Appendix A.

The main focus of the book is to present clearly and concisely an overview of the main ideas of Gaussian processes in a machine learning context. We have also covered a wide range of connections to existing models in the literature, and cover approximate inference for faster practical algorithms. We have presented detailed algorithms for many methods to aid the practitioner. Software implementations are available from the website for the book, see Appendix C. We have also included a small set of exercises in each chapter; we hope these will help in gaining a deeper understanding of the material.

In order to limit the size of the volume, we have had to omit some topics, such as, for example, Markov chain Monte Carlo methods for inference. One of the most difficult things to decide when writing a book is what sections not to write. Within sections, we have often chosen to describe one algorithm in particular in depth, and mention related work only in passing. Although this causes the omission of some material, we feel it is the best approach for a monograph, and hope that the reader will gain a general understanding so as to be able to push further into the growing literature of GP models.

The book has a natural split into two parts, with the chapters up to and including chapter 5 covering core material, and the remaining sections covering the connections to other methods, fast approximations, and more specialized properties. Some sections are marked by an asterisk. These sections may be omitted on a first reading, and are not pre-requisites for later (un-starred) material.

We wish to express our considerable gratitude to the many people with whom we have interacted during the writing of this book. In particular Moray Allan, David Barber, Peter Bartlett, Miguel Carreira-Perpiñán, Marcus Gallagher, Manfred Opper, Anton Schwaighofer, Matthias Seeger, Hanna Wallach, Joe Whittaker, and Andrew Zisserman all read parts of the book and provided valuable feedback. Dilan Görür, Malte Kuss, Iain Murray, Joaquin Quiñonero-Candela, Leif Rasmussen and Sam Roweis were especially heroic and provided comments on the whole manuscript. We thank Chris Bishop, Miguel Carreira-Perpiñán, Nando de Freitas, Zoubin Ghahramani, Peter Grünwald, Mike Jordan, John Kent, Radford Neal, Joaquin Quiñonero-Candela, Ryan Rifkin, Stefan Schaal, Anton Schwaighofer, Matthias Seeger, Peter Sollich, Ingo Steinwart, Amos Storkey, Volker Tresp, Sethu Vijayakumar, Grace Wahba, Joe Whittaker and Tong Zhang for valuable discussions on specific issues. We also thank Bob Prior and the staff at MIT Press for their support during the writing of the book. We thank the Gatsby Computational Neuroscience Unit (UCL) and Neil Lawrence at the Department of Computer Science, University of Sheffield for hosting our visits and kindly providing space for us to work, and the Department of Computer Science at the University of Toronto for computer support. Thanks to John and Fiona for their hospitality on numerous occasions. Some of the diagrams in this book have been inspired by similar diagrams appearing in published work, as follows: Figure 3.5, Schölkopf and Smola [2002]; Figure 5.2, MacKay [1992b]. CER gratefully acknowledges financial support from the German Research Foundation (DFG). CKIW thanks the School of Informatics, University of Edinburgh for granting him sabbatical leave for the period October 2003-March 2004.

Finally, we reserve our deepest appreciation for our wives Agnes and Barbara, and children Ezra, Kate, Miro and Ruth for their patience and understanding while the book was being written.

Despite our best efforts it is inevitable that some errors will make it through to the printed version of the book. Errata will be made available via the book's website at

http://www.GaussianProcess.org/gpml

We have found the joint writing of this book an excellent experience. Although hard at times, we are confident that the end result is much better than either one of us could have written alone.

Now, ten years after their first introduction into the machine learning community, Gaussian processes are receiving growing attention. Although GPs have been known for a long time in the statistics and geostatistics fields, and their use can perhaps be traced back as far as the end of the 19th century, their application to real problems is still in its early phases. This contrasts somewhat with the application of the non-probabilistic analogue of the GP, the support vector machine, which was taken up more quickly by practitioners. Perhaps this has to do with the probabilistic mind-set needed to understand GPs, which is not so generally appreciated. Perhaps it is due to the need for computational short-cuts to implement inference for large datasets. Or it could be due to the lack of a self-contained introduction to this exciting field—with this volume, we hope to contribute to the momentum gained by Gaussian processes in machine learning.

Carl Edward Rasmussen and Chris Williams
Tübingen and Edinburgh, summer 2005

Second printing: We thank Baback Moghaddam, Mikhail Parakhin, Leif Rasmussen, Benjamin Sobotta, Kevin S. Van Horn and Aki Vehtari for reporting errors in the first printing which have now been corrected.


Symbols and Notation

Matrices are capitalized and vectors are in bold type. We do not generally distinguish between probabilities and probability densities. A subscript asterisk, such as in X∗, indicates reference to a test set quantity. A superscript asterisk denotes complex conjugate.

\ : left matrix divide: A\b is the vector x which solves Ax = b
≜ : an equality which acts as a definition
c= : equality up to an additive constant
|K| : determinant of K matrix
|y| : Euclidean length of vector y, i.e. (∑_i y_i^2)^{1/2}
⟨f, g⟩_H : RKHS inner product
‖f‖_H : RKHS norm
y^⊤ : the transpose of vector y
∝ : proportional to; e.g. p(x|y) ∝ f(x, y) means that p(x|y) is equal to f(x, y) times a factor which is independent of x
∼ : distributed according to; example: x ∼ N(µ, σ^2)
∇ or ∇_f : partial derivatives (w.r.t. f)
∇∇ : the (Hessian) matrix of second derivatives
0 or 0_n : vector of all 0's (of length n)
1 or 1_n : vector of all 1's (of length n)
C : number of classes in a classification problem
cholesky(A) : Cholesky decomposition: L is a lower triangular matrix such that LL^⊤ = A
cov(f∗) : Gaussian process posterior covariance
D : dimension of input space X
D : data set: D = {(x_i, y_i) | i = 1, . . . , n}
diag(w) : (vector argument) a diagonal matrix containing the elements of vector w
diag(W) : (matrix argument) a vector containing the diagonal elements of matrix W
δ_pq : Kronecker delta, δ_pq = 1 iff p = q and 0 otherwise
E or E_{q(x)}[z(x)] : expectation; expectation of z(x) when x ∼ q(x)
f(x) or f : Gaussian process (or vector of) latent function values, f = (f(x_1), . . . , f(x_n))^⊤
f∗ : Gaussian process (posterior) prediction (random variable)
f̄∗ : Gaussian process posterior mean
GP : Gaussian process: f ∼ GP(m(x), k(x, x′)), the function f is distributed as a Gaussian process with mean function m(x) and covariance function k(x, x′)
h(x) or h(x) : either fixed basis function (or set of basis functions) or weight function
H or H(X) : set of basis functions evaluated at all training points
I or I_n : the identity matrix (of size n)
J_ν(z) : Bessel function of the first kind
k(x, x′) : covariance (or kernel) function evaluated at x and x′
K or K(X, X) : n × n covariance (or Gram) matrix
K∗ : n × n∗ matrix K(X, X∗), the covariance between training and test cases
k(x∗) or k∗ : vector, short for K(X, x∗), when there is only a single test case
K_f or K : covariance matrix for the (noise free) f values
K_y : covariance matrix for the (noisy) y values; for independent homoscedastic noise, K_y = K_f + σ_n^2 I
K_ν(z) : modified Bessel function
L(a, b) : loss function, the loss of predicting b, when a is true; note argument order
log(z) : natural logarithm (base e)
log_2(z) : logarithm to the base 2
ℓ or ℓ_d : characteristic length-scale (for input dimension d)
λ(z) : logistic function, λ(z) = 1/(1 + exp(−z))
m(x) : the mean function of a Gaussian process
µ : a measure (see section A.7)
N(µ, Σ) or N(x|µ, Σ) : (the variable x has a) Gaussian (Normal) distribution with mean vector µ and covariance matrix Σ
N(x) : short for unit Gaussian x ∼ N(0, I)
n and n∗ : number of training (and test) cases
N : dimension of feature space
N_H : number of hidden units in a neural network
N : the natural numbers, the positive integers
O(·) : big Oh; for functions f and g on N, we write f(n) = O(g(n)) if the ratio f(n)/g(n) remains bounded as n → ∞
O : either matrix of all zeros or differential operator
y|x and p(y|x) : conditional random variable y given x and its probability (density)
P_N : the regular n-polygon
φ(x_i) or Φ(X) : feature map of input x_i (or input set X)
Φ(z) : cumulative unit Gaussian: Φ(z) = (2π)^{−1/2} ∫_{−∞}^{z} exp(−t^2/2) dt
π(x) : the sigmoid of the latent value: π(x) = σ(f(x)) (stochastic if f(x) is stochastic)
π̂(x∗) : MAP prediction: π evaluated at f̄(x∗)
π̄(x∗) : mean prediction: expected value of π(x∗); note, in general π̂(x∗) ≠ π̄(x∗)
R : the real numbers
R_L(f) or R_L(c) : the risk or expected loss for f, or classifier c (averaged w.r.t. inputs and outputs)
R̃_L(l|x∗) : expected loss for predicting l, averaged w.r.t. the model's pred. distr. at x∗
R_c : decision region for class c
S(s) : power spectrum
σ(z) : any sigmoid function, e.g. logistic λ(z), cumulative Gaussian Φ(z), etc.
σ_f^2 : variance of the (noise free) signal
σ_n^2 : noise variance
θ : vector of hyperparameters (parameters of the covariance function)
tr(A) : trace of (square) matrix A
T_l : the circle with circumference l
V or V_{q(x)}[z(x)] : variance; variance of z(x) when x ∼ q(x)
X : input space and also the index set for the stochastic process
X : D × n matrix of the training inputs {x_i}_{i=1}^n: the design matrix
X∗ : matrix of test inputs
x_i : the ith training input
x_di : the dth coordinate of the ith training input x_i
Z : the integers . . . , −2, −1, 0, 1, 2, . . .
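To ground a few of these symbols, the following is a minimal illustrative sketch added here by the editor (it is not the book's own software, which is referenced in Appendix C). It assumes NumPy; the squared-exponential kernel, the random data and the variable names are placeholder choices. It mirrors the conventions above: X is the D × n design matrix whose columns are the inputs x_i, K = K(X, X) is the Gram matrix, K_y = K_f + σ_n^2 I, and the left matrix divide K_y\y is computed as a linear solve via the Cholesky factor.

    import numpy as np

    def k(x, xp, ell=1.0, sigma_f=1.0):
        # an example covariance function k(x, x'): squared exponential
        return sigma_f**2 * np.exp(-0.5 * np.sum((x - xp)**2) / ell**2)

    D, n = 2, 5                          # input dimension D, number of training cases n
    rng = np.random.default_rng(0)
    X = rng.standard_normal((D, n))      # design matrix X: D x n, columns are the inputs x_i
    y = rng.standard_normal(n)           # training targets y

    # K = K(X, X): n x n covariance (Gram) matrix with K_ij = k(x_i, x_j)
    K = np.array([[k(X[:, i], X[:, j]) for j in range(n)] for i in range(n)])

    sigma_n = 0.1
    Ky = K + sigma_n**2 * np.eye(n)      # K_y = K_f + sigma_n^2 I for the noisy targets

    # cholesky(K_y): lower triangular L with L L^T = K_y; the left matrix divide
    # K_y \ y (the vector solving K_y x = y) via two triangular solves
    L = np.linalg.cholesky(Ky)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    print(alpha.shape)                   # (n,)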


Chapter 1

Introduction

In this book we will be concerned with supervised learning, which is the problem of learning input-output mappings from empirical data (the training dataset). Depending on the characteristics of the output, this problem is known as either regression, for continuous outputs, or classification, when outputs are discrete.

A well known example is the classification of images of handwritten digits. The training set consists of small digitized images, together with a classification from 0, . . . , 9, normally provided by a human. The goal is to learn a mapping from image to classification label, which can then be used on new, unseen images. Supervised learning is an attractive way to attempt to tackle this problem, since it is not easy to specify accurately the characteristics of, say, the handwritten digit 4.

An example of a regression problem can be found in robotics, where we wish to learn the inverse dynamics of a robot arm. Here the task is to map from the state of the arm (given by the positions, velocities and accelerations of the joints) to the corresponding torques on the joints. Such a model can then be used to compute the torques needed to move the arm along a given trajectory. Another example would be in a chemical plant, where we might wish to predict the yield as a function of process parameters such as temperature, pressure, amount of catalyst etc.

In general we denote the input as x, and the output (or target) as y. The input is usually represented as a vector x as there are in general many input variables—in the handwritten digit recognition example one may have a 256-dimensional input obtained from a raster scan of a 16 × 16 image, and in the robot arm example there are three input measurements for each joint in the arm. The target y may either be continuous (as in the regression case) or discrete (as in the classification case). We have a dataset D of n observations, D = {(x_i, y_i) | i = 1, . . . , n}.

Given this training data we wish to make predictions for new inputs x∗ that we have not seen in the training set. Thus it is clear that the problem at hand is inductive; we need to move from the finite training data D to a function f that makes predictions for all possible input values. To do this we must make assumptions about the characteristics of the underlying function, as otherwise any function which is consistent with the training data would be equally valid. A wide variety of methods have been proposed to deal with the supervised learning problem; here we describe two common approaches. The first is to restrict the class of functions that we consider, for example by only considering linear functions of the input. The second approach is (speaking rather loosely) to give a prior probability to every possible function, where higher probabilities are given to functions that we consider to be more likely, for example because they are smoother than other functions.¹ The first approach has an obvious problem in that we have to decide upon the richness of the class of functions considered; if we are using a model based on a certain class of functions (e.g. linear functions) and the target function is not well modelled by this class, then the predictions will be poor. One may be tempted to increase the flexibility of the class of functions, but this runs into the danger of overfitting, where we can obtain a good fit to the training data, but perform badly when making test predictions.

The second approach appears to have a serious problem, in that surely there are an uncountably infinite set of possible functions, and how are we going to compute with this set in finite time? This is where the Gaussian process comes to our rescue. A Gaussian process is a generalization of the Gaussian probability distribution. Whereas a probability distribution describes random variables which are scalars or vectors (for multivariate distributions), a stochastic process governs the properties of functions. Leaving mathematical sophistication aside, one can loosely think of a function as a very long vector, each entry in the vector specifying the function value f(x) at a particular input x. It turns out that, although this idea is a little naïve, it is surprisingly close to what we need. Indeed, the question of how we deal computationally with these infinite dimensional objects has the most pleasant resolution imaginable: if you ask only for the properties of the function at a finite number of points, then inference in the Gaussian process will give you the same answer if you ignore the infinitely many other points, as if you would have taken them all into account! And these answers are consistent with answers to any other finite queries you may have. One of the main attractions of the Gaussian process framework is precisely that it unites a sophisticated and consistent view with computational tractability.
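To make the finite-query property concrete, here is a minimal sketch added by the editor (not from the book); it assumes NumPy and uses a squared-exponential covariance function as an illustrative choice. Asking for the function at a finite grid of inputs reduces the Gaussian process prior to an ordinary multivariate Gaussian with covariance matrix K(X, X), from which sample functions can be drawn directly.

    import numpy as np

    def se_kernel(a, b, ell=0.2, sigma_f=1.0):
        # squared-exponential covariance k(x, x') for all pairs of scalar inputs
        return sigma_f**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

    x = np.linspace(0, 1, 100)            # a finite set of query points
    K = se_kernel(x, x)                   # K(X, X): joint covariance at those points

    # f ~ N(0, K): draw four sample functions from the prior.
    # A tiny jitter keeps the Cholesky factorization numerically stable.
    rng = np.random.default_rng(1)
    L = np.linalg.cholesky(K + 1e-8 * np.eye(len(x)))
    prior_samples = L @ rng.standard_normal((len(x), 4))
    print(prior_samples.shape)            # (100, 4): four functions evaluated on the grid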

It should come as no surprise that these ideas have been around for some time, although they are perhaps not as well known as they might be. Indeed, many models that are commonly employed in both machine learning and statis- tics are in fact special cases of, or restricted kinds of Gaussian processes. In this volume, we aim to give a systematic and unified treatment of the area, showing connections to related models.

¹ These two approaches may be regarded as imposing a restriction bias and a preference bias respectively; see e.g. Mitchell [1997].


[Figure 1.1 appears here: two panels plotting f(x) against the input x on [0, 1]; panel (a) shows the prior, panel (b) the posterior.]

Figure 1.1: Panel (a) shows four samples drawn from the prior distribution. Panel (b) shows the situation after two datapoints have been observed. The mean prediction is shown as the solid line and four samples from the posterior are shown as dashed lines. In both plots the shaded region denotes twice the standard deviation at each input value x.
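The computation behind panel (b) can be sketched as follows. This is an editor-added illustration rather than the book's own code; it assumes NumPy, a squared-exponential covariance, two made-up observations, and essentially noise-free measurements. Conditioning the joint Gaussian over training and test points on the two observed values yields the posterior mean (the solid line) and the posterior variance (the shaded band).

    import numpy as np

    def se_kernel(a, b, ell=0.2, sigma_f=1.0):
        # squared-exponential covariance evaluated for all pairs of scalar inputs
        return sigma_f**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

    X_train = np.array([0.25, 0.7])        # two observed inputs (illustrative values)
    y_train = np.array([0.5, -0.6])        # observed function values (illustrative)
    X_star = np.linspace(0, 1, 100)        # test inputs

    K = se_kernel(X_train, X_train) + 1e-8 * np.eye(2)   # small jitter / noise term
    K_star = se_kernel(X_star, X_train)                  # covariance between test and training points
    K_ss = se_kernel(X_star, X_star)

    # Condition the joint Gaussian on the two observations
    alpha = np.linalg.solve(K, y_train)
    mean = K_star @ alpha                                # posterior mean (solid line)
    cov = K_ss - K_star @ np.linalg.solve(K, K_star.T)   # posterior covariance
    std = np.sqrt(np.clip(np.diag(cov), 0.0, None))      # +/- 2*std gives the shaded region
    print(mean[:3], std[:3])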

1.1 A Pictorial Introduction to Bayesian Modelling

In this section we give graphical illustrations of how the second (Bayesian) method works on some simple regression and classification examples.
