Statistical Inference

In this chapter, we begin our discussion of statistical inference. Probability theory is primarily concerned with calculating various quantities associated with a probability model. This requires that we know what the correct probability model is. In applications, this is often not the case, and the best we can say is that the correct probability measure to use lies in a set of possible probability measures. So, in a sense, our uncertainty has increased: not only do we have the uncertainty associated with an outcome or response as described by a probability measure, but we are now also uncertain about what the probability measure is. Statistical inference is concerned with making statements, or inferences, about characteristics of the true underlying probability measure. Inference includes estimation and hypothesis testing, which are discussed in this chapter and the next one. Common to virtually all approaches to statistical inference is the concept of the statistical model for the data.

SECTION 3.1 Statistical model

In a statistical context, we observe the data $s$, but we are uncertain about the probability measure $P$ that produced it. In such a situation, we want to construct inferences about $P$ based on this data. Inferences then take the form of various statements about the true underlying probability measure from which the data were obtained. Of course, these inferences must be based on some kind of information; the statistical model makes up part of it. Another important part of the information is given by an observed outcome or response, which we refer to as the data.

A statistical model is a set $\{P_\theta : \theta \in \Theta\}$ of probability measures, one of which corresponds to the true unknown probability measure $P$ that produced the data $s$. In other words, we are asserting that there is a random mechanism generating $s$ and that the corresponding probability measure $P$ is one of the probability measures in $\{P_\theta : \theta \in \Theta\}$. The set $\{P_\theta : \theta \in \Theta\}$ corresponds to the information a statistician brings to the application about what the true probability measure is, or at least what one is willing to assume about it. The variable $\theta$ is called the parameter of the model, and the set $\Theta$ is called the parameter space. Typically, we use models where the probability measures $P_\theta$ can all be presented via probability functions or density functions $f_\theta$. It is obviously equivalent to talk about making inferences about the true parameter value rather than the true probability measure, i.e., an inference about the true value of $\theta$.

SECTION 3.2 Point estimation

Estimation can be carried out by point estimation or by interval estimation: the first gives an approximate value for the unknown parameter, while the second gives an interval that likely contains the value of the parameter. The point estimation method is based on the notion of an estimator, which is characterized by the following concepts. An estimator $\hat{\theta}_n$ is a statistic, i.e., a function of the data, used to approximate the unknown value of $\theta$. An estimator is said to be convergent (consistent) if the sequence $(\hat{\theta}_n)$ converges in probability to $\theta$; it is strongly convergent (strongly consistent) if the convergence is almost sure.

The bias of an estimator $\hat{\theta}$ is $E(\hat{\theta}) - \theta$, and the estimator is unbiased when its bias is $0$. Unbiasedness tells us that, in a sense, the sampling distribution of the estimator is centered on the true value $\theta$. The mean squared error, $\mathrm{MSE}(\hat{\theta}) = E\bigl((\hat{\theta} - \theta)^2\bigr)$, can also be expressed as
$$\mathrm{MSE}(\hat{\theta}) = \mathrm{Var}(\hat{\theta}) + \bigl(E(\hat{\theta}) - \theta\bigr)^2.$$
Note that when the bias of an estimator is $0$, the MSE is just the variance, and $\mathrm{Sd}(\hat{\theta}) = \sqrt{\mathrm{Var}(\hat{\theta})}$ is the standard deviation of $\hat{\theta}$; an estimate of this standard deviation is called the standard error of $\hat{\theta}$. As a principle of good statistical practice, whenever we quote an estimate of a quantity, we should also provide its standard error, at least when we have an unbiased estimator, as this tells us something about the accuracy of the estimate.
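To see the decomposition $\mathrm{MSE}(\hat{\theta}) = \mathrm{Var}(\hat{\theta}) + (E(\hat{\theta}) - \theta)^2$ numerically, here is a minimal simulation sketch, not part of the original text: it assumes a $N(\theta, 1)$ model with $\theta = 2$, uses the sample mean as the estimator, and checks that the empirical MSE over many replications matches the empirical variance plus squared bias; it also reports the standard error computed from a single sample. The particular model, sample size, and use of NumPy are illustrative assumptions.

```python
import numpy as np

# Sketch (assumed setup): estimate theta from n observations of a N(theta, 1)
# model with the sample mean, and check empirically that
# MSE(theta_hat) = Var(theta_hat) + (E(theta_hat) - theta)^2.
rng = np.random.default_rng(0)
theta, n, replications = 2.0, 25, 100_000

estimates = np.array([rng.normal(theta, 1.0, size=n).mean()
                      for _ in range(replications)])

mse = np.mean((estimates - theta) ** 2)        # empirical MSE
variance = estimates.var()                     # empirical Var(theta_hat)
bias_sq = (estimates.mean() - theta) ** 2      # empirical squared bias

print(mse, variance + bias_sq)   # the two sides of the decomposition agree
print(1.0 / n)                   # theoretical Var of the sample mean: sigma^2 / n

# Standard error from a single sample: an estimate of Sd(theta_hat).
sample = rng.normal(theta, 1.0, size=n)
print(sample.std(ddof=1) / np.sqrt(n))
```

Because the sample mean is unbiased here, the squared-bias term is negligible and the MSE essentially equals the variance, which matches the theoretical value $\sigma^2/n = 1/25$.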
There are several methods for determining point estimators, among them the method of moments, the maximum likelihood method, and others.

3.2.1 Method of moments

In short, the method of moments involves equating sample moments with the corresponding theoretical moments of the model and solving the resulting equations for the unknown parameters.
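As an illustration of equating sample moments with theoretical moments, here is a small sketch, not taken from the text: it assumes a Gamma model with shape and scale parameters, for which $E(X) = \text{shape}\cdot\text{scale}$ and $\mathrm{Var}(X) = \text{shape}\cdot\text{scale}^2$, equates these with the sample mean and sample variance, and solves for the two parameters. The choice of model, parameter values, and NumPy usage are assumptions made for the example.

```python
import numpy as np

# Sketch (assumed example): method of moments for a Gamma(shape, scale) model.
# Theoretical moments: E(X) = shape * scale and Var(X) = shape * scale**2.
# Equating them with the sample mean and sample variance and solving gives
#   shape_hat = mean**2 / var,   scale_hat = var / mean.
rng = np.random.default_rng(1)
true_shape, true_scale, n = 3.0, 2.0, 10_000

x = rng.gamma(true_shape, true_scale, size=n)

m1 = x.mean()        # first sample moment (sample mean)
v = x.var(ddof=1)    # second central sample moment (sample variance)

shape_hat = m1 ** 2 / v
scale_hat = v / m1

print(shape_hat, scale_hat)   # should be close to (3.0, 2.0) for large n
```

The same recipe applies to any model with as many tractable moments as unknown parameters; only the moment equations and their solution change.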