Statistical Theory

Much of modern statistical theory rests on Fisher’s likelihood principle, which continues to serve as the backbone of general statistical inference across nearly all areas of science. The past two decades have seen an explosion in advanced statistical methods, including generalized linear models (McCullagh and Nelder 1989) and new theory for the analysis of binary data (Cox and Snell 1989).
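To make the likelihood framework concrete for binary data, consider the simplest case (a worked illustration, not part of the original text): with y successes observed in n independent trials, each with success probability p, the log-likelihood and its maximum are

```latex
\ln L(p \mid y, n) = \ln\binom{n}{y} + y \ln p + (n - y)\ln(1 - p),
\qquad \hat{p} = \frac{y}{n}.
```

Generalized linear models extend this binomial likelihood by modeling p through a link function such as the logit.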
Quasi-likelihood methods (Wedderburn 1974), which model the variance-covariance matrix and allow empirical estimates of sampling variances, have been developed and are particularly important in product-multinomial models. Profile likelihood intervals are now frequently used (Cormack 1992). The use of general information-theoretic methods for the selection of a parsimonious model (Akaike 1985), including Akaike’s Information Criterion (AIC), has been a major theoretical advance and a substantial extension of general likelihood theory.
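For reference, the two quantities mentioned above can be written out explicitly (standard definitions, not spelled out in the original): for a model with maximized log-likelihood \ln L(\hat{\theta}) and K estimable parameters, AIC and the (1 - \alpha) profile likelihood interval for a single parameter \theta_i are

```latex
\mathrm{AIC} = -2\,\ln L(\hat{\theta}) + 2K,
\qquad
\Bigl\{ \theta_i : 2\bigl[\ln L(\hat{\theta}) - \ln L_p(\theta_i)\bigr] \le \chi^2_{1,\,1-\alpha} \Bigr\},
```

where L_p(\theta_i) is the profile likelihood obtained by maximizing over the remaining parameters. The model with the smallest AIC is selected as the most parsimonious.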
Several important statistical advances have resulted from the greatly increased computational power of relatively inexpensive computers, which has led to a number of computer-intensive statistical methods. The concept of repeated resampling has led to the bootstrap (Efron 1982) for empirical estimation of the variance-covariance matrix and for establishing confidence intervals. Procedures now exist to compute the exact P-value for contingency tables where the data are sparse (i.e., the expected values are small); these procedures are relevant for goodness-of-fit tests for general multinomial models.
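The resampling idea is simple enough to sketch in a few lines of code. The following is a minimal illustration of the nonparametric bootstrap for an empirical standard error and a percentile confidence interval; the sample values and the choice of the mean as the statistic are hypothetical, not taken from the original text.

```python
# Minimal sketch of the nonparametric bootstrap (Efron 1982):
# resample the data with replacement, recompute the statistic each time,
# and summarize the resulting empirical distribution.
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical observations, e.g. estimated survival rates from ten cohorts.
sample = np.array([0.62, 0.58, 0.71, 0.66, 0.60, 0.69, 0.64, 0.55, 0.73, 0.61])

def bootstrap(data, statistic, n_boot=10_000):
    """Return the statistic recomputed on n_boot resamples of the data."""
    n = len(data)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)   # indices drawn with replacement
        reps[b] = statistic(data[idx])
    return reps

reps = bootstrap(sample, np.mean)

se = reps.std(ddof=1)                      # bootstrap standard error
lo, hi = np.percentile(reps, [2.5, 97.5])  # 95% percentile interval

print(f"point estimate     = {sample.mean():.3f}")
print(f"bootstrap SE       = {se:.3f}")
print(f"95% percentile CI  = ({lo:.3f}, {hi:.3f})")
```

The same recipe applies to any statistic; replacing np.mean with a function that fits a model to the resampled data yields bootstrap variances and intervals for its parameter estimates.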
This body of theory then finds specific application in the analysis of data from capture-recapture and band recovery surveys and experiments.

More information is available at https://sites.warnercnr.colostate.edu/gwhite/analysis-marked-animal-encounter-data/.