**Author:** Carl N. Morris

**Language:** en

**Publisher:** Springer

**Release date:** 2008-04-03

Nature didn’t design human beings to be statisticians, and in fact our minds are more naturally attuned to spotting the saber-toothed tiger than seeing the jungle he springs from. Yet scientific discovery in practice is often more jungle than tiger. Those of us who devote our scientific lives to the deep and satisfying subject of statistical inference usually do so in the face of a certain under-appreciation from the public, and also (though less so these days) from the wider scientific world. With this in mind, it feels very nice to be over-appreciated for a while, even at the expense of weathering a 70th birthday. (Are we certain that some terrible chronological error hasn’t been made?) Carl Morris and Rob Tibshirani, the two colleagues I’ve worked most closely with, both fit my ideal profile of the statistician as a mathematical scientist working seamlessly across wide areas of theory and application. They seem to have chosen the papers here in the same catholic spirit, and then cajoled an all-star cast of statistical savants to comment on them.

**Author:** Bradley Efron

**Language:** en

**Publisher:** Cambridge University Press

**Release date:** 2016-07-20

Take an exhilarating journey through the modern revolution in statistics with two of the ringleaders.

**Author:** Bradley Efron

**Language:** en

**Publisher:** CRC Press

**Release date:** 1994-05-15

Statistics is a subject of many uses and surprisingly few effective practitioners. The traditional road to statistical knowledge is blocked, for most, by a formidable wall of mathematics. The approach in An Introduction to the Bootstrap avoids that wall. It arms scientists and engineers, as well as statisticians, with the computational techniques they need to analyze and understand complicated data sets.
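The computational technique at the heart of the book, resampling the observed data with replacement to estimate the variability of a statistic, can be sketched in a few lines of NumPy. The sample data and the choice of 2000 replications below are illustrative assumptions, not taken from the book.

```python
# Minimal sketch of the nonparametric bootstrap: estimate the
# standard error of the sample median by resampling with replacement.
# The data values and B = 2000 replications are made-up illustrations.
import numpy as np

rng = np.random.default_rng(0)
data = np.array([4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 5.2, 4.4, 5.8, 4.6])

B = 2000
boot_medians = np.empty(B)
for b in range(B):
    # draw a bootstrap sample: same size as the data, with replacement
    resample = rng.choice(data, size=data.size, replace=True)
    boot_medians[b] = np.median(resample)

# the spread of the bootstrap replications estimates the median's standard error
se = boot_medians.std(ddof=1)
print(round(se, 3))
```

The same loop works for any statistic, which is exactly the appeal the description points to: no closed-form variance formula is needed.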

**Author:** Bradley Efron

**Language:** en

**Publisher:** Cambridge University Press

**Release date:** 2012-11-29

We live in a new age for statistical inference, where modern scientific technology such as microarrays and fMRI machines routinely produce thousands and sometimes millions of parallel data sets, each with its own estimation or testing problem. Doing thousands of problems at once is more than repeated application of classical methods. Taking an empirical Bayes approach, Bradley Efron, inventor of the bootstrap, shows how information accrues across problems in a way that combines Bayesian and frequentist ideas. Estimation, testing, and prediction blend in this framework, producing opportunities for new methodologies of increased power. New difficulties also arise, easily leading to flawed inferences. This book takes a careful look at both the promise and pitfalls of large-scale statistical inference, with particular attention to false discovery rates, the most successful of the new statistical techniques. Emphasis is on the inferential ideas underlying technical developments, illustrated using a large number of real examples.
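The false discovery rate control the description highlights is usually implemented via the Benjamini-Hochberg step-up procedure, sketched below; the p-values and the level q = 0.10 are made-up illustrations, not data from the book.

```python
# Minimal sketch of the Benjamini-Hochberg (BH) step-up procedure
# for false discovery rate control. The p-values are illustrative.
import numpy as np

def benjamini_hochberg(pvals, q=0.10):
    """Return a boolean mask of hypotheses rejected at FDR level q."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # find the largest rank k with p_(k) <= (k/m) * q
    below = ranked <= (np.arange(1, m + 1) / m) * q
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # index of largest passing rank
        reject[order[: k + 1]] = True      # reject everything up to that rank
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.32, 0.5, 0.8, 0.9]
print(benjamini_hochberg(pvals, q=0.10))
```

Note the "step-up" character: a p-value above its own threshold (0.042 here) can still be rejected because a larger p-value further down the ranking passes, which is part of what makes the procedure more powerful than per-test corrections.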

**Author:** Trevor Hastie

**Language:** en

**Publisher:** Springer Science & Business Media

**Release date:** 2013-11-11

During the past decade there has been an explosion in computation and information technology. With it have come vast amounts of data in a variety of fields such as medicine, biology, finance, and marketing. The challenge of understanding these data has led to the development of new tools in the field of statistics, and spawned new areas such as data mining, machine learning, and bioinformatics. Many of these tools have common underpinnings but are often expressed with different terminology. This book describes the important ideas in these areas in a common conceptual framework. While the approach is statistical, the emphasis is on concepts rather than mathematics. Many examples are given, with a liberal use of color graphics. It should be a valuable resource for statisticians and anyone interested in data mining in science or industry.

The book’s coverage is broad, from supervised learning (prediction) to unsupervised learning. The many topics include neural networks, support vector machines, classification trees, and boosting (the first comprehensive treatment of this topic in any book). This major new edition features many topics not covered in the original, including graphical models, random forests, ensemble methods, least angle regression and path algorithms for the lasso, non-negative matrix factorization, and spectral clustering. There is also a chapter on methods for “wide” data (p bigger than n), including multiple testing and false discovery rates.

Trevor Hastie, Robert Tibshirani, and Jerome Friedman are professors of statistics at Stanford University. They are prominent researchers in this area: Hastie and Tibshirani developed generalized additive models and wrote a popular book of that title. Hastie co-developed much of the statistical modeling software and environment in R/S-PLUS and invented principal curves and surfaces. Tibshirani proposed the lasso and is co-author of the very successful An Introduction to the Bootstrap. Friedman is the co-inventor of many data-mining tools including CART, MARS, projection pursuit, and gradient boosting.

**Author:** Stephen M. Stigler

**Language:** en

**Publisher:** Harvard University Press

**Release date:** 2016-03-07

What gives statistics its unity as a science? Stephen Stigler sets forth the seven foundational ideas of statistics—a scientific discipline related to but distinct from mathematics and computer science and one which often seems counterintuitive. His original account will fascinate the interested layperson and engage the professional statistician.

**Author:** Trevor Hastie

**Language:** en

**Publisher:** CRC Press

**Release date:** 2015-05-07

Discover new methods for dealing with high-dimensional data.

A sparse statistical model has only a small number of nonzero parameters or weights; it is therefore much easier to estimate and interpret than a dense model. Statistical Learning with Sparsity: The Lasso and Generalizations presents methods that exploit sparsity to help recover the underlying signal in a set of data.

Top experts in this rapidly evolving field, the authors describe the lasso for linear regression and a simple coordinate descent algorithm for its computation. They discuss the application of ℓ1 penalties to generalized linear models and support vector machines, cover generalized penalties such as the elastic net and group lasso, and review numerical methods for optimization. They also present statistical inference methods for fitted (lasso) models, including the bootstrap, Bayesian methods, and recently developed approaches. In addition, the book examines matrix decomposition, sparse multivariate analysis, graphical models, and compressed sensing, and concludes with a survey of theoretical results for the lasso.

In this age of big data, the number of features measured on a person or object can be large, and might be larger than the number of observations. This book shows how the sparsity assumption allows us to tackle these problems and extract useful and reproducible patterns from big datasets. Data analysts, computer scientists, and theorists will appreciate this thorough and up-to-date treatment of sparse statistical modeling.
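The "simple coordinate descent algorithm" the description mentions cycles through the coefficients, applying a soft-thresholding update to each in turn. A minimal sketch follows; the soft-thresholding update is the standard one, but the toy data, the penalty level, and the iteration count are illustrative assumptions, not code from the book.

```python
# Minimal sketch of cyclic coordinate descent for the lasso:
# minimize (1/2n)||y - X b||^2 + lam * ||b||_1.
# Toy data, lam = 0.1, and 200 sweeps are illustrative choices.
import numpy as np

def soft_threshold(z, gamma):
    """Soft-thresholding operator: the closed-form coordinate-wise update."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    """Cyclic coordinate descent; assumes the columns of X are standardized."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n          # = 1 for standardized columns
    for _ in range(n_sweeps):
        for j in range(p):
            # partial residual: remove all fitted effects except coordinate j
            r_j = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r_j / n
            beta[j] = soft_threshold(rho, lam) / col_sq[j]
    return beta

# toy sparse problem: only the first two coefficients are truly nonzero
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 5))
X = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize columns
y = X @ np.array([3.0, -2.0, 0.0, 0.0, 0.0]) + 0.1 * rng.standard_normal(100)
beta = lasso_cd(X, y, lam=0.1)
print(np.round(beta, 2))
```

The soft-thresholding step is what produces exact zeros, so the fitted model is sparse by construction rather than merely having small coefficients.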