Regression methods

Posted by J.Yeun


All of the predictive methods implemented in PROC PLS work essentially by finding linear combinations of the predictors, called factors, and using them to predict the responses linearly. The methods differ only in how the factors are derived, as explained in the following sections.

Partial Least Squares

Partial least squares (PLS) works by extracting one factor at a time. Let X=X0 be the centered and scaled matrix of predictors and Y=Y0 the centered and scaled matrix of response values. The PLS method starts with a linear combination t = X0w of the predictors, where t is called a score vector and w is its associated weight vector. The PLS method predicts both X0 and Y0 by regression on t:
\hat{X}_0 = t p',  where p' = (t't)^{-1} t' X_0
\hat{Y}_0 = t c',  where c' = (t't)^{-1} t' Y_0
The vectors p and c are called the X- and Y-loadings, respectively.

The specific linear combination t = X0w is the one that has maximum covariance t'u with some response linear combination u = Y0q. Another characterization is that the X- and Y-weights w and q are proportional to the first left and right singular vectors of the covariance matrix X0'Y0 or, equivalently, the first eigenvectors of X0'Y0Y0'X0 and Y0'X0X0'Y0, respectively.
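The extraction of the first PLS factor can be sketched numerically (an illustration on made-up data, not the PROC PLS implementation): the weights w and q are taken as the first left and right singular vectors of X0'Y0, and the loadings follow from the regression formulas above.

```python
import numpy as np

# made-up centered data standing in for X0 and Y0
rng = np.random.default_rng(0)
X0 = rng.standard_normal((20, 4))
Y0 = rng.standard_normal((20, 2))
X0 = X0 - X0.mean(axis=0)          # center (scaling omitted for brevity)
Y0 = Y0 - Y0.mean(axis=0)

# w and q: first left/right singular vectors of the covariance matrix X0'Y0
U, s, Vt = np.linalg.svd(X0.T @ Y0)
w = U[:, 0]                        # X-weight
q = Vt[0, :]                       # Y-weight

t = X0 @ w                         # X-score vector
u = Y0 @ q                         # Y-score vector
p = X0.T @ t / (t @ t)             # X-loading:  p' = (t't)^{-1} t'X0
c = Y0.T @ t / (t @ t)             # Y-loading:  c' = (t't)^{-1} t'Y0

X0_hat = np.outer(t, p)            # rank-one prediction of X0
Y0_hat = np.outer(t, c)            # rank-one prediction of Y0

# the covariance t'u equals the largest singular value of X0'Y0
print(np.isclose(t @ u, s[0]))     # → True
```

The final check confirms the maximum-covariance characterization: no other pair of unit-weight linear combinations can achieve a larger t'u.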

This accounts for how the first PLS factor is extracted. The second factor is extracted in the same way by replacing X0 and Y0 with the X- and Y-residuals from the first factor:

X_1 = X_0 - \hat{X}_0
Y_1 = Y_0 - \hat{Y}_0
These residuals are also called the deflated X and Y blocks. The process of extracting a score vector and deflating the data matrices is repeated for as many extracted factors as are desired.
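The extract-and-deflate cycle can be written as a short loop (again a NumPy illustration on synthetic data, not the PROC PLS code itself):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 4)); X = X - X.mean(axis=0)
Y = rng.standard_normal((20, 2)); Y = Y - Y.mean(axis=0)

scores = []
Xi, Yi = X.copy(), Y.copy()
for _ in range(3):                           # extract three factors
    w = np.linalg.svd(Xi.T @ Yi)[0][:, 0]    # weight for the current deflated block
    t = Xi @ w                               # score vector
    p = Xi.T @ t / (t @ t)                   # X-loading
    c = Yi.T @ t / (t @ t)                   # Y-loading
    Xi = Xi - np.outer(t, p)                 # deflate the X block
    Yi = Yi - np.outer(t, c)                 # deflate the Y block
    scores.append(t)

T = np.column_stack(scores)
# deflation makes successive score vectors mutually orthogonal
print(np.allclose(T.T @ T, np.diag(np.diag(T.T @ T))))   # → True
```

Because each deflated X block is orthogonal to the previous score, the extracted scores come out mutually orthogonal, which is what the final check verifies.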

SIMPLS

Note that each extracted PLS factor is defined in terms of different X-variables Xi. This leads to difficulties in comparing different scores, weights, and so forth. The SIMPLS method of de Jong (1993) overcomes these difficulties by computing each score ti = Xri in terms of the original (centered and scaled) predictors X. The SIMPLS X-weight vectors ri are similar to the eigenvectors of SS' = X'YY'X, but they satisfy a different orthogonality condition. The r1 vector is just the first eigenvector e1 (so that the first SIMPLS score is the same as the first PLS score), but whereas the second eigenvector maximizes
e_2' S S' e_2  subject to  e_1' e_2 = 0
the second SIMPLS weight r2 maximizes
r_2' S S' r_2  subject to  r_1' X' X r_2 = t_1' t_2 = 0
The SIMPLS scores are identical to the PLS scores for one response but slightly different for more than one response; refer to de Jong (1993) for details. The X- and Y-loadings are defined as in PLS, but since the scores are all defined in terms of X, it is easy to compute the overall model coefficients B:
\hat{Y} = \sum_i t_i c_i' = \sum_i X r_i c_i' = X B,  where B = R C'
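A sketch of the SIMPLS recursion in NumPy (following de Jong 1993, on synthetic data; variable names are mine, and this is not the PROC PLS code): each weight r_i is applied to the original X, and the cross-product matrix S is deflated instead of the data.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((20, 4)); X = X - X.mean(axis=0)
Y = rng.standard_normal((20, 2)); Y = Y - Y.mean(axis=0)

nfac = 3
n, m = X.shape
S = X.T @ Y                                  # cross-product matrix S = X'Y
R = np.zeros((m, nfac))                      # X-weights r_i
T = np.zeros((n, nfac))                      # scores t_i = X r_i
V = np.zeros((m, nfac))                      # orthonormal basis of X-loadings
for i in range(nfac):
    r = np.linalg.svd(S)[0][:, 0]            # dominant left singular vector of S
    t = X @ r                                # score in terms of the ORIGINAL X
    r, t = r / np.linalg.norm(t), t / np.linalg.norm(t)
    p = X.T @ t                              # X-loading
    v = p - V[:, :i] @ (V[:, :i].T @ p)      # orthogonalize against old loadings
    v = v / np.linalg.norm(v)
    S = S - np.outer(v, v @ S)               # deflate S, not the data matrices
    R[:, i], T[:, i], V[:, i] = r, t, v

C = Y.T @ T                                  # Y-loadings (scores are unit length)
B = R @ C.T                                  # overall coefficients: B = RC'
Y_hat = X @ B
```

Deflating S orthogonally to the previous loadings enforces exactly the orthogonality condition above, r_1'X'X r_2 = t_1't_2 = 0, and r_1 is proportional to the first PLS weight w, so the first SIMPLS and PLS scores coincide.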

Principal Components Regression

Like the SIMPLS method, principal components regression (PCR) defines all the scores in terms of the original (centered and scaled) predictors X. However, unlike both the PLS and SIMPLS methods, the PCR method chooses the X-weights/X-scores without regard to the response data. The X-scores are chosen to explain as much variation in X as possible; equivalently, the X-weights for the PCR method are the eigenvectors of the predictor covariance matrix X'X. Again, the X- and Y-loadings are defined as in PLS; but, as in SIMPLS, it is easy to compute overall model coefficients for the original (centered and scaled) responses Y in terms of the original predictors X.
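A PCR sketch in NumPy (synthetic data; an illustration, not the PROC PLS implementation): the X-weights are the eigenvectors of X'X, obtained here as right singular vectors of X, without any reference to the response.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((30, 4)); X = X - X.mean(axis=0)
y = rng.standard_normal(30);      y = y - y.mean()

# X-weights: eigenvectors of X'X = right singular vectors of X
W = np.linalg.svd(X, full_matrices=False)[2].T

k = 2
T = X @ W[:, :k]                             # first k principal-component scores
# regress y on the scores to get the reduced-dimension model
coef = np.linalg.lstsq(T, y, rcond=None)[0]
y_hat = T @ coef
# fold back into coefficients on the original (centered) predictors
B = W[:, :k] @ coef
print(np.allclose(y_hat, X @ B))             # → True
```

Note that y enters only in the final regression step; the factors themselves are chosen purely to explain variation in X, which is why the first score has the largest possible variance.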

Reduced Rank Regression

As discussed in the preceding sections, partial least squares depends on selecting factors t = Xw of the predictors and u = Yq of the responses that have maximum covariance, whereas principal components regression effectively ignores u and selects t to have maximum variance, subject to orthogonality constraints. In contrast, reduced rank regression selects u to account for as much variation in the predicted responses as possible, effectively ignoring the predictors for the purposes of factor extraction. In reduced rank regression, the Y-weights qi are the eigenvectors of the covariance matrix \hat{Y}_{LS}' \hat{Y}_{LS} of the responses predicted by ordinary least squares regression; the X-scores are the projections of the Y-scores Yqi onto the X space.
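A first-factor RRR sketch (synthetic data; illustrative only, not PROC PLS): q1 is the dominant eigenvector of \hat{Y}_{LS}' \hat{Y}_{LS}, and the X-score is the projection of the Y-score onto the column space of X.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((30, 4)); X = X - X.mean(axis=0)
Y = rng.standard_normal((30, 2)); Y = Y - Y.mean(axis=0)

H = X @ np.linalg.pinv(X)                    # projection onto the column space of X
Y_ls = H @ Y                                 # ordinary least squares predictions

# Y-weight: dominant eigenvector of Y_ls' Y_ls
evals, evecs = np.linalg.eigh(Y_ls.T @ Y_ls)
q = evecs[:, -1]                             # eigh sorts eigenvalues ascending

u = Y @ q                                    # Y-score
t = H @ u                                    # X-score: projection of u onto X space
```

By construction q maximizes the variation of the least squares predictions, q' Y_ls' Y_ls q, among unit vectors, so the factor is chosen without regard to how well it spans the predictor space.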

Relationships Between Methods

When you develop a predictive model, it is important to consider not only the explanatory power of the model for current responses, but also how well sampled the predictive functions are, since this affects how well the model can extrapolate to future observations. All of the techniques implemented in the PLS procedure work by extracting successive factors, or linear combinations of the predictors, that optimally address one or both of these two goals: explaining response variation and explaining predictor variation. In particular, principal components regression selects factors that explain as much predictor variation as possible, reduced rank regression selects factors that explain as much response variation as possible, and partial least squares balances the two objectives, seeking factors that explain both response and predictor variation.

To see the relationships between these methods, consider how each one extracts a single factor from the following artificial data set consisting of two predictors and one response:

data data;
   input x1 x2 y;
   datalines;
 3.37651  2.30716  0.75615
 0.74193 -0.88845  1.15285
 4.18747  2.17373  1.42392
 0.96097  0.57301  0.27433
-1.11161 -0.75225 -0.25410
-1.38029 -1.31343 -0.04728
 1.28153 -0.13751  1.00341
-1.39242 -2.03615  0.45518
 0.63741  0.06183  0.40699
-2.52533 -1.23726 -0.91080
 2.44277  3.61077 -0.82590
;

proc pls data=data nfac=1 method=rrr;
   title "Reduced Rank Regression";
   model y = x1 x2;
run;

proc pls data=data nfac=1 method=pcr;
   title "Principal Components Regression";
   model y = x1 x2;
run;

proc pls data=data nfac=1 method=pls;
   title "Partial Least Squares Regression";
   model y = x1 x2;
run;

The amount of model and response variation explained by the first factor for each method is shown in Figure 51.7 through Figure 51.9.

Reduced Rank Regression

The PLS Procedure

Percent Variation Accounted for by Reduced Rank Regression Factors

Number of Extracted    Model Effects        Dependent Variables
      Factors         Current    Total      Current      Total
         1            15.0661   15.0661    100.0000   100.0000

Figure 51.7: Variation Explained by First Reduced Rank Regression Factor

Principal Components Regression

The PLS Procedure

Percent Variation Accounted for by Principal Components

Number of Extracted    Model Effects        Dependent Variables
      Factors         Current    Total      Current      Total
         1            92.9996   92.9996      9.3787     9.3787

Figure 51.8: Variation Explained by First Principal Components Regression Factor

Partial Least Squares Regression

The PLS Procedure

Percent Variation Accounted for by Partial Least Squares Factors

Number of Extracted    Model Effects        Dependent Variables
      Factors         Current    Total      Current      Total
         1            88.5357   88.5357     26.5304    26.5304

Figure 51.9: Variation Explained by First Partial Least Squares Regression Factor

Notice that, while the first reduced rank regression factor explains all of the response variation, it accounts for only about 15% of the predictor variation. In contrast, the first principal components regression factor accounts for most of the predictor variation (93%) but only 9% of the response variation. The first partial least squares factor accounts for only slightly less predictor variation than principal components but about three times as much response variation.
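These qualitative relationships can be checked with a small NumPy sketch on the same 11 observations (an illustration, not PROC PLS itself; the RRR response figure below is an R²-style measure and need not equal the 100% that PROC PLS reports in Figure 51.7, but the ordering of the methods is reproduced):

```python
import numpy as np

# the 11 observations from the DATA step above
data = np.array([
    [ 3.37651,  2.30716,  0.75615], [ 0.74193, -0.88845,  1.15285],
    [ 4.18747,  2.17373,  1.42392], [ 0.96097,  0.57301,  0.27433],
    [-1.11161, -0.75225, -0.25410], [-1.38029, -1.31343, -0.04728],
    [ 1.28153, -0.13751,  1.00341], [-1.39242, -2.03615,  0.45518],
    [ 0.63741,  0.06183,  0.40699], [-2.52533, -1.23726, -0.91080],
    [ 2.44277,  3.61077, -0.82590]])
X = data[:, :2]; Y = data[:, 2:]
X = (X - X.mean(0)) / X.std(0, ddof=1)       # center and scale
Y = (Y - Y.mean(0)) / Y.std(0, ddof=1)

def pct_explained(t, M):
    """Percent of variation in M accounted for by regression on score t."""
    M_hat = np.outer(t, t @ M) / (t @ t)
    return 100.0 * (M_hat ** 2).sum() / (M ** 2).sum()

t_pcr = X @ np.linalg.svd(X)[2][0]           # max predictor variance
t_pls = X @ np.linalg.svd(X.T @ Y)[0][:, 0]  # max covariance with response
t_rrr = X @ np.linalg.pinv(X) @ Y[:, 0]      # projection of response onto X

for name, t in [("RRR", t_rrr), ("PCR", t_pcr), ("PLS", t_pls)]:
    print(f"{name}: model {pct_explained(t, X):7.2f}%,"
          f" response {pct_explained(t, Y):7.2f}%")
```

PCR comes out highest on model (predictor) variation, RRR highest on response variation, and PLS lands between the two on both measures.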

Figure 51.10 illustrates how partial least squares balances the goals of explaining response and predictor variation in this case.


Figure 51.10: Depiction of First Factors for Three Different Regression Methods

The ellipse shows the general shape of the 11 observations in the predictor space, with the contours of increasing y overlaid. Also shown are the directions of the first factor for each of the three methods. Notice that, while the predictors vary most in the x1 = x2 direction, the response changes most in the orthogonal x1 = -x2 direction. This explains why the first principal component accounts for little variation in the response and why the first reduced rank regression factor accounts for little variation in the predictors. The direction of the first partial least squares factor represents a compromise between the other two directions.

 

 

----------------------------------------------------------------------------------------

 

source : http://infoman.mokwon.ac.kr/stat/chap51/sect12.htm
