# The Hat Matrix in Least Squares

**Some simple properties of the hat matrix are important in interpreting least squares.**

**Hat matrix and least squares.**
The OLS estimator exists and is unique when the design matrix $X$ has full column rank. The normal equations, $X'X\hat{\beta} = X'y$, can be derived directly from minimizing the residual sum of squares $(y - X\beta)'(y - X\beta)$ (John Fox, in *Encyclopedia of Social Measurement*, 2005).
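As an illustrative sketch (assuming NumPy and a synthetic design matrix of my own choosing), the normal equations can be solved directly and compared against a library least-squares solver:

```python
import numpy as np

rng = np.random.default_rng(5)
X = np.column_stack([np.ones(10), rng.normal(size=10)])  # intercept + one regressor
y = 2.0 + 3.0 * X[:, 1] + rng.normal(scale=0.1, size=10)

# Solve the normal equations X'X beta = X'y directly.
beta = np.linalg.solve(X.T @ X, X.T @ y)

# The library least-squares solver minimizes ||y - X beta|| and should agree.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(beta, beta_lstsq))
```

Solving the normal equations explicitly is fine for small, well-conditioned problems; for ill-conditioned $X$, orthogonal methods (QR, SVD) are numerically preferable, as discussed below.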

So $y = Hy + My = \hat{y} + e$, where, because of (3.11) and (3.13), $\hat{y}'e = 0$, so that the vectors $\hat{y}$ and $e$ are orthogonal to each other. If you have fitted your least-squares problem with orthogonal methods such as QR factorization or the SVD, the hat matrix takes a simple form. That is, $\hat{y} = Hy$, where $H = X(X'X)^{-1}X'$.
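The decomposition above can be checked numerically. As a minimal sketch (NumPy assumed, with a toy design matrix), the hat matrix is built from the thin QR factorization $X = QR$, which gives the simple form $H = QQ'$:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(6), rng.normal(size=6)])  # toy design matrix with intercept
y = rng.normal(size=6)

# Thin QR factorization X = QR, with Q having orthonormal columns,
# so H = X (X'X)^{-1} X' simplifies to H = Q Q'.
Q, R = np.linalg.qr(X)
H = Q @ Q.T
M = np.eye(6) - H

y_hat = H @ y   # fitted values
e = M @ y       # residuals

print(np.allclose(y_hat + e, y))   # y = Hy + My
print(np.isclose(y_hat @ e, 0.0))  # y_hat' e = 0: fits and residuals are orthogonal
```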

Therefore, when performing linear regression in matrix form, the fitted values are $\hat{\mathbf{y}} = H\mathbf{y}$. Tukey coined the term *hat matrix* for $H$ because it puts the hat on $y$. A least-squares solution of the matrix equation $Ax = b$ is a vector $\hat{x}$ in $\mathbb{R}^n$ such that $\operatorname{dist}(b, A\hat{x}) \le \operatorname{dist}(b, Ax)$ for all other vectors $x$ in $\mathbb{R}^n$.
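This distance-minimizing definition can be illustrated with a short sketch (NumPy assumed, arbitrary random data): the least-squares solution attains a residual norm no larger than any perturbed candidate.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 3))
b = rng.normal(size=8)

# np.linalg.lstsq returns the least-squares solution x_hat minimizing ||b - Ax||.
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
best = np.linalg.norm(b - A @ x_hat)

# Any other candidate x gives a residual at least as large.
for _ in range(100):
    x = x_hat + rng.normal(scale=0.1, size=3)
    assert np.linalg.norm(b - A @ x) >= best
```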

This is least squares from a linear-algebra point of view, adapted from Friedberg's *Linear Algebra*. You may check the answer "Compute projection / hat matrix via QR factorization, SVD, and Cholesky factorization" for explicit forms of the hat matrix written in LaTeX. For weighted least squares, we want the corresponding orthogonal factors $Q_1$ and $Q_2$.
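As a sketch of those explicit forms (NumPy assumed, random full-rank design), the hat matrix computed from the normal equations agrees with the QR form $H = QQ'$ and the SVD form $H = UU'$:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(9, 3))

# Explicit hat matrix from the normal equations: H = X (X'X)^{-1} X'.
H_normal = X @ np.linalg.solve(X.T @ X, X.T)

# Via thin QR (X = QR): H = Q Q'.
Q, _ = np.linalg.qr(X)
H_qr = Q @ Q.T

# Via thin SVD (X = U S V'): H = U U'.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
H_svd = U @ U.T

print(np.allclose(H_normal, H_qr) and np.allclose(H_normal, H_svd))
```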

**1.3 Idempotency of the Hat Matrix.** $H$ is an $n \times n$ square matrix, and moreover it is idempotent, which can be verified as follows: $HH = X(X^TX)^{-1}X^T\,X(X^TX)^{-1}X^T = X(X^TX)^{-1}X^T = H$. Unfortunately, estimating weighted least squares with HC2 or HC3 robust variance gives different answers in Stata than under common approaches in R and Python; the discrepancy is due to differences in how the software estimates the hat matrix.
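A minimal numerical check of idempotency (NumPy assumed; the design matrix here is an arbitrary full-rank example):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(10, 3))

# H = X (X'X)^{-1} X', computed via a linear solve rather than an explicit inverse.
H = X @ np.linalg.solve(X.T @ X, X.T)

# Idempotency: applying the projection twice changes nothing.
print(np.allclose(H @ H, H))
```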

In least-squares fitting it is important to understand the influence that each data value $y_j$ has on each fitted value $\hat{y}_i$. Clearly there holds $H' = H$, $H^2 = H$, $H + M = I$, and $HM = 0$. This approach also simplifies the calculations involved in removing a data point, and it requires only simple modifications to the preferred numerical least-squares algorithms.
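The four properties above, and the influence interpretation, can be verified in a short sketch (NumPy assumed, toy data): perturbing a single observation $y_j$ by $\delta$ moves the fitted values by the $j$-th column of $H$ times $\delta$, since $\hat{y} = Hy$ is linear in $y$.

```python
import numpy as np

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(7), rng.normal(size=7)])
y = rng.normal(size=7)

H = X @ np.linalg.solve(X.T @ X, X.T)
M = np.eye(7) - H

# The four properties: H' = H, H^2 = H, H + M = I, HM = 0.
assert np.allclose(H.T, H)
assert np.allclose(H @ H, H)
assert np.allclose(H + M, np.eye(7))
assert np.allclose(H @ M, np.zeros((7, 7)))

# Influence: bumping y_j by delta moves each fitted value by H[i, j] * delta.
delta = 1.0
y_bumped = y.copy()
y_bumped[2] += delta
shift = H @ y_bumped - H @ y
assert np.allclose(shift, H[:, 2] * delta)
```

The diagonal entries $H_{ii}$ are the leverages, which quantify how strongly each observation pulls its own fitted value; they are the quantities that enter HC2/HC3 robust variance estimators.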