Least-squares calculations, Normal equations – HP 15c User Manual

Page 93


Section 4: Using Matrix Operations


Note that rᵀE was scaled by 10⁷ so that each row of E and A has roughly the same norm as
every other. Using this new system, the HP-15C calculates the solution

x = (1999.999980, 1999.999980, 1999.999980, 1999.999980, 2000.000080)ᵀ.
This solution differs from the earlier solution and is correct to 10 digits.

Sometimes the elements of a nearly singular matrix E are calculated using a formula to
which roundoff contributes so much error that the calculated inverse E⁻¹ must be wrong even

when it is calculated using exact arithmetic. Preconditioning is valuable in this case only if it
is applied to the formula in such a way that the modified row of A is calculated accurately. In
other words, you must change the formula exactly into a new and better formula by the
preconditioning process if you are to gain any benefit.
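The row scaling used above is mechanical enough to sketch in code. The Python fragment below is illustrative only (the HP-15C itself is programmed in keystrokes, and the matrix here is invented): it rescales each row by the power of 10 that brings its Euclidean norm closest to 1, so that no row dominates the others.

```python
import math

# Invented example matrix: the first row is about 10^7 times smaller
# in norm than the other two.
A = [[2e-7, 3e-7, 1e-7],
     [1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0]]

def scale_rows(mat):
    """Scale each row by a power of 10 so all row norms are near 1."""
    scaled = []
    for row in mat:
        norm = math.sqrt(sum(x * x for x in row))
        # Power of 10 that brings this row's norm close to 1.
        p = -round(math.log10(norm)) if norm else 0
        scaled.append([x * 10**p for x in row])
    return scaled

B = scale_rows(A)
```

On the decimal HP-15C a power-of-ten scaling is exact and so introduces no rounding error; in binary floating point, scaling by a power of two plays the same error-free role.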

Least-Squares Calculations

Matrix operations are frequently used in least-squares calculations. The typical least-squares
problem involves an n × p matrix X of observed data and a vector y of n observations from
which you must find a vector b with p coefficients that minimizes

‖r‖F² = Σᵢ₌₁ⁿ rᵢ²,

where r = y − Xb is the residual vector.
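In modern terms the quantity being minimized is simply a sum of squared residuals. A minimal Python sketch (the data X, y and the trial coefficient vector b are invented for illustration):

```python
# Sum of squared residuals ||r||F^2 with r = y - Xb,
# for a small invented data set (n = 3 observations, p = 2 coefficients).
X = [[1.0, 0.0],
     [1.0, 1.0],
     [1.0, 2.0]]     # n x p matrix of observed data
y = [1.0, 2.0, 2.0]  # n observations
b = [1.0, 0.5]       # trial coefficients (p entries)

# r_i = y_i - sum_j X[i][j] * b[j]
r = [yi - sum(Xij * bj for Xij, bj in zip(row, b))
     for row, yi in zip(X, y)]

# ||r||F^2 = sum of r_i^2
rss = sum(ri * ri for ri in r)
print(rss)  # -> 0.25
```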

Normal Equations

From the expression above,

‖r‖F² = (y − Xb)ᵀ(y − Xb) = yᵀy − 2bᵀXᵀy + bᵀXᵀXb.

Solving the least-squares problem is equivalent to finding a solution b to the normal
equations

XᵀXb = Xᵀy.
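For small p the normal equations can be formed and solved directly. A Python sketch with p = 2, solving XᵀXb = Xᵀy by Cramer's rule (the data are invented; they lie exactly on the line y = 1 + t, so the coefficients come out as [1, 1]):

```python
# Solve the normal equations X^T X b = X^T y for p = 2 coefficients.
X = [[1.0, 0.0],
     [1.0, 1.0],
     [1.0, 2.0]]     # column of ones plus a "time" column t = 0, 1, 2
y = [1.0, 2.0, 3.0]  # observations on the line y = 1 + t

# Form the 2x2 matrix A = X^T X and the vector c = X^T y.
A = [[sum(row[i] * row[j] for row in X) for j in range(2)]
     for i in range(2)]
c = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(2)]

# Cramer's rule for the 2x2 system A b = c.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
b = [(c[0] * A[1][1] - c[1] * A[0][1]) / det,
     (A[0][0] * c[1] - A[1][0] * c[0]) / det]
print(b)  # -> [1.0, 1.0]
```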

However, the normal equations are very sensitive to rounding errors. (Orthogonal
factorization, discussed on page 95, is relatively insensitive to rounding errors.)
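One way to see the sensitivity: forming XᵀX roughly squares the condition number of X, so digits are lost before any solving begins. The Python sketch below (contrived, nearly collinear data) computes det(XᵀX) both exactly, with rational arithmetic, and in double precision. Exactly, the determinant is 6e² = 6 × 10⁻¹⁶; the floating-point value typically retains almost no correct digits, because the e² information is swamped when the entries of XᵀX are rounded.

```python
from fractions import Fraction

# Columns of X differ only at the 8th decimal place.
e = Fraction(1, 10**8)
X = [[1, 1], [1, 1 + e], [1, 1 + 2 * e]]

def gram_det(mat):
    """det(X^T X) for a two-column matrix X."""
    a = sum(r[0] * r[0] for r in mat)
    b = sum(r[0] * r[1] for r in mat)
    d = sum(r[1] * r[1] for r in mat)
    return a * d - b * b

exact = gram_det(X)  # exact rational arithmetic: 6e^2
approx = gram_det([[float(x) for x in r] for r in X])  # double precision
print(exact, approx)
```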

The weighted least-squares problem is a generalization of the ordinary least-squares
problem. In it you seek to minimize
