Daniel W. VanArsdale
This is a selection of basic facts, theorems and methods that utilize homogeneous coordinates. Some of these results have been previously published but are rarely seen in textbooks. Others come from my article "Homogeneous Transformation Matrices for Computer Graphics," Computers & Graphics, vol. 18, no. 2, 1994. The presentation is at an undergraduate level, but the reader should have had some exposure to analytic projective geometry and a course in linear algebra. However, beware: many facts change when you go from vector spaces and Cartesian coordinates to homogeneous coordinates. For example, "flats," unlike subspaces, need not contain the origin, and a point is a flat. And an eigenvector becomes an invariant point, and its eigenvalue is not fixed.
This is a companion site to Homogeneous Transformation Matrices (HTM), a cookbook presentation of matrices that effect familiar geometric transformations. Many of the conventions and notation used here are introduced in HTM, so you should link there first and read sections I and II. Included here are methods sufficient to derive all the matrix formulas in HTM (referred to as M1, M2, . . ., M16). The references and links presented at the end here are identical to the ones in HTM.
1. Some Basic Facts
3. Oriented Content
4. Rank of the Product of Two Matrices
5. Affine Collineations
6. Null Space
8. Invariant Flats
9. Axis-Center Form
10. Singular Transformations
11. Three Dependent Points
12. A Perspective Matrix
13. References and Links
1. Some Basic Facts
F1. If points P1, . . ., Pn are independent in Rn, any point X has a representation of the form X = c1P1 + . . . + cnPn where the ci are not all zero. With P = [P1; . . .; Pn] and C = [c1, . . ., cn] this is X = CP, so C = XP-1.
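F1 can be checked numerically. The sketch below uses numpy, with points as row vectors and the homogeneous coordinate written first (as in HTM); the particular coordinates are assumed for illustration only.

```python
import numpy as np

# F1: with P invertible, any point X has coefficients C = X P^{-1},
# so that X = c1 P1 + c2 P2 + c3 P3.
P = np.array([[1.0, 0.0, 0.0],   # P1 = the origin (0, 0)
              [1.0, 1.0, 0.0],   # P2 = the point (1, 0)
              [1.0, 0.0, 1.0]])  # P3 = the point (0, 1)
X = np.array([1.0, 2.0, 3.0])    # the point (2, 3)

C = X @ np.linalg.inv(P)         # the coefficients c1, c2, c3
assert np.allclose(C @ P, X)     # X = CP
print(C)                         # [-4.  2.  3.]
```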
F2. If the n x s (s <= n) matrix h has independent columns, there exists an s x n matrix H, the left inverse of h, such that Hh = Is, where Is is the s x s identity matrix. Similarly, the r x n (r <= n) matrix P with independent rows has a right inverse. [Shilov, p. 98]
F3. Let P = [P1; . . .; Pr] be an independent r x n point matrix. (1) The single point Q is contained in range(P) if and only if there exist constants k1, . . ., kr such that Q = k1P1 + . . . + krPr, which is Q = KP, with K the 1 x r matrix (k1, . . ., kr). (2) For an r' x n point matrix Q, range(Q) c range(P) if and only if Q = KP for some r' x r matrix K.

F3D. Let h = [h1, . . ., hs] be an independent n x s hyperplane matrix. (1) The null space of h, null(h), is contained in the null space of the single hyperplane g if and only if there exist constants c1, . . ., cs such that g = c1h1 + . . . + cshs, which is g = hC, with C the s x 1 matrix C = (c1; . . .; cs). (2) For an n x s' hyperplane matrix g, null(h) c null(g) if and only if g = hC for some s x s' matrix C.
F5. In Rn an ideal point U is normalized if UUt = 1. If ideal points U and V are normalized then UVt = cos x, where x is the angle between the two lines connecting any ordinary point to U and to V. The "angle between U and V" is defined as x. If UVt = 0 then U and V are orthogonal.
Duality in plane projective geometry is described as follows by Coxeter:
"The principle of duality (in two dimensions) asserts that every definition remains significant, and every theorem remains true, when we interchange point and line, and join and intersection. To establish this principle we merely have to observe that the axioms imply their own duals." (Coxeter, p. 15)

In the analytic projective geometry of arbitrary dimension (rank n), duality permits the interchange of point and hyperplane, and join and intersection. A flat of rank r (as a range of points) is dual to a flat of rank n - r (as a bundle of intersecting hyperplanes). For perspective transformations, an axis of rank r is dual to a center of rank n - r.
Say P and Q are distinct points, not both on hyperplane h. Then the point X = (Qh)P - (Ph)Q is the intersection of the line through P and Q with h.
This follows since by its form X is on the line through points P and Q, and Xh = 0. The dual is:
Say p and q are distinct hyperplanes, not both containing point H. Then the hyperplane x = (qH)p - (pH)q is the join of the intersection of p and q with H.
Facts F3 and F3D are dual, as are Theorems 5 and 5D.
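The intersection formula X = (Qh)P - (Ph)Q is easy to verify numerically. A sketch with assumed coordinates (homogeneous coordinate first, points as rows, hyperplanes as columns):

```python
import numpy as np

# Intersection of the line through P and Q with hyperplane h.
# Here h is the line x = 2 in the plane, i.e. -2 + x = 0,
# written as the column h = (-2; 1; 0).
P = np.array([1.0, 0.0, 0.0])   # the point (0, 0)
Q = np.array([1.0, 4.0, 2.0])   # the point (4, 2)
h = np.array([-2.0, 1.0, 0.0])

X = (Q @ h) * P - (P @ h) * Q   # where line PQ meets h
assert np.isclose(X @ h, 0.0)   # X lies on h
X = X / X[0]                    # normalize the homogeneous coordinate
print(X)                        # the point (2, 1)
```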
3. Oriented Content
A student's first encounter with homogeneous coordinates is often the following:
Theorem. In the Cartesian plane, distinct points P1= (x1, y1), P2= (x2, y2), P3= (x3, y3) are collinear if and only if det [1, x1, y1; 1, x2, y2; 1, x3, y3] = 0.
In the 3 x 3 matrix above the rows (1, x1, y1), (1, x2, y2), (1, x3, y3) are the normalized homogeneous coordinates of points P1, P2, P3. This interpretation is not developed in the texts I have examined, nor do linear algebra textbooks give the following:
Theorem 1: In Rn the oriented content of the simplex formed by the n ordered independent points P1, . . ., Pn, each ordinary and normalized, is [1/(n-1)!] det [P1; . . .; Pn]. [Stolfi, p.164]
Orientation here matches what we expect in one, two and three dimensions. For example, in the plane the determinant will be positive if P1, P2 and P3 form a counterclockwise triangle. When the points are dependent (collinear in R3) the determinant is zero.
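Theorem 1 can be sketched in a few lines of numpy; the triangle coordinates are assumed for illustration, with the homogeneous coordinate first as above.

```python
import numpy as np
from math import factorial

# Theorem 1: oriented content of a simplex from the determinant of the
# normalized homogeneous coordinates of its n vertices.
def oriented_content(points):
    """points: n normalized ordinary points of R^n, one per row."""
    n = points.shape[0]
    return np.linalg.det(points) / factorial(n - 1)

# Triangle (0,0), (1,0), (0,1) listed counterclockwise.
ccw = np.array([[1., 0., 0.],
                [1., 1., 0.],
                [1., 0., 1.]])
area = oriented_content(ccw)
print(area)                         # 0.5, positive for counterclockwise
print(oriented_content(ccw[::-1]))  # -0.5, reversed orientation
```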
4. The Rank of the Product of Two Matrices
In matrix theory the rank of a matrix A is the maximum number of independent rows (or columns) that can be selected. This is equal to the dimension of the subspace spanned by the rows of A. The following theorem appears in every basic textbook on matrices (e.g. Ayres, p. 43):
Theorem: Let A be an r x n matrix and B be an n x s matrix.
(1) rank (AB) <= rank (A)
(2) rank (AB) <= rank (B).
Interpreting the components of the matrices as homogeneous coordinates we have the following more specific theorem (VanArsdale).
Theorem 2: Let A be an r x n matrix and B be an n x s matrix. Then
(1) rank (AB) = rank (A) - rank [range(A) ^ null(B)]
(2) rank (AB) = rank (B) - rank [range(Bt) ^ null(At)].
To prove (1), using concepts from linear algebra, extend a basis Y1, . . ., Yj of range(A) ^ null(B) to a basis Y1, . . ., Yj, Yj+1, . . ., Yj+k of range(A). Then Yj+1B, . . ., Yj+kB is a basis for range(AB). Thus rank[range(A)] = rank(A) = rank[range(A) ^ null(B)] + rank(AB) as required. The second part of the theorem follows after transposing the first. Here Bt is the transpose of B.
Theorem 2 gives a geometric interpretation of how much less rank (AB) is than rank (A), and we see at once that if the flats represented by A and B are disjoint the two ranks are equal.
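Theorem 2(1) can be checked numerically. In the sketch below (numpy, assumed matrices), null(B) means the row vectors X with XB = 0, and the rank of the intersection flat is computed from the dimension formula rank(S1) + rank(S2) - rank(S1 v S2) applied to stacked bases.

```python
import numpy as np

def left_null_basis(B, tol=1e-10):
    """Rows spanning {X : X @ B = 0} (the left null space of B)."""
    u, s, vt = np.linalg.svd(B.T)
    r = int((s > tol).sum())
    return vt[r:]

A = np.array([[1., 0., 0.],
              [0., 1., 0.]])
B = np.array([[0., 0., 0.],
              [0., 1., 0.],
              [0., 0., 1.]])

N = left_null_basis(B)   # here null(B) is spanned by (1, 0, 0)
dim_meet = (np.linalg.matrix_rank(A) + np.linalg.matrix_rank(N)
            - np.linalg.matrix_rank(np.vstack([A, N])))
# Theorem 2(1): rank(AB) = rank(A) - rank[range(A) ^ null(B)]
assert np.linalg.matrix_rank(A @ B) == np.linalg.matrix_rank(A) - dim_meet
```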
5. Affine Collineations
An affine collineation (affinity) maps ideal points to ideal points. Geometrically this means parallel lines are mapped to parallel lines. Most familiar linear transformations are affine, for example rotation, dilation, translation, reflection and shear. A nonsingular matrix T represents an affinity if and only if its first column equals [k; 0; . . .; 0], k /= 0. If k = 1 T is normalized and T maps ordinary normalized points to ordinary normalized points.
An affine collineation on Rn is determined by its mapping of n independent ordinary points (Snapper & Troyer, p. 93). But it is useful to allow ideal points as follows.
Theorem 3: Let P1, . . ., Pn and Q1, . . ., Qn be two sets of points in Rn such that: (1) for each i, Pi and Qi have the same homogeneous coordinate, (2) the Pi and the Qi are independent (thus at least one pair, say Pk and Qk, are ordinary). Let P = [P1; . . .; Pn] and Q = [Q1; . . .; Qn]. Then the matrix T = P-1Q is a normalized representation of the unique affine collineation, f, for which: (a) (Pi)f = Qi for all Pi and Qi ordinary, and (b) (Pk + Pj)f = Qk + Qj for all Pj and Qj ideal.
Proof: The mapping conditions on f specify a unique affinity since the Pi and Pk + Pj constitute n independent ordinary points, likewise their images Qi and Qk + Qj. And matrix T effects these mappings since PT = Q (see F4). Finally we need to show that T is affine and normalized. Since the first (homogeneous) columns of P and Q are equal, Pw = Qw, so w = P-1Qw = Tw, which says the first column of T is w = (1; 0; . . .; 0). (Stolfi, p. 158, for all ordinary points)
The conditions in Theorem 3 mean that if we use ideal points Pj and Pjf = Qj to specify an affine transformation we must choose representations of Pj and Qj that work when added to some ordinary point. Usually the ordinary point Pk in the theorem will be an invariant point on the axis of f. For a simple example, the reflection f about the x-axis in R3 maps (0,0,1)f = (0,0,1), (1,0,1)f = (1,0,1) and (0,1,0)f = (0, -1, 0) = Qj. Using these coordinates in Theorem 3 will give a correct matrix for f, T = [1,0,0; 0,-1,0; 0,0,1]. But if for (0,1,0)f we had used Qj = (0,1,0), which is projectively equivalent to point (0,-1,0), an incorrect matrix would result.
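The reflection example above is small enough to run. A numpy sketch, using exactly the coordinates given in the text, confirms both the correct matrix and the warning about the projectively equivalent but unusable representation of Qj:

```python
import numpy as np

# Theorem 3: T = P^{-1} Q for the reflection example in the text.
P = np.array([[0., 0., 1.],
              [1., 0., 1.],
              [0., 1., 0.]])
Q = np.array([[0., 0., 1.],
              [1., 0., 1.],
              [0., -1., 0.]])
T = np.linalg.inv(P) @ Q
assert np.allclose(T, np.diag([1., -1., 1.]))  # T = [1,0,0; 0,-1,0; 0,0,1]

# Using the projectively equivalent Qj = (0, 1, 0) instead gives a
# different (incorrect) matrix, as the text warns.
Q_bad = Q.copy()
Q_bad[2] = [0., 1., 0.]
assert not np.allclose(np.linalg.inv(P) @ Q_bad, T)
```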
Let the rank n - 2 ordinary flat S be represented by the point matrix P = [P1; . . .; Pn-2] and by hyperplane matrix g = [g1, g2] = Ph, g oriented and orthonormalized. Then the rotation f by angle b about axis S has the representation T = [P; gN]-1 [P; RgN] where R = [cos b, sin b; -sin b, cos b] (M15 in HTM). This can be verified by noting (see F4): [P; gN]T = [P; RgN], thus PiT = Pi for i = 1, . . ., n-2 and
[g1N; g2N] T = gNT = RgN = [(cos b) g1N+ (sin b) g2N; (-sin b) g1N + (cos b) g2N]
Thus T has mapped n - 2 + 2 = n independent points correctly, and since f is affine, by Theorem 3 this ensures T represents f. If one lacks confidence about rotating the ideal points g1N and g2N, their coordinates can both be added to an ordinary invariant point on axis S to return to the familiar. Theorem 3 can also be used to derive equations M3, M5, M6, and M9 in HTM.
6. Null Space
In HTM we introduced the notation Ph for the independent hyperplane matrix that represents the same flat (by intersections) that a given point matrix P represents (by unions). Procedure B in HTM provides a method to calculate Ph by elementary column operations. It is also useful to be able to reverse this process and construct an independent point matrix, Q, that represents the same flat that a given hyperplane matrix g represents. This is simply a point representation of the null space of g, Q = null(g), or Q = gP. Procedure B can be used to calculate gP as follows.
Procedure D: For a hyperplane matrix g, find an independent point matrix Q = gP such that range(Q) = null(g).
Step 1. Transpose g (getting gT).
Step 2. Use Procedure B to find (gT)h.
Step 3. Transpose (gT)h to find Q = gP.
Step 4. (Optional) Normalize the rows of Q.
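In floating point, the singular value decomposition can stand in for the elementary column operations of Procedure B. The sketch below computes gP directly as a basis of null(g); the example line and the homogeneous-first convention are assumed for illustration.

```python
import numpy as np

def hyperplanes_to_points(g, tol=1e-10):
    """Procedure D sketch: independent point matrix Q with range(Q) = null(g),
    i.e. rows Q with Q @ g = 0, via the SVD instead of column operations."""
    u, s, vt = np.linalg.svd(g.T)
    r = int((s > tol).sum())
    return vt[r:]

# Example in R^3: the single line x = 1, i.e. -1 + x = 0, as a column.
g = np.array([[-1.], [1.], [0.]])
Q = hyperplanes_to_points(g)
assert np.allclose(Q @ g, 0.0)        # every row of Q lies on g
assert np.linalg.matrix_rank(Q) == 2  # a line in the plane is a rank 2 flat
```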
Homogeneous coordinates are at their best in finding the intersection of two flats. To perform the calculations involved it is necessary to make two simple modifications to the two procedures (B and D) that convert point representations to hyperplane representations (Ph), and vice versa (hP).
Procedure B (and D) modifications:
(1) Allow the input matrix P (or h) to have dependent rows (or columns).
(2) Allow the input and output of "improper" matrices: (i) a point matrix with no rows, representing the null flat, and (ii) a hyperplane matrix with no columns, representing all of Rn.

Now to find the intersection of any two flats S1 and S2, simply represent them by hyperplane matrices g1 and g2 and form the compound matrix g = [g1, g2] by adjoining columns. Hyperplane matrix g then represents S1 ^ S2. Likely one will want to convert g to an independent point representation gP by using Procedure D. Formally, with Q1 and Q2 point matrices (here meaning the range of these matrices):

Theorem 4. Q1 ^ Q2 = [Q1h, Q2h]P
This works in a space of any dimension, for flats of any dimension, for parallel flats, and for intersections of any dimension, including the null flat for disjoint inputs. Coding requirements are simple: just reduction to a standard form by elementary column operations, with the rank n as a variable. If the intersection is a single point, that is all you will get, since as specified the procedure for hP produces a matrix with independent rows. This obvious and extremely general method has probably been known since long before 1968 (Hodge & Pedoe, p. 189), but it gets little or no attention in basic texts and packaged programs.
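Theorem 4 can be sketched directly. As in the Procedure D sketch above, the SVD stands in for the column-operation procedures; the two example planes in R4 are assumed coordinates, homogeneous coordinate first.

```python
import numpy as np

def points_to_hyperplanes(P, tol=1e-10):
    """P^h: independent hyperplane matrix h (columns) with P @ h = 0."""
    u, s, vt = np.linalg.svd(P)
    r = int((s > tol).sum())
    return vt[r:].T

def hyperplanes_to_points(g, tol=1e-10):
    """g^P: independent point matrix Q (rows) with Q @ g = 0."""
    return points_to_hyperplanes(g.T).T

# Two planes in R^4 (coordinates w, x, y, z): z = 0 and y = 0.
Q1 = np.array([[1., 0., 0., 0.], [0., 1., 0., 0.], [0., 0., 1., 0.]])
Q2 = np.array([[1., 0., 0., 0.], [0., 1., 0., 0.], [0., 0., 0., 1.]])

# Theorem 4: Q1 ^ Q2 = [Q1h, Q2h]^P.
g = np.hstack([points_to_hyperplanes(Q1), points_to_hyperplanes(Q2)])
meet = hyperplanes_to_points(g)
assert np.linalg.matrix_rank(meet) == 2       # the x-axis, a rank 2 flat
assert np.allclose(meet @ g, 0.0)
```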
The capability to find intersections and unions makes the following two constructions in three-dimensional Euclidean space (R4) almost a matter of definition.
In R4, given two skew lines l1 and l2 and a point X on neither line, find the line through X that meets both l1 and l2. Answer: [(l1 v X) ^ l2] v X = (l1 v X) ^ (l2 v X).
In R4, given two ordinary skew lines l1 and l2, find the line that meets both l1 and l2 at right angles. Answer: Let X = [(l1)N ^ (l2)N] in the above construction. This finds the shortest line segment connecting the two skew lines.
8. Invariant Flats
Say a projective transformation f on Rn is represented by the homogeneous matrix T. A point P is invariant under f if PT = kP, k any nonzero constant. The scalar k is called the eigenvalue of P with respect to T. In contrast to nonhomogeneous matrices, if homogeneous matrix T is multiplied by a nonzero constant c it is unchanged as a transformation matrix, but the eigenvalues of all invariant points under T are multiplied by c. Thus T can be "scaled" so the eigenvalue of P is 1. If for a point P, PT = 0, rather than stating "f maps P to a vector of all zeros" it is more accurate to state "P is not in the domain of f." Or we may regard P as mapped by f to the null flat. For homogeneous matrices we do not regard 0 as an eigenvalue.
A flat, considered as a set of points S, is invariant under a mapping f if f is a bijection on S. If each point of S is itself invariant the flat is said to be point-wise invariant (P-invariant). All points on a P-invariant flat have the same eigenvalue, and conversely all invariant points with equal eigenvalues under a nonsingular projective transformation constitute a flat. An axis of f is a P-invariant proper flat that is not strictly contained in a P-invariant flat. I call the common eigenvalue of points on an axis the P-eigenvalue of the axis, to distinguish this number from the eigenvalue of hyperplanes.
9. Axis-Center Form

Most familiar geometric transformations have an axis. The next theorem, which is followed by its dual, is very useful for constructing homogeneous matrices representing such transformations.
Theorem 5. Say a projective transformation f on Rn has an axis S of rank r and h is any independent n x s hyperplane matrix representation of S, s = n - r. Then f has a matrix representation of the form
T = I + hC
where C is some independent point matrix. If f is a collineation, C represents a center of f. (VanArsdale)
Proof: Since the common eigenvalue of points on axis S is nonzero, any matrix representation T of f can be scaled so this P-eigenvalue is 1. Then any invariant point Q with eigenvalue 1 must be on the axis S represented by h, and we have Q(T - I) = 0 if and only if Qh = 0. Interpreting T - I as a hyperplane matrix we get (T - I)P = hP and so by F3D, T - I = hC, C some s x n point matrix. Now rank(C) >= rank(hC) = rank(T - I) = rank(h) = s, and since C has s rows these must be independent.
It remains to show that if f is a collineation, range(C) is a center of f. Every hyperplane g of Rn is in the domain of a collineation f, and those that contain C are invariant under f with eigenvalue 1, since Tg = (I + hC)g = g using Cg = 0. And any invariant hyperplane g with eigenvalue 1 contains range(C), for Tg = [I + hC]g = g implies hCg = 0, which requires Cg = 0 since h has a left inverse. This shows there is no h-invariant proper subflat of range(C), hence C represents a center of f.
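A minimal numerical illustration of Theorem 5, with assumed coordinates (homogeneous coordinate first, R3): taking the axis h to be the line y = 0 and the center C an ideal point on h gives an elation, here a shear parallel to the x-axis.

```python
import numpy as np

# Theorem 5: T = I + hC with axis h and center C.
h = np.array([[0.], [0.], [1.]])   # the line y = 0: points (1, x, 0)
C = np.array([[0., 1., 0.]])       # ideal point on h, so f is an elation
T = np.eye(3) + h @ C              # a shear: (x, y) -> (x + y, y)

# The axis is point-wise invariant.
for P in (np.array([1., 0., 0.]), np.array([1., 5., 0.])):
    assert np.allclose(P @ T, P)

print(np.array([1., 2., 3.]) @ T)  # the point (2, 3) maps to (5, 3)
```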
Theorem 5D. Say a projective transformation f on Rn has a center S of rank s and C is any independent s x n point matrix representation of S. Then f has a matrix representation of the form
T = I + hC
where h is some independent hyperplane matrix. If f is a collineation, h represents an axis of f.
Proof. Since the common eigenvalue of hyperplanes containing center S is nonzero, any matrix representation T of f can be scaled so this h-eigenvalue is 1. Then any invariant hyperplane g with eigenvalue 1 must contain C since C is a center, and we have (T - I)g = 0 if and only if Cg = 0. Interpreting T - I as a point matrix we get range(T - I) = range(C) and so by F3, T - I = hC, h some n x s hyperplane matrix. Now rank(h) >= rank(hC) = rank(T - I) = rank(C) = s, and since h has s columns these must be independent.
It remains to show that if f is a collineation, hP is an axis of f. Every point P of Rn is in the domain of a collineation f, and those in hP are invariant under f with eigenvalue 1 since PT = P(I + hC) = P using Ph = 0. And any invariant point P with eigenvalue 1 lies in hP, for PT = P[I + hC] = P implies PhC = 0, which requires Ph = 0 since C has a right inverse. This shows there is no P-invariant flat that properly contains hP, hence h represents an axis of f.

Theorem 5 and its dual 5D show that axes and centers of any collineation on Rn come in pairs with complementary ranks and equal eigenvalues. An axis and the center corresponding to this axis need not be disjoint.
If f has an axis represented by hyperplane matrix h and a center represented by point matrix C, Theorem 5 lets us write its matrix representation in the form

(9A) T = I + hMC

and further conditions on f may allow us to solve for matrix M. When rank(h) = rank(C) = 1, the "matrix" M is a scalar. This "axis-center" method can be used to derive matrices M1, M2, M7, M8, M10, M12, M13 and M14 in HTM. In the next section we illustrate its use in deriving a matrix for singular projection (M1).
10. Singular Transformations

If a projective transformation f on Rn maps at least one point to the null flat (i.e. there is at least one point not in the domain of f), then we call f singular, and it is represented by a singular n x n matrix T.
For a projection with axis h and center C, the center is mapped to the null flat. Requiring CT = C(I + hMC) = C + (Ch)MC = 0 in (9A), and using the right inverse of C, gives M = -(Ch)-1. Thus we have

T = I - h(Ch)-1C

for a matrix representation of a general projection. When the axis h is a single hyperplane this reduces to T = I - hC/Ch, an attractive formula that I have not been able to find published prior to 1994 (VanArsdale). More complicated equivalent expressions do appear; Hodge and Pedoe give one using Grassmann coordinates (p. 309).
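The single-hyperplane case T = I - hC/Ch is easy to exercise. A numpy sketch with assumed coordinates (homogeneous coordinate first, R3): central projection from the origin onto the line y = 1.

```python
import numpy as np

# Projection T = I - h C / (C h): axis h, center C.
h = np.array([[-1.], [0.], [1.]])   # the line -1 + y = 0, i.e. y = 1
C = np.array([[1., 0., 0.]])        # the center: the origin
T = np.eye(3) - (h @ C) / (C @ h)

assert np.allclose(C @ T, 0.0)      # the center maps to the null flat
assert np.allclose(np.array([1., 3., 1.]) @ T,
                   np.array([1., 3., 1.]))  # points on the axis are fixed

X = np.array([1., 2., 4.]) @ T      # project the point (2, 4)
print(X / X[0])                     # the point (0.5, 1), on the line y = 1
```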
We have, in effect, defined a projection as a projective transformation with complementary null space and axis. Other equivalent definitions appear in the literature [Hodge and Pedoe, p. 283; Halmos, p. 73].
11. Three Dependent Points
In linear algebra vectors V1, . . ., Vr are dependent if constants c1, . . ., cr exist, not all zero, such that c1V1 + . . . + crVr = 0. To actually find the constants ci requires the solution of a system of homogeneous linear equations, which is easy enough to do. But there is usually no convenient explicit expression for these constants, nor a geometrical interpretation of them. In projective space, Rn, we have for homogeneous coordinates:
Theorem 6. If three points P1, P2, P3 are dependent and h1, h2 are any two hyperplanes then
(d21d32 - d22d31)P1 + (d31d12 - d32d11)P2 + (d11d22 - d12d21)P3 = 0
where dij = Pihj.
To prove this we can write the lengthy dependence relation as the symbolic determinant
D = det [P1, P2, P3; P1h1, P2h1, P3h1; P1h2, P2h2, P3h2 ].
Here the first row, [P1, P2, P3], consists of three points each with n homogeneous coordinates, so D can not be evaluated as a number. But when we consider each of the coordinates in turn D becomes a legitimate determinant. And each of these n determinants will be zero. For since the points are dependent, one is a linear combination of the others, say P3 = c1P1 + c2P2. Substituting this for P3 everywhere in D will produce a determinant with dependent columns for each of the n coordinates of the points.
One can write corollaries, duals and extensions of Theorem 6.
Corollary 6A. If normalized ordinary points P1, P2, P3 are dependent and h is any hyperplane then (d2-d3)P1 + (d3-d1)P2 + (d1-d2)P3 = 0 where di = Pih.
This follows from Theorem 6 by letting h1 = h and h2 = w, the hyperplane at infinity, for then Pih2 = di2 = 1. Here the di have a geometrical interpretation for h ordinary and normalized: di = Pih is the directed distance from hyperplane h to point Pi.
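Corollary 6A can be checked directly. A numpy sketch with assumed coordinates (homogeneous coordinate first, R3): three collinear normalized ordinary points on the line y = x, and a normalized hyperplane h.

```python
import numpy as np

# Corollary 6A: (d2-d3)P1 + (d3-d1)P2 + (d1-d2)P3 = 0 for collinear
# normalized ordinary points, with di = Pi h.
P1 = np.array([1., 0., 0.])   # (0, 0)
P2 = np.array([1., 1., 1.])   # (1, 1)
P3 = np.array([1., 2., 2.])   # (2, 2), all on the line y = x
h = np.array([-1., 1., 0.])   # the normalized line x = 1

d1, d2, d3 = P1 @ h, P2 @ h, P3 @ h   # directed distances to h
zero = (d2 - d3) * P1 + (d3 - d1) * P2 + (d1 - d2) * P3
assert np.allclose(zero, 0.0)
```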
Corollary 6B. If the normalized ordinary hyperplanes h1, h2, h3 are dependent and X is any point on the ideal line L through their normals, then
sin(a2 - a3)h1 + sin(a3 - a1)h2 + sin(a1 - a2)h3 = 0
where ai is the directed angle between X and hiN on L.
This can be proved by writing a dual of Theorem 6. The symbolic determinant will be: D = [h1, h2, h3; Xh1, Xh2, Xh3; Yh1, Yh2, Yh3] where X and Y are arbitrary points. To prove Corollary 6B take X as chosen there, and point Y orthogonal to X on L (i.e. XYt = 0, see F5). Then Xhi = cos ai = di1 and Yhi = sin ai = di2 for i = 1, 2, 3. The first term of D will then be (d21d32 - d22d31)h1 = [cos a2 sin a3 - sin a2 cos a3] h1 = sin(a3 - a2) h1, and so on as in Corollary 6B, with a change of signs to get positive cycling of the subscripts.
When considering ideal points in Rn and the angles between them, it may be more suitable to use vectors (with n - 1 components) and their dot products.
Corollary 6C. Say the vectors V1, V2, V3 are dependent. Then c1V1 + c2V2 + c3V3 = 0 where c1 = (d21d32 - d22d31), c2 = (d31d12 - d32d11), c3 = (d11d22 - d12d21) using dij = Vi .Vj (dot product).
This follows from the symbolic determinant D = [V1, V2, V3; d11, d21, d31; d12, d22, d32].
12. A Perspective Matrix
If a collineation f has a hyperplane as an axis it is called a perspective. From Theorem 5 we know f has a corresponding center point, and a matrix representation of the form T = I + hC where hyperplane h is the axis and C is the center. If point C is on h (Ch = 0), f is called an elation. If C is not on h then f is called an homology. Say P and Q are distinct points and Pf = Q. Then neither P nor Q is on h since f is one-to-one and all points on h are invariant. Now if Ph = Qh then f is an elation, for PT = P + (Ph)C = Q implies Ph + (Ph)Ch = Qh which, using Ph = Qh, requires Ch = 0. So if f is an homology, Ph /= Qh.
We will construct a matrix for an homology given its axis and two successive mappings. In Rn the invariant axis of perspective f fixes the image of n - 1 independent points on the axis. When f is an homology, two additional points (off the axis) and their images will determine n + 1 maps, and hence f. But these points cannot be chosen freely, for the transformation P1T = P1(I + hC) = P1 + (P1h)C shows that the image of a point P1 lies on the line through P1 and C. It is also easy to show (VanArsdale) that if P1T = P2, T must be of the form
(A) T = I + h[-P1/(P1h) + xP2], x any nonzero constant.
Now if we also transform P2T = P3, clearly the points P1, P2, P3 must all lie on the same line through C. This dependence of three points provides an application for the explicit expressions of dependence in the previous section.
Theorem 8. Say the homology f has hyperplane h as an axis, and for distinct ordinary points P1, P2, P3: P1f = P2 and P2f = P3 . Use the notation di = Pih. Then f is represented by the matrix
T = I + h( -P1/d1 + xP2) where x = [d3(d1 - d2)] / [d1d2(d2-d3)].
Proof: Since P1, P2, P3 are collinear we can apply Corollary 6A, using the axis h of f as the arbitrary hyperplane in the corollary. So (d2-d3)P1 + (d3-d1)P2 + (d1 - d2)P3 = 0, di as above. Thus:
(B) P1 + [(d3 - d1)/(d2 - d3)] P2 = kP3
where k is a constant we need not be concerned with as long as it is not zero. Since f is an homology, from the above discussion we know d1 /= d2 and d2 /= d3.
Now applying P2T in (A) gives P2T = P2 + d2 [-P1/d1 + xP2] or
(C) P2T = -(d2/d1)[P1 - (d1/d2)(1 + d2x)P2]
We wish the expression in the brackets in (C) to equal a multiple of P3 as in (B). So solving P1 - (d1/d2)(1 + d2x)P2 = P1 + [(d3 - d1)/(d2 - d3)]P2 for x gives x = [d3(d1 - d2)] / [d1d2(d2 - d3)] as in Theorem 8. Note that generally the homology of Theorem 8 will not be affine, hence Theorem 3 above for affinities does not apply.
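Theorem 8 is easy to test numerically. A numpy sketch with assumed coordinates (homogeneous coordinate first, R3): axis h is the line x = 2, and P1, P2, P3 are collinear points on the x-axis.

```python
import numpy as np

# Theorem 8: T = I + h(-P1/d1 + x P2), x = [d3(d1-d2)] / [d1 d2 (d2-d3)].
h = np.array([[-2.], [1.], [0.]])       # the axis: the line x = 2
P1 = np.array([1., 0., 0.])             # (0, 0)
P2 = np.array([1., 1., 0.])             # (1, 0)
P3 = np.array([1., 1.5, 0.])            # (1.5, 0)

d1, d2, d3 = (P1 @ h).item(), (P2 @ h).item(), (P3 @ h).item()
x = (d3 * (d1 - d2)) / (d1 * d2 * (d2 - d3))
T = np.eye(3) + h @ (-P1 / d1 + x * P2)[None, :]

assert np.allclose(P1 @ T, P2)          # P1 f = P2
P2f = P2 @ T
assert np.allclose(P2f / P2f[0], P3)    # P2 f = P3 (up to scale)
assert np.allclose(np.array([1., 2., 5.]) @ T,
                   np.array([1., 2., 5.]))  # the axis is invariant
```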
13. References and Links

Ayres, F. Jr., Matrices, Schaum's Outline Series, New York, 1962.
Coxeter, H.S.M., The Real Projective Plane (2nd ed.), Cambridge, 1961.
Halmos, P.R., Finite-Dimensional Vector Spaces, (2nd ed.), Van Nostrand, New York, 1958.
Hodge, W.V.D & Pedoe, D., Methods of Algebraic Geometry (Vol. 1), Cambridge Univ. Press, 1968.
Laub, A.J. & Shiflett, G.R., A linear algebra approach to the analysis of rigid body displacement from initial and final position data. J. Appl. Mech. 49, 213-216, 1982.
Pedoe, D., Geometry, Dover, New York, 1988.
Roberts, L.G., Homogeneous Matrix Representation and Manipulation of N-dimensional Constructs. MIT Lincoln Laboratory, MS 1405, May 1965.
Semple, J.G. & Kneebone, G.T., Algebraic Projective Geometry, Clarendon Press, Oxford, 1952.
Shilov, G.E., Linear Algebra, Dover Publications, New York, 1977.
Snapper, E. & Troyer, R.J., Metric Affine Geometry. Academic Press, 1971.
Stolfi, J., Oriented Projective Geometry, Academic Press, 1991.
VanArsdale, D., Homogeneous Transformation Matrices for Computer Graphics, Computers & Graphics, vol. 18, no. 2, 177-191, 1994.
Corrections, references, comments or questions on this article are appreciated, but please no unrelated homework requests. Email Daniel W. VanArsdale: email@example.com
First uploaded Oct. 2, 2000. Sections 8-10 revised October