Statics Reference


    Vectors and Scalars

    Vectors

    A vector is an arrow with a length and a direction. Just like positions, vectors exist before we measure or describe them. Unlike positions, vectors can mean many different things, such as position vectors, velocities, etc. Vectors are not anchored to particular positions in space, so we can slide a vector around and locate it at any position.


    Two vectors, which may or may not be the same vector. Moving a vector around does not change it: it is still the same vector.

    Notation

    Some textbooks differentiate between free vectors, which are free to slide around, and bound vectors, which are anchored in space. We will only use free vectors.

    We will use the over-arrow notation \( \vec{a} \) for vector quantities. Other common notations include bold \( \boldsymbol{a} \) and under-bars \( \underline{a} \). For unit (length one) vectors we will use an over-hat \( \hat{a} \).

    Scalars

    While a vector represents magnitude and direction, a scalar is a number that represents a magnitude, but with no directional information. Some examples of scalar quantities are mass, length, time, speed, and temperature.

    Scaling Vectors

    Vectors can be multiplied by a scalar, which scales their length (a negative scalar also reverses the direction).

    Vector Addition and Subtraction

    Vectors can be added or subtracted using the parallelogram law of addition or the head-to-tail rule.

    Unit Vectors

    A unit vector is any vector with a length of one. We use the special over-hat notation \( \hat{a} \) to indicate when a vector is a unit vector. Any non-zero vector \( \vec{a} \) gives a unit vector \( \hat{a} \) that specifies the direction of \( \vec{a} \).

    Normalization to unit vector. #rvv-eu
    $$ \begin{aligned} \hat{a} =\frac{\vec{a}}{a}\end{aligned} $$

    If we compute the length of \( \hat{a} \) then we find:

    $$ \| \hat{a} \| = \left\| \frac{\vec{a}}{a} \right\| = \frac{\|\vec{a}\|}{a} = \frac{a}{a} = 1, $$
    so \( \hat{a} \) is really a unit vector, and it is in the same direction as \( \vec{a} \) as they differ only by a scalar factor.

    Any vector can be written as the product of its length and direction:

    Vector decomposition into length and direction. #rvv-ei
    $$ \begin{aligned} \vec{a} = a\hat{a}\end{aligned} $$

    This follows from rearranging #rvv-eu.

    Three vectors and their decompositions into lengths and directional unit vectors.
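
    As a quick numerical check of #rvv-eu and #rvv-ei, the sketch below (assuming NumPy is available; the helper name unit_vector is ours) normalizes a vector and then reassembles it from its length and direction:

        import numpy as np

        def unit_vector(a):
            """Return a-hat = a / ||a|| for a non-zero vector a (#rvv-eu)."""
            length = np.linalg.norm(a)
            if length == 0:
                raise ValueError("the zero vector has no direction")
            return a / length

        a = np.array([3.0, 4.0, 12.0])
        a_hat = unit_vector(a)
        print(np.linalg.norm(a_hat))                      # 1.0 (up to rounding) -> a unit vector
        print(np.allclose(np.linalg.norm(a) * a_hat, a))  # True: a = a * a-hat (#rvv-ei)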

    Vector Magnitude and Direction

    Vectors can be written as a magnitude (length) multiplied by the unit vector in the same direction as the original vector.

    $$ \vec{A} = \|\vec{A}\| \, \hat{u}_A $$

    The length of a vector \( \vec{a} \) is written either \( \| \vec{a} \| \) or just plain \(a\). The length can be computed using Pythagoras' theorem:

    Pythagoras' length formula. #rvv-ey
    $$ a = \|\vec{a}\| = \sqrt{a_1^2 + a_2^2 + a_3^2} $$

    First we prove Pythagoras' theorem for right-angle triangles. For side lengths \(a\) and \(b\) and hypotenuse \(c\), the fact that \(a^2 + b^2 = c^2\) can be seen graphically below, where the gray area is the same before and after the triangles are rotated in the animation:

    Pythagoras' theorem immediately gives us vector lengths in 2D. To find the length of a vector in 3D we can use Pythagoras' theorem twice, as shown below. This gives the two right-triangle calculations:

    $$ \begin{aligned} \ell^2 &= a_1^2 + a_2^2 \\ a^2 &= \ell^2 + a_3^2 = a_1^2 + a_2^2 + a_3^2. \end{aligned} $$

    Warning: Length must be computed in a single basis. #rvv-wl
    The Pythagorean length formula can only be used if all the components are written in a single orthonormal basis.

    Computing the length of a vector using Pythagoras' theorem.

    Some common integer vector lengths are \( \vec{a} = 4\hat\imath + 3\hat\jmath \) (length \(a = 5\)) and \( \vec{b} = 12\hat\imath + 5\hat\jmath \) (length \(b = 13\)).

    Warning: Adding vectors does not add lengths. #rvv-wa

    If \( \vec{c} = \vec{a} + \vec{b} \), then \( \|\vec{c}\| \ne \|\vec{a}\| + \|\vec{b}\| \) unless \( \vec{a} \) and \( \vec{b} \) are parallel and in the same direction.

    It will always be true, however, that \( \|\vec{c}\| \le \|\vec{a}\| + \|\vec{b}\| \). This fact is known as the triangle inequality, because \( \vec{a} \), \( \vec{b} \), and \( \vec{c} \) form the sides of a triangle.
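
    A minimal numerical sketch (assuming NumPy is available; the example vectors are arbitrary) of the length formula #rvv-ey and the triangle inequality:

        import numpy as np

        a = np.array([4.0, 3.0])     # length 5
        b = np.array([12.0, 5.0])    # length 13
        c = a + b                    # (16, 8)

        print(np.linalg.norm(a), np.linalg.norm(b))   # 5.0 13.0
        print(np.linalg.norm(c))                      # about 17.89
        print(np.linalg.norm(a) + np.linalg.norm(b))  # 18.0, which is >= ||c||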

    The direction of a vector can be written as a unit vector by dividing the vector components by the vector magnitude.

    Alternatively, the vector components can be determined geometrically from the angles the vector makes with the Cartesian axes.
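
    For instance (a sketch with assumed example values, again using NumPy), a 2D vector of length \(a\) at angle \( \theta \) from the \( \hat\imath \) axis has components \( a\cos\theta \) and \( a\sin\theta \):

        import numpy as np

        length = 5.0
        theta = np.radians(36.87)    # assumed example angle (the 3-4-5 triangle)
        a = length * np.array([np.cos(theta), np.sin(theta)])
        print(a)                     # approximately [4. 3.]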

    Dot Product

    The dot product (also called the inner product or scalar product) is defined by

    Dot product from components. #rvv-es
    $$ \vec{a} \cdot \vec{b}= a_1 b_1 + a_2 b_2 + a_3 b_3 $$

    An alternative expression for the dot product can be given in terms of the lengths of the vectors and the angle between them:

    Dot product from length/angle. #rvv-ed
    $$ \vec{a} \cdot \vec{b}= a b \cos\theta $$

    We will present a simple 2D proof here. A more complete proof in 3D uses the law of cosines.

    Start with two vectors \( \vec{a} \) and \( \vec{b} \) with an angle \( \theta \) between them, as shown below.

    Observe that the angle \( \theta \) between vectors \( \vec{a} \) and \( \vec{b} \) is the difference between the angles \( \theta_a \) and \( \theta_b \) that each vector makes with the horizontal, so \( \theta = \theta_b - \theta_a \).

    If we use the angle sum formula for cosine, we have

    $$ \begin{aligned} a b \cos\theta &= a b \cos(\theta_b - \theta_a) \\ &= a b (\cos\theta_b \cos\theta_a + \sin\theta_b \sin\theta_a) \end{aligned} $$

    We now want to express the sine and cosine of \( \theta_a \) and \( \theta_b \) in terms of the components of \( \vec{a} \) and \( \vec{b} \).

    We re-arrange the expression so that we can use the fact that \( a_1 = a \cos\theta_a \) and \( a_2 = a \sin\theta_a \), and similarly for \( \vec{b} \). This gives:

    $$ \begin{aligned} a b \cos\theta &= (a \cos\theta_a) (b \cos\theta_b) + (a \sin\theta_a) (b \sin\theta_b) \\ &= a_1 b_1 + a_2 b_2 \\ &= \vec{a} \cdot \vec{b} \end{aligned} $$

    The fact that we can write the dot product in terms of components as well as in terms of lengths and angle is very helpful for calculating the length and angles of vectors from the component representations.

    Length and angle from dot product. #rvv-el
    $$ \begin{aligned} a &= \sqrt{\vec{a} \cdot\vec{a}} \\ \cos\theta &= \frac{\vec{b}\cdot \vec{a}}{b a}\end{aligned} $$

    The angle between \( \vec{a} \) and itself is \( \theta = 0 \), so \( \vec{a} \cdot \vec{a} = a^2 \cos 0 = a^2 \), which gives the first equation for the length in terms of the dot product.

    The second equation is a rearrangement of #rvv-ed.
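
    A small numerical sketch of #rvv-el (assuming NumPy; the example vectors are arbitrary):

        import numpy as np

        a = np.array([1.0, 2.0, 2.0])
        b = np.array([3.0, 0.0, 4.0])

        length_a = np.sqrt(np.dot(a, a))          # 3.0
        cos_theta = np.dot(b, a) / (np.linalg.norm(b) * length_a)
        theta = np.degrees(np.arccos(cos_theta))  # about 42.8 degrees
        print(length_a, theta)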

    If two vectors have zero dot product \( \vec{a} \cdot \vec{b} = 0 \) then they have an angle of \( \theta = 90^\circ = \frac{\pi}{2}\rm\ rad \) between them and we say that the vectors are perpendicular, orthogonal, or normal to each other.

    In 2D we can easily find a perpendicular vector by rotating \( \vec{a} \) by \( 90^\circ \) counterclockwise, using the following equation.

    Counterclockwise perpendicular vector in 2D. #rvv-en
    $$ \vec{a}^\perp = -a_2\,\hat\imath + a_1\hat\jmath $$

    It is easy to check that \( \vec{a}^\perp \) is always perpendicular to \( \vec{a} \):

    $$ \vec{a} \cdot \vec{a}^\perp = (a_1\,\hat\imath + a_2\,\hat\jmath) \cdot (-a_2\,\hat\imath + a_1\hat\jmath) = -a_1 a_2 + a_2 a_1 = 0. $$
    The fact that \( \vec{a}^\perp \) is a \( +90^\circ \) rotation of \( \vec{a} \) is apparent from Figure #rvv-fn.

    In 2D there are two perpendicular directions to a given vector \( \vec{a} \), given by \( \vec{a}^\perp \) and \( -\vec{a}^\perp \). In 3D there are infinitely many perpendicular directions, and there is no simple formula like #rvv-en in 3D.

    The perpendicular vector \( \vec{a}^\perp \) is always a \( +90^\circ \) rotation of \( \vec{a} \).
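
    A minimal sketch of #rvv-en (assuming NumPy; the function name perp is ours):

        import numpy as np

        def perp(a):
            """Counterclockwise 90-degree rotation of a 2D vector (#rvv-en)."""
            return np.array([-a[1], a[0]])

        a = np.array([4.0, 3.0])
        print(perp(a))               # [-3.  4.]
        print(np.dot(a, perp(a)))    # 0.0 -> perpendicular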

    Dot product identities

    Dot product symmetry. #rvi-ed
    $$ \vec{a} \cdot \vec{b} = \vec{b} \cdot \vec{a} $$

    Using the coordinate expression #rvv-es gives:

    $$ \vec{a} \cdot \vec{b} = a_1 b_1 + a_2 b_2 + a_3 b_3 = b_1 a_1 + b_2 a_2 + b_3 a_3 = \vec{b} \cdot \vec{a}. $$

    Dot product vector length. #rvi-eg
    $$ \vec{a} \cdot \vec{a} = \|\vec{a}\|^2 $$

    Using the coordinate expression #rvv-es gives:

    $$ \vec{a} \cdot \vec{a} = a_1 a_1 + a_2 a_2 + a_3 a_3 = \|\vec{a}\|^2. $$

    Dot product bi-linearity. #rvi-ei
    $$ \begin{aligned} \vec{a} \cdot(\vec{b} + \vec{c}) &=\vec{a} \cdot \vec{b} + \vec{a}\cdot \vec{c} \\ (\vec{a} +\vec{b}) \cdot \vec{c} &=\vec{a} \cdot \vec{c} + \vec{b}\cdot \vec{c} \\ \vec{a} \cdot (\beta\vec{b}) &= \beta (\vec{a} \cdot\vec{b}) = (\beta \vec{a}) \cdot\vec{b}\end{aligned} $$

    Using the coordinate expression #rvv-es gives:

    $$ \begin{aligned} \vec{a} \cdot (\vec{b} + \vec{c}) &= a_1 (b_1 + c_1) + a_2 (b_2 + c_2) + a_3 (b_3 + c_3) \\ &= (a_1 b_1 + a_2 b_2 + a_3 b_3) + (a_1 c_1 + a_2 c_2 + a_3 c_3) \\ &= \vec{a} \cdot \vec{b} + \vec{a} \cdot \vec{c} \\ (\vec{a} + \vec{b}) \cdot \vec{c} &= (a_1 + b_1) c_1 + (a_2 + b_2) c_2 + (a_3 + b_3) c_3 \\ &= (a_1 c_1 + a_2 c_2 + a_3 c_3) + (b_1 c_1 + b_2 c_2 + b_3 c_3) \\ &= \vec{a} \cdot \vec{c} + \vec{b} \cdot \vec{c} \\ \vec{a} \cdot (\beta \vec{b}) &= a_1 (\beta b_1) + a_2 (\beta b_2) + a_3 (\beta b_3) \\ &= \beta (a_1 b_1 + a_2 b_2 + a_3 b_3) \\ &= \beta (\vec{a} \cdot \vec{b}) \\ &= (\beta a_1) b_1 + (\beta a_2) b_2 + (\beta a_3) b_3 \\ &= (\beta \vec{a}) \cdot \vec{b}. \end{aligned} $$

    Cross Product

    The cross product can be defined in terms of components by:

    Cross product in components. #rvv-ex
    $$ \vec{a} \times \vec{b} = (a_2 b_3 - a_3 b_2) \,\hat{\imath} + (a_3 b_1 - a_1 b_3) \,\hat{\jmath} + (a_1 b_2 - a_2 b_1) \,\hat{k} $$

    It is sometimes more convenient to work with cross products of individual basis vectors, which are related as follows.

    Cross products of basis vectors. #rvv-eo
    $$ \begin{aligned}\hat\imath \times \hat\jmath &= \hat{k}& \hat\jmath \times \hat{k} &= \hat\imath& \hat{k} \times \hat\imath &= \hat\jmath \\\hat\jmath \times \hat\imath &= -\hat{k}& \hat{k} \times \hat\jmath &= -\hat\imath& \hat\imath \times \hat{k} &= -\hat\jmath \\\end{aligned} $$

    Writing the basis vectors in terms of themselves gives the components:

    $$ \begin{aligned} i_1 &= 1 & i_2 &= 0 & i_3 &= 0 \\ j_1 &= 0 & j_2 &= 1 & j_3 &= 0 \\ k_1 &= 0 & k_2 &= 0 & k_3 &= 1. \end{aligned} $$
    These values can now be substituted into the definition #rvv-ex. For example,
    $$ \begin{aligned} \hat\imath \times \hat\jmath &= (i_2 j_3 - i_3 j_2) \,\hat{\imath} + (i_3 j_1 - i_1 j_3) \,\hat{\jmath} + (i_1 j_2 - i_2 j_1) \,\hat{k} \\ &= (0 \times 0 - 0 \times 1) \,\hat{\imath} + (0 \times 0 - 1 \times 0) \,\hat{\jmath} + (1 \times 1 - 0 \times 0) \,\hat{k} \\ &= \hat{k} \end{aligned} $$
    The other combinations can be computed similarly.
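
    The component formula #rvv-ex is easy to check numerically (a sketch assuming NumPy; numpy.cross uses the same right-handed convention):

        import numpy as np

        def cross(a, b):
            """Cross product from components (#rvv-ex)."""
            return np.array([a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0]])

        i, j, k = np.eye(3)          # the basis vectors i-hat, j-hat, k-hat
        print(cross(i, j))           # [0. 0. 1.] = k-hat, as in #rvv-eo
        print(np.allclose(cross(i, j), np.cross(i, j)))   # True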

    Warning: The cross product is not associative. #rvv-wc

    The cross product is not associative, meaning that in general

    $$ \vec{a} \times (\vec{b} \times \vec{c}) \ne (\vec{a} \times \vec{b}) \times \vec{c}. $$
    For example,
    $$ \begin{aligned} \hat{\imath} \times (\hat{\imath} \times \hat{\jmath}) &= \hat{\imath} \times \hat{k} = - \hat{\jmath} \\ (\hat{\imath} \times \hat{\imath}) \times \hat{\jmath} &= \vec{0} \times \hat{\jmath} = \vec{0}. \end{aligned} $$
    This means that we should never write an expression like
    $$ \vec{a} \times \vec{b} \times \vec{c} $$
    because it is not clear in which order we should perform the cross products. Instead, if we have more than one cross product, we should always use parentheses to indicate the order.
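
    A quick numerical illustration of the example above (assuming NumPy):

        import numpy as np

        i = np.array([1.0, 0.0, 0.0])
        j = np.array([0.0, 1.0, 0.0])

        print(np.cross(i, np.cross(i, j)))   # [ 0. -1.  0.] = -j-hat
        print(np.cross(np.cross(i, i), j))   # [0. 0. 0.] = the zero vector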

    Rather than using components, the cross product can be defined by specifying the length and direction of the resulting vector. The direction of \( \vec{a} \times \vec{b} \) is orthogonal to both \( \vec{a} \) and \( \vec{b} \), with the direction given by the right-hand rule. The magnitude of the cross product is given by:

    Cross product length. #rvv-el2
    $$ \| \vec{a} \times \vec{b} \| = a b \sin\theta $$

    Using Lagrange's identity we can calculate:

    $$ \begin{aligned} \| \vec{a} \times \vec{b} \|^2 &= \|\vec{a}\|^2 \|\vec{b}\|^2 - (\vec{a} \cdot \vec{b})^2 \\ &= a^2 b^2 - (a b \cos\theta)^2 \\ &= a^2 b^2 (1 - \cos^2\theta) \\ &= a^2 b^2 \sin^2\theta. \end{aligned} $$
    Taking the square root of this expression gives the desired cross-product length formula.

    This second form of the cross product definition can also be related to the area of a parallelogram.

    The area of a parallelogram is the length of the base multiplied by the perpendicular height, which is also the magnitude of the cross product of the side vectors.
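
    For instance (a sketch with assumed side vectors, using NumPy), a parallelogram with base 3 and perpendicular height 2 has area 6:

        import numpy as np

        a = np.array([3.0, 0.0, 0.0])   # base of length 3
        b = np.array([1.0, 2.0, 0.0])   # perpendicular height 2

        print(np.linalg.norm(np.cross(a, b)))   # 6.0 = base times height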

    A useful special case of the cross product occurs when vector \( \vec{a} \) is in the 2D \( \hat\imath,\hat\jmath \) plane and the other vector is in the orthogonal \( \hat{k} \) direction. In this case the cross product rotates \( \vec{a} \) by \( 90^\circ \) counterclockwise to give the perpendicular vector \( \vec{a}^\perp \), as follows.

    Cross product of out-of-plane vector \( \hat{k} \) with 2D vector \( \vec{a} = a_1\,\hat\imath + a_2\,\hat\jmath \). #rvv-e9
    $$ \hat{k} \times \vec{a} = \vec{a}^\perp $$

    Using #rvv-eo we can compute:

    $$ \begin{aligned} \hat{k} \times \vec{a} &= \hat{k} \times (a_1\,\hat\imath + a_2\,\hat\jmath) \\ &= a_1 (\hat{k} \times \hat\imath) + a_2 (\hat{k} \times \hat\jmath) \\ &= a_1\,\hat\jmath - a_2\,\hat\imath \\ &= \vec{a}^\perp. \end{aligned} $$

    Cross product identities

    Cross product anti-symmetry. #rvi-ea
    $$ \begin{aligned} \vec{a} \times\vec{b} = - \vec{b} \times\vec{a}\end{aligned} $$

    Writing the component expression #rvv-ex gives:

    $$ \begin{aligned} \vec{a} \times \vec{b} &= (a_2 b_3 - a_3 b_2) \,\hat{\imath} + (a_3 b_1 - a_1 b_3) \,\hat{\jmath} + (a_1 b_2 - a_2 b_1) \,\hat{k} \\ &= -(a_3 b_2 - a_2 b_3) \,\hat{\imath} - (a_1 b_3 - a_3 b_1) \,\hat{\jmath} - (a_2 b_1 - a_1 b_2) \,\hat{k} \\ &= -\vec{b} \times \vec{a}. \end{aligned} $$

    Cross product self-annihilation. #rvi-ez
    $$ \begin{aligned}\vec{a} \times \vec{a} = \vec{0}\end{aligned} $$

    From anti-symmetry #rvi-ea we have:

    $$ \begin{aligned} \vec{a} \times \vec{a} &= - \vec{a} \times \vec{a} \\ 2\,(\vec{a} \times \vec{a}) &= \vec{0} \\ \vec{a} \times \vec{a} &= \vec{0}. \end{aligned} $$

    Cross product bi-linearity. #rvi-eb2
    $$ \begin{aligned}\vec{a} \times (\vec{b} + \vec{c})&= \vec{a} \times \vec{b} + \vec{a} \times \vec{c} \\(\vec{a} + \vec{b}) \times \vec{c}&= \vec{a} \times \vec{c} + \vec{b} \times \vec{c} \\\vec{a} \times (\beta \vec{b})&= \beta (\vec{a} \times \vec{b})= (\beta \vec{a}) \times \vec{b}\end{aligned} $$

    Writing the component expression #rvv-ex for the first equation gives:

    $$ \begin{aligned} \vec{a} \times (\vec{b} + \vec{c}) &= (a_2 (b_3 + c_3) - a_3 (b_2 + c_2)) \,\hat{\imath} \\ &\quad + (a_3 (b_1 + c_1) - a_1 (b_3 + c_3)) \,\hat{\jmath} \\ &\quad + (a_1 (b_2 + c_2) - a_2 (b_1 + c_1)) \,\hat{k} \\ &= \Big((a_2 b_3 - a_3 b_2) \,\hat{\imath} + (a_3 b_1 - a_1 b_3) \,\hat{\jmath} + (a_1 b_2 - a_2 b_1) \,\hat{k} \Big) \\ &\quad + \Big((a_2 c_3 - a_3 c_2) \,\hat{\imath} + (a_3 c_1 - a_1 c_3) \,\hat{\jmath} + (a_1 c_2 - a_2 c_1) \,\hat{k} \Big) \\ &= \vec{a} \times \vec{b} + \vec{a} \times \vec{c}. \\ \end{aligned} $$
    The second equation follows similarly, and for the third equation we have:
    $$ \begin{aligned} \vec{a} \times (\beta \vec{b}) &= (a_2 (\beta b_3) - a_3 (\beta b_2)) \,\hat{\imath} + (a_3 (\beta b_1) - a_1 (\beta b_3)) \,\hat{\jmath} + (a_1 (\beta b_2) - a_2 (\beta b_1)) \,\hat{k} \\ &= \beta \Big( (a_2 b_3 - a_3 b_2) \,\hat{\imath} + (a_3 b_1 - a_1 b_3) \,\hat{\jmath} + (a_1 b_2 - a_2 b_1) \,\hat{k} \Big) \\ &= \beta (\vec{a} \times \vec{b}). \end{aligned} $$
    The last part of the third equation can be seen with a similar derivation.

    Calculating the cross product

    The cross product of two 3D vectors can be calculated by taking the determinant of a 3×3 matrix built from their components. It's best to start by writing a matrix with the \( \hat{\imath} \), \( \hat{\jmath} \), and \( \hat{k} \) vectors in the first row and the two vectors you are taking the cross product of in the next two rows. See the example below of the cross product between \( \vec{A} \) and \( \vec{B} \):
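
    $$ \vec{A} \times \vec{B} = \begin{vmatrix} \hat{\imath} & \hat{\jmath} & \hat{k} \\ A_1 & A_2 & A_3 \\ B_1 & B_2 & B_3 \end{vmatrix} = (A_2 B_3 - A_3 B_2)\,\hat{\imath} - (A_1 B_3 - A_3 B_1)\,\hat{\jmath} + (A_1 B_2 - A_2 B_1)\,\hat{k} $$
    Expanding the determinant along the first row reproduces the component formula #rvv-ex.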

    Vector Projection

    The projection and complementary projection are:

    Projection of \(\vec{a}\) onto \(\vec{b}\). #rvv-ep
    $$ \operatorname{Proj}(\vec{a},\vec{b})= (\vec{a} \cdot \hat{b}) \hat{b}= (a \cos\theta) \, \hat{b} $$
    Complementary projection of \(\vec{a}\) with respect to \(\vec{b}\). #rvv-em
    $$ \begin{aligned}\operatorname{Comp}(\vec{a}, \vec{b})&= \vec{a} -\operatorname{Proj}(\vec{a}, \vec{b}) =\vec{a} - (\vec{a} \cdot \hat{b}) \hat{b} \\\left\|\operatorname{Comp}(\vec{a}, \vec{b}) \right\|&= a \sin\theta\end{aligned} $$

    Adding the projection and the complementary projection of a vector gives back the original vector, as we can see in the figure below.

    Projection of \( \vec{a} \) onto \( \vec{b} \) and the complementary projection.

    As we see in the diagram above, the complementary projection is orthogonal to the reference vector:

    Complementary projection is orthogonal to the reference. #rvv-er
    $$ \operatorname{Comp}(\vec{a}, \vec{b}) \cdot \vec{b} = 0 $$

    Using the definitions of the complementary projection #rvv-em and projection #rvv-ep, we compute:

    $$ \begin{aligned} \operatorname{Comp}(\vec{a}, \vec{b}) \cdot \vec{b} &= \Big(\vec{a} - (\vec{a} \cdot \hat{b}) \hat{b}\Big) \cdot \vec{b} \\ &= \vec{a} \cdot \vec{b} - (\vec{a} \cdot \hat{b}) (\hat{b} \cdot \vec{b}) \\ &= a b \cos\theta - (a\cos\theta) b \\ &= 0. \end{aligned} $$
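
    A minimal sketch of #rvv-ep and #rvv-em (assuming NumPy; the helper names proj and comp are ours):

        import numpy as np

        def proj(a, b):
            """Projection of a onto b (#rvv-ep)."""
            b_hat = b / np.linalg.norm(b)
            return np.dot(a, b_hat) * b_hat

        def comp(a, b):
            """Complementary projection of a with respect to b (#rvv-em)."""
            return a - proj(a, b)

        a = np.array([2.0, 3.0, 0.0])
        b = np.array([4.0, 0.0, 0.0])

        print(proj(a, b))              # [2. 0. 0.]
        print(comp(a, b))              # [0. 3. 0.]
        print(np.dot(comp(a, b), b))   # 0.0 -> orthogonal to b, as in #rvv-er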