In this post, we shall see how the resultant, a determinant built from the coefficients of two polynomials, can be used to determine whether they have a common root.

Suppose we are working in a ring ${A}$ and ${f(y), g(y)}$ are two polynomials in ${A[y]}$:

$\displaystyle f(y) = a_0 y^m + a_1 y^{m-1} + \cdots + a_m, \qquad a_i \in A \text{ and } a_0 \neq 0.$

$\displaystyle g(y) = b_0 y^n + b_1 y^{n-1} + \cdots + b_n, \qquad b_i \in A \text{ and } b_0 \neq 0.$

Consider the system of equations obtained by multiplying the equation ${f = 0}$ by ${y^{n-1}, y^{n-2}, \cdots, y, 1}$ and the equation ${g = 0}$ by ${y^{m-1}, y^{m-2}, \cdots, y, 1}$, and make the following change of variables:

$\displaystyle z_0 = 1,$

$\displaystyle z_1 = y,$

$\displaystyle z_2 = y^2,$

$\displaystyle \vdots$

$\displaystyle z_{m+n-1} = y^{m+n-1}.$

The ${(m+n)}$ equations ${f=0, \;yf=0, \cdots, \;y^{n-1}f=0, \; g=0, \;yg=0, \cdots, \;y^{m-1}g=0}$ can then be written in matrix form as:

$\displaystyle \begin{bmatrix} a_0 & a_1 & \cdots & \cdots & a_{m-1} & a_m & 0 & 0 & \cdots & 0\\ 0 & a_0 & a_1 & \cdots & \cdots & a_{m-1} & a_m & 0 & \cdots & 0\\ 0 & 0 & a_0 & a_1 & \cdots & \cdots & a_{m-1} & a_m & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & a_0 & a_1 & \cdots & \cdots & \cdots & a_m\\ b_0 & b_1 & \cdots & \cdots & b_{n-1} & b_n & 0 & 0 & \cdots & 0\\ 0 & b_0 & b_1 & \cdots & \cdots & b_{n-1} & b_n & 0 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & b_0 & b_1 & \cdots & \cdots & \cdots & b_n\\ \end{bmatrix} \begin{bmatrix} z_0 \\ z_1 \\ z_2\\ \vdots \\ \vdots \\ \vdots \\ \vdots \\ z_{m+n-1} \end{bmatrix} = \begin{bmatrix} \;\;0\;\; \\ 0 \\ 0\\ \vdots \\ \vdots \\ \vdots \\ \vdots \\ 0 \end{bmatrix}.$

Call the above matrix ${R}$. If there were a common root ${y = \alpha}$, so that ${f(\alpha) = g(\alpha) = 0}$, then the system above would have a non-zero solution (non-zero since ${z_0 = 1}$), and hence the determinant of ${R}$ would have to be zero. Conversely, when ${A}$ is an integral domain, the vanishing of the determinant of ${R}$ is equivalent to ${f}$ and ${g}$ having a common factor of positive degree over the fraction field of ${A}$, and hence a common root in an algebraic closure.

This determinant is known as the resultant of ${f}$ and ${g}$. It has some interesting consequences that I state below:
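To make the criterion concrete, here is a minimal Python sketch that builds the matrix ${R}$ above and computes its determinant exactly over the rationals. The two test polynomials are hypothetical examples chosen so that the first pair shares a root and the second does not:

```python
from fractions import Fraction

def sylvester(f, g):
    """Matrix R for f of degree m and g of degree n, given as
    coefficient lists [a_0, ..., a_m] and [b_0, ..., b_n]."""
    m, n = len(f) - 1, len(g) - 1
    size = m + n
    rows = []
    for i in range(n):  # n shifted copies of the coefficients of f
        rows.append([Fraction(0)] * i + [Fraction(c) for c in f]
                    + [Fraction(0)] * (size - i - m - 1))
    for i in range(m):  # m shifted copies of the coefficients of g
        rows.append([Fraction(0)] * i + [Fraction(c) for c in g]
                    + [Fraction(0)] * (size - i - n - 1))
    return rows

def det(M):
    """Exact determinant by Gaussian elimination over the rationals."""
    M = [row[:] for row in M]
    n = len(M)
    d = Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d          # a row swap flips the sign
        d *= M[c][c]
        for r in range(c + 1, n):
            k = M[r][c] / M[c][c]
            for j in range(c, n):
                M[r][j] -= k * M[c][j]
    return d

def resultant(f, g):
    return det(sylvester(f, g))

# f = (y-1)(y-2) and g = (y-1)(y+1) share the root y = 1:
print(resultant([1, -3, 2], [1, 0, -1]))   # prints 0
# f = (y-1)(y-2) and g = (y-3)(y+3) have no common root:
print(resultant([1, -3, 2], [1, 0, -9]))   # prints 40
```

The determinant is computed with `Fraction` rather than floats so that "equal to zero" is an exact statement, matching the algebraic criterion.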

Theorem 1 Two non-constant irreducible polynomials ${f(x), g(x) \in k[x]}$ have a common root (in an algebraic closure of ${k}$) if and only if their resultant vanishes.

Now assume that ${f(x,y), g(x,y) \in A[x,y]}$ are irreducible, and regard them as polynomials ${f(y), g(y) \in B[y]}$ where ${B = A[x]}$; thus the coefficients of ${f}$ and ${g}$ are polynomials in ${x}$. The analogous matrix has entries that are polynomials in ${x}$, so here the resultant is itself a polynomial in ${x}$, say ${R(x)}$. If ${(\alpha, \beta)}$ is a common root of ${f}$ and ${g}$, then ${R(\alpha) = 0}$. Hence the ${x}$-coordinate of any common root must be a root of ${R(x)}$. In particular, there can be at most finitely many ${\alpha}$'s so that ${(\alpha, \beta)}$ is a common root.

Further, since ${f}$ and ${g}$ are distinct irreducibles, ${R(x)}$ is not identically zero and so has only finitely many roots. For each ${\alpha}$ with ${R(\alpha) = 0}$, there are only finitely many values of ${\beta}$ with ${f(\alpha, \beta) = 0}$. Hence ${f}$ and ${g}$ can intersect in at most finitely many points! We record this as one version of Bezout's theorem:
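To illustrate the resultant in ${x}$ concretely (a hypothetical example, not from the text): take the two unit circles ${f = y^2 + (x^2 - 1)}$ and ${g = y^2 + (x^2 - 2x)}$, centred at ${(0,0)}$ and ${(1,0)}$, viewed as monic quadratics in ${y}$ over ${B = k[x]}$. Their resultant in ${y}$ is

$\displaystyle R(x) = \det \begin{bmatrix} 1 & 0 & x^2 - 1 & 0\\ 0 & 1 & 0 & x^2 - 1\\ 1 & 0 & x^2 - 2x & 0\\ 0 & 1 & 0 & x^2 - 2x \end{bmatrix} = \left( (x^2 - 2x) - (x^2 - 1) \right)^2 = (1 - 2x)^2,$

which vanishes only at ${x = 1/2}$: indeed, the two circles meet exactly at the points ${(1/2, \pm\sqrt{3}/2)}$.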

Theorem 2 (Bezout) If ${f}$ and ${g}$ are distinct irreducible polynomials of degrees ${m}$ and ${n}$ in ${k[x,y]}$, then they can intersect in at most ${mn}$ points.

Here, we have proved only the finiteness of the set of common points. For the refinement ${\leq mn}$, see Abhyankar's book Lectures on Algebra.

Theorem 3 Any curve defined by an irreducible polynomial ${f(x,y) \in k[x,y]}$ can have at most finitely many singular points.

Here, we define a point ${(\alpha, \beta)\in k^2}$ to be singular if it satisfies ${f(x,y) = f_x(x,y) = f_y(x,y) = 0}$, where ${f_x}$ and ${f_y}$ denote the partial derivatives of ${f}$. The proof follows from Bezout's theorem applied to ${f}$ together with ${g=f_x}$ or ${g = f_y}$.
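As a quick illustration of the definition (the nodal cubic below is a hypothetical example; its partial derivatives are written out by hand), a point can be tested for singularity directly:

```python
# Nodal cubic: f(x, y) = y^2 - x^3 - x^2, which has a node at the origin.
def f(x, y):
    return y**2 - x**3 - x**2

def fx(x, y):   # partial derivative of f with respect to x
    return -3*x**2 - 2*x

def fy(x, y):   # partial derivative of f with respect to y
    return 2*y

def is_singular(x, y):
    """A point is singular iff f, f_x and f_y all vanish there."""
    return f(x, y) == 0 and fx(x, y) == 0 and fy(x, y) == 0

print(is_singular(0, 0))    # prints True: the node of the cubic
print(is_singular(-1, 0))   # prints False: on the curve, but fx(-1, 0) = -1
```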

Resultants can also be used to compute discriminants. The discriminant of a polynomial

$\displaystyle f(x) = a_0 x^n + a_1 x^{n-1} + \cdots + a_n = a_0 \displaystyle\prod_{i=1}^n (x-\alpha_i)$
is given by

$\displaystyle \text{Disc}(f) = a_0^{2n-2} \displaystyle\prod_{i<j} (\alpha_i - \alpha_j)^2.$

Up to the sign factor ${(-1)^{\frac{n(n-1)}{2}}}$ and a factor of the leading coefficient, we have

$\displaystyle a_0 \cdot \text{Disc}(f) = (-1)^{\frac{n(n-1)}{2}} \, \text{Res}(f, f').$
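As a sanity check (a standard computation, not from the text), take a quadratic ${f(x) = ax^2 + bx + c}$, so that ${f'(x) = 2ax + b}$. Then

$\displaystyle \text{Res}(f, f') = \det \begin{bmatrix} a & b & c\\ 2a & b & 0\\ 0 & 2a & b \end{bmatrix} = ab^2 - 2ab^2 + 4a^2c = -a(b^2 - 4ac),$

and multiplying by ${(-1)^{\frac{n(n-1)}{2}} = -1}$ (here ${n = 2}$) and dividing by ${a_0 = a}$ recovers the familiar ${\text{Disc}(f) = b^2 - 4ac}$.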

The Wikipedia article on resultants takes Theorem 1 as the definition and states a different set of results.