Rediscovering Galois Theory: Part 1
Table of Contents
- Prerequisites
- Solving Polynomials up to degree 4
- Lagrange Resolvent
- Fields
- Root Permutations
- Groups
- Factorization of the Resolvent Polynomial
- Solvability
Part 1: Prerequisites
Introduction
This writeup presents Galois Theory the way it started. By Galois’s time, we knew how to solve polynomials of degree $2, 3$, and $4$. However, all attempts to solve degree $5$ polynomials had failed. Galois and his contemporaries were working on why polynomials of degree 5 were hard to solve. Galois came up with a criterion that describes when a polynomial can be solved in terms of basic algebraic operations (like addition, subtraction, multiplication, division) and root operations (like square roots, cube roots, etc.). With this theory, Galois was able to establish that polynomials of degree 5 and above cannot, in general, be solved with basic algebraic operations and roots.
Galois submitted his memoir containing his theory for peer review. However, Galois died an untimely death soon after. His theory was slow to be accepted and understood by the mathematical community. In the years that followed, many mathematicians cleaned up his theory and made it more accessible. These efforts led to the modern version of Galois Theory, which is far more abstract than Galois’s original writing. The modern theory has its advantages, but the way it is presented is disconnected from the original problem of solving polynomials.
This writeup tries to rediscover Galois Theory from first principles, following the trail of thought of Galois’s predecessors, who developed methods to solve various polynomials. It then follows Galois’s attempt at extending these methods to come up with his theory of polynomial solvability. This writeup is based on my notes from reading Harold Edwards’ book on Galois Theory, which tries to explain Galois’s original work. It does not assume any prior experience in algebra beyond what is typically covered at the high school level.
Polynomial
Consider a polynomial with coefficients $(a_0, a_1, \ldots, a_{n-1}, a_n)$ that has roots $(r_1, r_2, \ldots, r_{n-1}, r_n)$:
\[\begin{align*} f(x) &= a_nx^n + a_{n-1}x^{n-1} + \ldots + a_1x + a_0\\ &= a_n (x - r_1) (x-r_2)\ldots(x-r_{n-1})(x-r_n) \end{align*}\]Let us assume that the coefficients are all rational numbers. We call the coefficients the “known” values. We would like to find the roots of the polynomial, in terms of the known values.
By multiplying out the product of the $(x-r_i)\text{s}$, we can get a relation between the roots of the polynomial and its coefficients:
\[\begin{align*} r_1 + r_2 + r_3 + \ldots + r_{n-1} + r_n &= -\frac{a_{n-1}}{a_n}\\ r_1r_2 + r_1r_3 + \ldots + r_{n-1}r_n &= \frac{a_{n-2}}{a_n}\\ \vdots\\ r_1r_2 \ldots r_{n-1}r_n &= (-1)^n\frac{a_{0}}{a_n} \end{align*}\]We observe that the $n$ elementary symmetric polynomials in the roots are all known in terms of the coefficients; in general, the $k$-th elementary symmetric polynomial $e_k$ of the roots equals $(-1)^k a_{n-k}/a_n$. These relations are called Vieta’s formulas. Due to a theorem by Newton, we can express any symmetric polynomial in $n$ variables using the $n$ elementary symmetric polynomials in those variables. Using this theorem, we can say that any symmetric polynomial in the roots of $f(x)$ can be evaluated in terms of known values.
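As a quick sanity check of Vieta’s formulas, here is a minimal sketch in plain Python (the roots and leading coefficient are made-up sample values, not taken from anywhere in the text). It expands a cubic from its chosen roots and compares the elementary symmetric polynomials of the roots against the coefficient ratios:

```python
from itertools import combinations
from math import prod

roots = [2, -1, 3]   # sample roots r_1, r_2, r_3 (chosen for illustration)
a_n = 5              # sample leading coefficient

# Expand a_n * (x - r_1)(x - r_2)(x - r_3).
# coeffs[k] holds a_{n-k}, the coefficient of x^{n-k} (highest degree first).
coeffs = [a_n]
for r in roots:
    new = coeffs + [0]                 # multiply the current polynomial by x
    for i in range(len(coeffs)):
        new[i + 1] -= r * coeffs[i]    # ...then subtract r times it
    coeffs = new

# Vieta's formulas: e_k = (-1)^k * a_{n-k} / a_n for k = 1..n.
n = len(roots)
for k in range(1, n + 1):
    e_k = sum(prod(c) for c in combinations(roots, k))
    assert a_n * e_k == (-1) ** k * coeffs[k]
print("Vieta's formulas hold; expanded coefficients:", coeffs)
```

Running this prints the expanded coefficients $[5, -20, 5, 30]$, i.e. $5x^3 - 20x^2 + 5x + 30$, and all three assertions pass.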
We do not give a full proof of Newton’s theorem here; the next section provides a rough sketch.
Newton’s Theorem on Symmetric Polynomials
The proof uses induction on the number of variables. We use an example polynomial:
\[p(a,b,c) = a^2 + b^2 + c^2\]This polynomial has $3$ variables. If we were to ignore the third variable, we get $a^2 + b^2$. Suppose we know how to express any symmetric polynomial in two variables in terms of the elementary symmetric polynomials in two variables:
\[\begin{align*} t_1 &= a + b\\ t_2 &= ab \end{align*}\]Using these, we can express $a^2 + b^2$ in terms of $(t_1, t_2)$, since $a^2 + b^2 = (a+b)^2 - 2ab$:
\[\begin{align*} p(a,b,c) &= c^2 + (a^2 + b^2)\\ &= c^2 + (t_1^2 -2t_2) \end{align*}\]We know that the elementary symmetric polynomials in 3 variables are:
\[\begin{align*} e_1 &= a+b+c &&= t_1 + c\\ e_2 &= ab+bc+ca &&= t_2 + ct_1\\ e_3 &= abc &&= ct_2 \end{align*}\]These can also be written as:
\[\begin{align*} t_1 &= e_1 - c\\ t_2 &= e_2 - ct_1 = e_2 - ce_1 + c^2 \end{align*}\]and
\[0 = e_3 - ct_2 = e_3 -ce_2 + c^2e_1 - c^3\]The first two equations help us express $t_1, t_2$ in terms of $e_1, e_2, e_3$ and $c$. The third equation says that $c$ is a root of the familiar monic polynomial $x^3 - e_1x^2 + e_2x - e_3$ whose roots are $(a, b, c)$. We can use these equations in $p(a,b,c) = c^2 + (a^2 + b^2) = c^2 + (t_1^2 -2t_2)$ to eliminate $t_1, t_2$ and get a polynomial in $c$ whose coefficients are in terms of $(e_1, e_2, e_3)$. Using the third equation, the degree of $c$ in the resulting polynomial can be reduced to at most $2$.
Since $p(a,b,c)$ is symmetric in $a, b, c$, the variable $c$ can be replaced with any one of $a$, $b$, or $c$ without changing the value. Thus, viewed as a polynomial in $c$ of degree at most $2$, the expression takes the same value at $3$ (generically distinct) roots. Hence it must be a constant polynomial, without any terms with powers of $c$. What remains is an expression independent of $c$, containing only $e_1, e_2, e_3$.
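Carrying out this elimination for our example makes the cancellation explicit:

\[\begin{align*} p(a,b,c) &= c^2 + t_1^2 - 2t_2\\ &= c^2 + (e_1 - c)^2 - 2(e_2 - ce_1 + c^2)\\ &= c^2 + e_1^2 - 2ce_1 + c^2 - 2e_2 + 2ce_1 - 2c^2\\ &= e_1^2 - 2e_2 \end{align*}\]In this particular example every power of $c$ cancels on its own, so the third equation is not even needed, and we recover the identity $a^2 + b^2 + c^2 = e_1^2 - 2e_2$.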
Symmetric Polynomial of Roots
Here is a summary of what we know:
- Given a polynomial $f(x)$, the elementary symmetric polynomials in its roots are known values.
- Any symmetric polynomial in the roots can be expressed in terms of known values (a small numeric check follows below).
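As a concrete check of these two facts, here is a minimal numeric sketch using numpy, with $f(x) = x^3 - 2$ chosen as an example: each root of $f$ is irrational, yet a symmetric polynomial in the roots evaluates to a known rational value, as Newton’s theorem predicts.

```python
import numpy as np

# Example: f(x) = x^3 - 2, so e_1 = 0, e_2 = 0, e_3 = 2.
# Its roots are the real cube root of 2 and two complex cube roots of 2.
roots = np.roots([1, 0, 0, -2])

# A symmetric polynomial in the roots: r_1^2 + r_2^2 + r_3^2.
# By Newton's theorem it equals e_1^2 - 2*e_2 = 0, a rational "known" value,
# even though each individual root is irrational.
power_sum = sum(r**2 for r in roots)
print(power_sum)                  # ~0, up to floating-point error
assert abs(power_sum) < 1e-9
```

A non-symmetric expression such as $r_1^2$ alone would not collapse to a rational number this way; the symmetry is what makes the value expressible in terms of the coefficients.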
Continue Reading: Part 2: Solving Polynomials up to degree 4