
In the coffee hour discussion today, Nick gave me an interesting explicit example of an exceptional isomorphism of Lie groups.

$\bigwedge^2 : \mathrm{SL}(4, \mathbb C) \to \mathrm{SO}(6, \mathbb C)$

is a surjection with kernel $\{\pm I\}$, inducing an isomorphism $\mathrm{SL}(4, \mathbb C)/\{\pm I\} \cong \mathrm{SO}(6, \mathbb C)$. Let me elaborate. The group $G = \mathrm{SL}(4, \mathbb C)$ acts naturally on $V = \mathbb C^4$. If $\{e_1, e_2, e_3, e_4\}$ is a basis of $V$, then a basis of $\bigwedge^2 V$ can be taken to be $\{ e_1 \wedge e_2, e_1 \wedge e_3, \cdots, e_3 \wedge e_4 \}$. Thus $\mathrm{GL}(\bigwedge^2 V) \cong \mathrm{GL}(6, \mathbb C)$ and our $G$-action gives a homomorphism of groups:

$\bigwedge^2 : G=\mathrm{SL}(4,\mathbb C) \to \mathrm{GL}(\bigwedge^2 V) \cong \mathrm{GL}(6,\mathbb C)$

whose kernel is exactly $\{\pm I\}$, since $(-g)e_i \wedge (-g)e_j = g e_i \wedge g e_j$. The image must actually lie inside $\mathrm{SL}(6,\mathbb C)$; in fact, more is true. The pairing

$(\; , \; ) : \bigwedge^2 V \times \bigwedge^2 V \to \bigwedge^4 V \cong \mathbb C,$

$(e_i \wedge e_j , e_k \wedge e_l ) \mapsto e_i \wedge e_j \wedge e_k \wedge e_l$ (identifying $\bigwedge^4 V$ with $\mathbb C$ via the volume form $e_1 \wedge e_2 \wedge e_3 \wedge e_4$)

is symmetric (interchanging the arguments moves a $2$-vector past a $2$-vector, giving the sign $(-1)^{2 \cdot 2} = +1$) and is $G$-invariant: since $g$ acts on $\bigwedge^4 V$ by $\det(g) = 1$,

$( g . (e_i \wedge e_j) , g . (e_k \wedge e_l) ) = \det(g)\,(e_i \wedge e_j, e_k \wedge e_l) = (e_i \wedge e_j, e_k \wedge e_l)$.

Hence the image of $G$ in $\mathrm{GL}(6, \mathbb C) \simeq \mathrm{GL}(\bigwedge^2 V)$ preserves this nondegenerate symmetric bilinear form, so $\bigwedge^2(G) \subseteq \mathrm{SO}(\bigwedge^2 V)$. Both groups are connected of dimension $15$, so they must be equal.
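The invariance of this form can be checked numerically. Below is a minimal pure-Python sketch (the helper names `wedge2`, `levi_civita`, `PAIRS` are mine, not standard): it builds the Gram matrix of the wedge pairing, applies $\bigwedge^2$ to a determinant-one integer matrix, and verifies both the invariance of the form and that $-I$ lies in the kernel.

```python
# Indices 0..3 stand for the basis vectors e_1, ..., e_4 of V = C^4.
PAIRS = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

def levi_civita(seq):
    """Sign of seq as a permutation of (0, 1, 2, 3); 0 if entries repeat."""
    if len(set(seq)) != 4:
        return 0
    sign = 1
    for i in range(4):
        for j in range(i + 1, 4):
            if seq[i] > seq[j]:
                sign = -sign
    return sign

# Gram matrix of the pairing (e_i ^ e_j, e_k ^ e_l) = e_i ^ e_j ^ e_k ^ e_l,
# identified with C via the volume form e_1 ^ e_2 ^ e_3 ^ e_4.
B = [[levi_civita(p + q) for q in PAIRS] for p in PAIRS]

def wedge2(M):
    """6x6 matrix of the induced action of a 4x4 matrix M on wedge^2 V."""
    return [[M[i][k] * M[j][l] - M[i][l] * M[j][k]
             for (k, l) in PAIRS] for (i, j) in PAIRS]

def matmul(A, C):
    return [[sum(A[i][t] * C[t][j] for t in range(len(C)))
             for j in range(len(C[0]))] for i in range(len(A))]

def transpose(A):
    return [list(r) for r in zip(*A)]

# A determinant-one (hence SL(4)) integer matrix.
M = [[1, 1, 0, 0],
     [0, 1, 1, 0],
     [0, 0, 1, 1],
     [0, 0, 0, 1]]
W = wedge2(M)

assert matmul(matmul(transpose(W), B), W) == B          # the form is preserved
assert wedge2([[-m for m in row] for row in M]) == W    # -I is in the kernel
```

Any other determinant-one matrix works equally well in place of `M`; the assertions encode exactly the two facts used above.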

I was wondering about the rationale behind naming parabolic subgroups of linear algebraic groups. The answer, interestingly, comes from the action of $SL(2,\mathbb R)$ on the upper half plane. (I came up with this little discovery on my own.)

The orbit of the point $i$ under the action of the standard parabolic subgroup of $SL(2,\mathbb R)$ is a parabola.

The upper half-plane is an object that comes up in many parts of mathematics: hyperbolic geometry, complex analysis and number theory, to name a few. The group $SL(2,\mathbb R)$ acts on it by fractional linear transformations:

$\begin{pmatrix} a & b \\ c & d \end{pmatrix} \cdot z = \displaystyle \frac{az+b}{cz+d} .$

A parabolic subgroup is a closed subgroup $P$ such that the quotient $G/P$ is a compact (projective) variety. Up to conjugation, the only proper parabolic subgroup of $SL(2,\mathbb R)$ is the subgroup of upper triangular matrices $\begin{pmatrix} * & * \\ 0 & * \end{pmatrix}.$ Its action on the point $i$ is given by:

$\begin{pmatrix} x & y \\ 0 & x^{-1} \end{pmatrix} . i = \displaystyle \left( \frac{xi+y}{0 \cdot i+x^{-1}} \right) = xy + i x^2,$

whose locus, for a fixed $y \neq 0$ as $x$ varies, is a parabola.

I wonder why textbooks in algebraic groups don’t mention this!

EDIT: (21 April, 2015) The above calculation is WRONG. I don’t know the answer to “parabolic”.

I came across a simple statement in finite group theory that I’m almost upset no one told me earlier. The source is Serre’s book ‘Linear Representations of Finite Groups’. Serre uses the statement below in the proof of Brauer’s theorem on induced representations of finite groups, which in turn is used to prove the meromorphicity of Artin $L$-functions. Here it goes.

$G$ is a finite group and $p$ is a fixed prime. An element $x$ of $G$ is called $p$-unipotent if its order is a power of $p$, and $p$-regular if its order is prime to $p$.

Cool result:  Every element $x$ in $G$ can be uniquely written as

$x = x_u x_r;$

where

• $x_u$ is $p$-unipotent, $x_r$ is $p$-regular,
• $x_u$ and $x_r$ commute and
• they are both powers of $x$.

The proof is really easy. Just replace $G$ by the (finite) cyclic group generated by $x$!
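To see the proof in action: in the cyclic group $\langle x \rangle \cong \mathbb Z/n\mathbb Z$ (written additively, so “powers” of $x$ become integer multiples), write $n = p^a m$ with $p \nmid m$, choose $\alpha p^a + \beta m = 1$, and set $x_u = \beta m x$, $x_r = \alpha p^a x$. Here is a small sketch (the function names are mine):

```python
from math import gcd

def p_decomposition(x, n, p):
    """In the additive cyclic group Z/nZ, write x = x_u + x_r with x_u of
    p-power order and x_r of order prime to p, both integer multiples of x.
    Write n = p^a * m with gcd(p, m) = 1 and pick alpha*p^a + beta*m = 1;
    then x_u = beta*m*x and x_r = alpha*p^a*x."""
    pa = 1
    while n % (pa * p) == 0:
        pa *= p
    m = n // pa
    alpha = pow(pa, -1, m) if m > 1 else 0   # inverse of p^a modulo m
    beta = (1 - alpha * pa) // m             # so alpha*pa + beta*m == 1
    x_u, x_r = (beta * m * x) % n, (alpha * pa * x) % n
    assert (x_u + x_r) % n == x % n
    return x_u, x_r

def order(y, n):
    """Order of y in the additive group Z/nZ."""
    return n // gcd(y, n)

# Example: n = 360 = 9 * 40 with p = 3, so p^a = 9 and m = 40.
x_u, x_r = p_decomposition(7, 360, 3)
assert order(x_u, 360) in (1, 3, 9)   # 3-power order ("3-unipotent")
assert order(x_r, 360) % 3 != 0       # order prime to 3 ("3-regular")
```

Both parts are visibly multiples of $x$, which is exactly the third bullet above.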

I call it the Jordan decomposition because we have a similar decomposition for endomorphisms (among other things).

Let $V$ be a finite dimensional vector space over an algebraically closed field of characteristic zero (just in case!). Each $x \in \text{End }(V)$ can be uniquely written as

$x = x_s + x_n;$

where

• $x_s$ is semisimple (diagonalizable), $x_n$ is nilpotent,
• they commute with each other and
• they are both polynomials in $x$ without a constant term.

(For invertible $x$ there is also a multiplicative version $x = x_s x_u$ with $x_u$ unipotent, which is the exact analogue of the group-theoretic statement above.)

Pretty cool, huh!
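A quick sanity check of the endomorphism version on a hand-picked $2\times 2$ example (this is not a general algorithm for computing the decomposition):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

x   = [[2, 1], [0, 2]]
x_s = [[2, 0], [0, 2]]   # diagonal, hence semisimple
x_n = [[0, 1], [0, 0]]   # squares to zero, hence nilpotent

assert matadd(x_s, x_n) == x                       # x = x_s + x_n
assert matmul(x_n, x_n) == [[0, 0], [0, 0]]        # nilpotency
assert matmul(x_s, x_n) == matmul(x_n, x_s)        # the parts commute

# x_s is a polynomial in x with no constant term: x_s = 2x - x^2/2,
# checked here in the integer form  4x - x^2 = 2 * x_s.
xx = matmul(x, x)
lhs = matadd([[4 * x[i][j] for j in range(2)] for i in range(2)],
             [[-xx[i][j] for j in range(2)] for i in range(2)])
assert lhs == [[2 * x_s[i][j] for j in range(2)] for i in range(2)]
```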

Gian-Carlo Rota’s Indiscrete Thoughts is a must-read for every budding mathematician. He is highly opinionated, and among articles like “Ten Lessons I Wish I Had Been Taught”, one can also find short biographies of biggies like Emil Artin, Stan Ulam and Solomon Lefschetz. Below is a paragraph taken from the book.

His advisor Jack Schwartz gave Rota the task of cleaning up the tome “Linear Operators” by Dunford and Schwartz: checking for errors, solving exercises, correcting semicolons, etc. Here is Rota’s description of one of the problems he wasn’t able to solve.

It took me half the summer to finish checking the problems in Chapter Three. There were a few that I had trouble with, and worst of all, I was unable to work out Problem Twenty of Section Nine. One evening Dunford and several other members of the group got together to discuss changes in the exercises. Jack was in New York City. It was a warm summer evening and we sat on the hard wooden chairs of the corner office of Leet Oliver Hall. Pleasant sounds of squawking crickets and frogs along with mosquitoes came through the open gothic windows. After I admitted my failure to work out Problem Twenty, Dunford tried one trick after another on the blackboard in an effort to solve the problem or to find a counterexample. No one remembered where the problem came from, or who had inserted it.

After a few hours, feeling somewhat downcast, we all got up and left. The next morning I met Jack, who patted me on the back and told me, “Don’t worry, I could not do it either.” I did not hear about Problem Twenty of Section Nine for another three years. A first-year graduate student had taken Dunford’s course in linear operators. Dunford had assigned him the problem, the student solved it, and developed an elegant theory around it. His name is Robert Langlands.

In my recent number theory seminar on “Hilbert’s 90 and generalizations” (notes here), Professor Goins asked the following interesting question.

Let ${K}$ be a field and ${d\in K^*}$. Define ${T_d}$ to be the torus

$\displaystyle \left\{ \begin{pmatrix} x & dy \\ y & x \end{pmatrix} : x,y \in K, \; x^2-dy^2=1\right\}.$

What values of ${d}$ give ${K}$-isomorphic tori?

(The question was perhaps motivated by the observation that over the reals, the sign of ${d}$ completely determines whether ${T_d}$ is split (i.e., isomorphic to ${\mathbb R^*}$) or anisotropic (i.e., isomorphic to ${S^1}$).)

Here are two ways of looking at the answer.

• For ${d,e \in K^*}$, we determine when two matrices ${\displaystyle\begin{pmatrix} x & dy \\ y & x \end{pmatrix}}$ and ${\displaystyle\begin{pmatrix} u & ev \\ v & u \end{pmatrix}}$ are conjugate in ${\text{SL}_2(K)}$. (We write the conjugating matrix with entries ${p,q,r,s}$ to avoid a clash with the parameter ${d}$.) Solving the system

$\displaystyle \begin{pmatrix} x & dy \\ y & x \end{pmatrix} = \begin{pmatrix} p & q \\ r & s \end{pmatrix} . \begin{pmatrix} u & ev \\ v & u \end{pmatrix} . \begin{pmatrix} s & -q \\ -r & p \end{pmatrix}$

gives ${\displaystyle de = \left(\frac{q}{r}\right)^2, \quad \frac{e}{d} = \left(\frac{s}{p}\right)^2}$.

Thus ${e \in d\cdot(K^*)^2}$, i.e., the ${T_d}$‘s are classified by ${\displaystyle \frac{K^*}{(K^*)^2}}$. (For ${K=\mathbb R}$, this group is isomorphic to ${\{\pm 1\}}$, so the sign of ${d}$ determines ${T_d}$ up to conjugation.) By Kummer theory, ${\displaystyle \frac{K^*}{(K^*)^2} \cong H^1(\text{Gal}(\overline K / K), \mu_2)}$, where ${\mu_2 = \{\pm 1\}}$ is the group of square roots of unity. Thus there is a correspondence between isomorphism classes of the tori ${T_d \; (d \in K^*)}$ and quadratic extensions of ${K}$ (the trivial class corresponding to the split torus).

• Another way to look at the same thing is as follows. Fix ${d \in K^*}$. Let ${L}$ be a finite Galois extension of ${K}$ over which ${T_d}$ splits. Now ${T_d \times_K L}$ is a split torus of rank 1. For a connected reductive group ${G}$ over an algebraically closed field, we have the exact sequence

$\displaystyle 1 \rightarrow \text{Inn}(G) \rightarrow \text{Aut}(G) \rightarrow \text{Aut}(\Psi_0(G)) \rightarrow 1,$

where ${\Psi_0(G)}$ is the based root datum ${(X,\Delta,X^\vee, \Delta^\vee)}$ associated to ${G}$. (Here, ${X = X^*(G)}$ is the character lattice and ${\Delta}$ is the set of simple roots corresponding to a choice of a Borel subgroup of ${G}$.) For details, see Corollary 2.14 of Springer’s paper “Reductive Groups” in Corvallis.

In our case, ${G = T_d}$, so ${\Psi_0(G) = ( \mathbb Z, \emptyset, \mathbb Z, \emptyset)}$ and

$\displaystyle \text{Aut}(\Psi_0(G)) \cong \text{Aut}(\mathbb Z) \cong \{ \pm 1\}.$

Now ${L/K}$ forms of ${T_d}$ are in bijective correspondence with

$\displaystyle H^1(\text{Gal}(L/K), \text{Aut}(\Psi_0(G))) = H^1(\text{Gal}(L/K), \{\pm 1\}) \cong \text{Hom}_{\mathbb Z}(\text{Gal}(L/K), \{\pm 1\});$

the last isomorphism holding because the Galois group acts trivially on ${\{\pm 1\} = \text{Aut}(\Psi_0(G))}$. ${\blacksquare}$
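The first description above can be made completely explicit: if $e = dt^2$, then conjugation by $\mathrm{diag}(t,1)$ carries $T_d$ into $T_e$, since $\mathrm{diag}(t,1)\begin{pmatrix} x & dy \\ y & x\end{pmatrix}\mathrm{diag}(t,1)^{-1} = \begin{pmatrix} x & e(y/t) \\ y/t & x \end{pmatrix}$. A small sketch over $\mathbb Q$ (the sample values are mine):

```python
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Sample values: d = 2 and t = 3/2, so e = d*t^2 = 9/2.
d, t = F(2), F(3, 2)
e = d * t * t

# A point (x, y) on T_d: x^2 - d*y^2 = 1.  Take x = 3, y = 2 (9 - 8 = 1).
x, y = F(3), F(2)
assert x * x - d * y * y == 1

g        = [[x, d * y], [y, x]]
conj     = [[t, 0], [0, 1]]
conj_inv = [[1 / t, 0], [0, 1]]
h = matmul(matmul(conj, g), conj_inv)

# h has the T_e shape [[x, e*y'], [y', x]] with y' = y/t ...
yp = y / t
assert h == [[x, e * yp], [yp, x]]
# ... and (x, y') indeed lies on T_e:
assert x * x - e * yp * yp == 1
```

This exhibits the ${K}$-isomorphism ${T_d \cong T_e}$ whenever ${e/d}$ is a square, matching the classification by ${K^*/(K^*)^2}$.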

Below is Gauss’s quadratic reciprocity, found in most elementary texts on number theory. In this post, we’ll see how Hecke operators originate from this theorem.

Theorem (Gauss). Let $\varepsilon(n) = (-1)^{\frac{n-1}{2}}$ and $\omega(n) = (-1)^{\frac{n^2-1}{8}}$. For distinct odd primes $p, q$,

$\displaystyle \left(\frac{p}{q}\right) = \varepsilon(p) .\varepsilon(q). \left(\frac{q}{p}\right),$

$\displaystyle \left(\frac{-1}{p}\right) = \varepsilon(p),$

$\displaystyle \left(\frac{2}{p}\right) = \omega(p).$

Consider the equation

$\textbf{(Q)}: \qquad x^2 = d; \qquad \qquad d\in \mathbb Z \backslash \{0\}.$

Let $a_p(Q)$ be the number of solutions to (Q) modulo $p$, minus one. Then by definition of the Legendre symbol, $a_p(Q) = \left( \frac{d}{p} \right).$ By multiplicativity of the Legendre symbol (or rather, the Jacobi symbol), we have

$a_{mn}(Q) = a_m(Q) . a_n(Q).$                       (*)

Let $N = 4 |d|$. Then it follows from the reciprocity law that $a_p(Q)$ depends only on the value of $p$ modulo $N$. Furthermore, the sequence ${a_2(Q), a_3(Q), a_5(Q), \cdots }$ arises as a sequence of eigenvalues of a linear operator (the Hecke operator) on a finite-dimensional complex vector space. We’re going to construct the space.

$V_N := \{ f: (\mathbb Z/N\mathbb Z)^* \to \mathbb C \}$

$T_p : V_N \to V_N, \quad T_p(f)(n) = f(pn) \quad \text{if } p \nmid N, \text{ and } T_p = 0 \text{ otherwise}.$

Verify that $T_p$ is a linear operator on $V_N$. Now all these operators for varying primes commute with each other. So what is a common eigenvector?

Define $\phi (n) = a_n(Q)$. Then use (*) to show that

$T_p(\phi) = a_p(Q) \phi,$

for all primes $p$. So this $\phi$ is indeed a common eigenvector for all the $T_p$‘s!
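The eigenvector claim is easy to check numerically. The sketch below (helper names are mine) implements the Jacobi symbol and verifies $T_p \phi = a_p(Q)\,\phi$ for $d = 5$, $N = 20$ and several primes $p \nmid N$:

```python
from math import gcd

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, via quadratic reciprocity."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:          # pull out factors of 2 using (2/n)
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                # quadratic reciprocity step
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

d = 5
N = 4 * abs(d)
units = [n for n in range(1, N) if gcd(n, N) == 1]   # all odd, since N is even

# phi(n) = a_n(Q) = (d/n), viewed as a function on (Z/NZ)*.
phi = {n: jacobi(d, n) for n in units}

def hecke(p, f):
    """T_p on functions (Z/NZ)* -> C, for a prime p not dividing N."""
    assert N % p != 0
    return {n: f[(p * n) % N] for n in f}

# T_p(phi) = a_p(Q) * phi for every prime p not dividing N:
for p in (3, 7, 11, 13, 17, 19, 23, 29):
    a_p = jacobi(d, p)
    assert hecke(p, phi) == {n: a_p * phi[n] for n in phi}
```

The loop uses exactly the multiplicativity (*) together with the periodicity of $a_p(Q)$ modulo $N$ guaranteed by reciprocity.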

I will explain more about Hecke operators on modular forms in a future post. (Edit July 10, 2013: Link to the said post).

Sohei Yasuda gave me the proof of a lemma which is instructive and interesting, using many little facts from group theory and topology. I thought I should blog it.

Lemma: Let $G$ be a compact topological group and $H$ an open subgroup. Then $H$ is also closed (!) and of finite index.

Proof. $G$ can be written as a disjoint union of left cosets of $H$, each of which is open in $G$ because translation by a group element is a homeomorphism. Thus the complement of $H$ is a union of open cosets and hence open, so $H$ is closed.

Consider now the quotient map from $G$ to $G/H$. (Here I’m not assuming $H$ is normal in $G$; $G/H$ is just the set of cosets with the quotient topology.) The quotient map is open, so the image of $H$, namely the identity coset of $G/H$, is open; translating, every point of $G/H$ is open, i.e., $G/H$ is discrete.

Moreover, $G/H$ is compact, being the continuous image of the compact space $G$. A discrete compact space is finite, which proves the second assertion. $\blacksquare$

Today was the first day after my prelims that I met Dr. Shahidi. He had earlier advised me to take a break after the advanced topics so I could start research afresh. I also recently found his Wikipedia page (albeit a stub): http://en.wikipedia.org/wiki/Freydoon_Shahidi . A more relevant wiki-link about his work is http://en.wikipedia.org/wiki/Langlands%E2%80%93Shahidi_method . I hope one day I’ll edit these pages to add sources and more relevant material.

Our conversation was quite pleasant. He had some things in mind he wanted me to work on. He told me to go over “L-packets” and “A-packets” and other technical material, and gave references where I could read about them. We discussed my general PhD goal. “Your (research) problem should not be too difficult to be unable to solve”, he said. When asked if I’d like my work to be “Algebra or Analysis”, I instantly replied “Algebra!”, although I should know that number theory uses tools from all branches of mathematics (including PDE 🙂 )

I have known Shahidi for his dry sarcastic wit and today’s conversation ended with a remarkable quip. He told me, “Keep me informed, don’t run away!”

In their book, Singer & Thorpe say, “At the present time, the average undergraduate Mathematics major finds math heavily compartmentalized.” One learns many things but does not see the connections between seemingly different things. Indeed, as the great Poincaré said, mathematics is the art of giving the same name to different things. In this and the subsequent post, we shall see the connections between algebra and topology with respect to Galois theory.

Galois theory in Algebra

This topic is covered in most standard algebra texts. It deals with studying the roots of polynomials and the relations among them. Given a field F and an irreducible polynomial p(x) with coefficients in F, we look at the smallest field K containing F in which p(x) has all its roots. The permutations of the roots of p that respect all algebraic relations among them correspond to automorphisms of K which fix F element-wise. These automorphisms form a group known as the Galois group. There is a beautiful correspondence between subgroups of this group and subfields of K containing F.

The 19th-century mathematicians Galois and Abel studied this group, and Galois came up with a useful characterization, in terms of this group, of when the roots of a polynomial can be expressed by radicals in its coefficients. Of course, the theory has now been much generalized and abstracted, and is used indispensably in many parts of mathematics: number theory, algebraic geometry and more.

Galois theory in Topology

(Even this can be found in any standard text on algebraic topology. But here I am talking about Riemann surfaces and the connection between the two Galois theories: a too-fascinating-to-be-true connection which, although not very difficult, is not found in lower-level texts. I will explain it in a follow-up post soon.)

Given a point P on a topological space S, one talks about the equivalence classes (modulo continuous deformation) of paths starting and ending at P. They form a group, the fundamental group. Also, given two (path-)connected spaces R and S, one says that R is a covering of S if there is a continuous surjective map from R to S such that every point of S has a neighbourhood U whose inverse image is a disjoint union of open sets, each mapped homeomorphically onto U. One may imagine the real line as a covering of the unit circle via the map sending t to (cos t, sin t).

Now comes the Galois correspondence. For a `nice’ topological space, there is a natural one-to-one correspondence between subgroups of the fundamental group of that space and its covering spaces (rather, isomorphism classes of covering spaces, to be pedantic). Further, each covering space has fundamental group isomorphic to the subgroup we started with!

More analogy

The analogy between Galois groups of algebraic objects and fundamental groups and covering spaces of topological spaces goes beyond just one-to-one correspondence. A field extension is normal if it has enough automorphisms. A covering map too is normal (or Galois) if it has enough automorphisms!

Normal field extension $\longleftrightarrow$ Normal subgroups of the Galois group

Normal covering $\longleftrightarrow$ Normal subgroup of the fundamental group

In the next post, we shall see a deeper connection between the four objects above. Namely, we shall take a polynomial, construct its Galois group, get a covering map for this field extension and see that the two groups are the same!

Galois group = fundamental group.

Amazing stuff!

In this post, we shall see how the resultant, a determinant built from the coefficients of two polynomials, is useful in detecting the existence of their common roots.

Suppose we are working in a ring ${A}$ and ${f(y), g(y)}$ are two polynomials in ${A[y]}$:

$\displaystyle f(y) = a_0 y^m + a_1 y^{m-1} + \cdots + a_m, \qquad a_i \in A \text{ and } a_0 \neq 0.$

$\displaystyle g(y) = b_0 y^n + b_1 y^{n-1} + \cdots + b_n, \qquad b_i \in A \text{ and } b_0 \neq 0.$

Consider the system of equations obtained by multiplying the equation ${f(y)=0}$ by ${y^{n-1}, y^{n-2}, \cdots, y, 1}$ and the equation ${g(y)=0}$ by ${y^{m-1}, y^{m-2}, \cdots, y, 1}$. Make the following change of variables:

$\displaystyle z_0 = 1,$

$\displaystyle z_1 = y,$

$\displaystyle z_2 = y^2,$

$\displaystyle \vdots$

$\displaystyle z_{m+n-1} = y^{m+n-1}.$

The ${(m+n)}$ equations ${f=0, \;yf=0, \cdots, \;y^{n-1}f=0, \; g=0, \;yg=0, \cdots, \;y^{m-1}g=0}$ can be written in matrix form as:

$\displaystyle \begin{bmatrix} a_0 & a_1 & \cdots & \cdots & a_{m-1} & a_m & 0 & 0 & \cdots & 0\\ 0 & a_0 & a_1 & \cdots & \cdots & a_{m-1} & a_m & 0 & \cdots & 0\\ 0 & 0 & a_0 & a_1 & \cdots & \cdots & a_{m-1} & a_m & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & a_0 & a_1 & \cdots & \cdots & \cdots & a_m\\ b_0 & b_1 & \cdots & \cdots & b_{n-1} & b_n & 0 & 0 & \cdots & 0\\ 0 & b_0 & b_1 & \cdots & \cdots & b_{n-1} & b_n & 0 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & b_0 & b_1 & \cdots & \cdots & \cdots & b_n\\ \end{bmatrix} \begin{bmatrix} z_0 \\ z_1 \\ z_2\\ \vdots \\ \vdots \\ \vdots \\ \vdots \\ z_{m+n-1} \end{bmatrix} = \begin{bmatrix} \;\;0\;\; \\ 0 \\ 0\\ \vdots \\ \vdots \\ \vdots \\ \vdots \\ 0 \end{bmatrix}.$

Call the above matrix ${R}$. If there were a common root ${y = \alpha}$, so that ${f(\alpha) = g(\alpha) = 0}$, then the above system of equations would have a non-zero solution (non-zero since ${z_0 = 1}$), so the determinant of ${R}$ would have to be zero. If the ring ${A}$ is an integral domain, then the vanishing of the determinant of ${R}$ is equivalent to ${f}$ and ${g}$ having a common root in an algebraic closure of the fraction field of ${A}$.
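The construction above can be turned into a short computation: build the matrix ${R}$ from the coefficient lists and take its determinant. A minimal pure-Python sketch (the function names are mine):

```python
from fractions import Fraction

def sylvester(f, g):
    """Sylvester matrix of f (degree m) and g (degree n); coefficients are
    listed from the leading one down, as in the post."""
    m, n = len(f) - 1, len(g) - 1
    rows = [[0] * i + list(f) + [0] * (n - 1 - i) for i in range(n)]
    rows += [[0] * i + list(g) + [0] * (m - 1 - i) for i in range(m)]
    return rows

def det(M):
    """Determinant by Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, result = len(M), Fraction(1)
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]   # row swap flips the sign
            result = -result
        result *= M[col][col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            M[r] = [a - factor * b for a, b in zip(M[r], M[col])]
    return result

def resultant(f, g):
    return det(sylvester(f, g))

# x^2 - 3x + 2 and x - 1 share the root 1, so the resultant vanishes:
assert resultant([1, -3, 2], [1, -1]) == 0
# x^2 + 1 and x - 1 share no root:
assert resultant([1, 0, 1], [1, -1]) == 2
```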

This determinant is known as the resultant of ${f}$ and ${g}$. It has some interesting consequences that I state below:

Theorem 1 For two non-constant polynomials ${f(x), g(x) \in k[x]}$ to have a common root in an algebraic closure of ${k}$, it is necessary and sufficient that their resultant vanishes.

Assume that ${f(x,y), g(x,y) \in A[x,y]}$ are irreducible and consider ${f(y) , g(y) \in B[y]}$ where ${B = A[x]}$. Thus the coefficients of ${f}$ and ${g}$ are polynomials in ${x}$. The analogous matrix has entries which are polynomials in ${x}$, so here the resultant is a polynomial in ${x}$, say ${R(x)}$. If ${(\alpha, \beta)}$ is a common root of ${f}$ and ${g}$, then ${R(\alpha) = 0}$. Hence the ${x}$-coordinate of any common root must be a root of ${R(x)}$. In particular, there can be only finitely many ${\alpha}$‘s so that ${(\alpha, \beta)}$ is a common root.

Further, for every value of ${\alpha}$ so that ${R(\alpha) = 0}$, there are only finitely many values of ${\beta}$ so that ${\beta}$ is a root of ${f(\alpha, y) = 0}$. Hence ${f}$ and ${g}$ can intersect in finitely many points! We record this as one version of Bezout’s theorem:

Theorem 2 (Bezout) If ${f}$ and ${g}$ are irreducible polynomials of degrees ${m, n>1}$ in ${k[x,y]}$, then they can intersect in at most ${mn}$ points.

Here, we have proved only the finiteness of common points. For the refinement ${\leq mn}$, see Abhyankar‘s book Lectures on Algebra.

Theorem 3 Any curve ${f(x,y) \in k[x,y]}$ can have at most finitely many singular points.

Here, we define a point ${(\alpha, \beta)\in k^2}$ to be singular if it satisfies ${f(x,y) = f_x(x,y) = f_y(x,y) = 0}$. The proof follows from Bezout’s theorem when we set ${g=f_x}$ or ${f_y}$.

Resultants can also be used to compute the discriminant of a polynomial. The discriminant of a polynomial

$\displaystyle f(x) = a_0 x^n + a_1 x^{n-1} + \cdots + a_n = a_0 \displaystyle\prod_{i=1}^n (x-\alpha_i)$
is given by

$\displaystyle \text{Disc}(f) = a_0^{2n-2} \displaystyle \prod_{i<j} (\alpha_i - \alpha_j)^2.$

More precisely, up to the sign factor ${(-1)^{\frac{n(n-1)}{2}}}$ and a factor of ${a_0}$, the discriminant is the resultant of ${f}$ and ${f'}$:

$\displaystyle \text{Res}(f, f') = (-1)^{\frac{n(n-1)}{2}} \, a_0 \, \text{Disc}(f).$
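For a monic quadratic $f = x^2 + bx + c$ (so $a_0 = 1$, $n = 2$, and the sign is $(-1)^{n(n-1)/2} = -1$), this can be checked by hand: the Sylvester matrix of $f$ and $f' = 2x + b$ is $3 \times 3$ and its determinant equals $-(b^2 - 4c)$. A small sketch (the helper names are mine):

```python
# Res(f, f') for monic f = x^2 + b x + c, where f' = 2x + b; the Sylvester
# matrix has one shifted copy of f's coefficients and two of f's.
def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def res_f_fprime(b, c):
    return det3([[1, b, c],
                 [2, b, 0],
                 [0, 2, b]])

# Res(f, f') = -(b^2 - 4c) = (-1)^{n(n-1)/2} * Disc(f) with a_0 = 1:
for b, c in [(3, 2), (0, -1), (5, 7)]:
    assert res_f_fprime(b, c) == -(b * b - 4 * c)
```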

The Wikipedia article on resultants takes Theorem 1 as the definition and states different results.

Abhishek Parab
