Here is an interesting problem that has surprisingly far-reaching consequences.
Consider a permutation \(\sigma\colon S\to S\) of some list, say \(S=\{1,2,\dots,n\}\), that maps \(S\) to \(\{k_1,k_2,\dots,k_n\}\), where \(k_i\in S\). Define a transposition to be a permutation that only swaps two elements of the list it acts on. It is clear that every permutation can be decomposed into transpositions. Seriously. It is intuitively obvious. But if we wanted to be a bit more rigorous, observe that we can give a state to every element \(j_i\in S\). Let \(j_i\) be correct if \(\sigma(j_i)=j_i\), that is, if \(j_i\) is a fixed point under the permutation. Let \(j_i\) be incorrect otherwise. For every incorrect element in \(S\), there must exist another incorrect element in \(S\) such that the transposition of the two makes the former element correct. So, we can iteratively exclude correct elements, find, for each incorrect element, the element occupying its correct spot, and perform the transpositions. Since \(S\) has finite cardinality, this algorithm terminates, and in fact leaves every element correct. So any permutation can be decomposed into transpositions. But this does not say anything about the decomposition itself. In fact, there are infinitely many sequences of transpositions that, when composed, yield a given permutation. This is trivial, since we don't have to stop once the aforementioned algorithm terminates and everything is in the correct spot. You could perform another transposition to "mess things up," then reverse it, and repeat that an arbitrary number of times. However, a powerful invariant hides behind all of these decompositions. For a given permutation, the number of transpositions in any decomposition of that permutation always has the same parity. This intuitively makes a lot of sense, though it's not very clear how to prove it. 
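The fix-up argument above is easy to make concrete. Here is a minimal Python sketch (the function names and the 0-indexed one-line notation are my own choices, not anything canonical): it greedily swaps each incorrect element with the element occupying its target position, and the recorded swaps reconstruct the permutation.

```python
def transposition_decomposition(perm):
    """Decompose a permutation (given as the permuted list, 0-indexed)
    into transpositions by repeatedly swapping an incorrect element
    with the element sitting in its correct spot."""
    lst = list(perm)
    swaps = []
    for i in range(len(lst)):
        while lst[i] != i:                   # position i holds an incorrect element
            j = lst[i]
            lst[i], lst[j] = lst[j], lst[i]  # the transposition (i j)
            swaps.append((i, j))
    return swaps                             # applying these to perm sorts it

def compose_swaps(swaps, n):
    """Apply the recorded swaps, in reverse, to the identity list:
    this rebuilds the original permutation, exhibiting the decomposition."""
    lst = list(range(n))
    for i, j in reversed(swaps):
        lst[i], lst[j] = lst[j], lst[i]
    return lst

swaps = transposition_decomposition([2, 0, 1])   # a 3-cycle
assert len(swaps) == 2                           # an even decomposition
assert compose_swaps(swaps, 3) == [2, 0, 1]
```

Since each swap places at least one element into its correct spot, the loop terminates, mirroring the finiteness argument above.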
I suppose it would be possible to do so by performing a case analysis with my states (by considering transpositions of the form "incorrect/correct to correct/incorrect," "correct/correct to incorrect/incorrect," etc.). But this is a bit ugly. There is a much more elegant way, though it is arguably more complicated if you are required to prove the preliminaries from scratch as well. This parity of a permutation is called its signature, and we are concerned with its existence and uniqueness. The signature of a permutation is \(1\) when the permutation can only be decomposed into an even number of transpositions and \(-1\) otherwise. But there is an equivalent definition of the signature with which it is much easier to probe the questions of existence and uniqueness. Let \(P_n\) denote the set of all permutations that act on lists with cardinality \(n\). Then the signature is defined as the function \[\varsigma\colon P_n\to\{-1,1\}\] such that \(\varsigma(\sigma_1\circ\sigma_2)=\varsigma(\sigma_1)\varsigma(\sigma_2)\) for all \(\sigma_1,\sigma_2\in P_n\) and \(\varsigma(\tau)=-1\) whenever \(\tau\) is a transposition. The signature is usually denoted by "sgn," but due to MathJax limitations, I will denote it by \(\varsigma\). We need to show that this definition is equivalent to our parity definition, and also that the signature exists and is unique (using this definition). Showing the equivalence of the definitions is trivial. Since the signature of a composition is the product of the signatures, and the signature of a transposition is \(-1\), we must have \(\varsigma(\sigma)=(-1)^{T(\sigma)}\), where \(T(\sigma)\) is the number of transpositions in any one decomposition of \(\sigma\). Now we must prove that this function exists and is unique. Proving existence is the hard part. 
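Before proving anything, the parity invariance is easy to test empirically. A hedged Python sketch (function names are mine) that compares two very different decompositions of the same permutation: the greedy fix-up from before, and sorting by adjacent swaps only (bubble sort). The swap counts generally differ, but their parities always agree.

```python
from itertools import permutations

def greedy_swap_count(perm):
    """Number of swaps used by the fix-up algorithm described earlier."""
    lst, count = list(perm), 0
    for i in range(len(lst)):
        while lst[i] != i:
            j = lst[i]
            lst[i], lst[j] = lst[j], lst[i]
            count += 1
    return count

def bubble_swap_count(perm):
    """Number of swaps used when sorting with adjacent transpositions only."""
    lst, count = list(perm), 0
    for _ in range(len(lst)):
        for i in range(len(lst) - 1):
            if lst[i] > lst[i + 1]:
                lst[i], lst[i + 1] = lst[i + 1], lst[i]
                count += 1
    return count

# Different decompositions, same parity, for every permutation of 5 elements.
for p in permutations(range(5)):
    assert greedy_swap_count(p) % 2 == bubble_swap_count(p) % 2
```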
For any permutation \(\sigma\in P_n\), define the permutation matrix \(M_{\sigma}\) as the matrix that maps \[M_{\sigma}\vec{e}_i=\vec{e}_{\sigma(i)},\] where \(\vec{e}_i\) is just a standard basis vector in \(\mathbb{R}^n\). It follows that \(M_{\sigma}\) is some \(n\times n\) matrix containing only 0's and 1's, which I won't compute explicitly (though it isn't hard to). It is easy to see that for \(\sigma_1,\sigma_2\in P_n\), we have \(M_{\sigma_1\circ\sigma_2}=M_{\sigma_1}M_{\sigma_2}\). Here comes our leap of faith. Define \[\varsigma(\sigma)=\det{M_{\sigma}}.\] This fits our first property, since the determinant of a product is the product of the determinants! Furthermore, the second property is satisfied as well, since for a transposition \(\tau\), the matrix \(M_{\tau}\) is obtained by swapping two columns of the identity matrix, and so its determinant is \(-1\) by the antisymmetry of the determinant (this argument is sometimes phrased in terms of the determinant of an elementary matrix representing the swap of two columns). Hence, the function \(\varsigma\) exists. Uniqueness then comes almost for free: the two defining properties force \(\varsigma(\sigma)=(-1)^k\) for any decomposition of \(\sigma\) into \(k\) transpositions, so any function satisfying them must agree with the determinant construction on every permutation. By equivalence of definitions, this means that the number of transpositions in every decomposition of \(\sigma\) into transpositions always has the same parity! The signature ends up being quite important for various definitions in the theory of forms in vector calculus, such as the wedge product. All of this from just viewing a permutation as a series of transpositions.
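The determinant construction is small enough to spell out in code. A sketch assuming nothing beyond plain Python (a cofactor-expansion determinant is fine for tiny \(n\); all names are mine): build \(M_{\sigma}\) from \(M_{\sigma}\vec{e}_i=\vec{e}_{\sigma(i)}\) and check both defining properties.

```python
def perm_matrix(perm):
    """M_sigma sends e_i to e_{sigma(i)}, so column i has its 1 in row sigma(i)."""
    n = len(perm)
    return [[1 if perm[j] == i else 0 for j in range(n)] for i in range(n)]

def det(m):
    """Determinant by cofactor expansion along the first row (small n only)."""
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j, a in enumerate(m[0]):
        if a:
            minor = [row[:j] + row[j + 1:] for row in m[1:]]
            total += (-1) ** j * a * det(minor)
    return total

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

tau = [1, 0, 2]      # a transposition: signature -1
cycle = [1, 2, 0]    # a 3-cycle: two transpositions, so signature +1
assert det(perm_matrix(tau)) == -1
assert det(perm_matrix(cycle)) == 1

comp = [tau[cycle[i]] for i in range(3)]   # tau composed with the cycle
assert perm_matrix(comp) == matmul(perm_matrix(tau), perm_matrix(cycle))
assert det(perm_matrix(comp)) == det(perm_matrix(tau)) * det(perm_matrix(cycle))
```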
My linear class is weird.
Consider two continuous functions \(f,g\colon [0,1]\to\mathbb{R}\). Prove that \[\left|\int_{0}^{1}{f(x)g(x)\textrm{ d}x}\right|\leq\sqrt{\int_{0}^{1}{f(x)^2\textrm{ d}x}}\sqrt{\int_{0}^{1}{g(x)^2\textrm{ d}x}}.\] This problem is pretty trivial. Simply apply Cauchy-Schwarz to the Riemann sum approximation of the LHS and take the limit as \(n\rightarrow\infty\). This was a starred (challenge) problem in my linear homework following the week where we discussed Cauchy-Schwarz. Strange. Anyway, I've encountered a lot of cute problems lately. Among these is the following. Problem (HMMT): 10 students take a test. Each problem is solved by precisely 7 people, and nine of the students solve exactly 4 of the problems. How many problems does the 10th student solve? Solution: Let the number of problems on the test be \(P\). Consider a set of \(P\) bins, each corresponding to a particular problem, and a set of 10 bins, each corresponding to a particular student. For every student that solves a certain problem, we place a stone in that problem's bin, and for every problem that a student solves, we place a stone in that student's bin. Note that if we place a stone in a problem's bin, we must also place a stone in a student's bin. That is, there exists a bijection between the stones in the problem bins and the stones in the student bins. The total number of stones in the problem bins is given to be \(7P\), and the total number of stones in the student bins is \(x+4\cdot9\), where \(x\) is the number of problems that the 10th person solves. Hence, \[x=7P-36.\] Since \(x\geq0\) and \(P\) is an integer, we must have \(P\geq6\). Furthermore, since the 10th person cannot solve more problems than are on the test, we have \(x=7P-36\leq P\), which means \(P\leq6\). Therefore, \(P=6\) and \(x=\boxed{6}\). Let's solve a cool problem.
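The Riemann-sum argument is pleasant to see numerically. A Python sketch with arbitrarily chosen \(f\) and \(g\) (my choices, not from the problem): the discrete Cauchy-Schwarz inequality holds at every \(n\), so the inequality survives the limit.

```python
import math

def riemann(h, n=10_000):
    """Left Riemann sum of h over [0, 1] with n equal subintervals."""
    dx = 1.0 / n
    return sum(h(k * dx) for k in range(n)) * dx

f, g = math.sin, math.exp   # any continuous functions on [0, 1] will do

lhs = abs(riemann(lambda x: f(x) * g(x)))
rhs = math.sqrt(riemann(lambda x: f(x) ** 2)) * math.sqrt(riemann(lambda x: g(x) ** 2))
assert lhs <= rhs   # discrete Cauchy-Schwarz, before taking any limit
```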
Suppose there is an \(n\) person committee. Call an arrangement of this committee a derangement if none of the committee members are sitting where they are supposed to be sitting. Let \(P(n)\) be the probability that a randomly chosen arrangement of the \(n\) person committee is a derangement. What is \(\lim_{n\rightarrow\infty}{P(n)}\)? That is, what does this probability approach as the number of members grows arbitrarily large? First, let us define some notation. Let \(D_n\) be the number of derangements of an \(n\) person committee. How can we count \(D_n\)? There are \(n!\) possible arrangements. Perhaps we can use complementary counting by finding the total number of arrangements where at least one member is sitting in his or her assigned seat, and then subtracting that number from \(n!\). It turns out to be surprisingly messy to directly count how many arrangements have exactly \(1\leq k\leq n\) members in their correct seats. Try it for yourself and see what problems you run into. There is no simple algorithm to derange a set. Note that I say simple. It is certainly possible to construct an algorithm based on swapping elements that are in the correct spots. Good luck keeping track of that for counting purposes. Instead, we take a step back and think. We only need the sum of the number of arrangements with exactly \(k\) members in their correct seats, from \(k=1\) to \(k=n\). Observe that while it is difficult to count the number of arrangements with exactly \(k\) correct seatings, it is much easier to count the number of arrangements with at least \(k\) correct seatings. We can then use the principle of inclusion-exclusion to deduce the sum of the number of arrangements with exactly \(k\) correctly seated members without counting a single such case! So we calculate the number of possible arrangements where at least \(k\) people are sitting in their own assigned seats. 
There are \(\binom{n}{k}\) ways to choose which \(k\) people are guaranteed to sit in their correct seats. This leaves \((n-k)!\) ways to arrange the remaining people however we wish. It is totally possible that while arranging these remaining people, someone else ends up seated correctly, which is why \(\binom{n}{k}(n-k)!=\frac{n!}{k!}\) only counts the arrangements where at least \(k\) people are seated correctly. By the principle of inclusion-exclusion, the alternating sum of these must be the desired value. Hence, \[\begin{split} D_n&=n!-\left[\frac{n!}{1!}-\frac{n!}{2!}+\cdots+(-1)^{n+1}\frac{n!}{n!}\right]\\ &=n!\sum_{k=0}^{n}{\frac{(-1)^{k}}{k!}}. \end{split}\] Since the total number of arrangements is \(n!\), we can easily calculate the probability: \[P(n)=\frac{D_n}{n!}=\sum_{k=0}^{n}{\frac{(-1)^{k}}{k!}}.\] Taking the limit, \[\lim_{n\rightarrow\infty}{P(n)}=\sum_{k=0}^{\infty}{\frac{(-1)^{k}}{k!}}.\] But the RHS is the Taylor series of \(e^x\) evaluated at \(x=-1\)! Hence our requested limit is simply \(\boxed{\frac{1}{e}}\). This is approximately 36.8%. The function \(D_n\) enjoys a number of cool properties. One of them is: \[\sum_{k=0}^{n}{\binom{n}{k}D_{n-k}}=n!.\] See if you can find a combinatorial proof of this. Hint: the construction is similar in spirit to the one we used to find the sum of the number of arrangements with exactly \(k\) correctly seated members. Suppose every point in \(\mathbb{R}^2\) is colored red, green, or blue. Let us prove that there exists a rectangle in this space whose vertices are all the same color.
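Before tackling the coloring problem, both the derangement formula and the binomial identity above are easy to sanity-check numerically. A Python sketch (brute force for small \(n\) via itertools; all names are mine):

```python
from itertools import permutations
from math import comb, e, factorial

def D(n):
    """Derangement count from the inclusion-exclusion formula."""
    return sum((-1) ** k * (factorial(n) // factorial(k)) for k in range(n + 1))

def D_brute(n):
    """Directly count permutations with no fixed point."""
    return sum(all(p[i] != i for i in range(n)) for p in permutations(range(n)))

assert all(D(n) == D_brute(n) for n in range(8))

# The identity sum_k C(n, k) * D_{n-k} = n!
assert all(sum(comb(n, k) * D(n - k) for k in range(n + 1)) == factorial(n)
           for n in range(12))

# P(n) = D_n / n! converges to 1/e very quickly.
assert abs(D(20) / factorial(20) - 1 / e) < 1e-12
```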
The standard solution is to consider a \(4\times82\) grid. By Pigeonhole, each column must contain at least two cells of a common color. Since there are \(3^4=81\) unique colorings of a column, once again by Pigeonhole, some column coloring must be repeated in this grid. Those two identical columns then yield a rectangle whose vertices are all the same color: take the repeated color pair in each. However, we may strengthen the construction to a \(4\times19\) grid as follows. Observe that there are \(\binom{4}{2}=6\) ways to choose the two cells in a column that are guaranteed to share a color, and there are \(3\) ways to color those two cells. This gives a maximum of \(18\) columns whose guaranteed color pairs avoid occupying the same rows with the same color. Hence, by Pigeonhole, a \(4\times19\) grid must contain two columns whose guaranteed color pairs sit in the same rows with the same color, forming a monochromatic rectangle. Neat. Feeling bad, need a quick post, here it is.
Problem (HMMT): Eight knights are placed on a chessboard. What is the probability that every square on the chessboard is attacked? Solution (Me): Each knight can attack a maximum of eight squares. So all eight knights must attack eight squares each, with no two knights attacking the same square, for the entire board of 64 squares to be attacked. But for a corner square to be attacked, say h8, a knight must be on g6 or f7, and a knight on either of those squares attacks only six squares, contradiction. The answer is 0. Problem (HMMT): A true-false test has 10 questions. Suppose that if you answer any five questions "true" and the remaining five questions "false," then your score is guaranteed to be at least four. How many answer keys are there for which this is true? Solution (Me): Suppose the answer key has 5 true and 5 false answers. Then it is possible to attain a 0. WLOG (due to symmetry), let us increase the number of true answers to 6. Now the worst we can do is a 1, as we place 4 T's where there should be F's, and we have a remaining T that is forced to be correct. Extrapolating this reasoning, we guarantee at least a 4 with 9 true and 1 false, and at least a 5 with 10 true and none false. These two cases that work are counted by \(\binom{10}{9}+\binom{10}{10}=11\). Multiplying by two to account for symmetry, the answer is 22. Combo is pretty hit or miss, and I hit the next one pretty quickly. Problem: Is there a 2000-element subset of the set \(\{1,2,3,...,3000\}\) such that no element in the subset is exactly double another element of the subset? Solution (Me): Suppose the set is \(A\). Choose all the odds (obviously). This gives us 1500 elements. The remaining elements of \(A\) can be taken to be the odd multiples of \(4\), of \(16\), of \(64\), and so on: doubling any such number produces an odd power of \(2\) times an odd number, which is never in our set. If we instead allowed a multiple of a power of \(2\) that is not a power of \(4\), it would be double an odd multiple of the next-lower power of \(4\), which we already chose. The computation shows that we end up with \(1999\) terms, just short of the required \(2000\), so the answer is no. (That this greedy choice is best possible can be seen by splitting \(\{1,\dots,3000\}\) into chains \(m,2m,4m,\dots\) with \(m\) odd: no two consecutive chain elements may both be chosen, so each chain contributes at most every other element, and summing these caps again gives \(1999\).) I have a chess tournament tomorrow. Hopefully that's fun. 
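The count of \(1999\) is quick to verify in code. A sketch (the function name is mine) that builds the set of odd multiples of powers of \(4\) up to \(3000\) and checks the no-doubling property:

```python
def no_double_subset(N=3000):
    """All odd multiples of powers of 4 up to N: the odds, then 4*odd, 16*odd, ..."""
    chosen = set()
    power = 1
    while power <= N:
        chosen.update(power * m for m in range(1, N // power + 1, 2))
        power *= 4
    return chosen

s = no_double_subset()
assert all(2 * x not in s for x in s)   # no element is exactly double another
assert len(s) == 1999                   # just short of 2000
```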
Problem (Canada 1979): A walk consists of a sequence of steps of length \(1\) taken in directions north, south, east, or west. A walk is self-avoiding if it never passes through the same point twice. Let \(f(n)\) denote the number of \(n\)-step self-avoiding walks which begin at the origin. Show that \(2^n<f(n)\leq4\cdot3^{n-1}\).
Solution (Bored Andrew Paul): Initially, I thought about this algebraically. I wrote: \[(0,0)\rightarrow(x_1,y_1)\rightarrow\cdots\rightarrow (x_n,y_n)\] where consecutive \(x_i\) (and likewise consecutive \(y_i\)) could be equal. Furthermore: \[x_i\neq x_{i+1}\Rightarrow |x_{i+1}-x_i|=1\] and a similar expression for \(y\). This told me precisely nothing. All I had done was encode the words of the problem into mathematical symbols. So I decided to quickly get the obvious out of the way: \[f(n)<4^n\] But \(4\cdot3^{n-1}\)? How on earth was I going to improve the bound to that? The lower bound of \(2^n\) seemed a lot more friendly, so I decided to approach that first. The upper bound of \(4^n\) resulted from counting all paths with \(4\) possible directions. Likewise, \(2^n\) counts all paths when only two directions are allowed. Then it hit me: choosing one horizontal and one vertical direction guarantees that every path generated is self-avoiding (think about it: the net movement is diagonal, and you can never cancel out your progress in either dimension). But there are exactly \(2^2=4\) ways to choose the pairing of directions (one for each quadrant the path would venture into). I immediately thought that this meant that not only did we have \(f(n)>2^n\), but also \(f(n)>4\cdot2^n=2^{n+2}\). However, this is false, because this construction produces identical paths when a path runs along an axis. For instance, the path \((0,0)\rightarrow(1,0)\rightarrow(2,0)\) can be constructed by choosing either north and east or south and east, because the only direction that matters for the construction of this path is east. Each of the four single-direction paths along the axes borders two quadrants, so each is constructed by exactly two of the four direction pairs and is counted twice. Removing these four duplicates, we are over-counting by exactly \(4\). 
Hence, we have established the following stronger result for \(n\geq1\): \[f(n)\geq2^{n+2}-4>2^n\] with equality in the first comparison precisely for \(n\in\{1,2\}\). Now all I had left was to show that \(f(n)\leq4\cdot3^{n-1}\). Suddenly, it struck me that the \(4\) in the expression could represent the number of ways to choose the first step. If I could show that there were at most \(3\) choices for each subsequent step, I would be done. This would establish the upper bound of \(4\cdot3^{n-1}\). To do this, I thought like an ant. If I were a little ant trying to generate self-avoiding paths, how would I go about doing so? I realized that after my first step, no matter what, I could not backtrack. This automatically removes one direction from the set of possible directions at every later step, leaving at most \(3\) directions to choose from. Equality holds for \(n\in\{1,2,3\}\): the shortest cycle on the grid has length \(4\), so a walk of at most three steps is self-avoiding as long as it never immediately backtracks. Hence, I concluded, for \(n\geq2\): \[\boxed{2^n<2^{n+2}-4\leq f(n)\leq4\cdot3^{n-1}<4^n}\] Both middle equality conditions are satisfied simultaneously at \(n=2\). Fun stuff! Canadian problems always tickle my fancy.
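A brute-force check makes a nice epilogue. The following Python sketch (mine, not part of the original solution) counts self-avoiding walks by depth-first search and verifies both bounds for small \(n\):

```python
def f(n):
    """Count n-step self-avoiding walks on the grid starting at the origin."""
    steps = [(0, 1), (0, -1), (1, 0), (-1, 0)]

    def dfs(x, y, visited, remaining):
        if remaining == 0:
            return 1
        total = 0
        for dx, dy in steps:
            nx, ny = x + dx, y + dy
            if (nx, ny) not in visited:       # never revisit a point
                visited.add((nx, ny))
                total += dfs(nx, ny, visited, remaining - 1)
                visited.discard((nx, ny))     # backtrack
        return total

    return dfs(0, 0, {(0, 0)}, n)

assert [f(n) for n in (1, 2, 3)] == [4, 12, 36]   # equality with 4*3^(n-1)
for n in range(1, 8):
    assert 2 ** (n + 2) - 4 <= f(n) <= 4 * 3 ** (n - 1)
```

For \(n\geq4\) both inequalities become strict, matching the discussion above.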