It is easy to see that the set of all \(3\times3\) real matrices, which I will denote by \(\mathbb{M}_{3\times3}\), is a vector space. Consider \(H\subset\mathbb{M}_{3\times3}\) defined by
\[H=\{A\in\mathbb{M}_{3\times3}\ \text{such that }\det{(A+A^T)}=0\}.\] Is \(H\) a subspace of \(\mathbb{M}_{3\times3}\)? It turns out the answer is no. \(H\) has a zero element and is closed under scalar multiplication, but it is not closed under vector addition. It is sufficient to find \(A,B\in H\) with \(A+B\notin H\). Easy, right? Just take \[A=\begin{bmatrix} 1 & 2 & 1\\ 2 & 1 & 2\\ 1 & 2 & 1 \end{bmatrix}\] \[B=\begin{bmatrix} \sqrt{2} & \sqrt{3} & 2\sqrt{2}\\ \sqrt{3} & \sqrt{2} & \sqrt{3}\\ 2\sqrt{2} & \sqrt{3} & \sqrt{2} \end{bmatrix},\] and we are done. \(\square\)

As satisfying as it would be to just leave these matrices without any trace of where they come from, it contradicts the mantra: a slick write-up is not an instructional one. The first thing to observe is that a matrix plus its transpose is always a symmetric matrix. This is easy to show entrywise (addition of real numbers is commutative). So the condition \(\det{(A+A^T)}=0\) is really saying that a certain symmetric matrix related to \(A\) is singular. Moreover, if a matrix is already symmetric, then it is equal to its transpose. This means that the sum of itself with its transpose is equal to two times itself. That is, if \(A\) is symmetric, \(A+A^T=2A\). But determinants are multilinear (linear in each column vector). So \(\det{(2A)}=2^n\det{A}\) if \(A\) is an \(n\times n\) matrix. This means that if we choose a symmetric \(A\in\mathbb{M}_{3\times3}\) that is singular, then \(A\in H\), since \(\det{(A+A^T)}=\det{(2A)}=8\det{A}=0\). In other words, if we choose a symmetric matrix from \(H\), instead of worrying about the sum of that matrix with its transpose, we can just focus on the matrix itself (that is, if \(A\) is symmetric, then \(\det{(A+A^T)}=0\) if and only if \(\det{A}=0\)).

So let's investigate symmetric \(3\times3\) matrices. They look like this: \[S=\begin{bmatrix} a & b & c\\ b & d & e\\ c & e & f \end{bmatrix}.\] Full disclosure: I stumbled upon this SE post which inspired me to consider the special case of a \(3\times3\) symmetric matrix where \(d=f=a\) and \(e=b\): \[S=\begin{bmatrix} a & b & c\\ b & a & b\\ c & b & a \end{bmatrix}.\] We compute the determinant of such a matrix. \[\begin{split} \det{S}&=a(a^2-b^2)-b(ab-bc)+c(b^2-ac)\\ &=a(a^2-b^2)-b^2(a-c)+b^2c-ac^2\\ &=a^3-b^2a+b^2c-ac^2-b^2(a-c)\\ &=a(a+c)(a-c)-b^2(a-c)-b^2(a-c)\\ &=(a-c)(a^2+ac-2b^2). \end{split}\] In essence, there are three ways that this special-case symmetric matrix can be singular. Either only the first factor is equal to zero, only the second factor is equal to zero, or both factors are equal to zero.

The sum of two symmetric matrices is also a symmetric matrix, so applying our previous reasoning, instead of worrying about the determinant \(\det{(A+B+(A+B)^T)}\), if we select \(A\) and \(B\) to be symmetric, we only need to worry about \(\det{(A+B)}\). Now watch what happens if we let \(A\) and \(B\) be special-case symmetric matrices in \(H\) with \(A\) having only the first factor in its determinant equal to zero and \(B\) having only the second factor in its determinant equal to zero. In other words, write \[A=\begin{bmatrix} a & b & c\\ b & a & b\\ c & b & a \end{bmatrix}\] \[B=\begin{bmatrix} x & y & z\\ y & x & y\\ z & y & x \end{bmatrix}\] with \(a-c=0\neq a^2+ac-2b^2\), and \(x^2+xz-2y^2=0\neq x-z\). In that case, \[A+B=\begin{bmatrix} p & q & r\\ q & p & q\\ r & q & p \end{bmatrix},\] where \(p=a+x\), \(q=b+y\), and \(r=c+z\).
Using our conditions on \(A\) and \(B\), it then follows that \[p-r=(a-c)+(x-z)=x-z\neq0,\] \[p^2+pr-2q^2=(a+x)^2+(a+x)(c+z)-2(b+y)^2=(a^2+ac-2b^2)+(x^2+xz-2y^2)+2ax+az+cx-4by=a^2+ac-2b^2+2ax+az+cx-4by=\xi,\] where the middle term \(x^2+xz-2y^2\) vanished by our condition on \(B\). So as long as we choose \(a,b,c,x,y,z\) such that \(\xi\) is nonzero, both factors of \(\det{(A+B)}=(p-r)(p^2+pr-2q^2)\) are nonzero, and the determinant of \(A+B\) is guaranteed to be nonzero! This is easy to accomplish. Setting \(a=c=1\) and \(b=2\) satisfies our conditions for the matrix \(A\). Setting \(x=\sqrt{2}\), \(y=\sqrt{3}\), and \(z=2\sqrt{2}\) satisfies our conditions for matrix \(B\), and some quick algebra verifies that \(\xi\neq0\). The result follows!

Definitely a fun problem. I reckon there is a simpler solution. My apartment-mate found a counterexample against the closure of \(H\) under addition by trying random things in MATLAB. I haven't yet looked into why that solution works, or its corresponding generalization. Stick around for updates!
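If you want to double-check the counterexample numerically, here is a minimal sketch in Python (assuming NumPy is available; the tolerance is arbitrary):

```python
import numpy as np

A = np.array([[1, 2, 1],
              [2, 1, 2],
              [1, 2, 1]], dtype=float)

s2, s3 = np.sqrt(2), np.sqrt(3)
B = np.array([[s2,   s3, 2*s2],
              [s3,   s2,   s3],
              [2*s2, s3,   s2]])

def in_H(M):
    # membership test for H: det(M + M^T) = 0, up to floating-point error
    return abs(np.linalg.det(M + M.T)) < 1e-9

print(in_H(A), in_H(B), in_H(A + B))  # expect: True True False
```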
Here is an interesting problem that has surprisingly far-reaching consequences.
Consider a permutation \(\sigma\colon S\to S\) of some list, say \(S=\{1,2,\dots,n\}\), which rearranges it into \(\{k_1,k_2,\dots,k_n\}\), where \(k_i\in S\). Define a transposition to be a permutation that only swaps two elements of the list it acts on. It is clear that every permutation can be decomposed into transpositions. Seriously. It is intuitively obvious. But if we wanted to be a bit more rigorous, observe that we can give a state to every element \(j_i\in S\). Let \(j_i\) be correct if \(j_i=\sigma(j_i)\), that is, \(j_i\) is a fixed point under the permutation. Let \(j_i\) be incorrect otherwise. For every incorrect element in \(S\), there must exist another incorrect element in \(S\) such that transposing the two makes the former element correct (namely, the element currently occupying the spot where the former element needs to go). So we can iterate: set aside the correct elements, pick an incorrect element, transpose it with the element sitting in its correct spot, and repeat. Each transposition makes at least one more element correct, so since \(S\) has finite cardinality, this algorithm terminates, and in fact leaves every element correct. So any permutation can be decomposed into transpositions.

But this does not say anything about the decomposition itself. In fact, there are infinitely many sequences of transpositions that, when composed, yield a given permutation. This is trivial, since we don't have to necessarily stop after the aforementioned algorithm terminates and everything is in the correct spot. You could perform another transposition to "mess things up" and then reverse it, and then continue that an arbitrary number of times. However, a powerful invariant hides behind all of these decompositions. For a given permutation, the number of transpositions in any decomposition of that permutation always has the same parity. This intuitively makes a lot of sense, though it's not very clear how to prove it. I suppose it would be possible to do so by performing a case analysis with my states (by considering transpositions of the form "incorrect/correct to correct/incorrect", "correct/correct to incorrect/incorrect", etc.). But this is a bit ugly. There is a much more elegant way, though it is arguably more complicated if you are required to prove the preliminaries from scratch as well.

This parity of a permutation is called its signature, and we are concerned with its existence and uniqueness. The signature of a permutation is \(1\) when the permutation can only be decomposed into an even number of transpositions and \(-1\) otherwise. But there is actually an equivalent definition of the signature with which it is much easier to probe the questions of existence and uniqueness. Let \(P_n\) denote the set of all permutations that act on lists with cardinality \(n\). Then the signature is defined as the function \[\varsigma\colon P_n\to\{-1,1\}\] such that \(\varsigma(\sigma_1\circ\sigma_2)=\varsigma(\sigma_1)\varsigma(\sigma_2)\) for all \(\sigma_1,\sigma_2\in P_n\) and \(\varsigma(\tau)=-1\) if \(\tau\) is a transposition. The signature is usually denoted by "sgn", but due to MathJax limitations, I will denote it with \(\varsigma\). We need to show that this definition is equivalent to our parity definition, and also that the signature exists and is unique (using this definition). Showing the equivalence of the definitions is trivial.
Since the signature of the composition is the product of the signatures, and the signature of a transposition is \(-1\), we must have that the signature is \(\varsigma(\sigma)=(-1)^{T(\sigma)}\), where \(T(\sigma)\) gives the number of transpositions in any one of the decompositions of \(\sigma\). Now we must prove that this function exists and is unique. Proving existence is the hard part. For any permutation \(\sigma\in P_n\), define the permutation matrix \(M_{\sigma}\) as the matrix that maps \[M_{\sigma}\vec{e}_i=\vec{e}_{\sigma(i)},\] where \(\vec{e}_i\) is just a standard basis vector in \(\mathbb{R}^n\). It follows that \(M_{\sigma}\) is some \(n\times n\) matrix with only 0's and 1's, which I won't compute explicitly (though it isn't hard to). It is easy to see that for \(\sigma_1,\sigma_2\in P_n\), we have \(M_{\sigma_1\circ\sigma_2}=M_{\sigma_1}M_{\sigma_2}\).

Here comes our leap of faith. Define \[\varsigma(\sigma)=\det{M_{\sigma}}.\] This fits our first property, since the determinant of a product is the product of the determinants! Furthermore, the second property is satisfied as well, since one can easily show that for some transposition \(\tau\), the matrix \(M_{\tau}\) is found by swapping two columns of the identity matrix (and so the determinant is negated, due to the antisymmetry of the determinant – this argument is sometimes phrased in terms of the determinant of an elementary matrix representing the transposition of two columns). Hence, the function \(\varsigma\) exists. Uniqueness then follows from our earlier observation: any function satisfying the two defining properties must send \(\sigma\) to \((-1)^{T(\sigma)}\), so any two such functions agree everywhere. By equivalence of definitions, this means that the number of transpositions in every decomposition of \(\sigma\) into transpositions always has the same parity! The signature ends up being quite important for various definitions in the theory of forms in vector calculus, such as the wedge product. All of this from just viewing a permutation as a series of transpositions.
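To make this concrete, here is a small Python sketch (the helper names and the example permutation are mine, for illustration) that decomposes a permutation into transpositions with the greedy algorithm described above and compares the parity of that decomposition with \(\det{M_{\sigma}}\), computed numerically with NumPy:

```python
import numpy as np

def transposition_count(perm):
    # Greedily sort the permuted list back to the identity, counting swaps.
    p = list(perm)
    count = 0
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]  # one transposition
            count += 1
    return count

def permutation_matrix(perm):
    # M_sigma sends e_i to e_{sigma(i)}, so column i has its 1 in row sigma(i).
    n = len(perm)
    M = np.zeros((n, n))
    for i, image in enumerate(perm):
        M[image, i] = 1
    return M

sigma = [2, 0, 1, 4, 3]  # an example permutation of {0, 1, 2, 3, 4}
print((-1) ** transposition_count(sigma))               # parity definition: -1
print(round(np.linalg.det(permutation_matrix(sigma))))  # determinant definition: -1
```

Consider all of the residues modulo 20. Given any one of those residues, can you always find a multiple of 7 that leaves that residue modulo 20?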
The solution to this is fairly simple. In fact, the simple solution did not occur to me until I proved a generalization of the statement. Regardless, I will show that solution here first for demonstration purposes. In the case of 7, it is immediately apparent that \(21\equiv1\pmod{20}\). So if we desire to produce the residue \(k\), just take \(3k\cdot 7\). There it is. Simple, right? In fact, not only is it simple, but it's a bit cheap. It doesn't really tell us what's going on under the hood. We just conveniently found a multiple of 7 that was congruent to 1 modulo 20.

So let's walk away from the specificity of the numbers 7 and 20 and instead pose the following problem. Let \(m,n\in\mathbb{N}\) be coprime, with \(m<n\). Given any residue modulo \(n\), can one always find a multiple of \(m\) that leaves that residue modulo \(n\)? Obviously, if we can show that there exists a multiple of \(m\) congruent to 1 modulo \(n\), we're done. But it wasn't obvious to me how to carry this idea, which worked very quickly for 7 and 20, over to general \(m\) and \(n\). And anyway, the idea with 21 only came to me after I had already solved this problem. Instead, the first thing I saw here is that we only need to work with multiples of \(m\) up to \(mn\). This is because \(m(n+r)\equiv mr\pmod{n}\). This is actually the first solution I presented to the original problem with 7 and 20: just check all of the multiples of 7 up to 140. Of course this isn't as efficient as the observation of \(3k\cdot7\).

So let \(A=\{m,2m,\ldots,mn\}\). The first thing I observed was that each element in the first half of this set cancels, modulo \(n\), with its partner reflected across the middle into the last half. For instance, \(m+m(n-1)=mn\equiv0\pmod{n}\). But this still didn't answer the question, because there was no indication that, looking through the pairings, every residue was covered. This made me wonder how much room I had for overlapping residues. It turns out, conveniently, that the answer to that question is "none". Observe that the set of residues modulo \(n\) is \(B=\{0,1,\ldots,n-1\}\). More importantly, \(|A|=|B|=n\). Now we're going somewhere! By the pigeonhole principle, the answer to our question is "yes" if and only if each multiple in \(A\) gives a unique residue modulo \(n\). Obviously \(mn\) takes care of the residue 0, and since \(m\) and \(n\) are coprime, no other element of \(A\) leaves the residue 0. So now, excluding \(mn\), let us take two distinct elements of \(A\), say \(km\) and \(jm\). Suppose, to the contrary, that they leave the same residue modulo \(n\). Then \(km\equiv jm\pmod{n}\). Since \(\gcd{(m,n)}=1\), we can divide the congruence by \(m\) to obtain \(k\equiv j\pmod{n}\). But this is a contradiction, because \(k\neq j\) and both lie strictly between 0 and \(n\). So the answer to our question is "yes".
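Here is a quick brute-force check of the general statement in Python (a minimal sketch; the particular values of \(m\) and \(n\) are arbitrary): when \(\gcd(m,n)=1\), the multiples \(m,2m,\ldots,nm\) hit every residue modulo \(n\).

```python
from math import gcd

def residues_covered(m, n):
    # residues left by m, 2m, ..., n*m modulo n
    return {k * m % n for k in range(1, n + 1)}

print(residues_covered(7, 20) == set(range(20)))   # True: 7 and 20 are coprime
print(residues_covered(6, 20) == set(range(20)))   # False: gcd(6, 20) = 2
print(all(residues_covered(m, 20) == set(range(20))
          for m in range(1, 20) if gcd(m, 20) == 1))  # True for every coprime m
```

The summer has arrived and I'm back in Florida! Let us begin with a cutie.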
Problem: Consider the number \[k(n)=1\underbrace{4...4}_{\text{$n$ digit $4$'s}}.\] For what values of \(n\) is \(k(n)\) a perfect square?

Solution: \(k(0)\) is obviously a perfect square and \(k(1)\) is obviously not. For \(n>1\), we have that \(k(n)\) is even, since it ends with the digit 4. Any even square is divisible by 4, and indeed \(k(n)\) ends in 44, so it is divisible by 4; since 4 is itself a square, \(k(n)\) is a square precisely when \(\frac{k(n)}{4}\) is. By long division, we find that for \(n>1\), \[\frac{k(n)}{4}=36\underbrace{1...1}_{\text{$n-2$ digit $1$'s}}.\] So it suffices to find the \(n\) for which this number is a square. Observe that squares are congruent to either 0 or 1 modulo 4 (if you've stuck around here for some time, you'd notice that we LOVE taking squares modulo 4). But \(\frac{k(n)}{4}\) ends in the digits 11 whenever \(n>3\), so it is congruent to 3 modulo 4. So we only need to check \(n=2\) and \(n=3\). Indeed, \(144=12^2\) and \(1444=38^2\). So \(k(n)\) is square precisely when \(n\in\{0,2,3\}\). \(\square\)

I have a few ideas about what I plan on doing next here. I want to talk about Lebesgue integration. The theory of Riemann integration that we thoroughly developed in MATH 31CH is a powerful conceptual tool, but it has several weaknesses. For example, the Riemann integral is not well-defined over \(\mathbb{R}^n\) for functions with unbounded support. The Lebesgue integral allows us to integrate functions with unbounded support. This is just one of the advantages it gives. The Lebesgue integral is also considerably more well-behaved under limits, and we'll see that this gives rise to the dominated convergence theorem (DCT), regarding the interchangeability of integrals and limits (which is something that many of us take for granted *cough*MAO heuristics*cough*). This can be extended even further to talk about the interchangeability of differentiation and integration when they are composed on a function. Yes, that's right. The theory of Lebesgue integration is what forms the rigorous underpinnings of Feynman's trick! I will be going through these topics soon, in a main page post. I like this idea, because it sort of pulls together a bunch of stuff that I've seen or worked on for the past few weeks (my final project in MATH 31CH was on Lebesgue integration and Feynman's trick, which a homework problem in physics happened to use, and I also recently stumbled into a problem where one needs to apply DCT). I don't expect that I'll be proving certain things, such as DCT itself and the existence of the Lebesgue integral, because the proofs are quite technical, and honestly you'd be better off reading a textbook if you're looking for those. Instead, I will try my best to show the motivation behind the Lebesgue integral and its utility in approaching problems that are pretty much intractable with Riemann's theory. I should have more time to work on this now. After all, UCSD decided to cancel one of my summer classes (MATH 120A)....

My first Putnam is over and I solved two of the problems. They are probably not the best written solutions. I am usually quite elaborate when I am able to type solutions, but the problem is that the Putnam is timed. And handwritten. Here is one of the problems that I solved.
2019 A1: Determine all possible values of the expression \[A^3+B^3+C^3-3ABC\] where \(A\), \(B\), and \(C\) are nonnegative integers. Solution: My first thoughts:
I messed around with a few representations of the expression involving \((A+B+C)^3\). But then I asked myself: what is the point? What am I trying to accomplish? I needed to find the types of numbers that the expression could be. How could I even characterize that? Clearly, they must be integers, and since \(A^3+B^3+C^3\geq3ABC\) by AM-GM, they must be nonnegative integers. What characteristics were they expecting me to use to describe the range? Prime or composite? Number of factors? Divisibility? Divisibility seemed to be the natural answer, and I prepared myself for some modular arithmetic. But now what? Time to generate numbers, I suppose. I won't pain you with all the computations, but what was very insightful was the manner in which I generated the triples. I fixed \(A\), and then let \(B\) and \(C\) run through the nonnegative integers less than or equal to \(A\). Once I ran through all unique combinations (permutations don't matter because the expression is symmetric in all three variables), I incremented \(A\) and repeated the process. I noticed that I was sometimes hitting the same number twice, but it seemed that letting \(B\) and \(C\) be numbers that were either equal to \(A\) or one less than \(A\) always generated a new number. Let's try this out. Suppose WLOG that \(A=B=n\) and \(C=n-1\). Then \[\begin{split} A^3+B^3+C^3-3ABC&=2n^3+(n-1)^3-3n^2(n-1)\\ &=2n^3+n^3-3n^2+3n-1-3n^3+3n^2\\ &=3n-1 \end{split}\] Aha! So we can construct every positive integer congruent to 2 modulo 3. Likewise, we find that the expression yields \(3n-2\) when we set \(A=n\) and \(B=C=n-1\), so we can also construct every positive integer congruent to 1 modulo 3. Now we know that every positive integer that is not a multiple of 3 is in the range.

How about multiples of 3? Strangely, I was not able to construct 3 or 6 with the triples that I tried, and it did not seem like higher triples would generate these numbers, as the repeat hits I was getting occurred relatively quickly after the first triple that generated a particular number. I was, however, able to construct 9 and 18 with any permutations of the triples \((0,1,2)\) and \((1,2,3)\), respectively. This strongly suggested that I try out a permutation of \((n,n-1,n-2)\). Doing the algebra, I indeed found that the expression simplified to \(9(n-1)\). So now I know that I can construct any positive multiple of 9. At this point, I conjectured that I cannot construct a multiple of 3 that is not a multiple of 9. Another way to frame this: if there is a multiple of 3 in the range, then it must also be a multiple of 9. If the expression is a multiple of 3, then since obviously \(3ABC\equiv0\pmod{3}\), we need \(A^3+B^3+C^3\equiv0\pmod{3}\). I had also observed (a bit earlier, if I remember correctly) that \(n^3\equiv n\pmod{3}\) in all three possible cases. So we simply require \(A+B+C\equiv0\pmod{3}\). There are precisely four cases where this occurs.

Case 1: \((A,B,C)\equiv (0,0,0)\pmod{3}\). In this case, all three numbers have at least one factor of 3, so their cubes have at least three factors of 3. Furthermore, the product \(3ABC\) must have at least four factors of 3. Hence, we have in this case \[A^3+B^3+C^3-3ABC\equiv0\pmod{9}.\]

Case 2: \((A,B,C)\equiv (1,1,1)\pmod{3}\). We can write \(A=3x+1\), \(B=3y+1\), and \(C=3z+1\) for \(x,y,z\in\mathbb{Z}\). Plugging these into the expression, we find that the expansion consists of coefficients that are all divisible by 9.
Hence, once again we obtain \[A^3+B^3+C^3-3ABC\equiv0\pmod{9}.\]

Case 3: \((A,B,C)\equiv (2,2,2)\pmod{3}\). Similar to the previous case, we bash out the expression, but setting \(A=3x+2\), \(B=3y+2\), and \(C=3z+2\) this time. Once again, the coefficients end up being divisible by 9. Hence in this case, \[A^3+B^3+C^3-3ABC\equiv0\pmod{9}.\]

Case 4: \((A,B,C)\equiv \textrm{some permutation of }(0,1,2)\pmod{3}\). In this case, WLOG let the permutation of the ordered triple above be \((0,1,2)\). Then, by our arguments in the first case, both \(A^3\) and \(3ABC\) are divisible by 9. For the remaining terms, we have \[B^3+C^3=(B+C)(B^2+C^2-BC),\] and modulo 3 the two factors are congruent to \(1+2\equiv0\) and \(1+4-2\equiv0\), respectively. So each factor is divisible by 3, and the sum \(B^3+C^3\) is divisible by 9. Hence we have in this case \[A^3+B^3+C^3-3ABC\equiv0\pmod{9}.\]

So we can see that we can construct 0 (duh), every positive integer that is not a multiple of 3, every positive integer that is a multiple of 9, and that whenever we construct a multiple of 3 it must also be a multiple of 9. So the range is all nonnegative integers except for the multiples of 3 that are not multiples of 9. \(\square\) Phew!
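If you want to sanity-check the answer computationally, here is a minimal Python sketch (the search bound is arbitrary) that generates values of the expression over small triples and compares them against the claimed range.

```python
N = 30  # arbitrary search bound

# every value of A^3 + B^3 + C^3 - 3ABC with 0 <= A, B, C < N
values = {a**3 + b**3 + c**3 - 3*a*b*c
          for a in range(N) for b in range(N) for c in range(N)}

# claimed range (truncated): nonnegative integers that are either
# not multiples of 3, or multiples of 9
claimed = {k for k in range(2 * N) if k % 3 != 0 or k % 9 == 0}

print(claimed <= values)                               # True
print(any(v % 3 == 0 and v % 9 != 0 for v in values))  # False
```

1994 Wisconsin Mathematics Science & Engineering Talent Search PSET 3 Problem 3: Find all positive integer solutions of the equation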
\[x^3-y^3=xy+61.\]

Solution: My first instinct is that the LHS factors: \[(x-y)(x^2+xy+y^2)=xy+61.\] I see that there is an \(xy\) term on both sides of the equation. Perhaps I should distribute the \(x-y\) on the LHS and then subtract \(xy\) from both sides. Then we obtain \[(x-y)x^2+(x-y-1)xy+(x-y)y^2=61.\] This doesn't do me any good on its own, but noticing that I can factor out \(x-y\), I obtain \[(x-y)\left(x^2+\left(1-\frac{1}{x-y}\right)xy+y^2\right)=61.\] Aha! Now we can equate the two factors on the left to the integer factors of \(61\). Luckily, \(61\) is prime, so there are only four cases to test. The only one that ends up working is \(x-y=1\), which yields \[x^2+(x-1)^2=61\Rightarrow 2x^2-2x-60=0\Rightarrow x^2-x-30=(x-6)(x+5)=0.\] So our two integer solutions are \((6,5)\) and \((-5,-6)\). The only positive integer solution is thus \(\boxed{(6,5)}\).

1994 Wisconsin Mathematics Science & Engineering Talent Search PSET 3 Problem 4: Without using a calculator or computer, determine which of the two numbers \(31^{11}\) or \(17^{14}\) is larger.

Solution 1: First, I observed that \(31=2^5-1\) and \(17=2^4+1\). So the ratio between the two given numbers is \[\begin{split} \frac{31^{11}}{17^{14}}&=\frac{(2^5-1)^{11}}{(2^4+1)^{14}}\\ &=\frac{(2^5-1)^{11}}{(2^4+1)^{14}}\cdot\frac{2^{-56}}{2^{-56}}\\ &=\frac{1}{2}\cdot\frac{(1-2^{-5})^{11}}{(1+2^{-4})^{14}} \end{split}\] Since \(\frac{(1-2^{-5})^{11}}{(1+2^{-4})^{14}}<\frac{(1-2^{-5})^{11}}{1}<\frac{1}{1}\), we must have \(17^{14}>31^{11}\).

Solution 2: This time, we proceed with the difference instead of the ratio. \[\begin{split} 17^{14}-31^{11}&=(2^4+1)^{14}-(2^5-1)^{11}\\ &=\sum_{k=0}^{14}{\binom{14}{k}2^{56-4k}}-\sum_{j=0}^{11}{(-1)^j\binom{11}{j}2^{55-5j}}\\ &=\sum_{k=0}^{11}{\binom{14}{k}2^{56-4k}+(-1)^{k+1}\binom{11}{k}2^{55-5k}}+\sum_{j=12}^{14}{\binom{14}{j}2^{56-4j}} \end{split}\] The second summation in the last expression above is positive since each term is positive. At this point, I wonder if I could decompose \(\binom{14}{k}\) into a linear combination of terms of the form \(\binom{11}{r}\). I realize that this is actually possible with Pascal's identity: \[\begin{split} \binom{14}{k}&=\binom{13}{k}+\binom{13}{k-1}\\ &=\binom{12}{k}+2\binom{12}{k-1}+\binom{12}{k-2}\\ &=\binom{11}{k}+3\binom{11}{k-1}+3\binom{11}{k-2}+\binom{11}{k-3} \end{split}\] Curiously, the coefficients themselves are binomial coefficients! The proof of that follows pretty naturally. I bet there is a counting argument for it too. Anyway, our first summation can now be written as: \[\begin{split} \sum_{k=0}^{11}{\binom{14}{k}2^{56-4k}+(-1)^{k+1}\binom{11}{k}2^{55-5k}}&=\sum_{k=0}^{11}{\left[\binom{11}{k}+3\binom{11}{k-1}+3\binom{11}{k-2}+\binom{11}{k-3}\right]2^{56-4k}+(-1)^{k+1}\binom{11}{k}2^{55-5k}}\\ &=\sum_{k=0}^{11}{\binom{11}{k}\left(2^{56-4k}+(-1)^{k+1}2^{55-5k}\right)+\left[3\binom{11}{k-1}+3\binom{11}{k-2}+\binom{11}{k-3}\right]2^{56-4k}} \end{split}\] Observe that, due to the definition of generalized binomial coefficients, \(\binom{11}{r}=0\) for negative integers \(r\). Furthermore, we observe that to have \(2^{56-4k}>2^{55-5k}\), we must have \(56-4k>55-5k\), or \(k>-1\). But this is obviously always true for our index \(k\), which starts at \(0\). Hence, every term in the summations is positive. So we must have \(\boxed{17^{14}>31^{11}}\).
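Both answers are easy to confirm with exact integer arithmetic in Python (a minimal sketch; the search bound for the Diophantine equation is arbitrary).

```python
# Problem 3: search small positive integers for x^3 - y^3 = xy + 61
solutions = [(x, y) for x in range(1, 200) for y in range(1, 200)
             if x**3 - y**3 == x*y + 61]
print(solutions)         # [(6, 5)]

# Problem 4: Python integers are exact, so this comparison is rigorous
print(17**14 > 31**11)   # True
```

Here is a fun one.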
1994 Wisconsin Mathematics Science & Engineering Talent Search PSET 3 Problem 2: Suppose that each of the three main diagonals \(\overline{AD}\), \(\overline{BE}\), and \(\overline{CF}\) divides the hexagon \(ABCDEF\) into two regions of equal area. Prove that the three diagonals meet at a common point.

Solution: Interesting. First, I drew the "correct" configuration, where the diagonals concur at a point \(P\) and each splits the hexagon into regions of equal area. In this case, six triangles are formed. I let \(T_1=\triangle APB\), and likewise labelled the rest of the triangles, proceeding clockwise with the labeling. Writing our three area conditions in terms of the areas of the \(T_i\)'s, I obtained the system of equations \[\left\{ \begin{array}{ll} [T_1]+[T_2]+[T_3]=[T_4]+[T_5]+[T_6]\\ [T_2]+[T_3]+[T_4]=[T_1]+[T_5]+[T_6]\\ [T_1]+[T_2]+[T_6]=[T_3]+[T_4]+[T_5]. \end{array}\right.\] Wow. There is clearly a massive amount of symmetry in these equations. My instinct is to add them all together. \[3[T_2]+2[T_1]+2[T_3]+[T_4]+[T_6]=3[T_5]+2[T_6]+2[T_4]+[T_1]+[T_3].\] This rearranges to \[2[T_2]+([T_1]+[T_2]+[T_3])=2[T_5]+([T_4]+[T_5]+[T_6]).\] Aha! The sums in the parentheses are equal by the first equation! So \([T_2]=[T_5]\). By symmetry, we must also have \([T_1]=[T_4]\) and \([T_3]=[T_6]\). These results can be obtained explicitly by flipping an equation in our original system before adding all of the equations together.

Conveniently, the triangles with equal areas happen to have equal angles at \(P\) (vertical angles), since they are formed by the same two diagonals in each instance. Since the area of a triangle is given by \([ABC]=\frac{1}{2}ab\sin{C}\), and the enclosed angles are the same between the pairs of triangles with equal area, the product of the lengths of the enclosing sides must also be the same. So I figured, let \(AP=a\), \(BP=b\), etc. Then, we must have \[\left\{ \begin{array}{ll} af=cd\\ de=ab\\ bc=ef. \end{array}\right.\]

At this point, I couldn't really see anything more insightful about the correct configuration, and I wondered how to even prove concurrency. Perhaps I could work with a configuration in which the diagonals are not concurrent and show that it cannot satisfy the area conditions. So then, suppose that the three diagonals are not concurrent. Then there exists a seventh triangle, \(T_7=\triangle RST\), such that \(R=\overline{AD}\cap\overline{BE}\), \(S=\overline{AD}\cap\overline{CF}\), and \(T=\overline{BE}\cap\overline{CF}\). In this case, let \(T_1=\triangle ASF\), \(T_2=\triangle ARB\), \(T_3=\triangle BTC\), \(T_4=\triangle CSD\), \(T_5=\triangle DRE\), and \(T_6=\triangle ETF\). Incredibly, when we write the area conditions out in this configuration, \([T_7]\) vanishes entirely! The resulting system is identical to what we obtained in the correct configuration, and once again we obtain \([T_1]=[T_4]\), \([T_2]=[T_5]\), and \([T_3]=[T_6]\). The manipulations required to obtain these equalities are slightly trickier, however, as we have to strategically subtract \([T_7]\) from both sides of our simplified sum of three equations. Anyway, we let \(RS=\alpha\), \(ST=\beta\), and \(RT=\gamma\). We define the length from a vertex \(V\) of the hexagon to a vertex of \(T_7\) to be the lowercase of \(V\). Then, turning our area conditions into length conditions using the product of side lengths, we obtain \[\left\{ \begin{array}{ll} af=(c+\beta)(d+\alpha)\\ de=(a+\alpha)(b+\gamma)\\ bc=(e+\gamma)(f+\beta). \end{array}\right.\] Again, I see massive symmetry.
This time, my instinct is to multiply the three equations together: \[abcdef=(a+\alpha)(b+\gamma)(c+\beta)(d+\alpha)(e+\gamma)(f+\beta).\] Aha! The product on the RHS is \(\geq abcdef\), since \(\alpha\), \(\beta\), and \(\gamma\) are lengths and thus nonnegative, and since the lengths \(a,\ldots,f\) are positive, equality is achieved only when \(\alpha=\beta=\gamma=0\). In other words, \(T_7\) must be a degenerate triangle whose vertices, the pairwise intersections of the main diagonals, coincide at the same point. \(\square\)
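The symmetry manipulations above are also easy to hand off to a computer algebra system. Here is a minimal sketch using SymPy (assuming it is installed) that solves the three area conditions and recovers the pairwise equalities:

```python
import sympy as sp

T1, T2, T3, T4, T5, T6 = sp.symbols('T1:7', positive=True)

# the three "each diagonal halves the area" conditions
eqs = [sp.Eq(T1 + T2 + T3, T4 + T5 + T6),
       sp.Eq(T2 + T3 + T4, T1 + T5 + T6),
       sp.Eq(T1 + T2 + T6, T3 + T4 + T5)]

print(sp.solve(eqs, [T1, T2, T3]))   # {T1: T4, T2: T5, T3: T6}
```

My linear class is weird.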
Consider two continuous functions \(f,g\colon [0,1]\to\mathbb{R}\). Prove that \[\left|\int_{0}^{1}{f(x)g(x)\textrm{ d}x}\right|\leq\sqrt{\int_{0}^{1}{f(x)^2\textrm{ d}x}}\sqrt{\int_{0}^{1}{g(x)^2\textrm{ d}x}}.\] This problem is pretty trivial. Simply apply Cauchy-Schwarz to the Riemann sums \(\sum_i f(x_i)g(x_i)\Delta x\) approximating the LHS and take the limit as \(n\rightarrow\infty\). This was a starred (challenge) problem in my linear homework following the week where we discussed Cauchy-Schwarz. Strange.

Anyway, I've encountered a lot of cute problems lately. Among these is the following. Problem (HMMT): 10 students take a test. Each problem is solved by precisely 7 people and nine of the students solve exactly 4 of the problems. How many problems does the 10th student solve?

Solution: Let the number of problems on the test be \(P\). Consider a set of \(P\) bins, with each corresponding to a particular problem, and a set of 10 bins, with each corresponding to a particular student. For every student that solves a certain problem, we place a stone in that problem's bin, and for every problem that a student solves, we place a stone in that student's bin. Note that if we place a stone in a problem's bin, we must also place a stone in a student's bin. That is, there exists a bijection between the stones in the problem bins and the stones in the student bins. The total number of stones in the problem bins is given to be \(7P\) and the total number of stones in the student bins is \(x+4\cdot9\), where \(x\) is the number of problems that the 10th person solves. Hence, \[x=7P-36.\] Since \(x\geq0\), we must have \(P\geq\frac{36}{7}\), and since \(P\) is an integer, \(P\geq6\). Furthermore, since the 10th person cannot solve more problems than are on the test, we have \(x=7P-36\leq P\), which means \(P\leq6\). Therefore, \(P=6\) and \(x=\boxed{6}\).

Here is a cute problem from Sweden. It is simply stated. Solve
\[(\sqrt{2}-1)^x+(\sqrt{2}+1)^x=6.\] Before reading on, I strongly encourage you to give this a shot yourself. This is one of those problems that make you slap your forehead in disgust when you see the solution because it's so dumb simple.

Ok. Here's my solution. We multiply through by, say, \((\sqrt{2}+1)^x\). Since \((\sqrt{2}-1)(\sqrt{2}+1)=1\), this gives us \[1^x+(\sqrt{2}+1)^{2x}=6(\sqrt{2}+1)^x.\] But \(1^x=1\) for all \(x\in\mathbb{R}\), so \[1+(\sqrt{2}+1)^{2x}=6(\sqrt{2}+1)^x.\] Aha! Now this is quadratic in \((\sqrt{2}+1)^x\). Letting \(y=(\sqrt{2}+1)^x\), we have \[y^2-6y+1=0,\] from which we find the roots to be \[y=3\pm2\sqrt{2}.\] Hence, \[(1+\sqrt{2})^x=3\pm2\sqrt{2},\] and since \(3+2\sqrt{2}=(1+\sqrt{2})^2\) and \(3-2\sqrt{2}=(1+\sqrt{2})^{-2}\), we have our two solutions \[\boxed{x=\log_{1+\sqrt{2}}{(3\pm2\sqrt{2})}=\pm2}\] Haha...
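A quick numerical check in Python, just to be safe:

```python
from math import sqrt

for x in (2, -2):
    print(x, (sqrt(2) - 1)**x + (sqrt(2) + 1)**x)  # both print 6.0 (up to rounding)
```

Suppose every point in \(\mathbb{R}^2\) is colored red, green, or blue. Let us prove that there exists a rectangle in this space whose vertices are all the same color.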
The standard solution is to consider a \(4\times82\) grid. By Pigeonhole, in each column (four cells, three colors) at least two cells must share a color. Since there are \(3^4=81\) possible colorings of a column, once again by Pigeonhole, two of the 82 columns must be identical. The same-colored pair of cells in one of those columns lines up with the identical pair in the other column, so these two columns contain a rectangle whose vertices are all the same color.

However, we may strengthen the construction to a \(4\times19\) grid as follows. Observe that there are \(\binom{4}{2}=6\) ways to choose which two cells of a column are guaranteed to share a color and \(3\) ways to color that pair, giving only \(6\cdot3=18\) possible (row pair, color) combinations. Hence, by Pigeonhole, a \(4\times19\) grid must contain two columns whose same-colored pairs occupy the same two rows with the same color, and those four cells form a monochromatic rectangle. Neat.
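The pigeonhole argument is constructive, and it translates directly into code. Here is a minimal Python sketch (the function name and the column-signature bookkeeping are my own) that, given any 3-coloring of a \(4\times19\) grid, locates a monochromatic rectangle:

```python
import random
from itertools import combinations

def find_mono_rectangle(grid):
    # grid[r][c] is the color of cell (row r, column c); 4 rows, any number of columns
    seen = {}  # (row pair, color) -> first column where that signature appeared
    for c in range(len(grid[0])):
        for r1, r2 in combinations(range(4), 2):
            if grid[r1][c] == grid[r2][c]:
                key = (r1, r2, grid[r1][c])
                if key in seen:
                    return (r1, r2, seen[key], c)  # the rows and columns of the rectangle
                seen[key] = c
    return None

# any 3-coloring of a 4 x 19 grid must contain a monochromatic rectangle
grid = [[random.choice('RGB') for _ in range(19)] for _ in range(4)]
print(find_mono_rectangle(grid))
```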