I have finished my undergraduate degree at UCSD. I will begin my PhD at UNC Chapel Hill next month. My interests are roughly in geometry, topology, and mathematical physics. My last year at UCSD was crucial in shaping these interests. At UNC, I am tentatively planning on working with Justin Sawon. Overall, I am glad that I was able to learn math for the last four years at UCSD and I look forward to continuing the journey at UNC.
As for the rest of this summer, I plan on reviewing some of the stuff that I've learned in the past few years so that I'm relatively prepared for some of the comprehensive exams. Time permitting, I will write up some important results here.
The summer has progressed quite a bit. My REU at UC Santa Barbara has ended. I still cannot say much about the work that I did (I will have to wait until a preprint is up on arXiv before I can really talk about technical details). However, I can say that, broadly, the research was in Lie theory. I have presented the work at the Southern California Math REU Conference and at the YMC. I also plan to apply to a few other conferences.
Overall, the experience was great. I learned a lot about Lie theory and how math research is conducted in general. I found my lack of experience in algebra to be the main bottleneck. Oftentimes, I was left gawking at some basic algebraic facts (such as the isomorphism theorems, universal properties, etc.). This was very reminiscent of the struggles I faced when studying algebraic topology this past spring quarter. Not only was the subject conceptually very difficult, but I was often stuck on the algebra. My deficiencies in algebra should be resolved this upcoming academic year.

I have decided to drop my physics major. Superficially, this may seem like pretty major news, but honestly, not much has changed. Since early last year, I had been intending to pursue a PhD program in mathematics anyway. This is not to say that I am no longer interested in physics—many of my mathematical interests come from mathematical physics. The main issue is that the physics major at UCSD is structured in a very annoying way. It forces me to take a very particular sequence of classes in a very particular order. All things considered, the sequence ends up being far too slow for my liking. Students are denied access to the serious physics classes until their third year—but that is the time that one should really be thinking about grad school! When I was constructing my schedule for the upcoming fall quarter, I realized that I had hit a crossroads. If I continued with physics, I would have to take various lab classes with first- and second-year students. Alternatively, if I focused entirely on math, I would be able to immediately take some graduate courses. I decided that the opportunity cost of staying in physics was just too great. My schedule for the fall quarter consists of abstract algebra, logic, graduate-level complex analysis, and graduate-level real analysis.
I am especially excited about logic: this is a class that I've really wanted to take at some point in my life, and now I am able to, since I am no longer a physics major! I will also be taking the Putnam seminar with Daniel Kane once again. As this will be my final year before graduate school applications, I do wish to perform well this year. After resting for the past two weeks, I have finally begun preparations for that. My schedule should keep me busy this fall. I do not think I will be doing much else other than what I have indicated, though I may do a little bit of reading on the side. Graduate school applications seem to be looming right around the corner, and that worries me a bit, but I have about a year left. Hopefully, I can make it count.

Yeah, I know. Long time no see. I've been busy. I do intend to add all the stuff that has been on my mind eventually. Anyway, a quick update.
Overall, good progress, I would say. Not much more I could ask for during a pandemic. As you can tell, there has been a bit of a hiatus. I haven't really been productive (beyond just working on my classes) in the meantime. But I'm back now! I suppose this will be a bit of a meta post, followed by some electromagnetic theory.
First of all, I'm working on a research project (along with principal investigator Dr. Thomas Siegert). The project pertains to the impact of small solar system bodies (SSSBs) on gamma ray data from INTEGRAL. I'll probably talk a bit more about this once I move farther into the project and have a better grasp of things. I'm also giving a lecture on linear algebra at the San Diego Math Circle (SDMC). I plan on drawing inspiration from some of the stuff that I've exposited on here. In particular, I really want to show the students that linear algebra is a lot more than just Euclidean vectors and solving simultaneous linear equations. That lecture will be happening this Saturday (11/14).

Some of the heuristic arguments in physics rub me the wrong way (due to the lack of mathematical rigor), but I cannot deny the importance of being able to reason quickly with heuristics in physics (and in general, using ad hoc methods that aren't entirely rigorous is, in my opinion, both natural and fine on the path to a more rigorous solution). Here are some examples.

The electric field and potential are related by \(\vec{E}=-\nabla\phi\). It follows that the divergence of the electric field is related to the Laplacian of the potential: \[\nabla\cdot\vec{E}=-\nabla^2\phi.\] Gauss' law tells us that \(\int_{S}{\vec{E}\cdot\mathrm{d}\vec{a}}=\frac{Q}{\epsilon_0}=\frac{1}{\epsilon_0}\int_{V}{\rho\ \mathrm{d}v}\), where \(\rho\) is the charge density. However, Gauss' theorem says that the integral of the flux over the surface is equal to the integral of the divergence over the volume.
Equating the integrands gives us \(\nabla\cdot\vec{E}=\frac{\rho}{\epsilon_0}\), hence \[\nabla^2\phi=-\frac{\rho}{\epsilon_0}.\] But when \(\rho=0\), as is the case in empty space where there are no charges, the potential function must satisfy Laplace's equation \[\nabla^2\phi=0.\] This equation is pretty important, and there is quite a bit of theory behind it, as it pops up not only here, but in other areas (such as heat transfer). I don't even remember much of the basic theory we learned about it in MATH 110, but there is one property of functions that satisfy Laplace's equation (called harmonic functions) that is relevant here.

Theorem: Let \(f\colon U\to\mathbb{R}\) be a harmonic function on the set \(U\subset\mathbb{R}^3\). Then, the average value of \(f\) over any sphere contained in \(U\) is equal to the value of \(f\) at the center of that sphere. An analogous result holds in two dimensions as well, if \(U\subset\mathbb{R}^2\) and we replace spheres with circles.

Proof (sketch): I know of two ways of thinking about this. Ironically, neither of them is entirely rigorous. One of them is something you'd expect to encounter in an applied math class (like MATH 110), and the other is a physical argument that applies just to electric potentials (provided in Purcell). The applied math-y way is to consider the Taylor expansions of \(f\) incremented and decremented by \(h\) in each variable, one at a time. Then, you add each pair (for instance, \(f(x+h,y,z)+f(x-h,y,z)\)). This eliminates the odd-order derivative terms (in particular, the first partials). Then, you can isolate the repeated second partial derivative terms (partial with respect to \(x\) and then \(x\), etc.) in each pair-sum. Plugging these expressions into Laplace's equation, one finds that, neglecting the higher-order terms, the value of \(f\) at the center is equal to the average of the values of the function at the points that are incremented and decremented by \(h\) away from the center, in each direction.
Heuristically, we can extend this to every direction in a full sphere around the center point.

Physically, we consider a point charge \(P\) with charge \(Q\) and a sphere \(\Omega\) that has a charge \(q\) distributed uniformly over it, such that the center of \(\Omega\) is a distance \(R\) from \(P\). We can compute how much work it takes to construct this configuration in two different ways. The first way is to note that, due to the shell theorems, outside of \(\Omega\) it is electromagnetically indistinguishable from a point charge with charge \(q\) at its center. Hence, the work required to assemble the configuration is the same as the work required to bring two point charges together (or more precisely, to bring \(P\) in from infinity). This is simply \(\frac{Qq}{4\pi\epsilon_0R}\). On the other hand, instead of bringing \(P\) in from infinity, we can bring \(\Omega\) in from infinity. Then, the work done must be equal to the average potential over \(\Omega\) (due to \(P\)) times the total charge of \(\Omega\). But clearly, the work done doesn't change from us thinking about it this way! So we still have a work of \(\frac{Qq}{4\pi\epsilon_0R}\). This means that the average potential over \(\Omega\) is simply \(\frac{Q}{4\pi\epsilon_0R}\), which is precisely the potential at the center of \(\Omega\) due to \(P\). So the electric potential satisfies the theorem in charge-free regions when there is only one point charge in the universe. Superposition gives the general result for arbitrary charge distributions in the universe.

A simple corollary of this result is that no harmonic function can have local extrema in the interior of its domain (extrema must occur on the boundary of the region over which the function is harmonic). This leads naturally into Earnshaw's theorem, which states that no electrostatic configuration provides a stable equilibrium for a point charge.
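As an aside, the mean value property is easy to check numerically. Here's a quick sketch in Python (the harmonic function, the center, the radius, and the grid sizes are all just arbitrary choices of mine): we average a known harmonic function over a sphere via quadrature and compare against its value at the center.

```python
import numpy as np

def f(x, y, z):
    # a harmonic function: its Laplacian is 2 + 2 - 4 = 0
    return x**2 + y**2 - 2*z**2

center = np.array([0.3, -0.2, 0.5])
r = 0.7

# Quadrature over the sphere using u = cos(theta): the area element is
# proportional to du dphi, so a uniform midpoint grid in (u, phi) gives
# equal-area cells, and the plain mean over grid points is the average.
u = (np.arange(1000) + 0.5) / 1000 * 2 - 1        # midpoints in [-1, 1]
phi = (np.arange(16) + 0.5) / 16 * 2 * np.pi      # midpoints in [0, 2*pi)
U, P = np.meshgrid(u, phi)
s = np.sqrt(1 - U**2)
pts = center[:, None, None] + r * np.array([s * np.cos(P), s * np.sin(P), U])
avg = f(pts[0], pts[1], pts[2]).mean()
print(avg, f(*center))  # both approximately -0.37
```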
To see this, observe that if there were a point in empty space providing a stable equilibrium, the potential there would have to be a local minimum, a contradiction. Alternatively, the electric field at such a point would have to have negative divergence. Gauss' law then says that a negative charge density must exist at that point, contradicting our assumption that we were considering empty space. There's quite a bit more to talk about, but I'll end it here for now. I've procrastinated quite a bit and I need to catch up on other stuff!

The summer has arrived and I'm back in Florida! Let us begin with a cutie.
Problem: Consider the number \[k(n)=1\underbrace{4...4}_{\text{$n$ digit $4$'s}}.\] For what values of \(n\) is \(k(n)\) a perfect square?

Solution: \(k(0)\) is obviously a perfect square and \(k(1)\) is obviously not. For \(n>1\), we have that \(k(n)\) is even, since it ends with the digit 4. Any even square is divisible by 4. By long division, we find that for \(n>1\), \[\frac{k(n)}{4}=36\underbrace{1...1}_{\text{$n-2$ digit $1$'s}}.\] So it suffices to find the \(n\) for which this number is a square. Observe that squares are congruent to either 0 or 1 modulo 4 (if you've stuck around here for some time, you'd notice that we LOVE taking squares modulo 4). But, by long division, we see that \(\frac{k(n)}{4}\) is congruent to 3 modulo 4 whenever \(n>3\). So, we need only check \(n=2\) and \(n=3\). Indeed, \(144=12^2\) and \(1444=38^2\). So \(k(n)\) is a square precisely when \(n\in\{0,2,3\}\). \(\square\)

I have a few ideas about what I plan on doing next here. I want to talk about Lebesgue integration. The theory of Riemann integration that we thoroughly developed in MATH 31CH is a powerful conceptual tool, but it has several weaknesses. For example, the Riemann integral is not well-defined over \(\mathbb{R}^n\) for functions with unbounded support. The Lebesgue integral allows us to integrate over unbounded supports, and this is just one of the advantages it gives. The Lebesgue integral is also considerably more well-behaved under limits, and we'll see that this gives rise to the dominated convergence theorem (DCT), regarding the interchangeability of integrals and limits (which is something that many of us take for granted *cough*MAO heuristics*cough*). This can be extended even further to talk about the interchangeability of differentiation and integration when they are composed on a function. Yes, that's right. The theory of Lebesgue integration is what forms the rigorous underpinnings of Feynman's trick!
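Going back to the cutie for a second: the answer can also be sanity-checked by brute force. A quick Python sketch (the bound of 50 is an arbitrary choice of mine) agrees with \(n\in\{0,2,3\}\):

```python
import math

def k(n):
    # the number 1 followed by n copies of the digit 4
    return int("1" + "4" * n)

def is_square(m):
    r = math.isqrt(m)  # exact integer square root, works for big ints
    return r * r == m

square_ns = [n for n in range(50) if is_square(k(n))]
print(square_ns)  # [0, 2, 3]
```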
I will be going through these topics soon, in a main page post. I like this idea because it pulls together a bunch of stuff that I've seen or worked on over the past few weeks (my final project in MATH 31CH was on Lebesgue integration and Feynman's trick, which a homework problem in physics happened to use, and I also recently stumbled into a problem where one needs to apply the DCT). I don't expect that I'll be proving certain things, such as the DCT itself and the existence of the Lebesgue integral, because the proofs are quite technical, and honestly you'd be better off reading a textbook if you're looking for those. Instead, I will try my best to show the motivation behind the Lebesgue integral and its utility in approaching problems that are pretty much intractable with Riemann's theory. I should have more time to work on this now. After all, UCSD decided to cancel one of my summer classes (MATH 120A)....

My first Putnam is over and I solved two of the problems. They are probably not the best-written solutions. I am usually quite elaborate when I am able to type solutions, but the problem is that the Putnam is timed. And written. Here is one of the problems that I solved.
2019 A1: Determine all possible values of the expression \[A^3+B^3+C^3-3ABC\] where \(A\), \(B\), and \(C\) are nonnegative integers. Solution: My first thoughts:
I messed around with a few representations of the expression involving \((A+B+C)^3\). But then I asked myself: what is the point? What am I trying to accomplish? I needed to find the types of numbers that the expression could be. How could I even characterize that? Clearly, they must be integers, and by AM-GM, they must be nonnegative integers. What characteristics were they expecting me to use to describe the range? Prime or composite? Number of factors? Divisibility? Divisibility seemed to be the natural answer, and I prepared myself for some modular arithmetic. But now what? Time to generate numbers, I suppose. I won't pain you with all the computations, but what was very insightful is the manner in which I generated the triples. I fixed \(A\), and then let \(B\) and \(C\) run through the nonnegative integers less than or equal to \(A\). Once I ran through all unique combinations (permutations don't matter because the expression is symmetric in all three variables), I incremented \(A\) and repeated the process. I noticed that I was sometimes hitting the same number twice, but it seemed that letting \(B\) and \(C\) be numbers that were either equal to \(A\) or one less than \(A\) always generated a new number. Let's try this out. Suppose WLOG that \(A=B=n\) and \(C=n-1\). Then \[\begin{split} A^3+B^3+C^3-3ABC&=2n^3+(n-1)^3-3n^2(n-1)\\ &=2n^3+n^3-3n^2+3n-1-3n^3+3n^2\\ &=3n-1 \end{split}\] Aha! So we can construct every positive integer congruent to 2 modulo 3. Likewise, we find that the expression yields \(3n-2\) when we set \(A=n\) and \(B=C=n-1\), so we can also construct every positive integer congruent to 1 modulo 3. Now we know that every positive integer that is not a multiple of 3 is in the range. How about multiples of 3?
Strangely, I was not able to construct 3 or 6 with the triples that I tried, and it did not seem like higher triples would generate these numbers, as the repeat hits that I was getting occurred relatively quickly after the first triple I found that generated that particular number. I was, however, able to construct 9 and 18 with any permutations of the triples \((0,1,2)\) and \((1,2,3)\), respectively. This strongly suggested that I try out a permutation of \((n,n-1,n-2)\). Doing the algebra, I indeed found that the expression simplified to \(9(n-1)\). So now I know that I can construct any positive multiple of 9. At this point, I conjectured that I cannot construct a multiple of 3 that is not a multiple of 9. Another way to frame this is: if there is a multiple of 3 in the range, then it must also be a multiple of 9. If the expression were a multiple of 3, then since obviously \(3ABC\equiv0\pmod{3}\), I required \(A^3+B^3+C^3\equiv0\pmod{3}\). I had also observed (a bit earlier, if I remember correctly) that \(n^3\equiv n\pmod{3}\) in all three possible cases. So now, we simply require \(A+B+C\equiv0\pmod{3}\). There are precisely four cases where this occurs.

Case 1: \((A,B,C)\equiv (0,0,0)\pmod{3}\). In this case, all three numbers have at least one factor of 3, so their cubes have at least three factors of 3. Furthermore, the product \(3ABC\) must have at least four factors of 3. Hence, we have in this case \[A^3+B^3+C^3-3ABC\equiv0\pmod{9}.\]

Case 2: \((A,B,C)\equiv (1,1,1)\pmod{3}\). We can write \(A=3x+1\), \(B=3y+1\), and \(C=3z+1\) for \(x,y,z\in\mathbb{Z}\). Plugging these into the expression, we find that the expansion consists of terms whose coefficients are all divisible by 9. Hence, once again we obtain \[A^3+B^3+C^3-3ABC\equiv0\pmod{9}.\]

Case 3: \((A,B,C)\equiv (2,2,2)\pmod{3}\). Similar to the previous case, we bash out the expression, but setting \(A=3x+2\), \(B=3y+2\), and \(C=3z+2\) this time. Once again, the coefficients end up being divisible by 9.
Hence in this case, \[A^3+B^3+C^3-3ABC\equiv0\pmod{9}.\]

Case 4: \((A,B,C)\equiv \textrm{some permutation of }(0,1,2)\pmod{3}\). In this case, WLOG let the ordered triple above be \((0,1,2)\). Then, by our arguments in the first case, both \(A^3\) and \(3ABC\) are divisible by 9. For the remaining terms, we have \[\begin{split} B^3+C^3&=(B+C)(B^2+C^2-BC)\\ &\equiv (1+2)(1+4-2)\pmod{3} \end{split}\] Observe that both factors are congruent to 0 modulo 3, so the sum \(B^3+C^3\) is divisible by 9. Hence, we have in this case \[A^3+B^3+C^3-3ABC\equiv0\pmod{9}.\]

So we can see that we can construct 0 (duh), every positive integer that is not a multiple of 3, and every positive multiple of 9, and that whenever we construct a multiple of 3, it must also be a multiple of 9. So the range is all nonnegative integers except for the multiples of 3 that are not multiples of 9. \(\square\) Phew!

The Axiom of Choice really grinds my gears.
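But before the gear-grinding, a quick sanity check of the Putnam answer above by brute force (a Python sketch; the search bound \(N=30\) is an arbitrary choice of mine, large enough that the constructions \(3n-1\), \(3n-2\), and \(9(n-1)\) cover everything up to 90):

```python
# Attainable values of A^3 + B^3 + C^3 - 3ABC for nonnegative integers
# A, B, C should be exactly the nonnegative integers that are not
# multiples of 3, together with the multiples of 9.
N = 30
values = {a**3 + b**3 + c**3 - 3*a*b*c
          for a in range(N + 1) for b in range(N + 1) for c in range(N + 1)}
attained = {v for v in values if 0 <= v <= 90}
expected = {v for v in range(91) if v % 3 != 0 or v % 9 == 0}
print(attained == expected)  # True
```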
Given a set of nonempty sets, the Cartesian product of all of those sets is obviously also nonempty. That is, we can always choose an element from each nonempty set, even when we can't explicitly define a "choosing function" for an arbitrary collection of sets. To make this clearer, we use Bertrand Russell's analogy. Given a bunch of pairs of shoes, it is easy to define a function that takes in those pairs and outputs a particular shoe from each pair: just let the function choose the left shoe of each pair. But what if we had a bunch of pairs of socks? Socks are indistinguishable, so it is no longer clear how we can define a function that takes in a pair of socks and outputs one from that pair. But this doesn't mean that we can't take a pair of socks and output one from that pair. The Axiom of Choice asserts that despite the fact that we can't explicitly define a function that takes in a pair of socks and spits out a single sock, such a function must still exist, because it is obviously not impossible to take an element from a nonempty set. Seriously, who sat down one day and thought about this?

Anyway, another topic in the back of my head is figuring out the eigenvectors of a two-dimensional rotation. Such a vector must be nonreal, since no real vector will yield a multiple of itself when subjected to a rotation that is not a multiple of \(\pi\). This sort of computation could be a blog post on its own. I also wonder about the oscillation of a dipole in an electric field. That could make an interesting paper for the physics section. My old research question of what force fields satisfy the shell theorem also still stands, though I am a little stuck on that mathematically. See here for the result.

As far as the math section goes, I don't really know. I plan on uploading all my Putnam stuff once the seminar ends, but beyond that, I have no clue what I should work on next, apart from my running solutions to 100 Geometry Problems.
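Actually, the rotation eigenvector question is short enough to peek at right now with numpy (a sketch; the angle \(\theta=0.7\) is an arbitrary pick of mine): the eigenvalues come out as \(e^{\pm i\theta}\), which are indeed nonreal whenever \(\theta\) is not a multiple of \(\pi\).

```python
import numpy as np

theta = 0.7  # arbitrary rotation angle, not a multiple of pi
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
eigvals, eigvecs = np.linalg.eig(R)
# Sort by imaginary part so the output order is deterministic.
eigvals = sorted(eigvals, key=lambda z: z.imag)
print(eigvals)  # approximately [e^{-i*theta}, e^{+i*theta}]
```

One can check by hand that the corresponding eigenvectors are, up to scale, \((1,\mp i)\): for instance, applying the rotation to \((1,-i)\) gives \((\cos\theta+i\sin\theta,\ \sin\theta-i\cos\theta)=e^{i\theta}(1,-i)\).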
I also have a new number theory book, so once I start working through it, maybe I can make more elaborate number theory things. I'm moving at a snail's pace right now :/

Select Problems from Introductory Classical Mechanics has been published. Just a few problems so far. It will be a constant work in progress, like Solutions to 100 Geometry Problems.
It's almost 3:00 AM and I've just solved problem 17 in 100 Geometry Problems using only synthetic techniques! And boy is it pretty!
I emphasize the synthetic techniques because the solution is prettier that way, and also because the squares seem to suggest that there's some sort of analytic method. I'll write up the solution in the morning! Feels good to be back in a groove.

Geez. Both gravity and the electric force satisfy an inverse square law, and thus, both forces satisfy the shell theorem. That is, inside a spherical shell of uniform mass (or charge) density, there is no net force on any massive (or charged) object at any location within the shell.
Is this a unique property of inverse square functions? Or are there other functions that obey this? I reckon that I'll have to solve a differential equation of some sort (or perhaps an integral equation). My gut tells me that this is a unique property of inverse square functions (what sort of differential equation would be satisfied by inverse square functions and another class of functions?). I'll be investigating this further soon. School's out; the fun begins.
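In the meantime, the inverse-square case itself is easy to check numerically. Here's a sketch in Python (the grid sizes and test positions are arbitrary choices of mine; the shell radius, total charge, and Coulomb constant are all set to 1): the net force from a uniform unit shell nearly vanishes at an interior point, and matches the equivalent point charge at an exterior one.

```python
import numpy as np

# Discretize the unit sphere with u = cos(theta): a uniform midpoint grid
# in (u, phi) gives equal-area cells, hence equal charge per cell.
n_u, n_phi = 2000, 64
u = (np.arange(n_u) + 0.5) / n_u * 2 - 1
phi = (np.arange(n_phi) + 0.5) / n_phi * 2 * np.pi
U, P = np.meshgrid(u, phi)
s = np.sqrt(1 - U**2)
src = np.array([s * np.cos(P), s * np.sin(P), U])  # points on the unit shell
dq = 1.0 / (n_u * n_phi)                           # total charge 1, uniform

def force_z(d):
    # z-component of the inverse-square force on a unit test charge
    # at (0, 0, d), with the Coulomb constant set to 1
    disp = np.array([0.0, 0.0, d])[:, None, None] - src
    r3 = np.sum(disp**2, axis=0) ** 1.5
    return np.sum(dq * disp[2] / r3)

print(force_z(0.5))         # ~0: no net force inside the shell
print(force_z(2.0) - 0.25)  # ~0: looks like a point charge, 1/d^2 = 1/4
```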