It's a shame that quantum and relativistic corrections are necessary. Though perhaps it's self-entitled of me to wish that the universe would be simple enough for me to understand it.
It turns out that current distributes itself among parallel resistors in such a way that the total power dissipated is minimized. This claim comes from problem 2 from a pset here. Consider two resistors in parallel. The current into a terminal is \(I\). Let the currents through the resistors be \(I_1\) and \(I_2\), and let the resistances be \(R_1\) and \(R_2\), respectively. Then by conservation of charge: $$I=I_1+I_2,$$ and since the power dissipated through a resistor is \(I^2R\), we have a total power loss of: $$P(I_1,I_2)=I_1^2R_1+I_2^2R_2.$$ Now we have a simple optimization problem: we must minimize the function above subject to the constraint \(S(I_1,I_2)=I_1+I_2=I\). We proceed with Lagrange multipliers: $$\nabla P=\lambda\nabla S\Rightarrow\left<2I_1R_1,2I_2R_2\right>=\left<\lambda,\lambda\right>.$$ From this, we immediately obtain: $$I_1R_1=I_2R_2,$$ which, if Ohm's law holds, is simply \(V\), the potential difference across the resistors! This makes physical sense: the potential difference is the same across resistors in parallel, since the resistors share both terminals and potential difference is path-independent. The calculation above easily generalizes to an arbitrary number of resistors.

I am awaiting a response on StackExchange for greater physical insight into why currents arrange themselves so that power dissipation is minimized. This seems to be a manifestation of a common theme throughout physics. For instance, Fermat's principle from optics states that light always takes a path that minimizes travel time. This is an interesting principle, but it actually follows from the Huygens principle, in which waves are framed as propagating via new wavelets emanating from each wavefront. I am simply wondering whether there is a similar underlying principle behind why currents travel so that power dissipation is minimized in parallel resistors.
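As a quick numerical sanity check (my own sketch, not part of the pset; the resistor values and total current below are arbitrary), substituting \(I_2=I-I_1\) and minimizing the power numerically does reproduce \(I_1R_1=I_2R_2\):

```python
# My own numerical sanity check (not from the pset): substitute I2 = I - I1
# and minimize the dissipated power over I1; at the minimum, the two voltage
# drops I1*R1 and I2*R2 should agree.
from scipy.optimize import minimize_scalar

R1, R2, I = 3.0, 5.0, 2.0            # arbitrary example values

def power(I1):
    I2 = I - I1                      # conservation of charge
    return I1**2 * R1 + I2**2 * R2   # total dissipated power

res = minimize_scalar(power, bounds=(0.0, I), method="bounded")
I1 = res.x
I2 = I - I1
print(I1 * R1, I2 * R2)              # both print the same voltage drop (about 3.75 here)
```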
Problem (IMO 1959): Prove that for natural \(n\) the fraction \(\frac{21n+4}{14n+3}\) is irreducible.
Solution: Suppose to the contrary that the fraction is reducible. Then there must exist a prime \(p\) with \(p\mid21n+4\) and \(p\mid14n+3\). In other words, \(21n+4\equiv0\pmod p\) and \(14n+3\equiv0\pmod p\). Adding these two congruences: \[7(5n+1)\equiv0\pmod p.\] Observe that \(p\neq7\), since \(7\) divides neither \(21n+4\) nor \(14n+3\). Hence: \[5n+1\equiv0\pmod p.\] Suppose \(p=2\). Then to satisfy this congruence, \(n\) must be odd. But then \(21n+4\) would be odd and \(p=2\) could not divide it, contradiction. Hence \(p\neq2\), so \(p\) must be an odd prime. Subtracting the two original congruences yields: \[7n+1\equiv0\pmod p.\] Subtracting the previous congruence from this one, we obtain \[2n\equiv0\pmod p.\] Since \(p\neq2\), we must have \[n\equiv0\pmod p.\] But now \(21n+4\equiv4\pmod p\), and \(4\equiv 0\pmod p\) if and only if \(p=2\), contradiction. Hence there is no prime that divides both \(21n+4\) and \(14n+3\), and the fraction \(\frac{21n+4}{14n+3}\) must be irreducible. \(\square\)

Edit: Yes, I know the result immediately follows from the Euclidean algorithm. For some reason, I could only remember the extended Euclidean algorithm when I was solving this problem.

Here are some inequalities I solved today/yesterday.
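Before those, a quick numerical sanity check of the irreducibility claim (my own addition, nothing more than math.gcd over a range of \(n\)):

```python
# My own sanity check (not part of the solution): the gcd of numerator and
# denominator should be 1 for every natural n tested.
from math import gcd

assert all(gcd(21 * n + 4, 14 * n + 3) == 1 for n in range(1, 10_000))
print("irreducible for n = 1, ..., 9999")
```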
Problem (USAMTS): Find the ordered pair of real numbers \((x,y)\) that satisfies the equation below, and demonstrate that it is unique: \[\frac{36}{\sqrt{x}}+\frac{9}{\sqrt{y}}=42-9\sqrt{x}-\sqrt{y}.\] Solution: The equation rearranges as: \[\frac{36}{\sqrt{x}}+9\sqrt{x}+\frac{9}{\sqrt{y}}+\sqrt{y}=42.\] Next, observe that by AM-GM: \[\frac{36}{\sqrt{x}}+9\sqrt{x}\geq2\sqrt{36\cdot9}=36,\] and: \[\frac{9}{\sqrt{y}}+\sqrt{y}\geq2\sqrt{9}=6.\] Adding these two inequalities yields: \[\frac{36}{\sqrt{x}}+9\sqrt{x}+\frac{9}{\sqrt{y}}+\sqrt{y}\geq42,\] so our equation is actually just the equality condition of AM-GM applied to the \(x\) and \(y\) terms separately. Equality in AM-GM holds only when the terms are equal, hence the solution to the equation is unique. We have: \[\frac{36}{\sqrt{x}}=9\sqrt{x}\Rightarrow\boxed{x=4},\] and: \[\frac{9}{\sqrt{y}}=\sqrt{y}\Rightarrow\boxed{y=9},\] so the ordered pair is \((4,9)\).

Problem: Show that for all positive real numbers \(x\neq1\) and nonnegative integers \(n\), we have \[\frac{x^{2n+1}-1}{x^{n+1}-x^n}\geq2n+1.\] Solution: Observe that the LHS can be rewritten: \[\begin{split} \frac{x^{2n+1}-1}{x^{n+1}-x^n}&=\frac{x^{2n+1}-1}{x^n(x-1)}\\ &=\frac{x^{n+1}-\frac{1}{x^n}}{x-1}\\ &=\frac{\frac{1}{x^n}(1-x^{2n+1})}{1-x}. \end{split}\] This is the sum of a finite geometric series with first term \(\frac{1}{x^n}\), common ratio \(x\), and \(2n+1\) terms. Therefore, our inequality can be written as \[x^{-n}+x^{-n+1}+...+x^n\geq2n+1.\] But this is true by AM-GM on the LHS, since the product of the \(2n+1\) terms is \(1\). Since all of our steps are reversible, we are done. \(\square\)

Problem: Show that for all positive integers \(n\) with \(n\geq2\), we have \[\frac{1}{n}+\frac{1}{n+1}+...+\frac{1}{2n-1}>n(2^{1/n}-1).\] Solution: We rearrange the inequality to: \[\frac{\frac{1}{n}+\frac{1}{n+1}+...+\frac{1}{2n-1}}{n}+1>2^{1/n}.\] This is simply: \[\frac{n+\frac{1}{n}+\frac{1}{n+1}+...+\frac{1}{2n-1}}{n}>2^{1/n}.\] Now, we break the \(n\) in the numerator into \(n\) ones and allocate one to each remaining term in the numerator: \[\frac{\left(1+\frac{1}{n}\right)+\left(1+\frac{1}{n+1}\right)+...+\left(1+\frac{1}{2n-1}\right)}{n}>2^{1/n}.\] But now, this becomes: \[\frac{\frac{n+1}{n}+\frac{n+2}{n+1}+...+\frac{2n}{2n-1}}{n}>2^{1/n},\] which is true by AM-GM, since the product of the \(n\) fractions telescopes to \(\frac{2n}{n}=2\). The inequality is strict, since the terms in the numerator are clearly not all equal (all terms being equal is the equality condition for AM-GM). Since all our steps are reversible, we are done. \(\square\)

I didn't find (or solve) any interesting Cauchy-Schwarz problems. :(. Tomorrow/today I think I will work on combo. Here are two problems from a pset from MIT 18.02 (multivariable calculus).
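Before the 18.02 problems, here is a quick numerical spot-check of the boxed answers and of the last inequality for small \(n\) (my own addition, plain Python arithmetic):

```python
# My own spot-check (not part of the solutions above).
from math import sqrt

# USAMTS equation: (x, y) = (4, 9) should satisfy it exactly.
x, y = 4, 9
print(36 / sqrt(x) + 9 / sqrt(y), 42 - 9 * sqrt(x) - sqrt(y))   # 21.0 21.0

# Third inequality: sum_{k=n}^{2n-1} 1/k > n(2^(1/n) - 1) for small n.
for n in range(2, 10):
    assert sum(1 / k for k in range(n, 2 * n)) > n * (2 ** (1 / n) - 1)
print("inequality holds for n = 2, ..., 9")
```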
Problem 2: Let \(f(x,y,z,t)\) be a smooth function, and let \(\nabla f=\left<f_x,f_y,f_z\right>\) be the gradient in the space variables only. Let \(\mathbf{r}=\mathbf{r}(t)=\left<x(t),y(t),z(t)\right>\) be a smooth curve, and \(\mathbf{v}=\mathbf{r}'(t)\); and suppose we use the notation \(\frac{\textrm{D}f}{\textrm{D}t}=\frac{\textrm{d}}{\textrm{d}t}f(\mathbf{r}(t),t)\). Use the Chain Rule to show that \(\frac{\textrm{D}f}{\textrm{D}t}=\frac{\partial f}{\partial t}+\mathbf{v}\cdot\nabla f\).

Solution: We have: \[f(\mathbf{r}(t),t)=f(x(t),y(t),z(t),t).\] By the Chain Rule: \[\frac{\textrm{d}}{\textrm{d}t}f(x(t),y(t),z(t),t)=\frac{\partial f}{\partial x}\frac{\textrm{d}x}{\textrm{d}t}+\frac{\partial f}{\partial y}\frac{\textrm{d}y}{\textrm{d}t}+\frac{\partial f}{\partial z}\frac{\textrm{d}z}{\textrm{d}t}+\frac{\partial f}{\partial t},\] which becomes: \[\frac{\textrm{d}}{\textrm{d}t}f(x(t),y(t),z(t),t)=\frac{\partial f}{\partial t}+\left<\frac{\textrm{d}x}{\textrm{d}t},\frac{\textrm{d}y}{\textrm{d}t},\frac{\textrm{d}z}{\textrm{d}t}\right>\cdot\left<\frac{\partial f}{\partial x},\frac{\partial f}{\partial y},\frac{\partial f}{\partial z}\right>,\] which is: \[\frac{\partial f}{\partial t}+\mathbf{v}\cdot\nabla f,\] as desired. \(\square\)

This function, \(\frac{\textrm{D}f}{\textrm{D}t}\), is called the convective derivative or the material derivative. In fact, there are quite a few names for it. It is important to realize that \(\mathbf{r}\) defines a path or trajectory through space. The function \(f\) then describes something that changes along that trajectory with time. The next problem makes this clear by letting \(f=\rho\), the density of a fluid. When \(\rho\) is constant in \(t\), the flow is termed steady. Unsteady flow, as one can imagine from this definition, can be enormously complicated, and it includes phenomena such as turbulence. In the case of steady flow, each trajectory is called a streamline. A fluid flow is called incompressible if the convective derivative of \(\rho\) is zero. In steady flow, this means that there is no density change along a streamline (which makes sense!).

Problem 3a: Suppose that the density function depends only on time \(t\) but is constant in the space variables \((x,y,z)\), that is, \(\rho=\rho(t)\). Then show that the flow is incompressible if and only if the density \(\rho(t)\) is constant in all the variables \((x,y,z,t)\) (in other words, the flow must be steady).

Solution: We want: \[\frac{\partial \rho}{\partial t}+\mathbf{v}\cdot\nabla\rho=0.\] But since \(\rho\) does not depend on the spatial variables, \(\nabla\rho=0\) and \(\frac{\partial \rho}{\partial t}=\frac{\textrm{d}\rho}{\textrm{d}t}\). Hence: \[\frac{\textrm{d}\rho}{\textrm{d}t}=0.\] Integrating both sides with respect to \(t\): \[\int{\frac{\textrm{d}\rho}{\textrm{d}t}\textrm{ d}t}=\rho(t)=C.\] Hence, the flow is steady. Conversely, if \(\rho\) is constant, both terms of the convective derivative vanish and the flow is incompressible. \(\square\)

Problem 3b: Next suppose instead that the density depends only on the space variables \((x,y,z)\) but not (explicitly) on \(t\), so that \(\rho=\rho(x,y,z)\). An incompressible flow in this case is called stratified. Use the result of Problem 2 to give the condition on \(\rho\) and \(\mathbf{v}\) for stratified flow.

Solution: In this case: \[\mathbf{v}\cdot\nabla\rho=0,\] so the velocity is always orthogonal to the direction in which \(\rho\) changes the most. But recall that \(\nabla\rho\) is always orthogonal to the level surfaces of \(\rho\). It follows that the velocity must always be parallel, and hence tangent, to surfaces of equal density. \(\square\)
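To make Problem 2 concrete, here is a small symbolic check I added (a sketch using SymPy; the particular \(f\) and trajectory are arbitrary choices):

```python
# My own check of Problem 2 with a concrete example (a sketch using SymPy,
# not part of the pset): pick an explicit f and trajectory r(t) and verify
# that d/dt f(r(t), t) equals f_t + v . grad f.
import sympy as sp

x, y, z, t = sp.symbols("x y z t")
f = x**2 * y + sp.sin(z) * t          # an arbitrary smooth function of (x, y, z, t)
r = (sp.cos(t), sp.exp(t), t**3)      # an arbitrary smooth trajectory

# Left side: substitute the trajectory, then differentiate in t.
lhs = sp.diff(f.subs({x: r[0], y: r[1], z: r[2]}), t)

# Right side: f_t + v . grad f, then evaluate along the trajectory.
grad_f = [sp.diff(f, v) for v in (x, y, z)]
vel = [sp.diff(c, t) for c in r]
rhs = (sp.diff(f, t) + sum(vi * gi for vi, gi in zip(vel, grad_f))).subs(
    {x: r[0], y: r[1], z: r[2]}
)

print(sp.simplify(lhs - rhs))         # 0
```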
Select Problems from Introductory Classical Mechanics has been published. Just a few solutions so far; it will be a constant work in progress, like Solutions to 100 Geometry Problems.
It's almost 3:00 AM and I've just solved problem 17 in 100 Geometry Problems using only synthetic techniques! And boy is it pretty!
I emphasize using synthetic techniques because the synthetic approach is prettier, and also because the squares seem to suggest that there's some sort of analytic method. I'll write up the solution in the morning! Feels good to be back in a groove. Geez. Here are some problems from Multivariable and Vector Calculus by David Santos.
Varignon's Theorem: The quadrilateral formed by the midpoints of the sides of any quadrilateral is a parallelogram.

Proof: We proceed by vectors. Let our arbitrary quadrilateral be \(ABCD\), and let \(W\) be the midpoint of \(\overline{AB}\), \(X\) the midpoint of \(\overline{BC}\), \(Y\) the midpoint of \(\overline{CD}\), and \(Z\) the midpoint of \(\overline{DA}\). Observe that it suffices to show that \(\mathbf{ZW}+\mathbf{ZY}=\mathbf{ZX}\). We see that \(\mathbf{ZW}=\mathbf{ZA}+\mathbf{AW}\). Furthermore, \(\mathbf{ZY}=\mathbf{ZD}+\mathbf{DY}=-\mathbf{ZA}+\mathbf{DY}\). Adding these two equations yields: \[\mathbf{ZW}+\mathbf{ZY}=\mathbf{AW}+\mathbf{DY}.\] We take a look at \(\mathbf{AW}\) and find \(\mathbf{AW}=\mathbf{WB}\) and \(\mathbf{WB}=\mathbf{WX}-\mathbf{BX}\), hence \(\mathbf{AW}=\mathbf{WX}-\mathbf{BX}\). In a similar manner we may show that \(\mathbf{DY}=\mathbf{YX}-\mathbf{CX}=\mathbf{YX}+\mathbf{BX}\). Therefore: \[\mathbf{AW}+\mathbf{DY}=\mathbf{YX}+\mathbf{WX},\] and so: \[\mathbf{ZW}+\mathbf{ZY}=\mathbf{YX}+\mathbf{WX}.\] We're almost done. We add \(\mathbf{YX}+\mathbf{WX}\) to both sides to obtain: \[\mathbf{ZW}+\mathbf{WX}+\mathbf{ZY}+\mathbf{YX}=2\left(\mathbf{YX}+\mathbf{WX}\right).\] This simplifies to: \[2\mathbf{ZX}=2\left(\mathbf{YX}+\mathbf{WX}\right)=2\left(\mathbf{ZW}+\mathbf{ZY}\right),\] and dividing by \(2\), we obtain the desired result. \(\square\)

Problem: Let \(X\), \(Y\), and \(Z\) be points on the plane with \(X\neq Y\). Demonstrate that the point \(A\) belongs to \(\overleftrightarrow{XY}\) if and only if there exist scalars \(\alpha,\beta\) with \(\alpha+\beta=1\) such that: \[\mathbf{ZA}=\alpha\mathbf{ZX}+\beta\mathbf{ZY}.\]

Solution: We define \(\overleftrightarrow{XY}\) to be the standard horizontal axis. Then the component orthogonal to \(\overleftrightarrow{XY}\) of any vector from \(Z\) to a point on \(\overleftrightarrow{XY}\) must be a constant. Hence we may write: \[\mathbf{ZX}=\left<a,k\right>,\qquad\mathbf{ZY}=\left<b,k\right>.\] Suppose \(\exists\alpha,\beta\) such that \(\mathbf{ZA}=\alpha\mathbf{ZX}+\beta\mathbf{ZY}\) and \(\alpha+\beta=1\). Then it follows that we may write: \[\mathbf{ZA}=\left<\alpha a+\beta b,(\alpha+\beta)k\right>.\] But since \(\alpha+\beta=1\), this becomes: \[\mathbf{ZA}=\left<\alpha a+\beta b,k\right>.\] Since the component of \(\mathbf{ZA}\) orthogonal to \(\overleftrightarrow{XY}\) is equal to the constant \(k\), \(A\) must lie on \(\overleftrightarrow{XY}\). The converse is also true. Suppose that \(A\) does lie on \(\overleftrightarrow{XY}\). Then: \[\mathbf{ZA}=\left<c,k\right>.\] It is possible to uniquely determine \(\alpha,\beta\) satisfying \[\left\{\begin{array}{l}\alpha a+\beta b=c\\ \alpha+\beta=1\end{array}\right.\] by solving the system of equations above. A unique solution exists because we are given \(X\neq Y\) and thus \(a\neq b\), so the system is independent. Those values of \(\alpha\) and \(\beta\) give the relation: \[\mathbf{ZA}=\alpha\mathbf{ZX}+\beta\mathbf{ZY}.\] Hence, \(A\) lies on \(\overleftrightarrow{XY}\) iff \(\alpha\) and \(\beta\) exist as described. \(\square\)

Both gravity and the electric force satisfy an inverse-square law, and thus both forces satisfy the shell theorem. That is, inside a spherical shell of uniform mass (or charge) density, there is no net force on any massive (or charged) object at any location within the shell.
Is this a unique property of inverse square functions? Or are there other functions that obey this? I reckon that I'll have to solve a differential equation of some sort (or perhaps an integral equation). My gut tells me that this is a unique property of inverse square functions (what sort of differential equation would be satisfied by inverse square functions and another class of functions?). I'll be investigating this further soon. School's out; the fun begins.
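While I think about this properly, here is a quick numerical experiment I put together (a sketch, not a proof; the comparison exponent and grid resolution are arbitrary choices): integrate the force on an off-center interior point from a uniform shell, once for an inverse-square law and once for an inverse-cube law.

```python
# My own numerical experiment (a sketch, not a proof): directly integrate the
# force on a test point inside a uniform spherical shell for a central force
# proportional to 1/r**p, and compare p = 2 with p = 3.
import numpy as np

def net_force(p, point, R=1.0, n_theta=400, n_phi=400):
    """Net force at `point` from a uniform shell of radius R under a 1/r**p law."""
    # Midpoint grid in the spherical angles.
    theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta
    phi = (np.arange(n_phi) + 0.5) * 2 * np.pi / n_phi
    TH, PH = np.meshgrid(theta, phi, indexing="ij")
    src = np.stack([R * np.sin(TH) * np.cos(PH),
                    R * np.sin(TH) * np.sin(PH),
                    R * np.cos(TH)], axis=-1)              # points on the shell
    dA = R**2 * np.sin(TH) * (np.pi / n_theta) * (2 * np.pi / n_phi)
    sep = src - np.asarray(point, dtype=float)             # toward each source element
    r = np.linalg.norm(sep, axis=-1)
    dF = sep / r[..., None] ** (p + 1) * dA[..., None]     # (unit vector / r**p) * dA
    return dF.sum(axis=(0, 1))

pt = [0.5, 0.0, 0.0]                 # an off-center point inside the shell
print(net_force(2, pt))              # roughly [0, 0, 0]: inverse square cancels
print(net_force(3, pt))              # clearly nonzero: inverse cube does not
```

The inverse-square result vanishes (up to discretization error) while the inverse-cube result does not, which is at least consistent with my gut feeling, though it says nothing about uniqueness.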