## The Tennis Racket Theorem

or, the Intermediate Axis Theorem
or, Testing a Universally-Used Physics Engine for a Basic Mechanical Result

Kerbal Space Program is pretty cute. You build rockets from an expanding catalog of parts, crewed by little aliens or controlled by computer cores, and send them off to the planets of the Kerbol star system under simplified gravitational interactions. The model is always one-body, with the planets on rails: a spacecraft is attracted to a single astronomical body at a time, chosen by whether the craft lies within that body's sphere of influence, which is determined by the body's distance and mass. So, no Lagrange points, and no capture into orbit around a single body on its own (though you can get captured by a planet without expending fuel by encountering one of its moons during the fly-by). Planetary orbits are not exactly coplanar, which provides some realism and frustration. All planetary bodies intrinsically rotate around parallel axes; apparently, allowing tilted axes would have posed an unexpectedly difficult technical problem for the creators. The restriction does simplify takeoffs and landings, which is nice. Atmospheric interactions include a few regimes for engine response to pressure and speed, drag, and reentry heating.

The most accessible lessons come from the takeoff stage, which any player sees from the start. This includes finding the proper ratios between engine power, fuel storage, and payload mass. Later, if one seeks to remain in space, one must understand the classic lesson: orbiting involves moving sideways more than upward. Finagling with multiple stages is easy and forgiving, to encourage experimentation; you don't have to do any math to succeed. But you sure as hell can, if you're the kind of person who appreciates True Fun With Fractions.

Even in the unusual circumstance that calculatin’ isn’t your jam (and, are you sure about that? Really?), KSP manages to keep your attention by featuring endearing little aliens who seem to revel in the hope of being involved in a nightmarish disaster.

My students brought the game up in class a few times in the past. One year, I’d see a daily crowd scamper off to the maker lab to play it on the PCs kept in there. There were free 3D printers in the room with them, and they were playing video games instead! I feel like I’m at a unique point in my life when I can both relate to the grumpy old man who laments children of today missing out on “real learning”, and also the little kid who would be first in line at the computer lab. Heck, I’d be first in line as a teacher, if they let me.

I truly believe there is a ton of educational opportunity here, but I am not sure how much of it requires an initial push to apply and achieve physical understanding. How much can someone figure out for themselves without having seen an example or read about the principles first? The principles for this kind of thing are often so close to the examples as to be indistinguishable.

Regardless, let’s demonstrate something with it.

Kerbal Space Program is written in Unity, which uses the PhysX engine developed by Nvidia. This thing is shared by Unity’s competitors, including Unreal Engine. I’d say that qualifies PhysX as an industrywide standard. So it had better accurately demonstrate some physical results of its model.

One of the tough things to get right in a mechanics lab is the illusion of a lack of gravity, along with minimizing friction with the air and ground. Intro physics students have an easy time blaming their crappy lab reports on these, in addition to “human error.” A trusted physics simulation gives you the option to cut these out, or never include them in the first place. KSP provides an easy way to assemble a test object and apply forces and torques in the comfort of cold, inhospitable simulated outer space. Let’s look at something that would be pretty tough to show in a classroom or mechanics lab: the tennis racket theorem, also known as the Dzhanibekov effect or the intermediate axis theorem.

An object spinning in space will undergo variations in its orientation, with or without applied forces and torques. The spinning object above appears to change its rotational direction. Depending on how you define the direction of rotation, this is an illusion. The axis of rotation relative to the object can change without applied torques. The angular momentum does not. Some of this weirdness can be worked through by making a comparison to linear motion.

When you try to push something, the rate at which it speeds up depends on its mass. This is, hopefully, one of the first things you learned in physics class. F = m a; the applied force results in an acceleration a. The object accelerates more quickly if the mass m is small, less quickly if m is large. You can treat everything as a scalar in one dimension, or include vectors in 2 or 3 dimensions. The force changes the momentum, p = m v. Most of the time, you keep the mass of an object constant, and see how the velocity v changes with an applied force.

The bridge into “rocket science” is the admission that m may change as well. A rocket burns fuel, decreasing m with time. All three of the things in F = ma are changing, which can often be confusing. It’s confusing in the same way that the ideal gas law or Ohm’s law can be: they often appear to be self-contradictory when one forgets the implicit assumptions about what is being held constant. This comes up in questions like, “How can power be both directly and inversely proportional to resistance???” The implication being that V or I, whichever one isn’t present in the equation, is being held constant.
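Spelled out, that “contradiction” dissolves once you substitute Ohm’s law $V = IR$ into $P = IV$ one way or the other:

$P = IV = I^2 R = \frac{V^2}{R}$

The middle form treats the current as the thing held fixed, so more resistance means more power dissipated; the last form holds the voltage fixed, so more resistance means less. Both are right, under different constraints.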

Rotation in 2 dimensions involves the same math. When you’ve only got one dimension to rotate around (an axis pointing into or out of “the page”), you can treat it all as a bunch of scalars. The angular momentum $L = I \omega$. In 2 dimensions, there’s no way for a solid body’s moment of inertia I to change, and without torque there is no change for L, either. Everything stays constant, and whatever object you’re thinking of spins like a top.

Well, no. NOT like a top, because in 3 dimensions, a top precesses and nutates unless it is rotating exactly along one of three perpendicular directions, the principal axes. The direction of rotation $\omega$ can change, even in a torque-free regime, because I can change. That’s right. Maybe you’ll never change, but the moment of inertia sure can. I swear, this time I will change. You just have to give it a chance, baby.

The momentum of a rocket changes because it loses mass that carries the momentum away. So the m in F = ma changes value as the rocket burns. But, if you consider the entire rocket-fuel system, even the exhaust left behind after the fuel has been burned, the total mass stays constant. The total momentum is constant. But if we kept track of m not only as a single value, but as a distribution through space, we would see it change. The shape of the mass distribution function changes with time, and the momentum density (how much is in the rocket, how much is in the space behind it in the exhaust) changes with time, too. We could keep track of a list of velocities or a broad velocity field as well — including the different speeds for the exhaust and rocket separately. If this sounds difficult, it’s because it can be, but I’m sure some simple models are good enough when all we really care about is the rocket’s path. In fact, a short list of masses is the way you solve your first problems in momentum conservation, by assuming a small number of rigid bodies:

$p_\text{net} = m_1 v_1 + m_2 v_2 = (m_1 + m_2) v_3 = m_3v_3$

The velocity $v_3$ is often that of a combined object. The rocket case is a poor example here, because fuel burns continuously, but it does apply with the right trickery. Usually an introductory problem treats that value as the result after a collision, or a merger, or another instantaneous event. Regardless, even after a collision event, you can still treat $m_1$ and $m_2$ as separate values that are part of a multivalued distribution of mass that applies to the entire system. The next step is to say, well, the exhaust isn’t made up of a single object. It’s made up of individual puffy clouds, each with their own masses and velocities. Maybe $m_1 = m_{1,1} + m_{1,2} + m_{1,3} + m_{1,4} + ...$. Each of those little clouds could themselves be broken into a multitude of gas molecules, each with their own particular mass and velocity contributing to the whole. At that stage, you’ve got to start integrating infinitesimal masses and their velocities throughout space to get the full picture.
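As a quick sanity check that lumping and splitting don't change anything, here's a sketch in Python. All the numbers (rocket mass, three exhaust "puffs") are invented for illustration:

```python
# Total momentum bookkeeping with the exhaust split into "puffs".
# All masses and velocities here are made-up numbers.
m_rocket, v_rocket = 90.0, 1.0
puffs = [(2.0, -8.0), (3.0, -6.5), (5.0, -5.0)]  # (mass, velocity) of each cloud

# Sum over every piece of the distribution:
p_total = m_rocket * v_rocket + sum(m * v for m, v in puffs)

# Lump every puff into one exhaust "object" moving at the
# mass-weighted average velocity -- same total momentum.
m_ex = sum(m for m, _ in puffs)
v_ex = sum(m * v for m, v in puffs) / m_ex
p_lumped = m_rocket * v_rocket + m_ex * v_ex

print(p_total, p_lumped)
```

However finely you slice the exhaust, the total comes out the same; only the bookkeeping changes.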

And this is the procedure for angular momentum. You pick an axis of rotation on an object and you integrate infinitesimal masses throughout its volume to get a single value for the moment of inertia. This works well for rigid objects, whose axis of rotation does not change with time. That’s the typical introductory-style problem, where oftentimes rotations are presented in only two dimensions, and so there’s only one possible axis of rotation anyway. Your angular momentum looks a lot like linear momentum: it’s the product of an effective mass (I) and a rate of change ($\omega$).
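That integration can be sketched numerically. Here's a little Monte Carlo estimate (dimensions invented) of a uniform cuboid's moment of inertia about the axis through its center, perpendicular to the a-by-b face, compared against the textbook result $I = \frac{m}{12}(a^2 + b^2)$:

```python
import random

# Monte Carlo moment of inertia for a uniform cuboid, spinning about
# the axis through its center perpendicular to the a-by-b face.
# Dimensions and mass are made up.
m, a, b = 1.0, 2.0, 1.0
random.seed(0)

N = 200_000
total = 0.0
for _ in range(N):
    x = random.uniform(-a / 2, a / 2)
    y = random.uniform(-b / 2, b / 2)
    # each sample stands in for a chunk of mass dm = m / N at distance
    # sqrt(x^2 + y^2) from the axis; sum dm * r^2
    total += (x * x + y * y) * (m / N)

print(total, m * (a * a + b * b) / 12)
```

The two printed numbers should agree to a couple of decimal places; cranking up N tightens the match.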

$L = I \omega$

In 3 dimensions, though, the goofy stuff happens. With a solid object spinning freely in space, the direction of rotation and moment of inertia can change (while conserving the product when under no torques). In the realm of linear momentum in 3d, this is maybe like… an object speeding up and slowing down as it travels in a single direction, with its intrinsic mass changing as it goes? It’s perhaps more like an object’s direction of travel changing on its own, with effective masses assigned to different directions, surging between one and the other. This doesn’t really happen as far as I know, which is why rotation is counterintuitive.

Ultimately, you need to describe the moment of inertia as a tensor — basically a matrix (as far as you care, which you may not, but if you do then you’re already fine). The moment of inertia tensor by definition is symmetric ($I_{ij} = I_{ji}$).

$I = \left(\begin{array}{ccc} I_{11} & I_{12} & I_{13}\\ I_{21} & I_{22} & I_{23}\\I_{31} & I_{32} & I_{33} \end{array}\right) = \left(\begin{array}{ccc} I_{11} & I_{12} & I_{13}\\ I_{12} & I_{22} & I_{23}\\I_{13} & I_{23} & I_{33} \end{array}\right)$

The symmetry is a reflection of the fact that spinning the thing clockwise will be just as hard as counterclockwise on the same axis. So here’s a fun thing: any symmetric matrix can be diagonalized. Diagonalization is a key technique in physics and, I’m quite sure, has applications in any field involving quantitative data. If you’ve got any equations that are at least as complicated as adding or multiplying, you’re probably looking at a linear system or one that is linear within a certain regime. If you’re a student and have made it this far, I’d recommend taking the time to understand this concept, understand linear regression cold, take a linear algebra course, etc. I often wonder if matrix manipulation with applications would be more useful or enlightening to the average citizen than most concepts in calculus, which seems to be the “default” advanced math course for high schoolers.

The point here is, through diagonalization, you can rotate the object (or change your point of view) so that only the diagonal terms are left in the tensor, each corresponding to the difficulty in rotating the object around orthogonal (mutually perpendicular) axes, such as our standard x, y, z. It’s just a change of coordinates.

$I = \left(\begin{array}{ccc} I_{11} & I_{12} & I_{13}\\ I_{12} & I_{22} & I_{23}\\I_{13} & I_{23} & I_{33} \end{array}\right) \rightarrow \text{diagonalize}\rightarrow I = \left(\begin{array}{ccc} I_{1} & 0 & 0\\ 0 & I_{2} &0\\0 & 0 & I_{3} \end{array}\right)$
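If you have NumPy handy, the diagonalization is one call. A sketch with a made-up symmetric inertia tensor for some lumpy object:

```python
import numpy as np

# An invented symmetric inertia tensor for some lumpy object.
I = np.array([[3.0, 0.4, 0.2],
              [0.4, 2.0, 0.1],
              [0.2, 0.1, 1.0]])

# eigh handles symmetric matrices: the eigenvalues are the principal
# moments (ascending), the eigenvector columns are the principal axes.
moments, axes = np.linalg.eigh(I)
print("principal moments:", moments)

# Rotating into the eigenvector basis leaves only the diagonal terms.
D = axes.T @ I @ axes
print(np.round(D, 10))
```

The rotated tensor `D` comes out diagonal, with the principal moments along the diagonal; the off-diagonal entries vanish to rounding.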

The three perpendicular directions that line up with these numbers are the principal axes. If the moments of inertia $I_1, I_2, I_3$ are different, we can make sure we’ve defined our axes so that $I_2$ is the middle value, with $I_1> I_2 > I_3$. We can then call its corresponding axis the intermediate axis.

The intermediate axis theorem is this: When spun around axis 2 in free space, an object’s angular velocity will be initially aligned with that axis but drift away from it. If it starts close enough, the drift away can happen quite suddenly, resulting in the flipping effect you can see in the video above. If initially spun around the axes with largest or smallest moments of inertia, the angular velocity will seek to drift toward them. In this way, spinning around these is stable, and the intermediate axis instead acts as an unstable equilibrium.

One of the best ways to visualize this is not with a tennis racket, but with a phone! I’ll get to the video game eventually, don’t worry. Phones are great because they are very common and easy to handle. It turns out that boxy objects like phones have very clear principal axes due to their symmetry, which are parallel to their standard dimensions. In addition, phones tend to be very expensive, so it’s very exciting to throw one up in the air over and over to test this effect. Let’s get our phones out.

A cuboid has principal axes perpendicular to its faces.

It’s easiest to spin it around the “long” axis — in this case, $I_3$ — and hardest to spin around the center of the screen, $I_1$ here. This is because there is more material in the phone further from the axis of rotation.
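For a cuboid, the principal moments are standard results. With mass $m$, height $h$, width $w$, and thickness $d$ (a phone has $h > w > d$), they come out as

$I_1 = \frac{m}{12}\left(h^2 + w^2\right), \qquad I_2 = \frac{m}{12}\left(h^2 + d^2\right), \qquad I_3 = \frac{m}{12}\left(w^2 + d^2\right)$

so the axis through the screen pairs the two largest dimensions and gets the largest moment, while the long axis pairs the two smallest and gets the smallest.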

I’m using a phone as an example because I want to encourage you to try this for yourself. If you don’t have a phone on hand, or aren’t feeling like tossing your \$1000 iPhone up in the air, you should still be able to find a small, rigid box to try this with. If you throw the phone up in the air so it spins like this, it ought to continue doing so until you catch it or it collides with the ground.

If you try to toss it along the intermediate axis, though, it will flip. My instincts expect this:

The reality is closer to this:

Again, I stress, try this for yourself. Hold your phone like a playing card, and flip it up into the air, with the top falling toward you. If you get good, you can catch it and toss it repeatedly and it will look super cool. Extremely cool. It’ll be a hit at parties.

## “Simulating” in KSP

I figured, since the game uses a Real Physics Engine®, I could build a phone-like ship in Kerbal Space Program, apply some spin, and see the effect happen. So, I made an accurate 200:1 model of the Cool Phone 2000 and launched it into orbit.

It was attached to six rocket motor arrays, two along each of the (supposed) principal axes, which would be used to quickly spin it up in that dimension.

The first one I launched spun up a little too easily. The engines were solid rocket types, which can’t be turned off. This means they had to fire at full blast until they were out of fuel, which gave way more spin than this thing needed. So, I cheat-menu’ed a reduced thrust, higher mass group of spinners to get it up to a more manageable speed.

The new model also had a better shape for starting the spins. The first one I sent up had a few too many heavy decorative parts, which skewed the mass symmetry more than I expected.

It spins nice and stably in the “toughest” dimension:

Take a look at the navball in the lower left. For these short lengths of time, the navball is the view of a non-rotating ball that someone attached to the ship would see. When the ball is all blue, it means the antenna is pointed directly “up.” If it’s all brown, the antenna is pointing “down.” Sure, you can define up and down in space. Here, you can see the view is moving pretty much along a single great circle on the navball. The antenna spins around in a stable circle.

The phone’s also stable when spinning in the long dimension.

You can see the yellow view marker is pretty much stationary on the navball, with the navball spinning around that point. Or, you can just look and see that the phone is not flipping around.

For the intermediate axis, it’s all wonky, as expected!

Watch it flip and flop! There is no external torque on this once the spinners detach. Compare the navball behavior to the other two cases.

Not a real shock, but fun to demonstrate.

Actually, it might be fun to track the path of the orientation. I suppose this could be simulated by solving the Euler equations, or I could just read it off the navball! The axis of rotation follows a path called a polhode, which is the sort of taco or criss-cross shape formed by the intersection between a sphere and an ellipsoid.
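For the curious, here's a minimal sketch of that Euler-equation route, integrated with a plain RK4 step. The moments of inertia and the tiny initial wobble are invented. Starting almost exactly on the intermediate axis, $\omega_2$ flips sign over and over while the angular momentum magnitude stays put:

```python
# Torque-free Euler equations:
#   I1 w1' = (I2 - I3) w2 w3
#   I2 w2' = (I3 - I1) w3 w1
#   I3 w3' = (I1 - I2) w1 w2
# Invented moments with I1 > I2 > I3, so axis 2 is the intermediate one.
I1, I2, I3 = 3.0, 2.0, 1.0

def deriv(w):
    w1, w2, w3 = w
    return ((I2 - I3) * w2 * w3 / I1,
            (I3 - I1) * w3 * w1 / I2,
            (I1 - I2) * w1 * w2 / I3)

def rk4_step(w, dt):
    # classic fourth-order Runge-Kutta step
    shift = lambda a, k, s: tuple(ai + s * ki for ai, ki in zip(a, k))
    k1 = deriv(w)
    k2 = deriv(shift(w, k1, dt / 2))
    k3 = deriv(shift(w, k2, dt / 2))
    k4 = deriv(shift(w, k3, dt))
    return tuple(wi + dt / 6 * (a + 2 * b + 2 * c + d)
                 for wi, a, b, c, d in zip(w, k1, k2, k3, k4))

# Spin almost exactly about the intermediate axis, with a tiny nudge.
w = (1e-4, 1.0, 1e-4)
dt, flips, prev = 0.01, 0, w[1]
for _ in range(20000):
    w = rk4_step(w, dt)
    if prev * w[1] < 0:  # omega_2 changed sign: the body flipped over
        flips += 1
    prev = w[1]
print("flips observed:", flips)
```

Move the initial spin to axis 1 or 3 instead and the flip count drops to zero; the wobble just stays small, which is the stable behavior from the videos.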

I’ll consider it.

The real takeaway here is to double-check our understanding of angular momentum.

$L = I \omega$

When learning about the concept, 2-dimensional models can give a sense that the angular momentum is always in the direction of the angular velocity vector, and the moment of inertia is a scalar. But, a moment of inertia tensor in the inertial frame will evolve as a three dimensional object rotates, and the angular velocity must evolve in turn to result in an unchanging angular momentum.
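A quick numerical check of that last point: with a diagonal inertia tensor and a spin that is not along a principal axis, $L$ and $\omega$ point in different directions (all numbers invented):

```python
import numpy as np

# Principal-frame inertia tensor (made-up moments) and a spin axis
# that splits the difference between principal axes 1 and 2.
I = np.diag([3.0, 2.0, 1.0])
omega = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)

L = I @ omega

# Cosine of the angle between L and omega; 1.0 would mean parallel.
cos_angle = L @ omega / (np.linalg.norm(L) * np.linalg.norm(omega))
print(cos_angle)
```

The cosine comes out strictly less than 1: the scalar picture of $L = I\omega$ quietly assumed you were spinning about a principal axis.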

## Constructing the digits of pi from conservation of energy and momentum in collisions

A short one, just to get the ol’ bloggin’ fingers movin’ again. It’s starting to get chilly outdoors and they’re simply covered in cobwebs!! Where did they come from? How did I get spider webs on my hands…..?

Last year, 3Blue1Brown produced a series of lovely videos (but really, they all are) on a strange little thing. An object is sent bouncing back and forth between a rigid wall and another moving object with a particular (integer!) mass ratio. If this ratio is a power of 100, the number of collisions will be some of the first digits of pi!! Isn’t that nuts.

Anyone who has taken a high school senior level physics class or above would recognize the need to implement two conservation laws: that of momentum (linear with velocities) and that of energy (quadratic with speeds). Someone with just a bit of programming experience should be able to set up a series of solutions for the speeds after successive collisions, until the “no-collide” condition is met: the two blocks are traveling in the same direction, away from the wall, and gaining distance between them.

Hey, I’ve got a bit of programming experience! I whipped this up to see if I could get the same results.
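In outline, it goes like this (a compact reconstruction, with the elastic-collision update coming straight from the two conservation laws):

```python
def count_collisions(mass_ratio):
    """Count collisions for the sliding-blocks setup: a small block
    sits between a wall and a big block that slides in toward it."""
    m1, m2 = 1.0, float(mass_ratio)  # small block, big block
    v1, v2 = 0.0, -1.0               # negative velocity = toward the wall
    count = 0
    while v2 < v1 or v1 < 0:         # another collision is still coming
        if v2 < v1:
            # elastic block-block collision: conserves momentum and energy
            v1, v2 = (((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2),
                      ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2))
        else:
            # small block bounces elastically off the rigid wall
            v1 = -v1
        count += 1
    return count  # loop ends at the "no-collide" condition

for n in range(3):
    print(100 ** n, count_collisions(100 ** n))
```

With a mass ratio of $100^n$, the count spells out the first $n+1$ digits of pi: 3, then 31, then 314, and so on (floating-point error eventually bites for very large ratios).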

It looks like they’re doing it right! A fun little exercise. In the spirit of the original prompt I’ll let you find or figure out why it works for yourself. You can always watch Sanderson’s explanation video.

One hint I can give: it’s a bit related to Buffon’s needle, another technique for finding pi. That’s a fun idea whose understanding could open some avenues of thought for more difficult estimation problems. Check out the description and explanation on Numberphile:

OK ENJOY PI PLEASE AND THANKS PEACE OUT DUDERINOS