The Tennis Racket Theorem

or, the Intermediate Axis Theorem
or, Testing a Universally-Used Physics Engine for a Basic Mechanical Result

Let’s replicate a strange mechanical effect in a popular video game.

Kerbal Space Program is pretty cute. You build rockets from an increasing number of available parts, either with little aliens onboard or computer cores for control, and send them off to various planets in the Kerbol star system, with simplified gravitational interactions. The gravity model is always one-body, with planets on rail orbits: a spacecraft is only attracted to a single astronomical body at a time, chosen by whichever body’s sphere of influence it is inside, a region set by the body’s mass and its distance from its parent. So, no Lagrange points, and no capture into orbit by a planet’s gravity alone (though you can get captured by a planet without expending fuel by encountering one of its moons during the fly-by). Planetary orbits are not exactly co-planar, which provides some realism and frustration. All planetary bodies intrinsically rotate around parallel axes; apparently there was an unexpectedly difficult technical problem for the creators to overcome here. The result does simplify takeoffs and landings, which is nice.

Atmospheric interactions include a few regimes for engine response to pressure and speed, drag, and reentry heating. The most accessible lessons come from the takeoff stage, which any player sees from the start: finding the proper ratios between engine power, fuel storage, and payload mass. Later, if one seeks to remain in space, one must understand the classic lesson, “orbiting involves moving sideways more than upward.” Finagling with multiple stages is easy and forgiving to encourage experimentation — you don’t have to do any math to succeed. But you sure as hell can, if you’re the kind of person who appreciates True Fun With Fractions.

Even in the unusual circumstance that calculatin’ isn’t your jam (and, are you sure about that? Really?), KSP manages to keep your attention by featuring endearing little aliens who seem to revel in the hope of being involved in a nightmarish disaster.

My students brought the game up in class a few times in the past. One year, I’d see a daily crowd scamper off to the maker lab to play it on the PCs kept in there. There were free 3D printers in the room with them, and they were playing video games instead! I feel like I’m at a unique point in my life when I can both relate to the grumpy old man who laments children of today missing out on “real learning”, and also the little kid who would be first in line at the computer lab. Heck, I’d be first in line as a teacher, if they let me.

First in line to play for the educational value.

I truly believe there is a ton of educational opportunity here, but I am not sure how much of it requires an initial push to apply and achieve physical understanding. How much can someone figure out for themselves without having seen an example or read about the principles first? The principles for this kind of thing are often so close to the examples as to be indistinguishable.

Regardless, let’s demonstrate something with it.

Kerbal Space Program is written in Unity, which uses the PhysX engine developed by Nvidia. This thing is shared by Unity’s competitors, including Unreal Engine. I’d say that qualifies PhysX as an industrywide standard. So it had better accurately demonstrate some physical results of its model.

One of the tough things to get right in a mechanics lab is the illusion of a lack of gravity and the minimization of friction with the air and ground. Intro physics students have an easy time blaming their crappy lab reports on these, in addition to “human error.” A trusted physics simulation gives you the option to cut these out. Or never include them in the first place. KSP provides an easy way to assemble a test object and apply forces and torques in the comfort of cold, inhospitable simulated outer space. Let’s look at something that would be pretty tough to show in a classroom or mechanics lab: the tennis racket theorem, or Dzhanibekov effect, or the intermediate axis theorem.

An object spinning in space can undergo changes in its axis of rotation even without applied forces and torques. The spinning object above appears to change its rotational direction. Depending on how you define the direction of rotation, this is an illusion: the axis of rotation relative to the object can change without applied torques, but the angular momentum does not. Some of this weirdness can be worked through by making a comparison to linear motion.

When you try to push something, the rate at which it speeds up depends on its mass. This is, hopefully, one of the first things you learned in physics class. F = m a; the applied force results in an acceleration a. The object accelerates more quickly if the mass m is small, less quickly if m is large. You can treat everything as a scalar in one dimension, or include vectors in 2 or 3 dimensions. The force changes the momentum, p = m v. Most of the time, you keep the mass of an object constant, and see how the velocity v changes with an applied force.

The bridge into “rocket science” is the admission that m may change as well. A rocket burns fuel, decreasing m with time. All three of the things in F = ma are changing, which can often be confusing. It’s confusing in the same way that the ideal gas law or Ohm’s law are confusing: they often appear to be self-contradictory when one forgets the implicit assumptions about what is being held constant. This comes up in questions like “How can power be both directly and inversely proportional to resistance???” The implication is that either V or I, whichever one isn’t present in the equation, is being held constant.
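Written out, both versions come from the same P = IV, just with a different variable substituted away:

P = IV = I^2 R \quad (\text{using } V = IR) \qquad\qquad P = IV = \frac{V^2}{R} \quad (\text{using } I = V/R)

The first treats the current as the thing being held fixed, the second the voltage.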

Rotation in 2 dimensions involves the same math. When you’ve only got one dimension to rotate around (an axis pointing into or out of “the page”), you can treat it all as a bunch of scalars. The angular momentum L = I \omega . In 2 dimensions, there’s no way for a solid body’s moment of inertia I to change, and without torque there is no change for L, either. Everything stays constant, and whatever object you’re thinking of spins like a top.

Well, no. NOT like a top, because in 3 dimensions, a top precesses and nutates unless it is rotating exactly along one of three perpendicular directions, the principal axes. The direction of rotation \omega can change, even in a torque-free regime, because I can change. That’s right. Maybe you’ll never change, but the moment of inertia sure can. I swear, this time I will change. You just have to give it a chance, baby.

The momentum of a rocket changes because it loses mass that carries the momentum away. So the m in F = ma changes value as the rocket burns. But, if you consider the entire rocket-fuel system, even the exhaust left behind after the fuel has been burned, the total mass stays constant. The total momentum is constant. But if we kept track of m not only as a single value, but as a distribution through space, we would see it change. The shape of the mass distribution function changes with time, and the momentum density (how much is in the rocket, how much is in the space behind it in the exhaust) changes with time, too. We could keep track of a list of velocities or a broad velocity field as well — including the different speeds for the exhaust and rocket separately. If this sounds difficult, it’s because it can be, but I’m sure some simple models are good enough when all we really care about is the rocket’s path. In fact, a short list of masses is the way you solve your first problems in momentum conservation, by assuming a small number of rigid bodies:

p_\text{net} = m_1 v_1 + m_2 v_2 = (m_1 + m_2) v_3 = m_3v_3
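With made-up numbers: a 2 kg cart moving at 3 m/s latches onto a 1 kg cart at rest, and the combined 3 kg object coasts away at

v_3 = \frac{(2\,\text{kg})(3\,\text{m/s}) + (1\,\text{kg})(0\,\text{m/s})}{3\,\text{kg}} = 2\,\text{m/s}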

The velocity v_3 is often that of a combined object. The rocket case is a poor example here, because rockets burn fuel continuously, but it does apply with the right trickery. Usually an introductory problem treats that value as the result after a collision, or a merger, or another instantaneous event. Regardless, even after a collision event, you can still treat m_1 and m_2 as separate values that are a part of a multivalued distribution of mass that applies to the entire system. The next step is to say, well, the exhaust isn’t made up of a single object. It’s made up of individual puffy clouds, each with their own masses and velocities. Maybe m_1 = m_{1,1} + m_{1,2} + m_{1,3} + m_{1,4} + .... Each of those little clouds could themselves be broken into a multitude of gas molecules, each with its own particular mass and velocity contributing to the whole. At that stage, you’ve got to start integrating infinitesimal masses and their velocities throughout space to get the full picture.

And this is the procedure for angular momentum. You pick an axis of rotation on an object and you integrate infinitesimal masses throughout its volume to get a single value for the moment of inertia. This works well for rigid objects whose axis of rotation does not change with time. That’s the typical introductory-style problem, where oftentimes rotations are presented in only two dimensions, and so there’s only one possible axis of rotation anyway. Your angular momentum looks a lot like linear momentum — it’s the product of an effective mass (I) and a rate of change (\omega).

L = I \omega

In 3 dimensions, though, the goofy stuff happens. With a solid object spinning freely in space, the direction of rotation and moment of inertia can change (while conserving the product when under no torques). In the realm of linear momentum in 3d, this is maybe like… an object speeding up and slowing down as it travels in a single direction, with its intrinsic mass changing as it goes? It’s perhaps more like an object’s direction of travel changing on its own, with effective masses assigned to different directions, surging between one and the other. This doesn’t really happen as far as I know, which is why rotation is counterintuitive.

Ultimately, you need to describe the moment of inertia as a tensor — basically a matrix (as far as you care, which you may not, but if you do then you’re already fine). The moment of inertia tensor by definition is symmetric (I_{ij} = I_{ji}).

I = \left(\begin{array}{ccc} I_{11} & I_{12} & I_{13}\\ I_{21} & I_{22} & I_{23}\\I_{31} & I_{32} & I_{33} \end{array}\right) = \left(\begin{array}{ccc} I_{11} & I_{12} & I_{13}\\ I_{12} & I_{22} & I_{23}\\I_{13} & I_{23} & I_{33} \end{array}\right)

The symmetry is a reflection of the fact that spinning the thing clockwise will be just as hard as counterclockwise on the same axis. So here’s a fun thing: any symmetric matrix can be diagonalized. Diagonalization is a key technique in physics and, I’m quite sure, has applications in any field involving quantitative data. If you’ve got any equations that are at least as complicated as adding or multiplying, you’re probably looking at a linear system or one that is linear within a certain regime. If you’re a student and have made it this far, I’d recommend taking the time to understand this concept, understand linear regression cold, take a linear algebra course, etc. I often wonder if matrix manipulation with applications would be more useful or enlightening to the average citizen than most concepts in calculus, which seems to be the “default” advanced math course for high schoolers.

The point here is, through diagonalization, you can rotate the object (or change your point of view) so that only the diagonal terms are left in the tensor, each corresponding to the difficulty in rotating the object around orthogonal (mutually perpendicular) axes, such as our standard x, y, z. It’s just a change of coordinates.

I = \left(\begin{array}{ccc} I_{11} & I_{12} & I_{13}\\ I_{12} & I_{22} & I_{23}\\I_{13} & I_{23} & I_{33} \end{array}\right) \rightarrow \text{diagonalize}\rightarrow I = \left(\begin{array}{ccc} I_{1} & 0 & 0\\ 0 & I_{2} &0\\0 & 0 & I_{3} \end{array}\right)

The three perpendicular directions that line up with these numbers are the principal axes. If the moments of inertia I_1, I_2, I_3 are different, we can make sure we’ve defined our axes so that I_2 is the middle value, with I_1> I_2 > I_3. We can then call its corresponding axis the intermediate axis.
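If you have numpy handy, the diagonalization is a single function call. A quick sketch with a made-up tensor (not any real phone’s):

import numpy as np

# a made-up, symmetric inertia tensor written in some arbitrary body coordinates
I = np.array([[2.0, 0.3, 0.1],
              [0.3, 3.0, 0.2],
              [0.1, 0.2, 1.0]])

# eigh is for symmetric matrices: eigenvalues are the principal moments,
# eigenvector columns are the principal axes
moments, axes = np.linalg.eigh(I)
print(moments)   # sorted ascending, so these are I_3, I_2, I_1 in the labeling above
print(axes)      # each column is the direction of the corresponding principal axis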

The intermediate axis theorem is this: When spun around axis 2 in free space, an object’s angular velocity will initially be aligned with that axis but will drift away from it. If it starts close enough, the drift away can happen quite suddenly, resulting in the flipping effect you can see in the video above. If the object is instead spun around the axis with the largest or smallest moment of inertia, the angular velocity stays close to that axis. Spinning around those two is stable, while the intermediate axis acts as an unstable equilibrium.
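You don’t even need a game engine to see this. The torque-free Euler equations, I_1\dot{\omega}_1 = (I_2 - I_3)\omega_2\omega_3 and its two cyclic permutations, can be integrated numerically; here is a minimal sketch with made-up moments of inertia and a tiny nudge off the intermediate axis:

import numpy as np

# made-up principal moments, ordered I1 > I2 > I3 as in the post
I1, I2, I3 = 3.0, 2.0, 1.0

def euler_rhs(w):
    # torque-free Euler equations in the body frame
    w1, w2, w3 = w
    return np.array([(I2 - I3) * w2 * w3 / I1,
                     (I3 - I1) * w3 * w1 / I2,
                     (I1 - I2) * w1 * w2 / I3])

def rk4_step(w, dt):
    k1 = euler_rhs(w)
    k2 = euler_rhs(w + 0.5 * dt * k1)
    k3 = euler_rhs(w + 0.5 * dt * k2)
    k4 = euler_rhs(w + dt * k3)
    return w + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# spin almost entirely about the intermediate axis, with a tiny nudge elsewhere
w = np.array([1e-3, 1.0, 0.0])
trajectory = [w.copy()]
for _ in range(6000):
    w = rk4_step(w, dt=0.01)
    trajectory.append(w.copy())
trajectory = np.array(trajectory)

# a coarse sample of omega_2 over time: it swings between roughly +1 and -1,
# which is the flip
print(np.round(trajectory[::500, 1], 2))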

One of the best ways to visualize this is not with a tennis racket, but with a phone! I’ll get to the video game eventually, don’t worry. Phones are great because they are very common and easy to handle. It turns out that boxy objects like phones have very clear principal axes due to their symmetry, which are parallel to their standard dimensions. In addition, phones tend to be very expensive, so it’s very exciting to throw one up in the air over and over to test this effect. Let’s get our phones out.

Introducing: The Cool Phone 2000.

A cuboid has principal axes perpendicular to its faces.

The three principal axes of the roughly cuboid shaped Cool Phone 2000. I1 has the largest moment of inertia, and I3 the least. I2 is the intermediate axis.

It’s easiest to spin it around the “long” axis — in this case, I_3 — and hardest to spin around the axis poking out of the center of the screen, I_1 here. This is because more of the phone’s material sits farther from that axis of rotation.
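If you treat the phone as a uniform solid box with height a, width b, and thickness c (so a > b > c), ignoring its real, lumpy innards, the standard cuboid formulas give

I_1 = \frac{m}{12}\left(a^2 + b^2\right), \qquad I_2 = \frac{m}{12}\left(a^2 + c^2\right), \qquad I_3 = \frac{m}{12}\left(b^2 + c^2\right)

with I_1 for the axis poking out of the screen, I_2 for the axis running across the width, and I_3 for the long axis. With a > b > c these come out in the order I_1 > I_2 > I_3 automatically.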

Spinning around the principal axis with the smallest moment of inertia (I_3). This is the easiest way to get it spinning faster.
Spinning around the principal axis with the greatest moment of inertia (I_1). This is the toughest way to get it spinning, in terms of torque and energy expenditure. Actually flipping a phone this way may feel easier to you, depending on your dexterity.

I’m using a phone as an example because I want to encourage you to try this for yourself. If you don’t have a phone on hand, or aren’t feeling like tossing your $1000 iPhone up in the air, you should still be able to find a small, rigid box to try this with. If you throw the phone up in the air so it spins like either of these, it ought to continue doing so until you catch it or it collides with the ground.

If you try to toss it along the intermediate axis, though, it will flip. My instincts expect this:

My expectation for spinning around the intermediate axis.

The reality is closer to this:

Closer to the reality. This is just an animation, not a simulation. The actual movement is not so periodic, but gives the key result that the phone can face away, but be upright, after spinning.
Closer to the reality, with pauses to let you get your bearings. Hopefully you can convince yourself this is not simply spinning around a tilted axis; the axis itself is changing as the phone rotates.

Again, I stress, try this for yourself. Hold your phone like a playing card, and flip it up into the air, with the top falling toward you. If you get good, you can catch it and toss it repeatedly and it will look super cool. Extremely cool. It’ll be a hit at parties.

“Simulating” in KSP

I figured, since the game uses a Real Physics Engine®, I could build a phone-like ship in Kerbal Space Program, apply some spin, and see the effect happen. So, I made an accurate 200:1 model of the Cool Phone 2000 and launched it into orbit.

Almost done designing.

It was attached to six rocket motor arrays, two along each of the (supposed) principal axes, which would be used to quickly spin it up in that dimension.

Launching the Cool Phone 2000. I could have used the developer cheat menu to put it in orbit. But, I figured, if I’m playing this game I might as well play the game.
It’s pretty easy when you don’t have to worry about money, life support, Kessler syndrome, …
…the harsh, imposing coldness of empty space, the-

The first one I launched spun up a little too easily. The engines were solid rocket types, which can’t be turned off. This means they had to fire at full blast until they were out of fuel, which gave way more spin than this thing needed. So, I cheat-menu’ed a reduced thrust, higher mass group of spinners to get it up to a more manageable speed.

The 6000 model was scrapped…
…and replaced with the 7000.

The new model also had a better shape for starting the spins. The first one I sent up had a few too many heavy decorative parts, which skewed the mass symmetry more than I expected.

It spins nice and stably in the “toughest” dimension:

Rolling about I1.

Take a look at the navball in the lower left. For these short lengths of time, the navball is the view of a non-rotating ball that someone attached to the ship would see. When the ball is all blue, it means the antenna is pointed directly “up.” If it’s all brown, the antenna is pointing “down.” Sure, you can define up and down in space. Here, you can see the view is moving pretty much along a single great circle on the navball. The antenna spins around in a stable circle.

The phone’s also stable when spinning in the long dimension.

Yawing.

You can see the yellow view marker is pretty much stationary on the navball, with the navball spinning around that point. Or, you can just look and see that the phone is not flipping around.

For the intermediate axis, it’s all wonky, as expected!

Featuring the instability of rotation around the intermediate axis.

Watch it flip and flop! There is no external torque on this once the spinners detach. Compare the navball behavior to the other two cases.

Not a real shock, but fun to demonstrate.

Actually, it might be fun to track the path of the orientation. I suppose this could be simulated by solving the Euler equations, or I could just read it off the navball! The axis of rotation follows a path called a polhode, which is the sort of taco or criss-cross shape formed by the intersection between a sphere and an ellipsoid.

Some polhodes drawn in blue. The angular momentum (from the phone’s point of view) is restricted to the surface of a sphere (orange) by conservation of angular momentum, and restricted to an ellipsoid (purple) by conservation of energy. It must lie on the intersection. The shape and size of the sphere and ellipsoid depend on the dimensions of the spinning object, and how fast and in which direction it spins.
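In terms of the body-frame components of the angular momentum, those two constraints are

L_1^2 + L_2^2 + L_3^2 = L^2 \qquad \text{and} \qquad \frac{L_1^2}{2I_1} + \frac{L_2^2}{2I_2} + \frac{L_3^2}{2I_3} = E

and the polhode is whatever curve satisfies both at once.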

I’ll consider it.

The real takeaway here is to double-check our understanding of angular momentum.

L = I \omega

When learning about the concept, 2-dimensional models can give a sense that the angular momentum is always in the direction of the angular velocity vector, and the moment of inertia is a scalar. But, a moment of inertia tensor in the inertial frame will evolve as a three dimensional object rotates, and the angular velocity must evolve in turn to result in an unchanging angular momentum.

Ludum Dare 48: Deeper and Deeper

For the first time, I’ve participated in Ludum Dare. It’s a long-running “game-jam” and competition, in which participants make a (video) game in a weekend. This is what we came up with.

The competition involves making games solo, and the jam allows you to have teams of any size. The division seems like a simple solution for competitive people, but it’s hard for me to imagine the mindset for treating the event competitively. The rules are a tiny bit uncertain when it comes to using already-made material, and mostly rely on the honor system anyway.

The event runs three times a year, starting with the announcement of a theme. I was on a team along with Ben, Declan, and Zach. We named our game after the theme, “Deeper and deeper,” and went with classic isometric dungeon exploration.

Ben and Zach did all the visual art, which rules. Declan made the awesome score, which changes as you get… further and further in the cave. I was able to practice with Godot’s logic and signal system. Ben and Zach made all the enemies and NPCs, and were able to glue everything together.

banner.png
Deeper and Deeper.

I have been learning Godot recently for another project, and it’s proven to be a real blast. It is an open-source alternative to Unity, providing a ton of classes for 2D and 3D objects whose lighting and physics can be automatically handled by the engine. I hope to make something interesting with it. It can also deal with animations in what I assume is how Flash worked. I dunno. I never wrote anything with Flash. Remember Flash? I remember Flash. Flash. There used to be it.

screenshot.PNG
Screencap of Deeper and Deeper.

The story is cute, involving elderly and estranged twin brothers, and we fit in a long silly recording of it told aloud.

I hope to do this event again in the future as I continue to learn Godot. Except for some trouble we had with Git, it worked the way it was supposed to. A solo project wouldn’t need to use Git to share between team members anyway.

Logic Gates in Rimworld 2: Gates overview

Continuing from this post, where I wrote about playing Rimworld a bit. My goal was to create a logic circuit of some kind within the game, which I will describe in a follow-up post.

Logic Gates

Logic gates are like, how a computer thinks? They work with channels, or wires, which typically have one of two states at any time. The wire can be “on” or “off.” These are often represented as the familiar 1s and 0s.  In an electronic circuit, a wire is on if it has a voltage above a threshold, and off if it’s below.

A gate looks at two of these wires’ states, the inputs, and applies a state on an output wire. The output state depends on the specific type of logic gate and the two input states.

AND gates accept two inputs, both either ON or OFF, and spit out a single output. ANDs only output ON if both inputs are ON. The rectangle isn’t the normal symbol for this kind of gate, but who cares right now.

An And gate outputs “on” if both of the inputs are on, and “off” otherwise. An Or gate outputs “off” if both of the inputs are off, and on otherwise. We can summarize all of the possible responses by one of these gates in a table.

\begin{array}{cc} \text{And} & \text{Or} \\ \begin{tabular}{|c|c|c|}\hline  \text{In 1}& \text{In 2} & \text{Out} \\ \hline \hline \text{off} & \text{off} & \text{off} \\ \hline \text{off} & \text{on} & \text{off} \\ \hline \text{on} & \text{off} & \text{off} \\ \hline \text{on} & \text{on} & \text{on} \\ \hline \end{tabular} &  \begin{tabular}{|c|c|c|}\hline  \text{In 1}& \text{In 2} & \text{Out} \\ \hline \hline \text{off} & \text{off} & \text{off} \\ \hline \text{off} & \text{on} & \text{on} \\ \hline \text{on} & \text{off} & \text{on} \\ \hline \text{on} & \text{on} & \text{on} \\ \hline \end{tabular} \end{array}

We can shorten this up by calling the inputs A and B, and the outputs the name of the gates. I’ll also tack on the outputs for a third gate, XOR, the exclusive OR.

\begin{tabular}{|c|c||c|c|c|}\hline \text{A} & \text{B} & \text{AND} & \text{OR} & \text{XOR} \\ \hline\hline  0 & 0 & 0 & 0 & 0 \\ \hline 1 & 0 & 0 & 1 & 1  \\ \hline 0 & 1 & 0 & 1 & 1  \\ \hline  1 & 1 & 1 & 1 & 0  \\ \hline \end{tabular}

XOR is “output 1 if A or B is 1, but not both.” These gates can be chained; the output of one can be sent into another. We can also split the output of any gate so it can be an input for multiple gates. Linking these together allows us to take multiple input wires and interpret them. You can then just shove these all into a black box and not care about what it’s made of. For both our sakes, take a look at the Wikipedia page. They can be used to add binary digits.
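If it helps to have something to poke at before we get to adding digits, here’s a throwaway Python sketch of those three gates and their combined table (nothing Rimworld-specific yet):

def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

print("A B | AND OR XOR")
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "|", AND(a, b), " ", OR(a, b), " ", XOR(a, b))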

 

Halfadder.gif
A Half Adder. This thing uses an XOR and an AND gate to add two single bit numbers. The result is two bits. The S sum is the 1s place and the C carry is the 2s place of the 2 bit sum. So A + B = CS, 0 + 0 = 00, 0 + 1 = 01, and 1+1 = 10. From the Adder Wikipedia page.

So that’s cool. So here’s the thing. With only ANDs, ORs, or XORs, there’s no way you can start with only 0s as inputs and end up with any 1s. This leads us to the idea of functional completeness: we want to be able to chain together these gates to create any output given any collection of inputs. You do not have functional completeness with those three only. The best option to add on at that point is a NOT — a gate that accepts only one input and inverts it. It turns a 1 input into a 0 output, or a 0 into a 1. With a bucket of NOTs available, along with a supply of ANDs or ORs, you can make whatever you want. (NOTs and XORs alone still aren’t enough.)

There are two “universal” gates — ones that on their own are sufficient to make any kind of circuit. These are NAND and NOR, the negation of AND and OR.

\begin{tabular}{|c|c||c|c|}\hline \text{A} & \text{B} & \text{NAND} & \text{NOR}  \\ \hline\hline  0 & 0 & 1 & 1  \\ \hline 1 & 0 & 1& 0  \\ \hline 0 & 1 & 1 & 0  \\ \hline  1 & 1 & 0 & 0  \\ \hline \end{tabular}

Putting these together cleverly lets you make whatever kind of processing you need.

Half_adder_using_NAND_gates_only.jpg
A half adder again, but made only with NAND gates. It uses a lot more gates, but we don’t need to keep track of which one is which.
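Here’s a quick Python check that a NAND-only construction really does reproduce the XOR and AND outputs; this is one common five-NAND arrangement, not necessarily the exact wiring in the image:

def NAND(a, b):
    return 1 - (a & b)

def half_adder_nand(a, b):
    # a five-NAND half adder: sum = a XOR b, carry = a AND b
    n1 = NAND(a, b)
    s = NAND(NAND(a, n1), NAND(b, n1))
    c = NAND(n1, n1)
    return s, c

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder_nand(a, b))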

Logic gates in Rimworld 1

I made a system of NAND logic gates using the logic of production benches in the video game Rimworld. First I’ll say a bit about the game itself. The posts following this one will detail the NAND gates and the half-adder built with them.

Rimworld: Transhumanistic Cult Simulator

Image result for rimworld
The title of the game. It’s that.

In Rimworld, the player operates a small colony of people in a Wild-West-but-on-Another-Planet setting. It touts itself as a “story generator,” which I truly appreciate.  It features several AI storytellers with different personalities and difficulty settings. They decide on randomized events to occur in and around your colony according to their whims. Given a large number of mechanics and options for survival and success in the world, this leads to surprising, charming, often brutal or devastating, and unexpected developments. There is a well-supported and easy-to-use modding system. The creator encourages the community to add items, alter mechanics, introduce entirely new modes of gameplay, or change the colonists’ hairstyles.

rimworldexample.png
A busy screenshot of Rimworld showing off a dining room, kitchen and freezer, and fields. This colony admittedly has 10 or 20 times the typical number of colonists.

Emergent events and structure don’t exist in traditional storytelling. Driving and witnessing a course of events particular only to the player’s instance of the game leads to a strong personal investment. And personal investment is fun. So it’s fun when a game does that. I like fun and fun is good.

Natural disasters and dangerous weather can strike at any time. Human and robotic raiding parties wait for opportune times to assault the camp. There is an ending condition, in which you build a spaceship to escape the planet, but the game is really an “it’s all about the experience-” experience. In terms of what you do, it’s squarely in the simulation genre. Your colony has a bunch of people with a bunch of skills, and you direct them to build homes and farms, workshops and infirmaries. The core of the game is choosing what buildings and structures to build, and where and when to do so, and which colonists will prioritize which kinds of the work that must be done to keep the compound running smoothly. The essence of the thing is a matrix of values you tweak to set these priorities appropriately. This becomes necessary once the colony grows too large to comfortably direct each colonist. If changing values in a matrix and later realizing you were wrong about them doesn’t sound like great fun, well, I just don’t know how to help you.

rimworldpriorities.png
The job priorities window — one of the more important menus in the game. Colonists will work on available jobs with the lowest numbers, and furthest to the left, before doing others. For example, once all “Basic” tasks have been completed or are reserved by other colonists, Stin will then begin work on any Cooking that needs to be done. Once those are done, he will move on to any Grow tasks, like planting crops or caring for houseplants. The squares around some numbers reflect tasks which some colonists are particularly skilled or poor at.

And so you try to create stability amongst randomness. Ima might decide to install those floorboards first instead of these. Euterpe might decide to clean the kitchen before the infirmary. Most tasks can be overridden, and colonists directed to do something specific, but the goal is to develop a system and infrastructure within which everyone gets around to doing all of the tasks you’ve planned.

In my game, once my settlement had developed a solid foundation, defenses, and food stores, I turned to improving the colonists. I became focused on finding, buying, and manufacturing bionic body parts to install in every colonist. A brewing operation shipped thousands of bottles of beer to nearby settlements to pay for resources and trade for any parts they had made. I began sending small parties in ballistic transport pods across the continent to scout distant towns for anyone selling the coveted computer-designed archotech body parts. Any downed but still living enemies were imprisoned and nursed back to health. Healthy raiders were captured by an army sporting an arsenal of psychic shock lances, procured with funds from the brewery. Regardless of their skills, dispositions, preexisting health conditions or chemical addictions, all prisoners would eventually be convinced to join as members of the colony. Once they agreed, they were sent straight to the hospital, where a bionic heart, stomach, and a suite of artificial limbs awaited them. And they had to want it. Body purists, those who were naturally revolted by the transhumanistic dream of glistening plasteel augments, had their bodies replaced regardless. They were fitted with a custom Joywire installed deep in their brains so they would mind the additions just a little less.

I had discovered the purpose of the game. We were the Borg, but every potential drone’s mind would first be broken down and then rebuilt, until they begged to join and be Improved.

So, I had played the game as it was surely intended, and was ready to do something fresh. I had turned all of my colonists into robots, so the next step was to turn the colony itself into a computer.

Constructing the digits of pi from conservation of energy and momentum in collisions

A short one, just to get the ol’ bloggin’ fingers movin’ again. It’s starting to get chilly outdoors and they’re simply covered in cobwebs!! Where did they come from? How did I get spider webs on my hands…..?

Last year, 3Blue1Brown produced a series of lovely videos (but really, they all are) on a strange little thing. An object is sent bouncing back and forth between a rigid wall and another moving object with a particular (integer!) mass ratio. If this ratio is a power of 100, the number of collisions will be some of the first digits of pi!! Isn’t that nuts.

My two blocks.

Anyone who has taken a high-school-senior-level physics class or above would recognize the need to implement two conservation laws: that of momentum (linear in the velocities) and that of energy (quadratic in the speeds). Someone with just a bit of programming experience should be able to set up a series of solutions for the speeds after successive collisions, until the “no-collide” condition is met: the two blocks are traveling in the same direction, away from the wall, with the gap between them growing.
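For what it’s worth, here is a minimal sketch of that counting loop using the standard 1D elastic-collision formulas. It’s not the code behind the animations below, just the bookkeeping:

def count_collisions(mass_ratio):
    """Count collisions for a big block of mass `mass_ratio` sliding toward
    a unit-mass block that sits between it and a rigid wall."""
    m1, m2 = 1.0, float(mass_ratio)   # small block, big block
    v1, v2 = 0.0, -1.0                # negative velocity = moving toward the wall
    count = 0
    while True:
        if v2 < v1:
            # elastic block-block collision
            v1, v2 = (((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2),
                      ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2))
            count += 1
        elif v1 < 0:
            # small block bounces off the wall
            v1 = -v1
            count += 1
        else:
            # both heading away from the wall, big block not catching up
            break
    return count

print([count_collisions(100**n) for n in range(4)])   # expect [3, 31, 314, 3141]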

Hey, I’ve got a bit of programming experience! I whipped this up to see if I could get the same results.

Both blocks equal. The block on the right is 100^0 = 1 times as massive as the block on the left. Three collisions.
100:1. There are 31 collisions in there. Looking good!

10000:1. There are 314 collisions! It looks like it’s working. Trust me. You must trust me.

It looks like they’re doing it right! A fun little exercise. In the spirit of the original prompt I’ll let you find or figure out why it works for yourself. You can always watch Sanderson’s explanation video.

One hint I can give is that it is a bit related to the Buffon’s needle technique of finding pi. That’s a fun idea in its own right, and understanding it could open some avenues of thought for more difficult estimation problems. Check out the description and explanation on Numberphile:


OK ENJOY PI PLEASE AND THANKS PEACE OUT DUDERINOS

Exploring cellular automata 2 — my lovely maggot children

In some other post I linked to the Wolfram “atlas” of 1-dimensional elementary cellular automata, the list of 256 rules that can apply to a row of “on” or “off” pixels to determine a successive row, in which a pixel’s successive value depends only on its own value and those of its two neighbors. There are only 256 of these because of this limitation.

We can extend this to a 2-dimensional space. It seems to be easier to follow what’s going on in 2D — maybe just by the nature of looking at a computer screen. It’s the space that the Game of Life lives in. A grid of pixels can be on or off, and a successor grid follows, with each pixel’s state determined by its initial state and those of its neighbors. In order to visualize this, you could lay each successor on top of its parent grid to make a 3D object (analogous to forming a 2D image for a 1D array of pixels). But this would be really hard to look through. Instead, just play the successive grids as an animation.

Stable_puffer_animation.gif
A puffer in Game of Life. Each frame of the animation is a new epoch. Needless to say, this isn’t an obviously possible construction given the GoL rules. Wikipedia

Conways_game_of_life_breeder_animation.gif
A GoL breeder. This configuration only creates guns, which create glider spaceships. In the last frame, the breeder is colored red, the guns green, and the gliders blue. Bickibeepia

A ruleset can be described with a series of images, like the 1-D automaton, but again, you’d have to either stack the start example on top of the resulting center pixel, or place them side by side. In addition, there are going to be many more images necessary for a rule, as there are more configurations of on-and-off neighboring cells in 2 dimensions.

This also introduces the uncertainty in what a neighborhood is in 2D. We could consider the 4 cells in the cardinal directions as the only neighbors (the von Neumann neighborhood), or we could tack on the corners, too (the Moore neighborhood).

CA-von-Neumann.png
The von Neumann neighborhood. Pink cells show an extended version.

CA-Moore.png
The Moore neighborhood. Wikibleedia

For the von Neumann neighborhood, each rule alone would require 2^5 = 32 images. With a choice of on or off for the resulting cell for each image, there are 2^{32}, about 4.3 billion, elementary cellular automata of this type. A big jump from 256, but still within the realm of a machine survey.

vonneumannex1

vonneumannex2
Two examples of rule icons for an elementary cellular automaton using the von Neumann neighborhood. For a single ruleset, one would need 32 of these bad boys. These read a bit differently than the 1-D rule icons — the square on the right replaces the center square in the image on the left.

The Moore neighborhood would have 9 squares, replacing the center square, and so one would need 2^9 = 512 icons for each rule. There are 2^{512} rules of this type. Yikes! There are the same number of squares in the “extended” von Neumann neighborhood, so these numbers apply there as well.

mooreex.png
A single icon of 512 from one of the elementary automata using the Moore neighborhood regime.

The 5-cell neighborhood has probably been characterized fairly completely — tracking a few of the properties of a few billion systems doesn’t seem to be out of the realm of possibility, especially when considering that this number can be reduced by a factor of up to 8 due to horizontal-mirroring, vertical-mirroring, centerpoint-mirroring, and color-inverting of each of these. The 9-cell neighborhood’s 2^{512} rules are another story entirely.

What seemed exciting was to look at larger systems. A Super von Neumann neighborhood of 21 squares needs 2^{21} = 2,097,152 icons per ruleset, giving 2^{2,097,152} possible rulesets, and the Super Moore neighborhood of 25 squares needs 2^{25} = 33,554,432 icons, for 2^{33,554,432} rulesets. Now we’re having fun.

extendedvonneumann.png
The Super von Neumann neighborhood of 21 squares.

extendedmoore.png
The Super Moore neighborhood.

It felt like there was some chance of picking something weird out of here.

A ruleset based on an oscillator

One construction I didn’t mention in the last post was an oscillator — a repeating, unmoving pattern. Some oscillators can be seen in the wakes of the dirty puffers.

2x2_p2s.gif
Some period-2 oscillators in GoL. Bumbobedia.

Koksgalaxy.gif
A larger oscillator, with period 8. Bunklebeepia.

Oscillators are emergent in GoL; there isn’t anything about the rule that explicitly reflects an intent of repetition. I was interested in making a rule that was based on an oscillator, and to see if something like spaceships would emerge.

Maggotworld

The simplest oscillator I could think of that seemed interesting was a clockwise spinning line. 3 activated cells, with a diagonal between the first two, would appear to spin around in a circle. If there were three cells in this configuration, the next epoch would deactivate the cell in the back, and activate a new one in the next step around a circle. This resulted in an 8-period oscillator. I originally called the idea Spinworld, but it soon seemed appropriate to call it Maggotworld.

oscillators.gif
Four base oscillators (maggots) in Maggotworld with different phases. White pixels are considered activated. The base oscillator has a period of 8 epochs.

Instead of matching the Super Moore neighborhood around every pixel against one of the 33.5 million icons in the ruleset, the Maggotworld propagator just checks whether a pixel has three activated pixels nearby in one of the right configurations to turn on. There are eight ways this could happen — corresponding to the eight squares that are activated throughout the cycle. This doesn’t mean there are only eight icons of the 33.5 million that lead to an activated central pixel. The pixels toggle: a dead pixel toggles on if there are one or three maggots approaching it, and toggles off if there are two or four approaching it. The same goes for the tail ends of a maggot — two maggots with the same tail will leave the tail point on, as it is toggled twice. So there are many ways pixels could be left on or off depending on the situation. This seems ripe for emergent behavior!

I ended up trying a bunch of different setups to see how they behaved way back in April, and wrote most of the above then, but haven’t gotten around to this showcase until now.

Random starts

I began by just slapping random starting configurations down and seeing what happened.

1.png
The seed…

2.gif
…and the result. My optimistic dream of seeing spaceships shoot out into the black shattered.

randomstart.png
A high density of activated squares…

randomstart2.gif
…is quickly lost after the first epoch, leaving just a few maggots wallowing in their filth. Some of these are neat.

After some initial discouragement, some neat patterns did emerge. In the last animation you can spot a few figure 8s, figure triple-Os(?), and maggots passing straight through each other. Not a spaceship, but at least something a bit more complicated.

splotchesanimreal1.gif

Not so random starts

If I couldn’t get a spaceship out of this, I could at least look for something cool. Time to get systematic. I placed maggots at varying relative phases and positions to start and let them go.

7.png
Two maggots 180 degrees out of phase at varying relative starting positions.

8.gif
And the results.

allstartingpositions.gif
All the same with different starting phases.

This was a bit discouraging — none of the ones that touched made any lasting patterns beyond a single oscillator. Anything interesting would have to be made of more than 2 of these, or have some stationary gunk waiting around to push them around a bit.

In one of the random setups, I saw this little friend, which I’ll call a slingshot oscillator.

slingshotstill.png
Slingshot oscillator.

slingshot.gif
Slingshot has a period of 14.

It’s got a period of 14. Having a different period is… useful? How is any of this useful? I dunno whatever.

Another neat result would be to get a pattern that turns on cells outside of the normal oscillator radius. Here’s one. It looks like a really low resolution hand-style cursor. Let’s call it the flinger.

flinger.png
Flinger start!

flinger.gif
Flinger go! It turns on a cell in the upper right well outside of the original oscillator’s domain.

The last thing of this type is a little u. I like this because, while it seems like these rules easily reduce the number of lit cells, this generates an extra sixth cell by the end of its process.

uthing.png
U-thing!

uthing.gif
U-thing go!

Behavior in grids

If I couldn’t get a spaceship flying away in empty space, maybe I could get something neat with stationary patterns placed ahead of time. I started by plopping oscillators in a lattice of lit points. It’s real easy to get some ongoing domino action this way.

grid1.gif
Diagonal motion.

grid4.gif
A couple more.

grid2.gif
Vertical motion. There’s a lot of trash left behind, but the key to see here is that only one or two lit squares next to a lit bar drives the motion. The lit side determines which way it travels.

Trams and fuses

The last one looked neat. I tried to make more “trams” following a track. Here are a bunch. There are also configurations that rip the track apart. Let’s call em fuses.

trainsandtrams.gif
“Trams”

Some of the styles are repeated with a hole in the track to show how they deal with that. Holes are also placed at different locations along the track for the same style of tram to show off how they travel: trams hop two spaces at a time, not one. An “oddly” placed hole will force it to reflect or scatter differently than an “evenly” placed one.

I was taken with the self-repairing track on the bottom.

trainrepair.gif
Self-repairing tramtrack.

This seemed like a way to create a caterpillar-like spaceship of indefinite length that scootches forward a couple squares each cycle. I wasn’t able to figure out how to get it to repeat, though.

Another fuse had an end that, after some acrobatics, sent another tram that flipped the orientation of the fuse.

trainwithflipper.gif
A single lit square between the tracks flips the sides of the track: two lit on top and one on the bottom become one on the top and two on the bottom.

Timers

One last thing and I’ll let you go, I swear. A row of lit squares can be used as a timer of indefinite length. This could go up above with the other oscillators, but the fact that this can be stretched to as long as you need it puts it in its own class.

timer1.gif
A timer. This repeats. It takes some time to do that so I called it a timer.

These can also take a turn. These turns could be used to stop the motion, or to initiate the timer in the first place:

fuse.gif
Two timers. The top timer is initiated after the maggot turns the corner. The bottom stops after the path has been traversed.

Maybe these are more appropriately called fuses? But I already used that word.

Showing these little tricks and behaviors is interesting enough alone for me, but it would be really swell if we could figure out how to produce oscillators / timers of truly any period. I’m also still wondering if there is an elusive spaceship, one without a grid or track to support it, hiding in these rules. The holy grail after that is to get these things to do binary logic, but let’s save that for another day.

I Lied

I said I had only one last thing but I totally lied to you. You must be feeling pretty silly right now! Just one more bit. I promise. 🤞

Before finishing up with the maggotworld animations, I had tried a lot of different rulesets to see how they’d act. One of them was simple: A cell would turn on if it had an odd number of neighbors (in the Moore neighborhood), and off otherwise.
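That rule is only a few lines if you have numpy around. A sketch, assuming wraparound edges and a grid big enough that the wrap never matters over 63 generations:

import numpy as np

def odd_neighbor_step(grid):
    """One epoch: a cell is on next epoch iff it currently has an odd number
    of on neighbors in the Moore neighborhood."""
    neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    return neighbors % 2

grid = np.zeros((129, 129), dtype=int)
grid[64, 64] = 1                      # a single lit cell in the middle
for _ in range(63):
    grid = odd_neighbor_step(grid)    # grows into the replicator pattern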

Fredkins.gif
Fredkin’s Replicator.

What a pretty pattern! I was pleased to learn later that I had recreated something called Fredkin’s Replicator. You can see some explanation in Numberphile’s Terrific Toothpick Patterns (starting at 14:20). A lecture also by Neil Sloane gives more insight into analyzing this and other “odd-ruled” cellular automata. Do yourself a favor and check out at least the Numberphile vid.

Fredkins.png
The Fredkin’s replicator having reached the 63rd generation. What a beaut! You can see the same repeating structures, and the same shapes repeated at different scales.

Visualizing text as paths

I was reminded recently that I actually have a blog. I have a Cellular Automata Part II post that’s been sitting around since April, but here’s something quick before I get around to polishing that up.

During a prescribed Art Night, I decided to come up with a visualization of long texts. I grabbed a few examples of literature from Project Gutenberg as plain text files, and converted to binary using each letter’s ASCII character code.

a\qquad\to\qquad 97\qquad\to\qquad \begin{array}{ccccccc} 1 & 1 & 0 & 0 & 0 & 0 & 1 \\ \end{array} 

An entire word will then create a string of ones and zeroes 7 times longer. The visualization comes in by turning the string into something like the output of a Lindenmayer system. A line is drawn, and at each step it reads the next bit of the string. A one instructs the drawing to take a right turn, and a zero instructs it to take a left turn.

\text{baloney} \\ \strut\quad\downarrow\\ 98, 97, 108, 111, 110, 101, 121 \\ \strut\quad\downarrow \\ 1100010110000111011001101111110111011001011111001 \\ \strut\quad\downarrow \\ RRLLLRLRRLLLLRRRLRRLLRRLRRRRRRLRRRLRRLLRLRRRRRLLR

Starting from the bottom of an image, drawing upward, and taking these turns one at a time gives a blocky figure.

baloney.png
“baloney”. There are 49 turns, from each of baloney’s 7 characters’ 7 bits.
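My version was written in Mathematica (more on that in a second), but the path construction itself is tiny. A rough Python equivalent:

def text_to_path(text):
    """Turn text into a walk: each character's 7-bit ASCII code is read bit by
    bit; 1 = turn right, 0 = turn left, then step forward one unit."""
    bits = [int(b) for ch in text for b in format(ord(ch), '07b')]
    x, y = 0, 0
    dx, dy = 0, 1                      # start out walking "up"
    points = [(x, y)]
    for bit in bits:
        if bit:                        # 1: right turn
            dx, dy = dy, -dx
        else:                          # 0: left turn
            dx, dy = -dy, dx
        x, y = x + dx, y + dy
        points.append((x, y))
    return points

print(text_to_path("baloney")[:5])     # the first few corners of the figure above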

This was pretty quick to implement. It initially took about 10 or 15 minutes to finish for single-MB size books using repeated AppendTos in Mathematica to construct a list of successive points along the path. A rewrite using Reap/Sow made these finish in less than a minute. I have no learning about optimization, except that the latter is supposed to be fast in M-ca. I suspect that Append rewrites the entire list each time, while Reap/Sow does not. There are likely 800 other better languages in which to write this kind of thing, but hey! Heyo! Woo!

Gutenberg adds a header and footer to each text file, saying some standard stuff about where it was downloaded and when it was written, etc. For each file, I removed everything except the nitty-gritty, proper text. Leaving that extra text in would only tack on a bit at the beginnings and ends, ultimately not changing the shapes much, though they would have been rotated.

Here are the results for 4 books. Here’s a zoomable pdf of the Frankenstein one. In each of these, a green point marks the beginning, and a red point the end. Frankenstein’s green point, for example, is in the lower right, and its red is on the ear of the evil head in the upper left. Boo!

Frankleystein
Frankenstein; Or, The Modern Prometheus. Mary Wollstonecraft (Godwin) Shelley.

 

 

kingjamesbible.png
The King James Bible.

mobydick.png
Moby Dick; Or, The Whale. Herman Melville.

ulysses.png
Ulysses. James Joyce.

It was really fun to get these results. I acknowledge that there is probably little meaning in these images from a stylometric point of view, but it is interesting to observe the differences. I’m sure these are very sensitive to format, the words themselves, and the ASCII codes for English letters. The Bible is loaded with many paragraph breaks and colons for each verse, in contrast to the long paragraphs of Ulysses. Ensuring that each character begins with a right turn (1) guarantees that each image is going to have some kind of curliness. A single typo wouldn’t change the shape of the subsequent curve, but it would lead to a rotation about that point.

So, is it just happenstance that Ulysses is so much more unfolded than Moby Dick? Is there something fundamentally different about the second half of Moby Dick that makes it hover around the end point?

An issue with the visualizations themselves is that they do not show any overlap as a line covers regions it has already followed. A way around this would be to assign an increasing height to each point, and show these paths as text tunnels in space. These could be seen as a 3D model or as projected images from different directions. Maybe I’ll try this out.

The full image files showing these walks are about 10x the size of the original text files. A text file storing the list of ordered pairs of the path is about 100x the original text file size. So, a foreseeable application of this technique could be to make text file sizes much larger, as well as difficult to read and process.

As a final thought, it was easy to call these random walks. Clearly these aren’t random in that they carry the meaning of the texts, but perhaps English letters appear random after being converted in this way. A way to test this is by looking at the distance from the starting point — a true random walk of N steps ought to be a distance of roughly \sqrt{N} from the starting point.
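Here’s a quick way to eyeball that, reusing the text_to_path sketch from above; the snippet of text is just a stand-in for a full book:

import math

def distance_from_start(points):
    x0, y0 = points[0]
    return [math.hypot(x - x0, y - y0) for x, y in points]

path = text_to_path("Call me Ishmael. Some years ago, never mind how long precisely...")
dists = distance_from_start(path)
N = len(dists) - 1
print(dists[N], math.sqrt(N))   # compare the final distance to sqrt(N)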

flength

kjblength

ulength

mblength

None of these look much like plots of distance = \sqrt{\text{steps}}.

Exploring cellular automata 1

Conway’s Game of Life is maybe the most well-known ruleset for cellular automata. Each cell in a grid has some possible set of states, and a set of rules for changing state based on its own and its (usually immediate) neighbor cells’ states. Time elapses in a number of discrete steps, sometimes called epochs. In Game of Life, a cell can either be “alive” or “dead” (on or off); a live cell survives to the next time step only if it has two or three live neighbors, and a dead cell comes alive if it has exactly three.

Some incredible patterns emerge, including spaceships, guns (creating gliders and spaceships), breeders (creating guns), other puffers and rakes, and long-lived methuselahs (this example creating some gliders).

Other rulesets lead to neat patterns. Here’s a fun one with many cell states. It lets you rate the results as it scans some starting parameter space.

So, we’re certainly not limited to one ruleset. Coming up with one is easy — finding whether it produces any neat patterns, maybe not so much. It might be better to first explore simpler rules — ones in only 1 dimension. In this case, a linear space is segmented into cells, and each cell has only two neighbors. Given some configuration of three cells in a row, the middle cell’s state in the next epoch depends only on its own state and those of the two alongside. Since there are only three cells to consider, and only two possible states for each, there are only 2^3 = 8 arrangements to consider when changing any cell. Because of this small number, it’s no problem to display the rule of the automaton as a list of diagrams, each showing one of the eight starting arrangements and the resulting state of the middle cell in the next epoch.

The rule icons for Wolfram’s “Rule 73”, one example of the 8 diagrams needed to completely describe this cellular automaton. To determine if a cell is on or off in the next epoch, compare it and its neighbors to the icons, and find the one whose top row matches. The cell beneath in the icon indicates the state of the cell in the next epoch.

With some initial state, for example:

initial73.png

The next line is produced by scanning over this line and matching the top row with the one from the rule icons.

Rule73.gif

Note that this wraps around, connecting the two edges. This process then continues indefinitely for successive lines. We’re free to pick any length of line and any arrangement of on and off cells as an initial condition, but once this is done, the rest is automated. This particular initial condition for this rule ends up looking like this:

rule73.png

This is kind of neat, but these processes don’t get to show off unless they’ve got some room to work with. Here’s the same rule applied to an initial state of a single black cell in a long row of white.

The repeating triangles are a pattern characteristic of some of the more interesting rules. Rules 30, 90, 110, and 184 are supposedly the most cited in scientific or mathematics articles.

Because there are only 8 arrangements of cells we need to specify to determine a 1-D ruleset, and each arrangement offers the choice of the middle cell turning on or off, there are 2^8 = 256 possible rules of this type. You can see links to them all here. One could argue there are only 128 rules, 64, or even 32, since negating all the colors, taking a mirror image, or negating the mirror image doesn’t change the behavior much. This means each rule is one of a set of four that operate in the same way.
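Stepping one of these rules takes only a few lines of code. A sketch with wraparound edges, using the standard Wolfram numbering to look up each output bit:

def step(cells, rule=73):
    """One epoch of an elementary cellular automaton with wraparound edges."""
    n = len(cells)
    out = []
    for i in range(n):
        left, me, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (me << 1) | right   # a value from 0 to 7
        out.append((rule >> neighborhood) & 1)           # Wolfram rule numbering
    return out

row = [0] * 41
row[20] = 1                      # a single black cell in a long row of white
for _ in range(20):
    print("".join(".#"[c] for c in row))
    row = step(row)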

I’ve already linked to the Wolfram Atlas a few times. The images above were taken from there or made using Mathematica. Stephen Wolfram, of Mathematica fame, is also well-known for his work on cellular automata. He’s argued that these things may be more fundamental to nature than mathematics. It’s considered controversial. I haven’t read A New Kind of Science, a book where he lays this all out, but it’s free and online. Maybe worth checking out. It seems like some of the argument hinges on the fact that some of these automata are Turing complete — they can be set up to operate and produce results like any computer program.

I’d just like to note that my interest in cellular automata is based pretty much entirely on having fun. It’s 11 o’ clock on a Saturday night.

A step after checking out the 1-D rules is to explore the 2-D rules. Next I’ll show off a rule and try to do something interesting with it.