## Ludum Dare 48: Deeper and Deeper

For the first time, I’ve participated in Ludum Dare. It’s a long-running “game-jam” and competition, in which participants make a (video) game in a weekend. This is what we came up with.

The competition involves making games solo, while the jam allows teams of any size. The division seems like a simple solution for competitive people, but it’s hard for me to imagine the mindset of treating the event competitively. The rules are a tiny bit uncertain when it comes to using already-made material, and mostly rely on the honor system anyway.

The event runs three times a year, starting with the announcement of a theme. I was on a team along with Ben, Declan, and Zach. We named our game after the theme, “Deeper and deeper,” and went with classic isometric dungeon exploration.

Ben and Zach did all the visual art, which rules. Declan made the awesome score, which changes as you get… further and further in the cave. I was able to practice with Godot’s logic and signal system. Ben and Zach made all the enemies and NPCs, and were able to glue everything together.

I have been learning Godot recently for another project, and it’s proven to be a real blast. It is an open-source alternative to Unity, providing a ton of classes for 2D and 3D objects whose lighting and physics can be automatically handled by the engine. I hope to make something interesting with it. It can also deal with animations in what I assume is how Flash worked. I dunno. I never wrote anything with Flash. Remember Flash? I remember Flash. Flash. There used to be it.

The story is cute, involving elderly and estranged twin brothers, and we fit in a long silly recording of it told aloud.

I hope to do this event again in the future as I continue to learn Godot. Except for some trouble we had, Git worked the way it was supposed to. A solo project wouldn’t need Git to share work between team members anyway.

## Logic Gates in Rimworld 2: Gates overview

Continuing from this post, where I wrote about playing Rimworld a bit. My goal was to create a logic circuit of some kind within the game, which I will describe in a follow-up post.

# Logic Gates

Logic gates are like, how a computer thinks? They work with channels, or wires, which typically have one of two states at any time. The wire can be “on” or “off.” These are often represented as the familiar 1s and 0s.  In an electronic circuit, a wire is on if it has a voltage above a threshold, and off if it’s below.

A gate looks at two of these wires’ states, the inputs, and applies a state on an output wire. The output state depends on the specific type of logic gate and the two input states.

An And gate outputs “on” if both of the inputs are on, and “off” otherwise. An Or gate outputs “off” if both of the inputs are off, and on otherwise. We can summarize all of the possible responses by one of these gates in a table.

$\begin{array}{cc} \text{And} & \text{Or} \\ \begin{tabular}{|c|c|c|}\hline \text{In 1}& \text{In 2} & \text{Out} \\ \hline \hline \text{off} & \text{off} & \text{off} \\ \hline \text{off} & \text{on} & \text{off} \\ \hline \text{on} & \text{off} & \text{off} \\ \hline \text{on} & \text{on} & \text{on} \\ \hline \end{tabular} & \begin{tabular}{|c|c|c|}\hline \text{In 1}& \text{In 2} & \text{Out} \\ \hline \hline \text{off} & \text{off} & \text{off} \\ \hline \text{off} & \text{on} & \text{on} \\ \hline \text{on} & \text{off} & \text{on} \\ \hline \text{on} & \text{on} & \text{on} \\ \hline \end{tabular} \end{array}$

We can shorten this up by calling the inputs A and B, and labeling each output column with the name of its gate. I’ll also tack on the outputs for a third gate, XOR, the exclusive OR.

$\begin{tabular}{|c|c||c|c|c|}\hline \text{A} & \text{B} & \text{AND} & \text{OR} & \text{XOR} \\ \hline\hline 0 & 0 & 0 & 0 & 0 \\ \hline 1 & 0 & 0 & 1 & 1 \\ \hline 0 & 1 & 0 & 1 & 1 \\ \hline 1 & 1 & 1 & 1 & 0 \\ \hline \end{tabular}$

XOR is “output 1 if A or B is 1, but not both.” These gates can be chained; the output of one can be sent into another. We can also split the output of any gate so it can be an input for multiple gates. Linking these together allows us to take multiple input wires and interpret them. You can then just shove these all into a black box and not care about what it’s made of. For both our sakes, take a look at the Wikipedia page on logic gates. They can be used to add binary digits.
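
To make that last claim concrete, here’s a half adder sketched in Python, with plain functions standing in for gates and wires: XOR produces the sum digit and AND produces the carry.

```python
def AND(a, b): return a & b
def XOR(a, b): return a ^ b

def half_adder(a, b):
    """Add two bits, returning (sum digit, carry digit)."""
    return XOR(a, b), AND(a, b)

print(half_adder(1, 1))   # (0, 1): one plus one is binary 10
```

Chain a second stage onto the carry and you get a full adder, and a row of those adds whole binary numbers.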

So that’s cool. So here’s the thing. With only ANDs, ORs, or XORs, there’s no way you can start with only 0s as inputs and end up with any 1s. This leads us to the idea of functional completeness: we want to be able to chain together these gates to create any output given any collection of inputs. You do not have functional completeness with those three only. The best option to add on at that point is a NOT — a gate that accepts only one input, and inverts it. It turns a 1 input into a 0 output, or a 0 into a 1. With a bucket of NOTs available alongside ANDs, or alongside ORs, you can make whatever you want. (NOT plus XOR still isn’t enough, though: everything they build is just a sum of inputs mod 2, possibly flipped.)

There are two “universal” gates — ones that on their own are sufficient to make any kind of circuit. These are NAND and NOR, the negation of AND and OR.

$\begin{tabular}{|c|c||c|c|}\hline \text{A} & \text{B} & \text{NAND} & \text{NOR} \\ \hline\hline 0 & 0 & 1 & 1 \\ \hline 1 & 0 & 1& 0 \\ \hline 0 & 1 & 1 & 0 \\ \hline 1 & 1 & 0 & 0 \\ \hline \end{tabular}$

Putting these together cleverly lets you make whatever kind of processing you need.
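
Here’s that universality sketched in Python: starting from NAND alone, the standard constructions recover NOT, AND, OR, and even XOR.

```python
def NAND(a, b):
    return 1 - (a & b)

# Everything else built from NAND alone:
def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))

def XOR(a, b):
    c = NAND(a, b)
    return NAND(NAND(a, c), NAND(b, c))

# Reproduce the truth table from above using nothing but NANDs.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b), XOR(a, b))
# 0 0 0 0 0
# 0 1 0 1 1
# 1 0 0 1 1
# 1 1 1 1 0
```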

## Logic gates in Rimworld 1

I made a system of NAND logic gates using the logic of production benches in the video game Rimworld. First I’ll say a bit about the game itself. The posts following this one will detail the NAND gates and the half-adder built with them.

# Rimworld: Transhumanistic Cult Simulator

In Rimworld, the player operates a small colony of people in a Wild-West-but-on-Another-Planet setting. It touts itself as a “story generator,” which I truly appreciate.  It features several AI storytellers with different personalities and difficulty settings. They decide on randomized events to occur in and around your colony according to their whims. Given a large number of mechanics and options for survival and success in the world, this leads to surprising, charming, often brutal or devastating, and unexpected developments. There is a well-supported and easy-to-use modding system. The creator encourages the community to add items, alter mechanics, introduce entirely new modes of gameplay, or change the colonists’ hairstyles.

Emergent events and structure don’t exist in traditional storytelling. Driving and witnessing a course of events particular only to the player’s instance of the game leads to a strong personal investment. And personal investment is fun. So it’s fun when a game does that. I like fun and fun is good.

Natural disasters and dangerous weather can strike at any time. Human and robotic raiding parties wait for opportune times to assault the camp. There is an ending condition, in which you build a spaceship to escape the planet, but the game is really an “it’s-all-about-the-experience” experience. In terms of what you do, it’s squarely in the simulation genre. Your colony has a bunch of people with a bunch of skills, and you direct them to build homes and farms, workshops and infirmaries. The core of the game is choosing what buildings and structures to build, where and when to do so, and which colonists will prioritize which kinds of work to keep the compound running smoothly. The essence of the thing is a matrix of values you tweak to set these priorities appropriately. This becomes necessary once the colony grows too large to comfortably direct each colonist. If changing values in a matrix and later realizing you were wrong about them doesn’t sound like great fun, well, I just don’t know how to help you.

And so you try to create stability amongst randomness. Ima might decide to install those floorboards first instead of these. Euterpe might decide to clean the kitchen before the infirmary. Most tasks can be overridden, and colonists directed to do something specific, but the goal is to develop a system and infrastructure within which everyone gets around to doing all of the tasks you’ve planned.

In my game, once my settlement had developed a solid foundation, defenses, and food stores, I turned to improving the colonists. I became focused on finding, buying, and manufacturing bionic body parts to install in every colonist. A brewing operation shipped thousands of bottles of beer to nearby settlements to pay for resources and trade for any parts they had made. I began sending small parties in ballistic transport pods across the continent to scout distant towns for anyone selling the coveted computer-designed archotech body parts. Any downed but still living enemies were imprisoned and nursed back to health. Healthy raiders were captured by an army sporting an arsenal of psychic shock lances, procured with funds from the brewery. Regardless of their skills, dispositions, preexisting health conditions or chemical addictions, all prisoners would eventually be convinced to join as members of the colony. Once they agreed, they were sent straight to the hospital, where a bionic heart, stomach, and a suite of artificial limbs awaited them. And they had to want it. Body purists, those who were naturally revolted by the transhumanistic dream of glistening plasteel augments, had their bodies replaced regardless. They were fitted with a custom Joywire installed deep in the brain so they’d mind the additions just a little less.

I had discovered the purpose of the game. We were the Borg, but every potential drone’s mind would first be broken down and then rebuilt, until they begged to join and be Improved.

So, I had played the game as it was surely intended, and was ready to do something fresh. I had turned all of my colonists into robots, so the next step was to turn the colony itself into a computer.

## Constructing the digits of pi from conservation of energy and momentum in collisions

A short one, just to get the ol’ bloggin’ fingers movin’ again. It’s starting to get chilly outdoors and they’re simply covered in cobwebs!! Where did they come from? How did I get spider webs on my hands…..?

Last year, 3Blue1Brown produced a series of lovely videos (but really, they all are) on a strange little thing. An object is sent bouncing back and forth between a rigid wall and another moving object with a particular (integer!) mass ratio. If this ratio is a power of 100, the number of collisions will be some of the first digits of pi!! Isn’t that nuts.

Anyone who has taken a high-school-senior-level physics class or above would recognize the need to apply two conservation laws: that of momentum (linear in the velocities) and that of energy (quadratic in the speeds). Someone with just a bit of programming experience should be able to set up a series of solutions for the speeds after successive collisions, until the “no-collide” condition is met: the two blocks are traveling in the same direction, away from the wall, with the gap between them growing.

Hey, I’ve got a bit of programming experience! I whipped this up to see if I could get the same results.
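
That script isn’t embedded here, but a minimal Python sketch of the same computation could look like the following. Only the velocities need tracking: the collisions strictly alternate between block-block and block-wall, so their count doesn’t depend on the distances.

```python
def count_collisions(mass_ratio):
    """Elastic collisions: a big block (mass `mass_ratio`) slides toward a
    small block (mass 1) resting near a rigid wall. Count every collision
    until the blocks are receding with no more contact possible."""
    m1, m2 = float(mass_ratio), 1.0
    v1, v2 = -1.0, 0.0              # negative velocity = toward the wall
    count = 0
    while True:
        if v1 < v2:                 # big block catches the small one
            v1, v2 = (((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2),
                      ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2))
        elif v2 < 0:                # small block bounces off the wall
            v2 = -v2
        else:                       # both moving away; no more collisions
            break
        count += 1
    return count

for d in range(3):
    print(count_collisions(100 ** d))   # 3, 31, 314
```

Floating-point precision eventually becomes a worry for very large ratios, but the first several powers of 100 come out fine.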

It looks like they’re doing it right! A fun little exercise. In the spirit of the original prompt I’ll let you find or figure out why it works for yourself. You can always watch Sanderson’s explanation video.

One hint I can give is that it’s a bit related to the Buffon’s needle technique for finding pi. That’s a fun idea in its own right, and understanding it could open some avenues of thought for more difficult estimation problems. Check out the description and explanation on Numberphile:

OK ENJOY PI PLEASE AND THANKS PEACE OUT DUDERINOS

## Exploring cellular automata 2 — my lovely maggot children

In some other post I linked to the Wolfram “atlas” of 1-dimensional elementary cellular automata, the list of 256 rules that can apply to a row of “on” or “off” pixels to determine a successive row, in which a pixel’s successive value is only dependent on its own and those of its two neighbors. There are only 256 of these because of this limitation.

We can extend this to a 2-dimensional space. It seems easier to follow what’s going on there — maybe just by the nature of looking at a computer screen. It’s the space that the Game of Life lives in. A grid of pixels can be on or off, and a successor grid follows, with each pixel’s state determined by its initial state and those of its neighbors. In order to visualize this, you could lay each successor on top of its parent grid to make a 3D object (analogous to stacking the rows of a 1D automaton into a 2D image). But this would be really hard to look through. Instead, just play the successive grids as frames of an animation.
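
As a concrete sketch of such a successor rule, here’s a minimal Game of Life step in Python (its actual rule: a dead cell turns on with exactly three live Moore neighbors, and a live cell stays on with two or three):

```python
from collections import Counter

def life_step(live):
    """One Game of Life epoch. `live` is a set of (x, y) cells."""
    # Count how many live cells touch each position (Moore neighborhood).
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A blinker flips between a horizontal and a vertical bar of three.
blinker = {(0, 0), (1, 0), (2, 0)}
print(sorted(life_step(blinker)))   # [(1, -1), (1, 0), (1, 1)]
```

Playing successive `life_step` grids as animation frames gives exactly the kind of visualization described above.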

A ruleset can be described with a series of images, like the 1-D automaton, but again, you’d have to either stack the start example on top of the resulting center pixel, or place them side by side. In addition, there are going to be many more images necessary for a rule, as there are more configurations of on-and-off neighboring cells in 2 dimensions.

This also introduces some ambiguity in what counts as a neighborhood in 2D. We could consider the 4 cells in the cardinal directions as the only neighbors (the von Neumann neighborhood), or we could tack on the corners, too (the Moore neighborhood).

For the von Neumann neighborhood, each rule alone would require $2^5 = 32$ images. With a choice of on or off for the resulting cell in each image, there are $2^{32} \approx 4.3$ billion cellular automata of this type. That’s already quite a lot.

The Moore neighborhood would have 9 squares, counting the center, and so one would need $2^9 = 512$ icons for each rule. There are $2^{512} \approx 1.3 \times 10^{154}$ rules of this type. Yikes! There are the same number of squares in the “extended” von Neumann neighborhood, so these numbers apply there as well.

The 5-cell neighborhood could plausibly be characterized completely — tracking a few of the properties of a few billion systems doesn’t seem to be out of the realm of possibility, especially when considering that this number can be reduced by a factor of up to 8 due to horizontal-mirroring, vertical-mirroring, centerpoint-mirroring, and color-inverting of each of these. That brings the roughly 4.3 billion 5-cell rules down to around 540 million with unique behaviors. The 9-cell neighborhoods, with their $2^{512}$ rules, are far beyond any exhaustive survey.

What seemed exciting was to look at larger systems. A Super von Neumann neighborhood of 21 squares needs $2^{21} =$ 2,097,152 icons per rule, giving $2^{2^{21}}$ rulesets — a number with over 600,000 digits. The Super Moore neighborhood of 25 squares needs $2^{25} =$ 33,554,432 icons per rule, for $2^{2^{25}}$ rulesets. Now we’re having fun.

It felt like there was some chance of picking something weird out of here.

## A ruleset based on an oscillator

One construction I didn’t mention in the last post was an oscillator — a repeating, unmoving pattern. Some oscillators can be seen in the wakes of the dirty puffers.

Oscillators are emergent in GoL; there isn’t anything about the rule that explicitly reflects an intent of repetition. I was interested in making a rule that was based on an oscillator, and to see if something like spaceships would emerge.

### Maggotworld

The simplest oscillator I could think of that seemed interesting was a clockwise-spinning line: three activated cells, with a diagonal step between the first two, appearing to spin around in a circle. Each epoch deactivates the cell at the back and activates a new one at the next step around the circle, resulting in an 8-period oscillator. I originally called the idea Spinworld, but it soon seemed appropriate to call it Maggotworld.

Instead of matching the Super Moore neighborhood around every pixel against one of the 33.5 million icons in the ruleset, the maggotworld propagator just checks whether there are three activated pixels in the right configuration nearby to turn on. There are eight ways this could happen — corresponding to the eight squares that are activated throughout the cycle. This doesn’t mean only eight of the 33.5 million icons lead to an activated central pixel. The pixels toggle: a dead pixel toggles on if there are one or three maggots approaching it, and stays off if there are two or four. The same is true for the tail ends of a maggot — two maggots sharing a tail will leave the tail point on, as it is toggled twice. So there are many ways pixels could be left on or off depending on the situation. This seems ripe for emergent behavior!

I ended up trying a bunch of different setups to see how they behaved way back in April, and wrote most of the above then, but haven’t gotten around to this showcase until now.

#### Random starts

I began by just slapping random starting configurations down and seeing what happened.

After some initial discouragement, some neat patterns did emerge. In the last animation you can spot a few figure 8s, figure triple-Os(?), and maggots passing straight through each other. Not a spaceship, but at least something a bit more complicated.

#### Not so random starts

If I couldn’t get a spaceship out of this, I could at least look for something cool. Time to get systematic. I placed maggots at varying relative phases and positions to start and let them go.

This was a bit discouraging — none of the ones that touched made any lasting patterns beyond a single oscillator. Anything interesting would have to be made of more than 2 of these, or have some stationary gunk waiting around to push them around a bit.

In one of the random setups, I saw this little friend, which I’ll call a slingshot oscillator.

It’s got a period of 14. Having a different period is… useful? How is any of this useful? I dunno whatever.

Another neat result would be to get a pattern that turns on cells outside of the normal oscillator radius. Here’s one. It looks like a really low resolution hand-style cursor. Let’s call it the flinger.

The last thing of this type is a little u. I like this because, while it seems like these rules easily reduce the number of lit cells, this generates an extra sixth cell by the end of its process.

#### Behavior in grids

If I couldn’t get a spaceship flying away in empty space, maybe I could get something neat with stationary patterns placed ahead of time. I started by plopping oscillators in a lattice of lit points. It’s real easy to get some ongoing domino action this way.

Vertical motion. There’s a lot of trash left behind, but the key to see here is that only one or two lit squares next to a lit bar drives the motion. The lit side determines which way it travels.

#### Trams and fuses

The last one looked neat. I tried to make more “trams” following a track. Here are a bunch. There are also configurations that rip the track apart. Let’s call em fuses.

Some of the styles are repeated with a hole in the track to show how they deal with that. Holes are also placed at different locations along the track for the same style of tram to show off how they travel: trams hop two spaces at a time, not one. An “oddly” placed hole will force it to reflect or scatter differently than an “evenly” placed one.

I was taken with the self-repairing track on the bottom.

This seemed like a way to create a caterpillar-like spaceship of indefinite length that scootches forward a couple squares each cycle. I wasn’t able to figure out how to get it to repeat, though.

Another fuse had an end that, after some acrobatics, sent another tram that flipped the orientation of the fuse.

#### Timers

One last thing and I’ll let you go, I swear. A row of lit squares can be used as a timer of indefinite length. This could go up above with the other oscillators, but the fact that this can be stretched to as long as you need it puts it in its own class.

These can also take a turn. These turns could be used to stop the motion, or to initiate the timer in the first place:

Maybe these are more appropriately called fuses? But I already used that word.

Showing these little tricks and behaviors is interesting enough on its own for me, but it would be really swell if we could figure out how to produce oscillators and timers of truly any period. I’m also still wondering if there is an elusive spaceship, one without a grid or track to support it, hiding in these rules. The holy grail after that is to get these things to do binary logic, but let’s save that for another day.

# I Lied

I said I had only one last thing but I totally lied to you. You must be feeling pretty silly right now! Just one more bit. I promise. 🤞

Before finishing up with the maggotworld animations, I had tried a lot of different rulesets to see how they’d act. One of them was simple: A cell would turn on if it had an odd number of neighbors (in the Moore neighborhood), and off otherwise.
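
That rule is simple enough for a minimal Python sketch (the function names are mine):

```python
from collections import Counter

def odd_step(live):
    """A cell is on next epoch iff an odd number of its eight Moore
    neighbors are on now. `live` is a set of (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items() if n % 2 == 1}

# A single live cell immediately becomes a ring of eight, and since this
# is a parity (XOR) rule, after 2, 4, 8, ... steps the pattern is eight
# spread-out copies of the starting pattern — hence "replicator."
cells = {(0, 0)}
for _ in range(4):
    cells = odd_step(cells)
print(len(cells))   # 8
```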

What a pretty pattern! I was pleased to learn later that I had recreated something called Fredkin’s Replicator. You can see some explanation in Numberphile’s Terrific Toothpick Patterns (starting at 14:20). Another lecture, by Neil Sloane, gives more insight into analyzing this and other “odd-ruled” cellular automata. Do yourself a favor and check out at least the Numberphile vid.

## Visualizing text as paths

I was reminded recently that I actually have a blog. I have a Cellular Automata Part II post that’s been sitting around since April, but here’s something quick before I get around to polishing that up.

During a prescribed Art Night, I decided to come up with a visualization of long texts. I grabbed a few examples of literature from Project Gutenberg as plain text files, and converted to binary using each letter’s ASCII character code.

$a\qquad\to\qquad 97\qquad\to\qquad \begin{array}{ccccccc} 1 & 1 & 0 & 0 & 0 & 0 & 1 \\ \end{array}$

An entire word will then create a string of ones and zeroes 7 times longer. The visualization comes in by turning this string into something like the output of a Lindenmayer system. A line is drawn, and at each step it reads the next character: a one instructs the drawing to take a right turn, and a zero instructs it to take a left turn.

$\text{baloney} \\ \strut\quad\downarrow\\ 98, 97, 108, 111, 110, 101, 121 \\ \strut\quad\downarrow \\ 1100010110000111011001101111110111011001011111001 \\ \strut\quad\downarrow \\ RRLLLRLRRLLLLRRRLRRLLRRLRRRRRRLRRRLRRLLRLRRRRRLLR$

Starting from the bottom of an image, drawing upward, and taking these turns one at a time gives a blocky figure.
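
Here’s a minimal Python re-sketch of the conversion (the original images were made in Mathematica; these names are my own):

```python
def text_to_bits(text):
    """Concatenate the 7-bit ASCII code of each character."""
    return "".join(format(ord(c), "07b") for c in text)

def walk(text):
    """Start at the origin heading up; for each bit, turn (1 = right,
    0 = left) and then step forward. Returns every point visited."""
    x, y, dx, dy = 0, 0, 0, 1
    points = [(0, 0)]
    for bit in text_to_bits(text):
        if bit == "1":
            dx, dy = dy, -dx    # right turn
        else:
            dx, dy = -dy, dx    # left turn
        x, y = x + dx, y + dy
        points.append((x, y))
    return points

print(text_to_bits("a"))      # 1100001
print(len(walk("baloney")))   # 50 points from 49 bits
```

Appending to a plain Python list is cheap, so even book-length inputs produce their point lists quickly.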

This was pretty quick to implement. It initially took about 10 or 15 minutes to finish for megabyte-sized books, using repeated AppendTo calls in Mathematica to construct a list of successive points along the path. A rewrite using Reap/Sow made these finish in less than a minute. I haven’t studied optimization, except to learn that the latter is supposed to be fast in Mathematica. I suspect that AppendTo rewrites the entire list each time, while Reap/Sow does not. There are likely 800 other better languages in which to write this kind of thing, but hey! Heyo! Woo!

Gutenberg adds a header and footer to each text file, saying some standard stuff about where it was downloaded, when it was written, etc. For each book, I removed everything except the proper text. Leaving these in would only have tacked a bit onto the beginnings and ends, ultimately not changing the shapes much. They would have been rotated, though.

Here are the results for 4 books. Here’s a zoomable pdf of the Frankenstein one. In each of these, a green point marks the beginning, and a red point the end. Frankenstein’s green point, for example, is in the lower right, and its red is on the ear of the evil head in the upper left. Boo!

It was really fun to get these results. I acknowledge that there is probably little meaning in these images from a stylometric point of view, but it is interesting to observe the differences. I’m sure these are very sensitive to format, the words themselves, and the ASCII codes for English letters. The Bible is loaded with many paragraph breaks and colons for each verse, in contrast to the long paragraphs of Ulysses. Ensuring that each character begins with a right turn (1) guarantees that each image is going to have some kind of curliness. A single typo wouldn’t change the shape of the subsequent curve, but it would lead to a rotation about that point.

So, is it just happenstance that Ulysses is so much more unfolded than Moby Dick? Is there something fundamentally different about the second half of Moby Dick that makes it hover around the end point?

An issue with the visualizations themselves is that they do not show any overlap as a line covers regions it has already followed. A way around this would be to assign an increasing height to each point, and show these paths as text tunnels in space. These could be seen as a 3D model or as projected images from different directions. Maybe I’ll try this out.

The full image files showing these walks are about 10x the size of the original text files. A text file storing the list of ordered pairs of the path is about 100x the original text file size. So, a foreseeable application of this technique could be to make text file sizes much larger, as well as difficult to read and process.

As a final thought, it was easy to call these random walks. Clearly these aren’t random in that they carry the meaning of the texts, but perhaps English letters appear random after being converted in this way. A way to test this is by looking at the distance from the starting point — A true random walk of $N$ steps ought to be a distance of roughly $\sqrt{N}$ from the starting point.

None of these walks seems to follow the distance $= \sqrt{N}$ trend.

## Exploring cellular automata 1

Conway’s Game of Life is maybe the most well-known ruleset for cellular automata. Each cell in a grid has some set of possible states, and a set of rules for changing state based on its own and its (usually immediate) neighbor cells. Time elapses in a number of discrete steps, sometimes called epochs. In the Game of Life, a cell can be either “alive” or “dead” (on or off); a living cell survives to the next time step only if it has two or three living neighbors, and a dead cell comes alive if it has exactly three.

Some incredible patterns emerge, including spaceships, guns (creating gliders and spaceships), breeders (creating guns), other puffers and rakes, and long-lived methuselahs (this example creating some gliders).

Other rulesets lead to neat patterns. Here’s a fun one with many cell states. It lets you rate the results as it scans some starting parameter space.

So, we’re certainly not limited to one ruleset. Coming up with one is easy — finding if there are any neat patterns maybe not so much. It might be better to first explore simpler rules — ones in only 1 dimension. In this case, a linear space is segmented into cells, and each cell has only two neighbors. The middle cell’s state in the next epoch depends on its own state as well as the two alongside. Since there are only three cells to consider, each with two possible states, there are only $2^3 = 8$ arrangements to handle when updating any cell. Because of this small number, it’s no problem to display the rule of the automaton as a list of diagrams, each showing one of the eight starting arrangements and the resulting state of the middle cell in the next epoch.

With some initial state, for example:

The next line is produced by scanning over this line and matching the top row with the one from the rule icons.

Note that this wraps around, connecting the two edges. This process then continues indefinitely for successive lines. We’re free to pick any length of line and any arrangement of on and off cells as an initial condition, but once this is done, the rest is automated. This particular initial condition for this rule ends up looking like this:

This is kind of neat, but these processes don’t get to show off unless they’ve got some room to work with. Here’s the same rule applied to an initial state of a single black cell in a long row of white.

The repeating triangles are a pattern characteristic of some of the more interesting rules. Rules 30, 90, 110, and 184 are supposedly the most cited in scientific or mathematics articles.
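
These elementary rules take only a few lines to play with; here’s a Python sketch (the naming is mine) running rule 30 from a single black cell:

```python
def step(row, rule):
    """One epoch of a Wolfram elementary rule, wrapping at the edges."""
    n = len(row)
    # Read each 3-cell neighborhood as a number 0-7; that bit of `rule`
    # is the cell's next state.
    return [(rule >> (4 * row[i - 1] + 2 * row[i] + row[(i + 1) % n])) & 1
            for i in range(n)]

row = [0, 0, 0, 1, 0, 0, 0]     # a single black cell
for _ in range(3):
    print("".join(".#"[c] for c in row))
    row = step(row, 30)
# ...#...
# ..###..
# .##..#.
```

The rule number’s binary expansion is the rule itself, which is exactly how the Wolfram numbering works.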

Because there are only 8 arrangements of cells we need to specify to determine a 1-D ruleset, and each arrangement offers the choice of the middle cell turning on or off, there are $2^8 = 256$ possible rules of this type. You can see links to them all here. One could argue there are only 128 rules, 64, or even 32, since negating all the colors, taking a mirror image, or negating the mirror image doesn’t change the behavior much. This means each rule is one of a set of four that operate in the same way.

I’ve already linked to the Wolfram Atlas a few times. The images above were taken from there or made using Mathematica. Stephen Wolfram, of Mathematica fame, is also well-known for his work on cellular automata. He’s argued that these things may be more fundamental to nature than mathematics. It’s considered controversial. I haven’t read A New Kind of Science, a book where he lays this all out, but it’s free and online. Maybe worth checking out. It seems like some of the argument hinges on the fact that some of these automata are Turing complete — they can be set up to operate and produce results like any computer program.

I’d just like to note that my interest in cellular automata is based pretty much entirely on having fun. It’s 11 o’ clock on a Saturday night.

A step after checking out the 1-D rules is to explore the 2-D rules. Next I’ll show off a rule and try to do something interesting with it.

## Why Star Trek II: The Wrath of Khan is Refreshing

The Wrath of Khan is usually considered the best Star Trek movie of the thirteen (or ten (or six), depending on how you count). Even if it’s not your favorite, it tends to be the undeniable compromise. It’s got a bit of life advice, tension and suspense, fun space action, heroic and villainous sacrifice, and a few believable female characters, especially in comparison to the docile bunnies of the original series and the robotic Ilia of The Motion Picture.

Also in contrast with the original movie is the presence of a clear enemy. Khan is a real hole and a half, having no sympathetic qualities despite a reasonable motivation for revenge. V-ger is threatening but mysterious and literally nebulous. The movie is extremely approachable, allowing anyone to enjoy seeing a Bad Man fighting our heroes. You can also think a bit more deeply about Kirk’s struggle with aging if you want, too. That definitely doesn’t make this a cerebral or philosophical flick, but at least opens the movie up to people wanting more than just an action movie.

Most action movies glorify pluckiness. Quick thinking, resourcefulness, physical strength and speed all help the hero solve the problem at the last moment. This is a great way to keep the suspense up — you can feel that the cause is lost just before our hero comes through in the clutch. It’s a classic admirable quality in film and literature. It is especially revered in 19th century English works, Hollywood movies, and anything influenced by those (nearly everything in popular culture).

If our team details a military plan of action complete with contingencies, and then carries it out without a hitch, well. Okay. Good for them. But we want to see something that’s different from work.

An alternative to this is an Ocean’s Eleven style presentation, where the plan is described as a voice over during the execution, or with interstitial scenes.

Wrath of Khan does neither of these things, though. This isn’t strange at all. The Ulysses / Homestuck / Christopher Nolan style of nonlinear storytelling is pretty new as a pop culture thang.

Wrath of Khan is a story about sticking to protocol. It’s a rare instance in film, even in the Star Trek universe, when the guy in control has most everything figured out ahead of time. An old go-getter realizing the wisdom of experience and accepting loss and the unavoidable is the obvious theme of the movie. Supporting this theme, though, is a pragmatic lesson about planning and protocol.

There are three instances when things don’t go as expected. The first and the third support the top-level theme. The Kobayashi Maru test, and the background about Kirk cheating it, sets us up: OK, Kirk is going to have to learn how to approach the no-win scenario. Later, when Spock sacrifices himself, Kirk realizes he wasn’t able to save everything in the end, and he learns a lesson about aging and loss.

It’s pretty goofy that Kirk learns this lesson only when he loses Spock, but not all the other people that have died on his ship over the years.

In the middle is the screw-up that causes nearly all of the trouble of the movie. When Khan approaches the Enterprise in the Reliant, with no communications, Saavik quotes Starfleet protocol: put the shields up, you idiot! For some reason, Kirk doesn’t, and Spock defends him. Why not? Shields are never described as a precious resource, or as particularly difficult to raise or lower. Thinking along the lines of the movie’s theme, he’s probably feeling some thrill at skirting danger.

This screws everything up. Khan gets the chance to fire on the warp core and disable the Enterprise. Kirk very quickly owns up, and tells Saavik to keep reminding him of the regulations. If the ship hadn’t been disabled, the Constitution-class Enterprise would have outmatched the Miranda-class Reliant easily.

After this point, everything is done by the book. Kirk stalls for time expertly while the crew looks up the Reliant’s bypass code. They explain this is a failsafe — a technical workaround planned ahead of time in case of such a situation. They’re able to take down the Reliant’s shields and hit hard, taking away Khan’s advantage.

The entire operation on the Regula asteroid involves a pre-planned code between Kirk and Spock to ensure Khan misunderstands the situation. The code makes Khan think that the Enterprise will return after two days of repairs, when they only needed two hours. The other stranded characters, as well as the viewer, might be surprised to find out that the ship is back early, but Kirk isn’t surprised at all. They’re following through with an agreed-upon plan. In Khantrast, the villain decides to hunt down Kirk and his ship in spite of his crew’s suggestions to escape with the starship Reliant, a fantastic, futuristic home to a bunch of fossils from the 90s.

Even when he’s keeping up the facade of being stranded, Kirk is going by the book. He starts with the most pressing issue, survival, while David is accusing him of ignoring escape.

Later, in the Mutara Nebula, while not explicitly a by-the-book maneuver, the 500-meter drop that circumvents Khan’s “two-dimensional thinking” probably reflects the Starfleet veterans’ training as much as their experience. The experience comment aligns with the coming-to-terms-with-aging theme, but it’s also hard to believe a space-naval strategy course wouldn’t offer some statement like “Please use all three dimensions available in a fight” on the first day. It’s such a basic maneuver it hardly deserves a name. In submarine warfare, finding an enemy ship, overtaking it out of range, dropping below it, and waiting for it to pass overhead before rising to fire torpedoes is a classic World War II tactic called an end-around. Kirk and crew just did the last bit.

The take-away here is that Kirk really messes up by not adhering to standard procedure in one instance, which leads to all of the further problems in the movie. Later he acts conservatively, avoiding brash, spur-of-the-moment decisions. From his point of view, everything goes swimmingly after the initial attack. Yes, Spock has to sacrifice himself in a last-ditch effort to fix the warp engine, but only after every other option had been exhausted.

It’s a palate cleanser to see a blockbuster-style movie based on planning. Aging gracefully is still the backbone motif of the movie, and it’s supported by the reliance on plan, not pluck. This doesn’t get in the way of delivering the story at all. It’s invisible. That’s good! But a rewatch, keeping in mind the characters’ use of protocol, can help one appreciate how they got into and out of the whole mess.

## A Hermitian Matrix Has Real Eigenvalues

When I studied math, I tended to find myself more interested in the “continuous” side of things rather than the discrete. This is calculus and analysis and such, in contrast to things like logic, abstract algebra, number theory, graphs and other things where everything is rather chunky. I suppose I was largely motivated by the use of calculus in physics, which was usually my main focus. It’s easy to see that calculus and continuous functions are essential when studying physics. It’s not as easy to argue for number theory’s direct applicability to an engineer.

Sometimes I feel a bit ashamed about this interest. Calculus and analysis are the next logical steps after real number algebra. One could argue I didn’t allow myself to expand my horizons! But, whatever, who cares. They’re still cool. OK? THEY’RE COOL.

It’s very easy to claim that you’re interested in something. It’s almost as easy to hear someone talk about how they’re a fan and try to call them out on it by asking about some trivia. This is often the crux of an argument in the whole “Fake Geek Girl” and Gamergate things.  Similarly, sometimes, it feels like everyone around me is nuts about Local Football Home Team, and I often find myself skeptical of the “purity” of their fanaticism. I need to remind myself that someone can enjoy something and not know everything about it. You can be interested in Watching Football Match and not know the names of everyone (or even anyone) on the team.

The same is true for something like math. It had better be true, since there’s always another thing we could define or discover, and all of the fields already developed aren’t completely understood by a single person. It’s fine to be more interested in one thing rather than another. If you take that too far, you’d end up criticizing people for not being interested in every single thing in existence equally.

It’s all right to wonder if I should look into a certain topic, though. A few years ago, a colleague teaching introductory calculus to high school seniors mentioned that they were working on delta-epsilon proofs in the class. This blew me away! The concept of a limit is usually introduced in a pre-calculus class, or at the beginning of a calculus class. I am under the impression that it’s usually a matter of, “yeah, this function gets really close to this point, but doesn’t actually hit it,” and that’s all a student really needs to know until they do some analysis.

A delta-epsilon definition is a way to formally define a limit, so there is no uncertainty as to what is actually happening. “Gets really close” ends up being defined exactly. Basically, it says, “For this function $f(x)$, gimme any number bigger than zero, even a SUPER tiny number, and I can give you a distance away from $x=c$ such that $f(x)$ is never further from a limiting value $L$ than your number.” Okay, maybe that is not super enlightening. But on a side note, it’s fun to think about how much like a playground taunt that is.

I was ready to argue that his students didn’t need to bother with delta-epsilon proofs, that they could learn to work with the fuzzy idea of a limit first and get along just fine in calculus, just as I had. But I did start to doubt myself. Should I have learned the definition of a limit before trying to use it, in my hubris? In retrospect: no, that’s silly. Looking at the epsilon-delta definition, I realize it would have taken me ages to get through it, taking away from valuable time spent thinking about the applications of calculus. But that feeling is there. Should I have known about this?

What does this have to do with Hermitian matrices, you demand, grabbing me by the shoulders and shaking, while my cravat soaks up your spittle. I had this same feeling this week, when thinking about a topic in linear algebra and physics.
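For reference, the playground taunt stated formally, in the standard way:

$\lim_{x \to c} f(x) = L \iff \forall \varepsilon > 0, \ \exists \delta > 0 : 0 < |x - c| < \delta \implies |f(x) - L| < \varepsilon$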
In quantum mechanics, matrices are used extensively to describe certain kinds of actions that could be done to a particle or a system. One of those actions could be to take a measurement, such as the amount of energy in the system. If you call your matrix $H$ and your particle’s current state $\psi$, then you could represent a measurement of how much energy the system has by multiplying them together:

$H\psi=\lambda\psi$

When you multiply them together, you can get a single number $\lambda$ times the particle state $\psi$ as a result. If you measured the energy of the particle, then the value of $\lambda$ is that energy. You can’t just divide both sides by $\psi$, because a particle’s state is a vector of many possibilities, and division by vectors rather than numbers doesn’t mean a whole lot here. (You can multiply both sides by a vector transpose and get into something called Dirac notation, but don’t bother with that now.) The number $\lambda$ and the state $\psi$ are called an eigenvalue and an eigenvector of $H$… Is this worth describing? If you don’t know this, it might be incomprehensible.

It turns out that if $H$ is Hermitian, meaning it is self-adjoint: $H_{ij} = \bar{H_{ji}}$, then it always has real eigenvalues (as opposed to $\lambda$ being a complex number). Physicists interpret this to mean the matrix definitely represents a physical measurement. Hermitian matrices aren’t the only ones with real eigenvalues, but they also have a property that lets you be sure you’ve measured your particle as being in a certain state.

I’ve seen proofs that Hermitian matrices have real eigenvalues. Here are a couple. These start by assuming there is some eigenvalue/eigenvector pair, and using the fact that a vector magnitude is real at some point.
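If you just want to see the claim in action rather than prove it, here’s a quick numerical sanity check (an illustration, not a proof) with NumPy, using a randomly generated Hermitian matrix:

```python
import numpy as np

# A + A^H is always Hermitian, so this builds a random example.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = A + A.conj().T

assert np.allclose(H, H.conj().T)  # self-adjoint: H_ij = conj(H_ji)

# np.linalg.eig makes no symmetry assumptions, so it returns complex
# eigenvalues -- for a Hermitian input their imaginary parts vanish
# (up to floating-point noise).
eigenvalues = np.linalg.eig(H)[0]
print(np.allclose(eigenvalues.imag, 0))  # True
```

(NumPy even has a dedicated routine, `np.linalg.eigvalsh`, that exploits Hermitian structure and returns real eigenvalues directly.)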

Finding the eigenvalues of a matrix is a matter of solving an equation which involves a determinant:

$\det(H-\lambda I) = 0$,

where I is the matrix identity. I thought, could I use an expanded determinant to show that the eigenvalues have to be real?

This isn’t so bad with a 2×2 matrix. With

$H = \left[\begin{array}{cc} a & b+ci \\ b-ci & d\end{array}\right]$,

the characteristic equation is

$0 = \det(H-\lambda I)$

$= \left| \begin{array}{cc} a-\lambda & b+ci \\ b-ci & d-\lambda\end{array}\right|$

$= (a-\lambda)(d-\lambda) - (b^2 +c^2)$

$= \lambda^2 - (a+d)\lambda +ad - (b^2+c^2)$

You can show the two solutions for $\lambda$ have to be real by shoving all these numbers into the quadratic formula. The discriminant (remember that?) is non-negative because it ends up being a sum of squares. I’m bored.
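For the record, that discriminant works out to

$(a+d)^2 - 4\left(ad - (b^2+c^2)\right) = (a-d)^2 + 4(b^2+c^2) \geq 0$,

a sum of squares, so both roots are real.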

My thought after this point was to use mathematical induction. Suppose an $n\times n$ Hermitian matrix has real eigenvalues; let’s show that an $(n+1) \times (n+1)$ one does as a consequence.

Maybe this is doable, but I haven’t done it. It occurred to me that the cofactors of the diagonal entries in a Hermitian matrix would themselves come from Hermitian submatrices, and a proof by induction would rest on this.

$H = \left[ \begin{array}{cccc}h_{11} & h_{12} & \dots & h_{1,n+1}\\ h_{21} & h_{22} & & h_{2,n+1} \\ \vdots & & \ddots & \vdots \\ h_{n+1,1} & \dots && h_{n+1,n+1} \end{array}\right]$

$= \left[ \begin{array}{cccc}h_{11} & h_{12} & \dots & h_{1,n+1}\\ \overline{h_{12}} & h_{22} & & h_{2,n+1} \\ \vdots & & \ddots & \vdots \\ \overline{h_{1,n+1}} & \dots && h_{n+1,n+1} \end{array}\right]$
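That observation about the diagonal is easy to spot-check numerically: deleting row $i$ and column $i$ of a Hermitian matrix (the submatrix behind the diagonal entry’s cofactor) leaves another Hermitian matrix. A small NumPy check:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
H = A + A.conj().T  # a random 5x5 Hermitian matrix

for i in range(5):
    # Delete row i and column i, leaving the minor behind H[i, i].
    M = np.delete(np.delete(H, i, axis=0), i, axis=1)
    assert np.allclose(M, M.conj().T)  # still Hermitian

print("all principal minors Hermitian")
```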

My thought was… can you construct a determinant using only cofactors of the diagonal entries?

This turned out to be not helpful, in an embarrassing way. I asked myself: can you calculate a determinant by expanding over a diagonal, rather than a row or column? I was able to convince myself you can’t, but the fact that I considered it at all seemed messed up. Shouldn’t that be something I know by heart about determinants?

Similar to a student using limits without knowing the delta-epsilon definition, I realized that I don’t have a solid grasp of what determinants even are, although I’ve used them extensively. It felt like I was missing a huge part of my mathematics education. I don’t think I had ever proven any of the properties of determinants I had used in a linear algebra course.

I didn’t even know how to define a determinant. In high school, we learned how to calculate a 2×2 determinant, and then how to calculate determinants for larger matrices using cofactors (although we didn’t use that word). But I didn’t (and still don’t, really) know what a determinant was.

This doesn’t look unusual. I’ve got three books on linear algebra next to my desk here.

DeFranza and Gagliardi start by defining a 2×2 determinant as just $ad-bc$. They then tell how to calculate a 3×3 determinant, and then how to calculate larger determinants using cofactors. This is in line with how I was taught (although that’s no coincidence: I took Jim DeFranza’s linear algebra class). The useful properties of determinants come later.

Zelinsky starts off (on page 166 of 266!) with an entire chapter on all of the algebraic properties we’d like determinants to have. It waits 11 pages to actually give explicit formulas for 2×2, then 3×3 matrices. It isn’t until after that that it gives something that feels like a definition.

Kolman starts with this definition right away:

Let $A = [a_{ij}]$ be an $n \times n$ matrix. We define the determinant of A by

$|\mathbf{A}| = \Sigma (\pm) a_{1j_1}a_{2j_2}\dots a_{nj_n}$

where the summation ranges over all permutations $j_1j_2\dots j_n$ of the set $S = \{1,2,\dots, n\}$. The sign is taken as + or – according to whether the permutation is even or odd.

Woah. This seems like something fundamental. I had only known determinants as, effectively, recurrence relations. This is a closed form statement of the matrix determinant. Why don’t I recognize this?

Really, though, I can see why this might not be commonly taught. It’s even more cumbersome than the cofactor expansion. But it feels like I missed the nuts and bolts of determinants while using them for so long.
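As a sanity check on that definition, here’s a direct (and wildly inefficient, summing over all $n!$ permutations) implementation of the permutation sum, compared against NumPy’s determinant:

```python
import itertools
import math

import numpy as np

def sign(perm):
    """Parity of a permutation by counting inversions: +1 if even, -1 if odd."""
    inversions = sum(
        1
        for i in range(len(perm))
        for j in range(i + 1, len(perm))
        if perm[i] > perm[j]
    )
    return -1 if inversions % 2 else 1

def leibniz_det(A):
    """Determinant computed straight from the permutation-sum definition."""
    n = len(A)
    return sum(
        sign(p) * math.prod(A[i][p[i]] for i in range(n))
        for p in itertools.permutations(range(n))
    )

A = [[2.0, -1.0, 0.0],
     [1.0, 3.0, 4.0],
     [0.0, 5.0, 6.0]]
print(leibniz_det(A), np.linalg.det(A))  # the two values should agree
```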

That definition seems ripe for a Levi-Civita symbol.

It’s probably not worth making most people trudge through millions of subscripts. That’s sort of the MO of mathematics, right? You make an easy-to-write definition or theorem, and then use that to save time and energy.

Maybe I’ll try to show Hermitian matrices have real eigenvalues using that definition. Descartes’ rule of signs or Sturm’s theorem might help. But I’m sleepy. So maybe later.