What more can we do with this?

Moretti’s framing of distant reading as a reaction to the flaws of close reading spoke to all the concerns of my 20-year-old undergraduate literature major heart (may she rest in peace). As quoted in the Ramsay text, he notes, “[T]he trouble with close reading (in all of its incarnations, from the new criticism to deconstruction) is that it necessarily depends on an extremely small canon.” Oftentimes, I found myself getting so close to a particular text that I could no longer even convince myself of the arguments I was making. I mean, I could only disregard author’s intent to a certain extent. Maybe they chose to use “azure” rather than “blue” because they were a pretentious fuck and not, in fact, to conjure up images of Arabia…I digress. This method seems like it could be useful for identifying broader trends within and across certain historical moments. For instance, if I could see a wide range of accounts of the word “azure” being used in conjunction with references to the Middle East in the literature of a certain period, I might be convinced. (Not sure if this is exactly the point of distant reading, but I’m still grappling with it a bit.)


The JSTOR project could definitely be useful for some of the stickier terms I’m looking at in my studies. I submitted a “queer world making” query (or should I say “queer-y”…that joke has literally never been made before). Because this is a tricky concept to nail down, I’m interested in seeing the approaches taken to it across a variety of fields. In fact, I was quite surprised by the number of results the search turned up. Furthermore, I think that a method similar to this would probably be quite useful outside the range of peer-reviewed journals. I would love to examine digital rhetoric through this lens, for instance. Just a few points of interest might include which words appear most frequently in certain digital spaces, which sites are arguably more friendly toward (or perpetuate more rhetorical activity regarding) the interests of certain communities, what the ideal message length is for a successful online dating interaction, etc. As I stated before, I’m still wrapping my mind around the notion of analyzing texts from this cold and distant approach, but I’m sure I’ll muster a few more uses for this method after some practical experience.

From Moretti to coffee…


from Moretti’s Graphs, Maps, Trees

My first encounter with “distant reading,” beyond Roberto Busa and his famous concordance, occurred last summer in a digital humanities seminar I took as a master’s student. For the course, we divided up Franco Moretti’s Graphs, Maps, Trees: Abstract Models for Literary History, and focused individually on different sections of the book. As an initial exposure to distant reading, I must admit I found the visualizations in the book to be lackluster, and therefore was not convinced that distant reading was a useful analytical tool. I suppose I’m not convinced that taking a quantitative approach to literature is a “good thing”?

I am, nonetheless, excited to see what happens in our workshop tomorrow. I ran a query through the JSTOR DfR portal and downloaded a substantial CSV file. My query, “coffee,” yielded 1,000 results from a variety of journals. The usefulness of the results, of course, is not clear from the data aggregated in the CSV file as very few of the relevant hits are accompanied by an abstract, but I’m intrigued by the prospect of doing more with this…

Distant reading for journalism?

As a journalist, I can’t wait for tomorrow’s workshop. I’ve heard a lot about ‘distant reading’ (much of it hostile) and while Agate sounds like it’s mainly used for quantitative data, this new JSTOR project looks very promising. I’m particularly interested in how it can help me grasp unfamiliar topics more quickly. I like to write about specialized subjects, and one of the great dangers of that is failing to understand the ‘shape’ and current status of a field because I’m a layperson trying to grasp it very quickly. For instance, I recently had occasion to read a lot of material about Chicana studies, and it took me longer than it should have to get a sense for which writers/concepts are considered to be the most important in the discipline, and which are marginal. (You’d be surprised how rarely information like this is spelled out.) JSTOR’s DfR sounds like it could make those situations a lot easier to avoid.

I found Agate’s homepage a little difficult to understand, so I Googled around for a more accessible explanation of what it’s all about. I found one article that argues Agate’s main utility is as follows:

As journalists, we not only need to solve these problems for practical reporting purposes, but also for philosophical ones. How can we assert that our numbers are correct if we performed a series of manual processes in a spreadsheet exactly once? Do it that way and the only record of how it was done is the one in your head. That’s not good enough. Journalistic integrity requires that we’re able to document and explain our processes.

The idea that multiple iterations are necessary to get accurate data is certainly new to me, although intuitively, it makes sense in processes where probability, chance, and randomness are at play. I’m not quite sure how this applies to us as humanities researchers, since we tend to engage with static datasets more than processes, but I think that’s exactly the developers’ point: providing better tools will enable more process-oriented research methods. What kind of applications do you see for this kind of research going forward?
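To make the quote’s point concrete for myself, here’s a tiny sketch of what a “documented process” looks like when it lives in code rather than in a spreadsheet. This isn’t Agate itself, and the column names and numbers are invented for illustration; it’s just the reproducibility idea in plain Python, where every step of the analysis can be re-run and audited:

```python
import csv
import io
import statistics

# A toy stand-in for the spreadsheet workflow the quote criticizes:
# every step lives in code, so the whole analysis can be re-run and
# audited. (Standard library only, not Agate; the column names and
# values below are made up for illustration.)
RAW = """journal,year,hits
A,1990,12
B,1990,7
A,1991,15
B,1991,9
"""

def mean_hits(raw_csv):
    # Parse the CSV and average the "hits" column in one documented step.
    rows = csv.DictReader(io.StringIO(raw_csv))
    return statistics.mean(int(r["hits"]) for r in rows)

print(mean_hits(RAW))  # the repeatable version of a one-off =AVERAGE(...)
```

The point isn’t the arithmetic; it’s that the record of how the number was produced is the script itself, not a memory of which cells were dragged where.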

Dwarf Fortress vs. Dark Souls

Since we had a relatively light reading and preparation load this week, I’ve been waxing philosophical about some of the key terms of our class. What is procedure? And then, somewhat differently, what is proceduralism and what are its virtues? Dwarf Fortress is pretty clearly procedural in every sense of the term that I can think of. It is obviously procedurally programmed in a coding language, like any other computer game or program. On top of that, the world you play in is procedurally generated in the sense that it is produced by automated machine processes rather than being deliberately designed by a human working in tandem with the machine, as it would be if someone made an accurate Middle Earth map for Dwarf Fortress (I really hope this exists). It is also an example of proceduralism, in that it seems to deliberately call attention to its own procedurality. Very little is abstracted in DF: you have to micromanage your dwarves’ job assignments and your fortress’s designations. The primary method of learning to play involves following the Wiki’s step-by-step process.
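The “produced by automated machine processes” idea is easy to sketch. This toy generator is nothing like DF’s actual worldgen (the tile symbols and map size are my own invention), but it shows the core move: a seed plus rules yields a map that no human drew, and the same seed always yields the same world:

```python
import random

# A toy illustration of procedural generation (nothing like DF's actual
# worldgen): a seed plus rules produces a "world" no human designed,
# and the same seed always reproduces the same map.
def generate_map(seed, width=10, height=5):
    rng = random.Random(seed)
    tiles = "~~..^"  # water, water, plains, plains, mountain
    return ["".join(rng.choice(tiles) for _ in range(width))
            for _ in range(height)]

world = generate_map(seed=42)
print("\n".join(world))
assert world == generate_map(seed=42)  # deterministic: the seed defines the world
```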

Let’s now compare DF to another game noted for being difficult and frustrating: Dark Souls. For those who don’t know, DS is an infamously difficult action RPG. The challenge of DS comes primarily from the difficulty of successfully executing the twitch-based skills and moves needed to defeat enemy forces that seem to continually increase in both numbers and strength. There is a strategic element in that you have to learn which weapons and moves work well against which enemies and have a plan for surviving the difficult gauntlet your avatar has to run. In some sense, DS ontologically has to be just as procedural as DF—after all, a computer is running it. Certainly a larger share of DS’s process intensity is devoted to graphics. My question is: is the gameplay, or user interaction with the game, procedural? On some level it seems it has to be. You learn, adapt, and get better at the game over time, just like in DF. Does the fact that it incorporates twitch-based elements make it less of an example of proceduralism? The more kinesthetic process of moving my fingers over the controller still must be a process on some level, involving neurons, synapses, muscle tissue, etc. But it seems that some of the writers we have looked at for this class would want to maintain that DS is less procedural, somehow, than DF.

It seems that if someone were to memorize the procedural information contained in the Wiki prior to ever playing the game that it might be said that they know how to or can play DF. But if you read a manual or wiki for DS, I submit that no one would claim that you therefore know how to play it. You have to experience the twitch-based combat and die a few thousand times to establish the muscle memory necessary to be successful. It seems there’s a bit of an analogy to be made between these two games and something like geometry versus something like rhetoric. If you memorize a geometry textbook, it might be said that you know geometry. But if you memorize Beebe and Beebe (our “beloved” public speaking textbook) I submit that no one would claim that you know how to deliver a public speech.

It seems that Bogost might say that life is procedural, and so are games, but that not all games call attention to life’s procedurality. Perhaps proceduralism or a strong procedural rhetoric is (gaming) procedure that calls attention to (real life) procedure rather than obscuring it? Throwing a football involves a procedure, but an attempt to understand the steps of that procedure will probably make you worse at executing it. Playing DS may be similar in that regard: you can probably “overthink” what you are trying to do.

How far can we stretch the idea of proceduralism or procedural rhetoric? Should we confine it to computation, or even more specifically to games like DF that wear their proceduralism on their sleeves? Or should it be more widely applicable? And what about the master term, computation? When we play DF, the computer computes, and to play successfully we “compute” with it. The computer computes when DS is running. Do we humans “compute” when we play DS?


An Algorithm for Everything

I was interested in our discussions last week regarding algorithmic composition, and so I wanted to spend some time this week trying to tackle this idea, and other algorithmic systems, a little bit more.  In considering this topic, I wanted to bring to the table an algorithmic system that I recently learned about and, perhaps arbitrarily, have found myself opposed to.

I attended a brief lecture a week and a half ago on different training philosophies behind swimming, as well as their positives and negatives.  The lecturer, Sergei Beliaev, was a sports scientist who had come into his own during the height of the USSR, and who had therefore been researching in a time of near-unlimited government financial support for sports research.  Unsurprisingly, he believed that the best system for training was his own, SuperSportSystems, or 3s.

The 3s system takes an algorithmic approach to training by gathering information, running it through a program, and spitting out day, week, and season plans for training toward specific goals.  In order to write this blog post, I created a trial account and went through the paces with one of my own swimmers as an example.  The program starts out by simply asking for the swimmer’s best event, current best time, goal time, and the size of the pool in which the swimmer trains.  For my swimmer, I chose a best event of 100 yard butterfly, a best time of 1:01, and a goal time of :57.


[Screenshot: Screen Shot 2015-10-28 at 11.41.25 AM]


Next, the program asks for the big meets of the year, as if to say: where do you want your swimmer to be able to compete at their best?


[Screenshot: Screen Shot 2015-10-28 at 11.41.44 AM]


After that, it asks for the usual breakdown of a week of training:


[Screenshot: Screen Shot 2015-10-28 at 11.45.25 AM]


And finally, once all of this information has been gathered, it spits out its plans (pictured below is the sample plan for a single day).


[Screenshot: Screen Shot 2015-10-28 at 11.51.19 AM]


Now, this is just a single practice: the afternoon session of one day in one week.  Also on this page (not pictured) are the total yardage for the entire season, the yardage per day for each week, and workouts for every single practice, already planned and written.  Each set, as you can see, is accompanied by “target times,” or times that the program believes my swimmer would need to be able to reach consistently on that day in order to achieve his goal at the correct time of year.

It is important to point out that this is not just a random website or a random system.  It is used by thousands of coaches and over 15,000 athletes throughout the world, and has also aided in producing a handful of Olympians and elite competitors.  Considering these facts, why aren’t all coaches (myself included) buying this program?

I have two major issues with the program, and I believe that they can, in some way, connect to other types of algorithmic processes, such as composition.  My first issue relates to something that occurred just last week.  On Thursday, I had planned a difficult workout.  However, about 30 minutes into the practice it became obvious that the kids were tired, too tired to be able to do what I had planned with quality efforts.  Because of this I decided to abandon what I had planned in favor of something easier to accommodate their physical state.  3s does not accommodate this type of change.

The second issue I have is with the setting of an upper bound on progress.  The training is based on a goal that is set at the beginning of the year by a swimmer and coach, in my example case a :57 for a 100 yard butterfly.  The training is then geared towards achieving that time by the set goal meet, which in my example case is in March.  This does not seem like an issue until you consider that, looking back at the goals the kids gave me at the beginning of the season, my swimmer had planned a goal time for the season of a 1:03 in the 100 yard butterfly (down from a 1:10).  In the first meet of the season he eclipsed this with a 1:01.  I was not training him for a 1:03, but was instead training him just to get stronger and faster, having the goal time as a motivator rather than a focal point of training.  If I had trained him for a 1:03, maybe he would have gone faster, maybe not.  My fear is in limiting potential results by setting a bar.
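To make the worry concrete: I have no idea what 3s actually computes internally, but a purely hypothetical toy version of goal-based planning might interpolate weekly target times from the current best down to the goal. By construction, such a plan never asks for anything faster than the goal, which is the upper-bound problem in miniature:

```python
# A purely hypothetical toy model (NOT how 3s actually works): interpolate
# weekly target times linearly from the current best time to the goal time.
# Whatever the swimmer turns out to be capable of, the plan never asks for
# anything faster than the goal.
def weekly_targets(best, goal, weeks):
    step = (best - goal) / weeks
    return [round(best - step * w, 1) for w in range(1, weeks + 1)]

# 1:01 (61.0s) down to :57 (57.0s) over 8 weeks
print(weekly_targets(61.0, 57.0, 8))
# → [60.5, 60.0, 59.5, 59.0, 58.5, 58.0, 57.5, 57.0]
```

The final target is exactly the goal, never beyond it; a swimmer who could have gone faster is trained toward the cap.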

The issues, I believe, can be simplified by calling them issues of communication.  When I am at a workout, or considering training, I am (philosophically, perhaps) engaged in a dialogue with my swimmers.  I have to be able to interpret the information and make adjustments in real time.  It is for this reason that I don’t plan out every workout for an entire season in one sitting.  There needs to be a level of read and response going on.  An algorithmic training system removes this possibility, and it is this absence that makes me uncomfortable.  It doesn’t take into account different strengths and weaknesses, performance variations, feelings on a specific day or week, health, etc.  It takes numbers and returns numbers.

Thinking of it this way called to memory a quote from a podcast that I had listened to recently (the RadioLab podcast, “Words”):

Take a musician…here is a form of thought that carries you through a definite sequence of phrases, feelings, emotions, changes, and there are no words.  But there is something that we get access to when we gain a full natural language that we can use not only to communicate with other people, but with ourselves.

A composer, then, when composing a piece of music, is also engaged in a dialogue with the listener.  While not using words or language, there is a type of communication that goes on, that aids in the triggering of feelings, emotions, etc.  I am beginning to think that perhaps algorithmic composition, or other types of algorithmic, computational approaches to things such as training or writing, remove this direct dialogue.  And while there can still be the argument that the composer is in fact in dialogue with the program, the composer is still a few steps removed from the human audience with the program acting as interpreter, and this is perhaps where the mark is missed.  It is one of the reasons that the Love Letter Generator makes us laugh instead of swoon.


Image/Code as Information

I was drawn to post on this topic as an opportunity to reflect on and extend my previous experience as a photo archivist. Beyond an artistic interest in photography, I have always been interested in the life cycle of the digital image, whether it’s a digital surrogate of a physical object or born digital.  And while the “magic” of digitization fades after you have scanned enough Kodachrome slides and photographed enough glass plate negatives, what is gained is an appreciation of the digital image, a binary representation, as something very much its own, and often still very fragile.

In an attempt to tackle the subject of image from a new perspective, I spent some time experimenting with Processing, a program (language?) I found relatively easy to work with. Understanding that digital images are simply another form of data, it was a useful exercise to reverse my interaction with the image in this way. Rather than looking at a file and seeing it first as a representation of a visual image, while only secondarily acknowledging that the image is actually a sort of coded numeric representation, I started first with the code and then observed the image.
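To see what “starting with the code” looks like stripped to its bare minimum, here’s a toy of my own (plain Python rather than Processing, and not from any tutorial) that begins as arithmetic and ends as a viewable image file, using the old plain-text PPM format:

```python
# Starting from code and ending with an image: a color gradient written as
# a plain-text PPM file, standard library only. (Processing makes this far
# nicer; this is the same idea reduced to its numbers.)
WIDTH, HEIGHT = 64, 64

def gradient_ppm(width, height):
    rows = []
    for y in range(height):
        row = []
        for x in range(width):
            r = 255 * x // (width - 1)   # red rises left to right
            b = 255 * y // (height - 1)  # blue rises top to bottom
            row.append(f"{r} 0 {b}")
        rows.append(" ".join(row))
    # P3 header: ASCII PPM, dimensions, max channel value, then pixel data
    return f"P3\n{width} {height}\n255\n" + "\n".join(rows) + "\n"

with open("gradient.ppm", "w") as f:
    f.write(gradient_ppm(WIDTH, HEIGHT))
```

Open the resulting file in a text editor and it is nothing but numbers; open it in an image viewer and it is a picture. The “coded numeric representation” and the image are literally the same bytes.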

Initially, the images I generated by following some Processing tutorials were not much more sophisticated than something I might have created 20 years ago in Microsoft Paint:


Still, how gratifying to have created this by simply typing:


Of course, the site demonstrates examples of the level of sophistication possible with this program. While far from that skill level, a little time spent looking through the user gallery on Open Processing provided a number of examples of beautiful work that seem a bit more within reach. Though finding the extra hours in the day to build my Processing skills may just be another pipe dream, I’m not deleting the application any time soon. I admittedly found the instant gratification of entering code and viewing the results very satisfying.

While working through the tutorials and experimenting with Processing, I found myself wondering about the merits of the, well, process of creating images this way. Certainly many of these effects can be achieved through Photoshop and Illustrator. What can we learn about the digital image by bypassing the more user-friendly interface and working directly with code? In “Digital Ontologies: The Ideality of Form in/and Code Storage – or Can Graphesis Challenge Mathesis?” Johanna Drucker explores the relationship of human thought to its representations in various forms, and particularly within the digital environment. She asks:

“Is our conception of an image profoundly changed by its capacity to be stored as digital code? Or is the commonality of code storage as the defining condition of digital processing a confirmation of a long-standing Western philosophical quest for mathesis (knowledge represented in mathematical form, with the assumption that it is an unambiguous representation of thought), in which there ceases to be any ambiguity between knowledge and its representation as a perfect, symbolic, logical mathematical form?”

This week was light on reading, and if for any reason anyone is lamenting that fact and hasn’t read this article, I would recommend it. There is a lot to consider here, and I think Processing provides an apt environment for exploring image at the level of code. Drucker’s article reminded me that problems with the public perception of photographs as objectively truthful evidence date back to the very beginning of photography; in the digital environment, those problems are further complicated by public perception of code as objective, or somehow mathematically “truthful,” rather than as language, authored by individuals, or groups of individuals, as we have discussed in class. After years of watching the light flashing in a scanner and painstakingly color-correcting images in order to most accurately represent the physical attributes of a photograph or print, and wondering what exactly it was that I was creating, I think I have more questions than answers when it comes to exploring the image as code (and ultimately as information), and I think this article is a useful starting point.

Georg Nees

It’s also worth noting that one of the works Drucker considers, Georg Nees’ Schotter / Gravel Stones, created with a random number generator, can be recreated in Processing by following this tutorial, which is a fun way to get a little more experience with the program.
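For anyone who’d rather not install Processing, the same recipe can be roughed out in a few lines of Python that emit an SVG. This is my own loose homage rather than the tutorial’s code, and the grid size and angle range are guesses at the original’s proportions; the essential move is that random rotation and displacement grow row by row, so order decays into disorder down the page:

```python
import random

# A rough homage to Nees' "Schotter" (not the Processing tutorial's code):
# a grid of squares whose random rotation and displacement increase with
# each row, so the top is orderly and the bottom is rubble.
def schotter(cols=12, rows=22, size=20, seed=0):
    rng = random.Random(seed)
    shapes = []
    for row in range(rows):
        disorder = row / rows  # 0 at the top, approaching 1 at the bottom
        for col in range(cols):
            x = col * size + rng.uniform(-1, 1) * disorder * size / 2
            y = row * size + rng.uniform(-1, 1) * disorder * size / 2
            angle = rng.uniform(-45, 45) * disorder
            shapes.append(
                f'<rect x="{x:.1f}" y="{y:.1f}" width="{size}" height="{size}" '
                f'fill="none" stroke="black" '
                f'transform="rotate({angle:.1f} {x + size/2:.1f} {y + size/2:.1f})"/>'
            )
    w, h = cols * size, rows * size
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'viewBox="-20 -20 {w + 40} {h + 40}">' + "".join(shapes) + "</svg>")

with open("schotter.svg", "w") as f:
    f.write(schotter())
```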

Programming, Subversion, and Failure

In class, we keep returning to the notion of a potentially subversive (e.g. feminist, queer, etc.) programming language. Because I apparently volunteered to write a substantial blog post on the lightest reading week of the semester, I’d like to use this space to explore the possibility of subversive programming further by connecting together several of the ideas that we’ve touched on so far and examining them through the lenses of critical feminist and queer theory. Specifically, I’ll discuss and ask a few questions about the potentiality of a language or program existing outside the confines of human structures or systems, and then I’ll posit the idea of deconstructing these systems from within their limits through different manifestations of failure.

Fig. 1 The wonders of visual programming languages - behold an art piece inspired by the motion of a human dancer.


I was initially drawn to Processing, the language we explored this week, due to its interactive capabilities. From the examples offered in the “Hello” segment of the tutorial, it seems that visual programming languages like this one strive to deconstruct the boundary between programming and the lived world to an extent that is not quite as prominent in the other types of languages that we have examined. This is largely due to the interplay between the program and natural human motion or activity. I was tickled by the idea of moving particles with one’s hands to reveal digital microcosms and impressed by digital performance art inspired by and structured around human motion.

Fig 2. The wonders of visual programming languages - behold an art piece inspired by my own internal state at this point in the semester.

These realities call to mind the discussion from last week regarding a programming language that might be written in response to human facial expressions. An emotional or affective language would certainly act as a relief from the logic and rationality that seems to form the current framework of the field. However, what if we were to take this idea a step further and remove the human component entirely? What would a programming language that responded to natural elements look like (e.g. weather patterns, animal movement, the growth and decay of organic plant matter, etc.)? If a program is being written alone in the woods and no one is around to operate it, is it still running? Like a distant watchmaker, if a human sets a program in motion and then leaves it to its own devices, does that program still belong to the human? Is it still governed by human ideologies and structures?


While the questions posed above might provide a possible escape from the human ideologies that structure language and meaning (programming languages included), I am still rather pessimistic. In Gender Trouble, Judith Butler notes that because feminism is a reaction to a phallogocentric system, it exists as a product of this system. Thus, it must work to restructure the system from within rather than function beyond it as a type of utopic alternative. Likewise, because programming languages are produced by human systems governed by certain ideologies, these ideologies will always be present, to some extent, in the languages. Thus, a subversive programming language might have to work within the current computational limits (e.g. must follow a type of syntax, must be reducible to 1s and 0s, must be able to be read by a computer…to a certain extent) in order to deconstruct these ideologies through subversive results. One possibility of doing so is through embracing opportunities for failure. I am using the term “failure” specifically in a manner that is in line with contemporary queer theory and its turn toward the anti-social. Writers like Lee Edelman and Judith Halberstam have embraced the notion of “failure” as that which subverts normative views of success. This includes a vast array of possibilities including decentering the family, rejecting reproduction or futurity, and actively forgetting the past.

On a basic level, failure can be written into the processes of certain programs. Bogost touches briefly on failure in games, citing examples like New York Defender (the game where the player takes the role of a shooter aiming at planes headed toward the World Trade Center until he or she is eventually overwhelmed). In this case, the game operates as it is supposed to (as perceived by the player), but there is no chance of achieving a win condition. On a deeper level, programming itself is riddled with failure in the forms of glitches or “surprises.” From Turing’s comment, “Machines take me by surprise with great frequency” to Zach Adams’s remark, “We didn’t know that carp were going to eat dwarves…but we’d written them as carnivorous and roughly the same size as dwarves, so that just happened, and it was great,” programs constantly behave in ways that even those writing them cannot predict. In Expressive Processing, Wardrip-Fruin suggests that these moments of unintentional failure reveal the underlying mechanics and processes of the program. I am curious as to whether these moments can be harnessed and intentionally implemented in order to point to certain elements and expectations in programs (from games to word processing software), and by extension, reveal aspects of the “real world” ideological mechanics, systems, and processes that contributed to their creation.

I’ve come up with a few examples that might work to this effect, but this idea is still clearly in the works. For instance, a game that follows a typical consumerist pattern (e.g. the player collects money, buys and finds better objects, owns and/or builds houses/structures, etc.), but literally crashes every time the player reaches a certain threshold of wealth, might comment on capitalism or the idea of a person being successful insofar as the amount they own or produce. Similarly, a word processor that deletes specific chunks of the writer’s material, interjects its own content into the writing after the writer has closed the document, or fails to save the writing entirely might comment on ideals such as intellectual property or the presence of subjective viewpoints in fields typically perceived to be objective, such as history or science. These both seem incredibly frustrating, but then again, that’s the point. In what other ways could failure be written into programs to reveal flaws in larger social systems?
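The crashing-at-a-wealth-threshold game is easy enough to caricature in code. This is obviously a toy of my own invention, not a real game, but it shows how a “failure condition” can be deliberately written in where a win condition would normally go:

```python
# A sketch of the "failure as critique" idea: a trivial collecting loop
# that deliberately raises an error once wealth passes a threshold --
# the moment a player would normally "win" is exactly when the game breaks.
class CapitalismError(RuntimeError):
    pass

def collect(wealth, gain, cap=100):
    wealth += gain
    if wealth >= cap:
        raise CapitalismError("threshold reached; game over by design")
    return wealth

w = 0
for loot in [30, 30, 30]:
    w = collect(w, loot)
print(w)  # prints 90 -- one more pickup and the 'game' fails on purpose
```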

Could a Martian play Dwarf Fortress?

Hello all,

I wasn’t in class this past week so I’m a little out of the loop, and I apologize in advance if any of these observations are redundant to what you guys discussed last week.

As I’ve continued with Dwarf Fortress, I have done my best to understand how people enjoy this game. The horrific user interface is certainly part of it; after 12 hours, I’m still shaky at the basic mechanics of managing my fortress. But I’m convinced that there’s something deeper to DF’s repellency: a challenge that prettier graphics and simpler controls wouldn’t be able to rectify. I’m not 100% confident what that is yet, but I think the answer lies in DF’s procedural rhetoric. What is this game’s argument? Surely something less banal than ‘death comes for us all’, right?

To solve this puzzle, I think it’s useful to consider how DF engages in Bakhtinian dialogues with its context. There are, of course, many layers of intertextuality here: the ASCII graphics allude to early videogames and position DF as ‘retro’. It parodies fantasy narratives, historical chronicles (per the Boluk and LeMieux reading two weeks ago), and even virtual pets*. It subverts the expectations that players have developed from playing other, ostensibly similar, world-building games like SimCity and Civilization. We could even go so far as to say that it dialogues with the wiki and the fan community, given that everyone seems to rely on the wiki to interpret and navigate the game.

Perhaps the really challenging aspect of DF is its hyperconnectivity with other texts and media. To get meaning out of the game, you not only have to know how to use a computer and be familiar with basic UI conventions; you also have to be fluent in nerd/gaming culture to spot the references and catch on to the game’s playful aesthetic. You must be familiar enough with the conventions of mainstream sim games to appreciate how DF subverts them, and unless you’re an incomparable genius, you probably need to access the wiki, navigate it, and internalize its wisdom. This is a high barrier to entry for would-be players; perhaps there’s a comparison to be made with big fat ‘difficult’ postmodern novels by Gaddis, Pynchon, DeLillo, Wallace, &c., which also tend to be hyper-referential. If you want to have an easy time with Infinite Jest, at the very least you’ll need a dictionary, a working knowledge of Hamlet, and a semester of French. In fact, the novel has inspired a fan community that has a lot in common with DF’s. Perhaps the true challenge of DF is that it demands networked thinking, as opposed to the immersive, solitary experience that we usually expect from sim games.

What do you guys think? Does intertextuality necessarily equate to difficulty? If not, is there a particular kind of intertextuality that makes texts difficult/repellent? Am I totally off base here? To what extent is it necessary to catch DF’s references in order to enjoy it? Could a Martian play it? Would the Martian be better at it than a human?

OK, enough questions for one day. See you all Thursday–


* Did anyone else who played with Tamagotchis as a kid get a kick of déjà vu as your dwarves starved? And am I the only one who always loses from starvation, and never any of the more colorful endings?

Brainfuck Typo

I’ve always been fascinated by the Brainfuck programming language, probably because out of all the weird languages I’ve heard of, it’s the one that makes the most sense to me. I’ve read about it a few times in the past, and was pretty sure I knew how it worked, until reading the Mateas and Montfort (hereafter M&M) piece and running across this snippet of code, which they say is a “Hello, World” program:


What’s the problem? Here:

++++++++++  [  >+++++++>++++++++++>+++>+<<<<>++.>+.+++++++..+++.>++.<<+++++++++++++++.>.+++.------.--------.>+.>.

That doesn’t make any sense (well, it makes less sense than a Brainfuck program should) – it has an open bracket without a close bracket! I actually think M&M do a pretty good job of describing the language in general: it creates a large array of byte cells, “>” goes right, “<” goes left, “+” increments, “-” decrements, “.” outputs, “,” inputs, “[” jumps forward to “]” if the byte is 0, “]” jumps back to “[” if it’s non-zero. However, they don’t bother to go into more detail (for which I don’t really blame them) about what exactly all of that does in practice. The output command uses ASCII, which just converts bytes into characters using the standard ASCII table.

So really, all that Brainfuck does is start moving pointers around that table. A simple program to print out “helloworld” could just keep adding until it gets to “h” (104) and then add/subtract from there in order to spell out the words – this is kind of like spelling something out on a screen using a remote where all you can do is go left or right (curse you Apple TV!!!). The only problem is that you’ll end up with a monstrously long program: a hundred-plus “+” signs just to reach the first letter.

++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++.   h (104)
---.                        e (101)
+++++++..                   ll (108)
+++.                        o (111)
++++++++.                   w (119)
--------.                   o (111)
+++.                        r (114)
------.                     l (108)
--------.                   d (100)

It’s a lot easier to start off the program with a [ loop to get some of that pesky addition taken care of more compactly. Think about it as a while loop: the byte before the [ (our counter byte) sets how many times we want it to run, and then we can move to the next byte (our target byte), increment some amount, go back, decrement our counter, and repeat, just like in Python. So if we want to add 100, it’s a lot more succinct to add 10 and loop it 10 times. This makes our “helloworld” program look like this:

++++++++++[>++++++++++<-]   byte 1 = 10 * 10 = 100
>++++.                      h (104)
---.                        e (101)
+++++++..                   ll (108)
+++.                        o (111)
++++++++.                   w (119)
--------.                   o (111)
+++.                        r (114)
------.                     l (108)
--------.                   d (100)

Which is a lot nicer. This is what the M&M “Hello World!” program is setting itself up to do; it just forgets to decrement its counter and close the loop. Fixed, it looks like this:

++++++++++[>+++++++>++++++++++>+++>+<<<<  -]  >++.>+.+++++++..+++.>++.<<+++++++++++++++.>.+++.------.--------.>+.>.

This program actually works now (you can check it using this online Brainfuck interpreter)! It uses 5 bytes: byte 0 for counting, byte 1 for upper case, byte 2 for lower case, byte 3 for the space and exclamation point, and byte 4 for the new line (which is pretty standard for most of these programs). It starts off by setting the counter to 10, and then loops through to set byte 1 to 70 (right in the ballpark of “H” at 72), byte 2 to 100 (close to “e” at 101), byte 3 to 30 (near space at 32), and byte 4 to 10 (which happens to be new line exactly) – this is why the plus signs inside the brackets are in groups of 7, 10, 3, and 1:

++++++++++[             byte 0 = 10
>+++++++                byte 1 = 7         (*10 = 70)
>++++++++++             byte 2 = 10        (*10 = 100)
>+++                    byte 3 = 3         (*10 = 30)
>+                      byte 4 = 1         (*10 = 10)
<<<<-]                  decrement byte 0
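In Python terms (my own sketch of the while-loop analogy), that opening loop is just:

```python
# The ++++++++++[>+++++++>++++++++++>+++>+<<<<-] loop as a Python while loop.
counter = 10              # byte 0: ++++++++++
b1 = b2 = b3 = b4 = 0     # bytes 1-4 all start at zero
while counter != 0:       # [ ... ] repeats while byte 0 is non-zero
    b1 += 7               # >+++++++
    b2 += 10              # >++++++++++
    b3 += 3               # >+++
    b4 += 1               # >+
    counter -= 1          # <<<<-
print(b1, b2, b3, b4)     # 70 100 30 10
```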

From that point on, it’s just a matter of spelling things out using bytes 1-4:

>++.                    H                  byte 1
>+.+++++++..+++.        ello               byte 2
>++.                    (space)            byte 3
<<+++++++++++++++.      W                  byte 1
>.+++.------.--------.  orld               byte 2
>+.                     !                  byte 3
>.                      (new line)         byte 4
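To double-check the arithmetic, here is the same spell-out traced in Python (my own sketch; b1–b4 stand for bytes 1–4, which hold 70, 100, 30, and 10 after the loop):

```python
# Tracing the spell-out: bytes 1-4 hold 70, 100, 30, 10 after the loop.
b1, b2, b3, b4 = 70, 100, 30, 10
out = []
b1 += 2;  out.append(chr(b1))        # >++.        H (72)
b2 += 1;  out.append(chr(b2))        # >+.         e (101)
b2 += 7;  out += [chr(b2)] * 2       # +++++++..   ll (108)
b2 += 3;  out.append(chr(b2))        # +++.        o (111)
b3 += 2;  out.append(chr(b3))        # >++.        space (32)
b1 += 15; out.append(chr(b1))        # << then 15 plus signs and a dot: W (87)
out.append(chr(b2))                  # >.          o (111)
b2 += 3;  out.append(chr(b2))        # +++.        r (114)
b2 -= 6;  out.append(chr(b2))        # ------.     l (108)
b2 -= 8;  out.append(chr(b2))        # --------.   d (100)
b3 += 1;  out.append(chr(b3))        # >+.         ! (33)
out.append(chr(b4))                  # >.          newline (10)
print(''.join(out))                  # Hello World!
```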

So now the programmer in me can breathe a sigh of relief knowing that the code works. I really love the taxonomy of weird code that M&M put together, as well as the examples they give for each: insane (INTERCAL), minimalist (Brainfuck), playful (Shakespeare), and impossible (Malbolge). The one other type of language I would have loved to see them talk about more would be funges, which use two-dimensional (or more!) space to direct their pointers. These can be things that still look like languages, just laid out differently, like Befunge:

 >25*"!dlrow ,olleH":v
                  v:,_@
                  >  ^

Or they can not look like languages at all. My favorite is probably Piet, named for the Dutch painter Piet Mondrian, which actually uses colors in a bitmap to make things that really look more like paintings than programs. Here’s a “Hello World” program in Piet:

So cool! There’s a great gallery of sample programs “written” in Piet here if you all want to see more.

Embark! (again)

That was then: bulky font, stubborn cat, indecipherable landscape.

This is now: domestication, designation, and progress to be proud of!

Even a stray cat that wanted nothing to do with my commands adopted a dwarf companion, and I finally figured out how to use designations to go coal-mining! (I know…it only took about ten hours before I got around to tackling the technical objectives of the game.) I credit the “phone a friend” lifeline — I called my (gaming enthusiast) sister in a bout of frustration after losing a previous world of dwarves to dehydration during a winter storm,* and challenged her to try Dwarf Fortress. She figured things out in a fraction of the time it took me (down to the helpful note that “F12 makes the font nicer”), and walked me through some basics over a clarifying call.

(*note the mental anguish of my poor hammerer; my own mental anguish certainly matched it, even as I reminded myself of Boluk and Lemieux’s “Dwarven Epitaphs” sign-off of “Dwarves must die for this game to be fun” (150).)

I am now intent on winning this game (“to win the class”), with defiance matching my woodcutter’s:

This is saying something, since I am historically so terrible at games that I never made it through Oregon Trail as a child because the learning curve was too steep. When I ask gamer friends for recommendations, they suggest that I should really just stick to games “which only involve collecting animal friends” — which, to be fair, is what I spent my first four hours of Dwarf Fortress trying to do, and how I somehow win some rounds of Settlers of Catan (by exclusively collecting sheep). Clearly I am beginning to identify too much with my dwarves (“she dreams of mastering a skill”), so let’s move onward to more scattered digging. Specifically:

slippery chicken.

Listen to this — a whole album of compositions generated by slippery chicken, the enigmatic “algorithmic composition program” that Michael Edwards created and mentions in his fascinating article on computational music. Of course I was most eager to unearth the origins of its name. Thanks to Edwards’ detailed notes, I quickly discovered (here):

“The structural ideas for slippery chicken were developed during the composition of two pieces of mine, the techniques of which led to their formalisation in this programme: pas de poule, pas de pot and slippery when wet, hence the combination of the two titles into something even more abstruse and silly than the originals (humour is an important part of staying sane whilst programming). Also, the overall design of the software was quite taxing and some organisational problems were about as easy to grasp as a slippery chicken (plucked, naturally).”

How adorable is that? I know I can’t resist a terrible pun, especially when poultry’s involved. Finding this tidbit made me wish Edwards had done more with the element of absurdity, and by extension playfulness, in his article. (Why are absurdity and play inextricably yoked together in my mind? I should perhaps disentangle this.) Play was implied by Edwards throughout, but invoked literally only once, as music is “played” in compositions — yet play didn’t emerge as a major point of analysis. Of course, the article was already doing plenty in setting up the groundwork on computational music, but especially when looking at how aleatoric music plays with chance, there is clearly much to expand upon. Granted, I was also reading this on the heels of Mateas and Montfort’s persistently play-filled “A Box, Darkly,” and the juxtaposition of the two may have highlighted this slight lack in Edwards’ piece. In addition, the conversation last week about what sorts of “play” are “fun” — and what constitutes fun more broadly — is something I still have in mind, as tied to “fun” and Dwarf Fortress.

The “computational creativity” readings were in themselves fun for me — perhaps because, unsurprisingly, they appealed to my delight in all things whimsical and odd and confusing, and at least partially spoke the language of my right-brained self. The opening of the final paragraph of “A Box, Darkly” rang true: “Perhaps most oddly, obfuscated programs and weird languages are inviting.” “Inviting” – yes; even with my limited grasp of code, this was an inviting read. Considerations of the readability, play, and poetics of code, in addition to functionality, intrigue me. I keep returning to this Donald Knuth quote: “I do think issues of style do come through and make certain programs a genuine pleasure to read. Probably not, however, to the extent that they would give me any transcendental emotions” (Knuth 6 qtd. in Mateas and Montfort 2). The invocation of “transcendental emotions” as something which might be expected of literature, by comparison, is curious. The possible poetics and aesthetics of code — and the styles of composition in general — are items I am eager to learn more about, though I might only be able to absorb them in bits and pieces until I have a fuller grasp of code.

An afterword-aside: While writing this, I was disrupted by the tragic news that my neighborhood storefront silkie hens needed new homes since the Animal Nature pet store is closing. I was quickly sidetracked, and made it a priority to take a last-minute stroll to visit them (through their window) before they slipped out of my life forever (yes, my excuse for a late-night entry is literally slippery chickens). Anyway, the computer programmer companion whom I dragged along on this excursion made efforts to keep me on topic by brainstorming impromptu lessons about weird code topics. He taught me about quines (it turns out a quine once won the “worst abuse of the rules” prize in the International Obfuscated C Code Contest, via the linked Wikipedia entry) and lint (which can target bugs, style, and any “syntactic discrepancies,” and is “derived from the name of the undesirable bits of fiber and fluff found in sheep’s wool”), and a few other things. I can’t recall if these were mentioned in previous readings (they very well could have been, but I feel like I would remember if they were? Nevertheless, there have been so many unfamiliar terms that a few must have slipped my memory bank), but I hope to return to these later, in addition to the many things mentioned in the essays. I also keep meaning to study up on Jarry’s ’Pataphysics. Anyway, perhaps I’ll make it beyond tinkering with absurdities and puns to more focused analysis at some point, but I’ll sign off here for now and return to my Fortress, with its twelve idlers and a fighting dog.