Is coding the new literacy everyone should learn? Moving beyond yes or no

For its Hour of Code initiative this week, Code.org presents videos from both House Majority Leader Eric Cantor and President Barack Obama, saying that there’s “one thing Republicans and Democrats agree on”: everyone should learn how to code. US politicians are at historically divergent odds on every issue from culture to finance to defense, yet they converge on just a few things: Wall Street bailouts, strong intellectual property protections, and…that everyone should learn to program computers? The fact that the Hour of Code initiative appears in that list of neoliberal interests should give us pause. This might be a good idea—but it also might be the sign of powerful forces at work. So, what’s behind this near-universal agreement that everyone should learn to code?

I think it’s because coding is framed as a new literacy. I mean, who supports illiteracy? Literacy is always good, and when we as individuals or nations don’t have enough of it, it’s obviously bad. The biggest pushback we see on this everyone-should-learn-code movement is from software engineers and professional programmers–who, understandably, don’t often think of what they do as literacy. But framing coding as a literacy makes it apolitical in Code.org’s promotional efforts. Code.org didn’t invent the idea of programming as a literacy. Neither did Khan Academy or Codecademy or any of the online educational venues currently focused on teaching programming. They’ve made a big splash, but the idea has been around for almost as long as computers have been. The reasons behind the argument have shifted throughout the history of the idea of programming-for-everyone: from general education to political liberation to technological freedom to intellectual development. And now, from the looks of Code.org’s promotional materials, it’s about individual success in the economic and employment marketplace.

Thinking about coding as a new literacy takes it out of political debate, but it also means we need to think about it beyond yes/no terms. Literacy is not simply decoding letters and words. That’s why many Americans struggle with generating effective written communication and interpreting written texts, although they might know their ABCs. And because the job of teaching and learning literacy is so difficult–yet so important–we try not to leave it up to just one discipline or one institution. Schools, homes, libraries, English and biology classes all chip in to support literacy. In other words, if programming is a literacy, it doesn’t belong to computer science, as Code.org implies. Like reading and writing, it’s also going to take a lot more than an hour to learn. If programming is a literacy that everyone should learn, who should teach it? What, exactly, should people be learning, and why? And if everyone really did learn to code, what would that look like?

Before I go on, I want to say that I actually agree with Code.org and Eric Cantor and Barack Obama: I think everyone should learn something about programming computers because I agree that programming is a new kind of literacy. Just as textual literacy helps someone navigate a world full of texts, programming literacy can help us navigate a world full of code—which is the world we now live in. Learning to program computers could be about more than employment and STEM education and computer science. It could be about understanding and changing the ways that communication and information are currently structured and transmitted. And this, I think, should give both Obama and Cantor pause.

Computer programming for everyone

The first person on record arguing that programming is a widely applicable skill that should be taught to a broad group of people is Alan Perlis (thanks to Michael Mateas and Mark Guzdial for pointing this out). At a 1961 forum organized by MIT–with the wonderful title of Computers and the World of the Future–he described an undergraduate course that looked a lot like the standard first-year writing course:

the first student contact with the computer should be at the earliest time possible: in the student’s freshman year. This contact should be analytical and not purely descriptive, and each student during this first course should program and run or have run for him a large number of problems on the computer. […] This course should share with mathematics and English the responsibility of developing an operational literacy. […] In a liberal arts program the course could be delayed until the sophomore year, but certainly deserves inclusion in such a program because of the universal relevance of the computer to our times. (188)

Think of the state of computers in 1961: mainframes were only on a handful of college campuses. But computers were already important for defense, business and scientific research. Perlis’s emphasis on broad undergraduate education in programming suggested that future leaders of America should know something about these universally relevant machines.

Perlis’s vision was at least partially realized with the BASIC programming language, designed at Dartmouth College in the early 1960s by John Kemeny and Thomas Kurtz. Like Perlis, Kemeny and Kurtz saw the computer as universally relevant, and designed BASIC to be accessible to all undergraduates–not just those in engineering or the sciences. They made the language freely sharable, and it spread across college campuses in the 1960s. It’s impossible to overstate the impact of the BASIC programming language on initiatives to teach computer programming beyond computer science. (For a good discussion of BASIC and its legacy, check out the recent book 10 PRINT CHR$(205.5+RND(1)); : GOTO 10.)
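For a taste of what made that one-liner so inviting, here’s a rough Python port (my own sketch, not from the book): the Commodore 64 original endlessly picks between two diagonal PETSCII characters to print a random maze. Here we use Unicode box-drawing diagonals and stop after a few rows.

```python
import random

def ten_print(width=40, rows=5, seed=None):
    """A loose Python take on 10 PRINT CHR$(205.5+RND(1)); : GOTO 10.

    Picks one of two diagonal characters per cell, row by row,
    instead of looping forever as the BASIC original does.
    """
    rng = random.Random(seed)
    return "\n".join(
        "".join(rng.choice("╱╲") for _ in range(width))
        for _ in range(rows)
    )

print(ten_print())
```

One line of BASIC becomes a dozen lines of Python, which arguably says something about why BASIC spread the way it did.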

This movement to teach all undergraduates programming in the 1960s moved off of college campuses in the 1970s. In his 1984 book Hackers, Steven Levy traces the epicenter of programming from the east coast to the west around this time, and the impetus to promote programming to the masses seems to have followed the same geographical trajectory. At that time, the push for everyone to learn programming was imbued with post-60s San Francisco area politics–hobbyists and hackers thrived, typified by The Homebrew Computer Club, Ted Nelson, and the People’s Computer Company. 

Meanwhile on a corporate campus in California (Xerox PARC), Alan Kay also dreamed of computers for people—specifically kids. His “KiddieComp,” later called the “Dynabook” in a 1977 publication with Adele Goldberg (pdf and context here) was the first real personal computer. At the time, nearly everyone thought it was crazy: a portable computer?? For kids?? But Kay persisted, and not only pushed the idea of what we now call a laptop, but also a flexible software environment that encouraged customization and design. Smalltalk, the first real object-oriented programming language, was also meant for kids and adults to be able to program the computer. (See Hiltzik’s Dealers of Lightning for more.) 

Seymour Papert, a student of the influential developmental psychologist Jean Piaget, designed the Logo programming language to “scaffold” (here we see Piaget’s influence) kids into learning complex logic, physics, and problem solving through programming. Logo had a heyday in elementary schools in the 1980s (such as mine), which was supported by Cold War defense funding and the anxiety of American competition in a global marketplace. These efforts petered out with decreased funding and poor support for teacher training. But that wasn’t Papert’s fault: many of us kids exposed to Logo loved it and learned a lot from it. If you haven’t read Papert’s 1980 book Mindstorms, about using programming as an “object to think with,” do it. right. now. Papert thought carefully about education, childhood development, and something that’s often missing from current educational policies: the joy of learning something, especially something difficult.

And let’s not forget the hero and scourge of the open source (ahem, free software) community: Richard Stallman. His moral and political insistence on the free circulation of code and the right of folks to program their own devices has been critical for open source programming languages. Did you know that as recently as the 1990s, you had to pay to use most programming languages and development environments?

More recently, languages that freely circulate on the web like Processing, Ruby, Python and Javascript make programming way more accessible than it was in the 1990s. Figures like Why the Lucky Stiff (now disappeared) in the Ruby community made cases for everyone learning to program, along similar lines to Papert: it was fun! Plus, it’s really good for your brain.

We see the legacy of these people and projects in One Laptop Per Child, MIT’s Scratch, Carnegie Mellon’s Alice, UC-Berkeley’s Boxer and Hackety Hack.

And now comes Code.org. If the motivations for pushing programming for the masses in the past were intellectual development, liberation, and joy, now it’s: Hey! You can get a good job in computer science! And this is where I am sadface.

Most of these past initiatives come from outside of computer science, or at the very least were focused on teaching code beyond computer science. (I’ve written more about these initiatives here.) Other initiatives by computer scientists such as Mark Guzdial, Ken Perlin and Mary Flanagan, and Jeannette Wing (pdf) are focused on systems thinking, the pleasures of difficulty, and the idea that thinking the way programming encourages means you can think better about politics, physics, philosophy and humanity. They have broad visions of what it could mean to program: not just making apps for the walled garden of Apple, not just promoting Facebook by using their API, not just sticking together code blocks to make a licensed game like Angry Birds.

Computer Science != Programming

Code.org purports to teach code, but promotes computer science. Note the shift from computer science to programming and back on their About page:

Code.org is a non-profit dedicated to expanding participation in computer science education by making it available in more schools, and increasing participation by women and underrepresented students of color. Our vision is that every student in every school should have the opportunity to learn computer programming. We believe computer science should be part of the core curriculum in education, alongside other science, technology, engineering, and mathematics (STEM) courses, such as biology, physics, chemistry and algebra.

If their “vision is that every student in every school should have the opportunity to learn computer programming,” why does the rest of the mission statement and the site talk about computer science? The rhetoric about STEM education, the wealth of jobs in software engineering, and the timing of the Hour of Code initiative with Computer Science Education Week all reflect the ways that Code.org—along with many other supporters and initiatives—conflates programming with computer science. We see this in other arguments for why people should learn to code, in particular Jeannette Wing’s argument for “computational thinking” (pdf). Computer scientist Peter Denning argues that thinking about computer science as just programming is too limiting–but it’s just as limiting to think of programming as just computer science.

If programming is really a new literacy, it can’t be contained within computer science. We don’t restrict reading and writing to English departments, thankfully. If we thought of The Great Gatsby as the end goal of learning to write, we’d be thinking of writing in pretty narrow ways. Most of us use writing for more mundane, and ultimately more powerful things: grocery lists, blogs, diaries, workplace memos and reports, text messages to friends, fan fiction, and wills. The ability to read and write gives us access to lives and culture and, yes, employment. Do we have to be good writers—defined in particular, narrow ways—in order to get something from our literacy skills? No. Code is so important, so infrastructural to everything we say and do now, that leaving it to computer science is like leaving writing to English professors (like me).

Coding as the new literacy

On Code.org, “literacy” is all over the place. We see the term “literacy” used in other arguments for teaching programming to everyone, such as Guido van Rossum’s 1999 DARPA grant application for Python and arguments from Marc Prensky and Douglas Rushkoff. On Code.org, Eric Cantor says that “Becoming literate in code…is the only way for you to prepare for the future.” (For more on the connections between literacy and programming, see my bibliography on the topic.)

There are some good conceptual reasons to make connections between literacy and programming: they’re both abstract symbolic systems for communication and information, for instance. But that’s not really why people make the connection between programming and literacy. 

At Code.org, programming is like literacy because no one disagrees with literacy. No one argues that kids should be illiterate, because literacy is a moral good. When literacy rates appear to drop, or writing appears to deteriorate, we wring our hands and declare a crisis. This happens all the time, actually (for instance, in 1874, 1975, 1983 and 2013; see Rebecca Moore Howard for a great bibliography of literacy crises).

The history of how literacy accrued this moral weight is actually quite interesting. Literacy has deep connections with religion. Protestantism posits that people need direct access to God’s Word: they need to read the Bible. And Catholicism, not wanting to be left behind, also promoted literacy through church schools in the early modern era. Literacy campaigns ramped up in the 19th century as mass schooling was perceived to be a way to knit nations together and make people more moral. This was especially true in America and Canada, where promoting shared values amongst immigrants was thought to be essential to building the nation (see Graff, Robbins). As the industrial revolution raged on, knowledge work–often conducted through written words–was a way out of brutal factory work, but also a way of making factory workers behave. We can thank all of these historical factors for our current values on literacy. And, actually, literacy is important. In America, low literacy levels affect access to jobs and independence and correlate with high rates of incarceration.

So Code.org has great PR, and they’re smart to make these connections to literacy in order to promote programming or computer science or software engineering. But it’s not by accident that this literacy-infused agenda has gotten uptake now, when we’re once again in an era of high unemployment and anxieties about America’s ability to compete in a global marketplace. The dream of total outsourcing is dead: communication and design present insurmountable barriers for sophisticated software. Only if a company can precisely specify its needs can it send its programming projects overseas. And if it can actually specify its needs to that level, it already has good programmers and designers on staff, so it might as well do the work itself. Instead, Facebook, Microsoft and other tech companies import many of their programmers from overseas. Whether or not those programmers are American, the companies have to pay them decent wages. (So, some of the fears of current software engineers and programmers about the everyone-should-learn-to-code initiatives might be well-founded: their wages could be driven down if what they do is no longer special. Still, the distance between an hour of coding and good software engineering is great.)

Relying on imported software engineers is one thing for Facebook, but it’s another thing entirely for the NSA or other government agencies that rely on programmers. A paltry supply of good American programmers is a security risk. Add that to the perceived connection between literacy, programming and economic development, and we can see why politicians might universally support programming for the masses.

What if everyone really did learn to program?

But here’s something else we know from historical and ethnographic studies of literacy: once people are literate, they can use their literacy in ways they choose. They might be marked by the ideologies and values with which they learned to read and write. But they can also read and write in unauthorized ways. They can read banned books, for instance. But more dangerous than reading is writing. They can write seditious materials. They can encourage revolutions.

So here’s where I get hopeful again about Code.org’s campaign, despite the fact that I disagree with their conflation of programming and computer science and their confusion between good job prospects and literacy. I hope Code.org succeeds in introducing millions of people to programming, especially women and racially or ethnically underrepresented groups.

Lots of people won’t go anywhere with the code they learn. A few will make Facebook apps or the next Angry Birds. That’s fine. But some will learn a bit more about code and think about what it means that our information infrastructure is built on it. They might think more about issues in intellectual property and politics concerning digital rights and expression. They might consider the ways that software could improve civic infrastructure, as Code for America encourages. Because I think all of these things are important, I think that women and underrepresented groups should be participating in these conversations, as well as structuring and designing these technologies. Learning something about programming will help them do that.

And just because Code.org and Microsoft and President Obama might like a more computationally skilled workforce, and might not even mind if those workers happen to be more critically engaged with political debates about technology, it doesn’t mean that it will stop there. Angry Birds was made with code–but so was Bitcoin, which destabilizes our assumptions about central governments and currency control. The network exchange protocols that enabled Napster, Grokster, and the Pirate Bay are also made in code. Edward Snowden knows a thing or two about code.

So, what if everyone really did learn to program? We might have widespread unemployment among lawyers, because a lot of law might be enacted through algorithms in code. We might fundamentally restructure representative government–not just by allowing folks to tweet questions at the President in press conferences, or even with new civic apps about snow removal, but through wide-ranging structural changes that we can’t even imagine. These visions have been the domain of science fiction writers such as William Gibson, Neal Stephenson and Cory Doctorow. But they could be our future, too.

So right now, President Obama and U.S. Representative Eric Cantor actually agree on something: everyone should learn to program computers. But if they actually thought about where that might lead, they might agree on something else: programming as a mass literacy is a pretty dangerous scenario for the status quo.

If you liked this, you might like my article on Understanding Computer Programming as a Literacy or my 20min Vimeo on the Ideologies of the New Mass Literacy of Programming (transcript here).


A social writing experiment

Last Wednesday, in my Uses of Literacy class, we performed a social writing experiment. I’ve done similar forms of activities in the past, and it’s always interesting, so I thought I’d share. I posted a response to my students on the course blog, but I wanted to reiterate it here for any teachers interested in doing something like this in another writing class.

A quick background on the class: it’s an upper-level composition class that draws a lot of students looking to get into Pitt’s Masters in Teaching program, returning students, as well as a few seniors looking to fill the writing requirement but not necessarily interested in a literature course. We don’t have composition majors at Pitt, but some are lit or creative writing majors. I teach the course as a kind of intro to literacy studies, with a lot of focus on pedagogy. I tend to be straight with them about why I ask them to do certain kinds of writing or reading or why we’re discussing what we’re discussing because I want them to see teaching from the inside, just a little bit, before they begin to do it themselves. They do literacy narratives, interviews, mini-ethnographies, and blogging in class. Later, we do a digital project of some persuasion. I’ve taught the course twice before, and this semester I have just 13 students–a dream for me and the students! The syllabus is here [pdf] and the blog for the course is here.

Last Monday, we discussed Deborah Brandt’s “Remembering Reading, Remembering Writing,” and talk swirled around ideas of writing as individualistic and reading as social. Students lamented the fact that writing was often portrayed as something to do in isolation, and something for which they are often judged. Perhaps because many of them aspire to careers in teaching, they wanted to fix the problem (although I kept pressing them to understand what the “problem” was before jumping in to judge and fix!).

So I decided to run a little social writing experiment in our next class on Weds. It’s a wonderful class and they were good sports about it. I told them that they weren’t being graded and nothing was going to come of the writing, so they could feel free to treat it as a genuine experiment, subject to success, failure or some grey area in-between. I wanted to see and have them see what social writing looks and feels like. I set up four laptops (checked out for the day from our IT support center) and had them split up into groups of three (my ideal group size, always), one group to a laptop. Each computer had a Google doc open with a question: What is literacy good for? What is literacy? What do social theories of literacy help us to understand? What open questions do you have? Each group took 10-15 min for each question. I asked that they not just make lists, but compose. That meant that they had to write, agree on sentence choices, some form of organization, etc. They were writing together in their group, but also creating a palimpsest of answers and inquiries with other groups.

Results? They said, when we wrapped up at the end, that the experiment was a success. It helped them review the theories of literacy we had encountered so far and ask questions about what they didn’t yet understand. One student mentioned it might have been good preparation for a test, had I been inclined to give one–which I am not! (This class is assessed via portfolio, rather than exams. As I pointed out in class, professors give exams in order to get students to study and learn the material–not because they like to read or grade exams! Since students already did the work of review, an exam would be a waste of all of our time.) Students had to negotiate a space of shared writing–and many felt anxiety about changing or deleting the work of previous groups (although that did happen!). They also noted that it made class go by faster, because it was fun and they were conferring and conversing the whole time.

I think there is a lot of smart synthesis represented in the docs they composed in class. Obviously, they’re not polished papers. But that wasn’t the point. There’s a lot I heard in the discussions that isn’t captured in the text. Interestingly, I saw each group approach the problem differently. Some groups all huddled around the computer viewing the screen together. Others had one person read or summarize the work of previous groups (which was noted in discussion afterward to be sometimes difficult to follow). Some got so caught up in the debate that they didn’t or couldn’t write down most of what was discussed. Others carefully composed polished sentences to sum up ideas and provocations in the readings. As students are discovering when we share writing in class, there are many different ways to approach writing events. In this group of highly literate college juniors and seniors, there appears to be no one “right” way to write.

Here are links to the documents they produced in class (will open a Google doc for you to view, but not edit):

What is literacy?

What do the social theories of literacy teach us about how it works?

What good is literacy?

Open questions?

Has anyone else tried a similar experiment with social writing? What did you do, and how did it go?


On hacking, use, and utilize

I hate grammatical diatribes and so I hesitate to write one. (So stereotypical for an English professor to do this! Might as well get a suit jacket with elbow patches, etc.) But this diatribe—unlike all others before it—is important. It’s about hacking and making and reclaiming an excellent word from misuse.

I have often used this formula to evaluate the degree to which academic prose is overwrought:

instances of “utilize” / instances of “use” + “always already” = overwrought

OK, not really, but I think it would work. That is to say, when people use the word utilize, they generally just mean use, but want to add weight to their sentence. Example: “Scholars can utilize theories of ubermenschenism to cogitate on the prodigious output of members of the canine species.” Because weightiness is what academic prose is all about!
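For fun, the formula is easily mechanized. Here’s a playful sketch in Python (the word-boundary patterns are my own rough choices, not a serious stylometric method):

```python
import re

def overwrought_index(text):
    """Tongue-in-cheek: ratio of 'utilize' forms to 'use' forms,
    plus a point for every 'always already'. Not a real metric!"""
    t = text.lower()
    utilize = len(re.findall(r"\butiliz\w*", t))   # utilize, utilizes, utilizing...
    use = len(re.findall(r"\buse[sd]?\b|\busing\b", t))  # use, uses, used, using
    return utilize / max(use, 1) + t.count("always already")
```

Run it on your own drafts at your peril.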

But utilize is not a synonym for use. In fact, it’s far better than that! Here’s the Oxford English Dictionary definition:

To make or render useful; to convert to use, turn to account.

  • 1807   J. Barlow Columbiad ix. 348   [To] Improve and utilise each opening birth, And aid the labors of this nurturing earth.
  • 1860   J. Ruskin Mod. Painters V. 333   Let all physical […] utilized.

The key here—to me, at least—is the making and rendering. You can use something readymade, but if you have to do something to it to render it useful for your purposes, you must utilize it. Here’s the website Editage with a nice, concise explanation: 

utilize is not simply a synonym for use but suggests a less common alternative or deployment for a different purpose: to utilize ordinary ink for staining, to utilize a dew drop for magnification, or to utilize sand particles as the means of increasing friction, for instance.

Which means that the tragically misused word utilize is not just a convenient proxy for academic BS. When used correctly, utilize is a readymade term for hacking! Just think of the possibilities:

  • I utilized the coasters and boxes to make a standing desk. (true story—utilizing them right now!)
  • The kids utilized the couch, tent canvas and appliance boxes to build their fort.
  • The massive data trails we leave online are ripe for digital humanists to utilize.
Utilization of couch for fort, CC ZRecs on Flickr

Which brings me to an important question: can you utilize an Arduino? That is, can you utilize something for which uses are deliberately not prescribed? Perhaps if you use the Arduino as a coaster? (except for the awesome RFID/Arduino wine bottle labeling coaster below)

Arduino-powered coaster, CC toddbot on flickr

At any rate, I say we reclaim the term from the depths to which it’s been rigorously plummeted, er, sunk.


Computer Programming and Literacy: An Annotated Bibliography

With the recent uptick in the “everyone should code” movement, it seems that everyone’s now talking about computer programming as a new form of literacy. The terms by which people refer to the concept vary, but the central idea is shared: computational literacy; computational thinking; procedural literacy; proceduracy; computer literacy; iteracy. I’ve been working in this area for a few years now from the perspective of literacy studies, and I thought it might be a good time to share an annotated list of resources that I’ve found helpful in thinking through computer programming as a literacy. Chris Lindgren assembled a bibliography before me, and there’s a lot of overlap here. I’m inclined to say that the overlap points toward a burgeoning canon, although that recognition comes with the requisite wincing about a lack of gender/race diversity here.

I’ve listed just online or print texts, and the list tends toward the academic and historical. My Diigo library, assembled over the last few years with the tag “proceduracy”, is a better resource for public discussions about computer programming as a literacy.

I decided to list these in rough order of importance, which is incredibly subjective. I’ve broken the central sources up into a few categories: Really Important Stuff; Blogs & Online Writings; Dissertations; Work in English Studies. This is not to claim that there aren’t overlaps (e.g., something can be important and online!) but just to organize it a bit. After the central list of sources for programming and literacy, I’ve included a list of related work that people might want to read in computer history, pop books, code studies, and composition & rhetoric.

Of course, the whole list is partial and biased! I welcome additions and reactions in the comments or via other contact media.

Here’s the full document, available through Scribd. Below that, I’ve pasted just the bibliographic information. [Edit 6/7: added a couple more sources.]

Really Important Stuff

Papert, Seymour. Mindstorms: Children, Computers, and Powerful Ideas. New York: Basic Books, Inc., 1980. Print.

Continue reading


My notes on Maurice Black’s “The Art of Code”

Maurice Black’s “The Art of Code” is an excellent dissertation that is, unfortunately, very hard to get because it only exists in a single, print-copy form that must be requested through the University of Pennsylvania library. (I don’t think the author has ever published from it, but please alert me if so! I think he might have left academia.) I’ve been meaning to upload these notes for ages in order to provide a bit broader circulation to the ideas. So here you go! Available on Scribd.

Incidentally, Nick Montfort’s notes on the text are great (and are what persuaded me to read the dissertation).

Black, Maurice. “The Art of Code.” University of Pennsylvania, Department of English, 2002. Print.


Coding Values: my remarks at the Computers & Writing Conference

David Rieder (with a little help from me) organized a Town Hall focused on programming at the Computers and Writing conference at North Carolina State University this last week (May 19). The topic was: “Program or be Programmed: Do We Need Computational Literacy in Computers and Writing?” and the panelists were David Rieder, me, Mark Sample, Alexandria Lockett, Karl Stolley, and Liz Losh as respondent.

From the questions, twitter backchannel [edit: see Mark Sample's backchannel archive of #cwcon], and comments I got from the audience after the Town Hall, it appears to have been a success. For those not already thinking about this question, we got people thinking about it. For those already thinking about this question (which was most of the audience, I think), we said some controversial things, anxiety-producing things, and some things that elicited lots of head-nods.

I’ve pasted my comments on “Coding Values” below. You can find the text of the other panelists’ comments here:

Coding Values

Today I want to talk about good code. Experienced programmers often think about what good code is. But they rarely agree.

And here’s what I want to say: they don’t agree on what good code is because there is no good code. Or, rather, there is no Platonic Ideal of Good Code. As with writing, there is no good code without context.

Unfortunately, when good code is talked about, it is often talked about as if there’s no rhetorical dimension to code. It’s talked about as though the context of software engineering were the only context in which anyone could ever write code. As if digital humanists, biologists, web hackers, and sociologists couldn’t possibly bring their own values to code.

I’ll give you just a couple of examples of how this happens, and what this means for us in computers and writing.

One of the earlier articulations of the supposed Platonic Ideal of Good Code was Edsger Dijkstra’s infamous “GOTO considered harmful” dictum, from 1968.

Edsger Dijkstra considers goto harmful

This article railed against unstructured programming and the GOTO command. Now, many of us first learned the joy of coding through the languages that used the GOTO command. But Dijkstra’s statement suggests that the context of the software engineering workplace should override all other possible values for code. This is fine—as far as it goes, which is software engineering and computer science. But this kind of statement of values is often taken outside of those contexts and applied in other places where code operates. When that happens, the values of hacking for fun or for other fields are devalued in favor of the best practices of software engineering—that is, proper planning, careful modularity, and unit testing.
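For readers who never met GOTO, here is a rough sketch of the contrast Dijkstra was drawing. Python has no GOTO statement, so the first function below simulates jump-style control flow with an explicit label variable; the second writes the same countdown the structured way. Both functions are hypothetical illustrations, not code from Dijkstra or from this talk.

```python
def countdown_goto_style(n):
    """Simulate GOTO-driven flow: an explicit label variable 'jumps'
    between blocks, the way unstructured code jumps between line numbers."""
    out = []
    label = "start"
    while True:
        if label == "start":
            label = "loop"
        elif label == "loop":
            if n <= 0:
                label = "done"     # jump forward, past the loop
            else:
                out.append(n)
                n -= 1
                label = "loop"     # jump back to the top
        elif label == "done":
            return out

def countdown_structured(n):
    """The same behavior, written the way Dijkstra urged."""
    out = []
    while n > 0:
        out.append(n)
        n -= 1
    return out
```

Both produce the same result; Dijkstra’s objection was to how the first one reads and how hard it is to reason about, which is a claim about values as much as about correctness.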

Here’s a more recent example, which I pulled from the Hacker News forum. Here the values of software engineering are more tacit, and more problematic:

Ender7's comments

Ender7 is replying here to a thread about a recent Scientific American story that suggested scientists were reluctant to release the code they used to reach their conclusions, in part because they were “embarrassed by the ‘ugly’ code they write for their own research.” According to Ender7, they *should* be ashamed of their code. Ender7 goes on to say:

Ender7's comments

Why is academic code an “unmitigated nightmare” to Ender7? Because it’s not properly following the rules of software engineering.  Again, the rules of software engineering presumably work well for them. I’m not qualified to comment on that. But that doesn’t mean that those values work for other contexts as well, such as biology.

So, in this example, software engineering’s values of modularity, security, and maintainability might be completely irrelevant to the scientist writing code for an experiment. If scientists take care to accommodate these irrelevant values, they may never finish the experiment, and therefore never contribute to the knowledge base of their own field. The question, then, isn’t about having good values in code; it’s about which values matter.

We often hear how important it is to have proper grammar and good writing skills, as if these practices had no rhetorical dimension, as if they existed in a right or wrong space. But we know from writing studies that context matters.

Put another way: like grammar, code is also rhetorical. What is good code and what is bad code should be based on the context in which the code operates. Just as rhetorical concepts of grammar and writing help us to think about the different exigencies and contexts of different populations of writers, a rhetorical concept of code can help us think about the different values for code and different kinds of coders.

Code is rhetorical.

And this is how coding values are relevant to us in computers and writing. The contingencies and contexts for what constitutes good code aren’t always apparent to someone just beginning to learn to code, in part because the voices of people like Ender7 can be so loud and so insistent. We know from studies on teaching grammar and writing that the overcorrective tyranny of the red pen can shut writers down. Empirical studies indicate it’s no different with code. Sure, there are certain ways of writing code that won’t properly communicate with the computer. But the circle of valid expressions for the computer is much, much larger than Ender7 or Dijkstra insist upon.

To close, I want to share with you a bit of what might be considered very ugly code, a small Logo program I call, tongue-in-cheek, “codewell”:

Is this good code?

This is bad code because:

  • it is uncommented and hard to read
  • it’s in an old, seldom-used language
  • it is baggy and has repeated statements that should be rewritten as functions
  • it is not modular or reusable
  • it’s an “unmitigated nightmare”

If you run the code [on github here] in a LOGO interpreter, it looks like this:

Results of the "codewell" function

So, in addition to saying my code sucks, you could also say this:

  • it could be used to teach people some things about functions and code
  • it’s a start for a LOGO library of letters that might be kind of cool
  • it does what I want it to do, namely, make my argument in code form.
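Since the image of the Logo source doesn’t reproduce here, a hypothetical Python analogue can stand in for the shape of the argument: the first function commits the “repeated statements” sin from the list above, the second refactors it, and both do exactly what I want them to do. (This is a sketch, not the actual codewell program.)

```python
def codewell_ugly():
    """'Ugly' by software-engineering standards: baggy, repetitive,
    no abstraction -- yet it still says what its author wants it to say."""
    word = ""
    word = word + "c"
    word = word + "o"
    word = word + "d"
    word = word + "e"
    word = word + "w"
    word = word + "e"
    word = word + "l"
    word = word + "l"
    return word

def codewell_refactored():
    """The 'proper' version: the repetition factored into a single loop."""
    return "".join(letter for letter in "codewell")
```

Which one is “good” depends entirely on context: the first might be exactly right for a beginner feeling out how assignment works, or for making this argument in code form.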

Let’s imagine a world where coding is more accessible, where more people are able to use code to contribute to public discourse or solve their own problems, or just say what they want to say. For that to happen, we need to widen the values associated with the practice of coding. To Edsger Dijkstra, I’d say: coding values that ignore rhetorical contexts and insist on inflexible best practices or platonic ideals of code should be CONSIDERED HARMFUL – at least to computers and writing.

Coding values that ignore rhetorical contexts should be CONSIDERED HARMFUL.


Open Access Initiatives in Universities (my contribution to the CCCC IP Annual)

The CCCC Intellectual Property Committee publishes an annual review of interesting IP developments geared especially toward composition scholars. You can download a copy of the IP Annual here [pdf], under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States license.

Here’s the table of contents:

  • Introduction: Copyright and Intellectual Property in 2011 (Clancy Ratliff, Editor)
  • The Defeat of the Research Works Act and Its Implications (Mike Edwards)
  • Open Access Initiatives (Annette Vee)
  • One Step Forward, Two Steps Back: What Golan v. Holder means for the Future of the Public Domain (Traci Zimmerman)
  • “Sentence First—Verdict Afterwards”: The Protect IP and the Stop Online Piracy Acts (Kim D. Gainer)
  • A Dark Day on the Internet Leads to a Sea Change in Copyright Policy (Laurie Cubbison)
  • Occupy Trademark: Branding a Political Movement (Timothy R. Amidon)

The best thing about Creative Commons licensed and open access work is that we can distribute it more widely. In that spirit, I’m pasting my contribution to the IP Annual below (modified slightly: more linkified than in the pdf version). I wrote this as a short but relatively comprehensive review of the current status of open access initiatives in universities: what they are; why they’re happening; and what they mean for us as scholars. At the end, I include some good resources for understanding and negotiating open access scholarship.

Open Access Initiatives

by Annette Vee, excerpted from the IP Annual, published by CCCC IP Committee [pdf]

In September 2011, the Princeton University Faculty Senate approved an “open
access” policy for faculty research, adding the university’s name to a growing
list of research institutions opting for such policies. Harvard University adopted
a similar policy in 2008 (the first of its kind in the United States) and MIT did
in 2009. Following the lead of these elite institutions, many others have adopted
or are considering adopting open access policies, including University of
Pittsburgh, Columbia University, and Emory University. These initiatives aren’t
limited to the United States, either: University of Glasgow (Scotland), University
of Latvia, and University of Khartoum (Sudan) all have participated in open
access discussions and initiatives on campus (“Open Access Call”). A dramatic
graph of the increase in the number of open access initiatives can be seen at the
Registry of Open Access Repositories Mandatory Archiving Policies.

The move in “open access” from buzzword to policy affects the
publication, circulation, and readership of our scholarship. These effects are
largely positive for writing researchers: greater circulation for our work;
enlarged rights and control over our scholarship; and new venues and formats
for publication. This brief report outlines trends in open access initiatives, some
of their recent precedents, and a few of the most salient implications for our scholarship.

What Is Open Access?

Open access (OA) literature is freely available online and has fewer restrictions
on its use. According to Peter Suber, the Director of Harvard’s Open Access
Project, “OA removes price barriers (subscriptions, licensing fees, pay-per-view
fees) and permission barriers (most copyright and licensing restrictions).” OA
policies are often explained in terms of the labor, funding, and distribution of
scholarship: faculty contribute the bulk of labor for journals through their
writing and editing; faculty work is generally funded by universities and public
institutions; and free access to this work allows for greater distribution of
scholarship as well as some return to the public for funding its production. OA
scholarship is compatible with peer review: although scholars can make their
research available on blogs or institutional repositories without peer review, the
paradigm of OA policies is traditional, peer-reviewed scholarship.

Two major forces are currently moving scholarship towards OA. The first
originates from faculty or universities, and Princeton’s, Harvard’s, and MIT’s
open access policies for faculty research are examples. The second originates
from publication venues such as journals; examples are Springer Open, and the
journals Kairos, Enculturation, and Digital Humanities Quarterly, which publish
scholarship online without paywalls or logins. Working in concert with both of
these forces are repositories for OA scholarship such as BioMed, ERIC, and
Harvard’s DASH.

Faculty OA Policies

The background copyright policy of most research universities assigns copyright
ownership in scholarship to the faculty who produce it. This copyright
ownership assignation distinguishes university faculty from most other kinds of
employees, whose “work for hire” basis means that their employers own the
copyright in their work. As copyright owners in their work, university faculty
are then at liberty to assign their copyrights to whomever they choose. Through
a Copyright Transfer Agreement, journal publishers often request copyright
ownership in exchange for publication of scholarship. Publishers may then
license back to the author limited distribution or reuse rights.

OA policies such as those at Harvard, MIT and Princeton are designed to
help faculty either reclaim some of those rights from publishers or to better
position them to bargain for retaining their copyright. Princeton’s policy states:
Each Faculty member hereby grants to The Trustees of Princeton University a
nonexclusive, irrevocable, worldwide license to exercise any and all copyrights
in his or her scholarly articles published in any medium, whether now known or
later invented, provided the articles are not sold by the University for a profit,
and to authorize others to do the same. […] The University hereby authorizes
each member of the faculty to exercise any and all copyrights in his or her
scholarly articles […]. (“Recommended open access policy” [pdf]).
Under this policy (which echoes Harvard’s), the author and the university can
both exercise copyrights; both have rights to distribute the work as long as they
do so without making a profit from it.

Faculty-driven OA policies can be classified as “opt-in” or “opt-out.” An
“opt-out” policy (such as the one adopted by Harvard, MIT and Princeton) is
more powerful—it is in force unless a faculty member requests to opt-out of it,
whereas the “opt-in” policy (adopted by Nebraska, Emory, and Michigan) is only
activated if a faculty member opts in. Because opting-out of the policy is made
relatively easy for faculty—for instance, Harvard offers an online waiver request
form—one might suspect the policy to be of less force in practice. However, as
Princeton’s Faculty Committee explains, universities can use an “open-access
policy of this kind (even with waivers) to lean on the journals to adjust their
standard contracts so that waivers are not required, or with a limited waiver that
simply delays open-access for a few months.” Additionally, while faculty under
an “opt-out” policy can assign their copyright to a publisher, they cannot sign
away their university’s right, which means that the university can still freely
distribute that work, generally in an institutional repository.

Faculty OA policies also differ in terms of their deposit requirement—that
is, where the scholarship must be deposited to comply with the OA policy.
Harvard’s policy requires that faculty deposit their work in their OA repository,
DASH. Princeton has no such repository (although the
faculty recommended the development of one when they approved the OA
policy) and does not require deposit. At Princeton, faculty can elect to deposit
their work in a repository specific to their field (e.g., PubMed or arXiv). Many
universities who do not yet have an official OA policy for faculty provide online
repositories for faculty to publish their work, for example: University of
Pittsburgh’s D-Scholarship@Pitt and University
of Illinois’ IDEALS.

OA Journals

Along with the trend in faculty-driven OA policies, a number of OA journals
have cropped up in the last few years. Most prominent are the Public Library of
Science journals (PLoS One, PLoS Biology, etc.), which
publish print articles alongside digital versions. To cover costs, PLoS charges
authors’ sponsoring institutions for publication. Recently-launched humanities
journals such as the International Journal of Learning and Media
and the International Journal of Communication are sponsored
by hosting universities (MIT and USC, respectively) and grants. The rhetoric and
technology journal Kairos, operating as an
online open access journal since 1996, relies on grant support as well as support
from editors’ institutions.

The OA journals mentioned above are peer-reviewed and have editorial
boards composed of leading scholars in their fields, proving that OA publishing
can be just as competitive and prestigious as publishing behind paywalls.

Why the Recent Trend in OA Initiatives?

OA has been driven by shrinking university budgets, better software platforms
for distribution, and faculty’s increasing recognition that wider distribution and
publicity means higher citation counts and better reputation. As it has become
easier and more accepted to do so, more and more faculty distribute their work
on public archives, blogs, or personal websites, and OA initiatives echo that trend.

While university budgets have been cut worldwide, the cost of journal
subscriptions has risen. Libraries are forced to make difficult choices about what
to cut, yet the major commercial journal publishers have relatively high profit
margins. These financial concerns have become political concerns as well: why is
university research, much of it publicly funded, not freely available to the
public? University of Pittsburgh math professor Thomas Hales quips, “We
researchers create the content of the journals. We conduct the research, write
the articles, referee the papers and staff the editorial boards. We do this for free
every morning and buy the publications back again in the evening” (“Protest
launched”). In a recent Inside Higher Ed editorial, provosts of eleven large,
publicly-funded research universities wrote in support of OA scholarship: “we
believe that open access to such federally-funded research reports facilitates
scholarly collaboration, accelerates progress, and reinforces our government’s
accountability to taxpayers and commitment to promoting an informed citizenry
essential to the enduring stability of our democracy.” With shrinking public
funding, faculty researchers are realizing that we are not isolated from
economics and politics. The push for OA scholarship is, in some ways, a response
to the economic and political forces of corporatization and anti-intellectualism.

These economic and political concerns about scholarship are
underscoring shifts in scholarship itself—moves toward digital scholarship in the
humanities and full, published datasets in the sciences. The Internet allows for
more complex scholarship to be published; slowly, that scholarship is being
done, and journals are publishing it. A wave of books about the crisis of the book
—notably Ted Striphas’s The Late Age of Print and Kathleen Fitzpatrick’s Planned
Obsolescence (which specifically addresses the scholarly monograph)—have
highlighted the fact that our traditional, print-based and commercially
outsourced publishing model is untenable. Recently developed institutions and
technologies offer excellent support systems for OA publishing; these include
Creative Commons licensing, Open Journal Systems, SPARC, and DSpace.

Changes in publishing, politics, budgets, and technology have all
contributed to this trend toward OA scholarship. However, recent OA initiatives
have a rich lineage. The dominant repository for math, statistics and physics,
arXiv was started in 1991, and its first web interface was
installed in 1994. While not peer-reviewed, this repository is the definitive record
for those fields, due in part to its comprehensiveness and its affordance of rapid
publication. The wide acceptance of the repository has enabled researchers in
these fields to negotiate with publishers for distribution rights to their work. The
Budapest Open Access Initiative grew out of a December 2001 meeting of the
Open Society Institute (OSI). This influential initiative strove to accelerate
progress in the international effort to make research articles in all academic
fields freely available on the internet. In 2008, the US National Institutes of
Health (NIH) mandated that research it funded must be made publicly available
within a year of publication. Because so much medical research is at least
partially supported by the NIH, this mandate instantiated a de facto OA policy
for the field of medicine.

More specific targeting of commercial publishing has put a finer—and
more political—point on OA initiatives. In 2003, the Turing Award-winning
computer scientist Donald Knuth led a widely-publicized revolt against Elsevier,
the publisher for the Journal of Algorithms, which he had edited since 1980. In a
comprehensively researched letter to the JoA board [pdf], he outlined the paradox of
Elsevier’s decrease in publication costs and its increasing price for the journal.
Knuth (the originator of TeX, the popular typesetting system for math and
computer science) notes that in 1980 the publisher performed the typesetting,
keyboarding and proofreading, “[b]ut now, the authors have taken over most of
that work,” aided by software, even as the price of the journal has continued
to rise. Moreover, he was
skeptical of Elsevier’s claim to need exclusive publication rights to avoid
apocryphal publications and make the scientific record “clear and unambiguous”
(Knuth 8). He called a straw poll for the editorial board to decide whether to
stick with Elsevier. As a result, the Editorial Board resigned en masse in 2004 to
found the journal Transactions on Algorithms, published by the professional
organization ACM. Ironically, Knuth closed his letter by stating, “I’m
emphatically not a revolutionary. I just want to do the right thing.”

Another accidental revolutionary, Fields Medal-winner Tim Gowers,
launched a highly publicized action against Elsevier in early 2012, making formal a stance he had long held: he would
no longer review for or publish in Elsevier journals. He cited their high prices,
unorthodox practices of “bundling” journals and their support of the Research
Works Act (H.R. 3699), which threatened to undo some of the work NIH’s OA
mandate had done. His post was a spark in dry tinder: a commenter to his blog
responded by setting up a website, “The Cost of Knowledge,”
to collect signatures from other scholars
interested in taking a public stand against Elsevier. The successful protest drove
Elsevier to drop its support of the Research Works Act and has raised awareness
among faculty about the predatory business practices of Elsevier and other
commercial publishers.

As a result of all of these forces encouraging OA scholarship, next year’s
IP annual report is likely to list quite a few more schools and journals committed
to OA.

What Does “Open Access” Mean for Our Scholarship?

OA policies often allow for greater authorial control in publications, as they
permit researchers to retain their copyright. With copyright ownership,
researchers are free to distribute their work on personal websites and in institutional
and collective repositories, where it is indexed by finding tools such as
Google Scholar. Greater dissemination of scholarly work could lead to better,
more well-informed research. A 2001 article in Nature Debates was the first to
recognize that OA scholarship is more frequently cited (Lawrence), but this
finding has been confirmed through subsequent studies (for a more complete list
of articles charting dissemination of research in OA, see the Open Citation Project
in the resources below). Moreover, OA scholarship is
available to independent researchers or those associated with less affluent
institutions. As research institutions in developing countries are growing
stronger, and as faculty positions associated with elite institutions with vast
libraries become more rare, the greater availability of scholarship may help to
erase some of the resource disparities between research institutions worldwide.
PLoS argues that the benefits of OA scholarship are:

Accelerated discovery. With open access, researchers can read and build on the findings of others without restriction.

Public enrichment. Much scientific and medical research is paid
for with public funds. Open access allows taxpayers to see the
results of their investment.

Improved education. Open access means that teachers and their
students have access to the latest research findings throughout
the world.

As the PLoS argument suggests, OA has implications for our teaching as well as
our research. Students, under financial pressure from a retracting economy and
tuition hikes, can access OA scholarship more easily and cheaply than work
behind paywalls. Additionally, OA education initiatives such as free online
courses at MIT and Stanford are in line with the trend in OA scholarship. The
OA repository Open.Michigan strives to make course materials available not
only to members of their university community, but also to the public at large.

Although OA scholarship is clearly able to maintain high quality
standards, it is unclear whether it is compatible with the commercial journal
publishing system over the long run. Financing of journal publishing may be
taken up more by public grants and universities, which may lead to some painful
transitions in journal quality and budgets. Yet sanguine OA advocates claim
these risks are worth taking because OA promises so much for democracy,
education, and public knowledge.


Resources

Sherpa/Romeo allows people to check the copyright policies of journals and rates them according to their policies on open access:

DSpace is a turnkey, open source software platform for establishing institutional repositories:

OJS (Open Journal System) is an open source journal management and publishing platform sponsored by the Public Knowledge Project:

The Directory of Open Access Repositories registers OA repositories worldwide:

SPARC (Scholarly Publishing and Academic Resources Coalition) provides an author addendum to add to copyright transfer agreements:

Director of Harvard Open Access Project and SPARC Senior Researcher Peter Suber’s Open Access Overview:

Harvard’s Model Open Access Policy for institutions:

The Open Citation Project – Reference Linking and Citation Analysis for Open Archives, catalogues the research on citation impact for OA scholarship:

Works Cited

11 Research Provosts. “Values in Scholarship.” Inside Higher Ed. 23 Feb 2012. Web. 8 Mar 2012.

“Budapest Open Access Initiative.” Open Society Foundations. n.d. Web. 7 Mar 2012.

“The Case for Open Access.” Public Library of Science. n.d. Web. 7 Mar 2012.

Gowers, Tim. “Elsevier—My part in its downfall.” Gowers’s Weblog. 21 Jan 2012. Web. 29 Jan 2012.

Knuth, Donald. Letter to Editorial Board, Journal of Algorithms. 25 Oct 2003. Web. 7 Mar 2012.

Lawrence, Steve. “Free Online Availability Substantially Increases a Paper’s Impact.” Nature Web Debates. 31 May 2001. Web. 7 Mar 2011.

“Open Access Call for Proposals.” EIFL. 29 Feb 2012. Web. 8 Mar 2011.

[Princeton University] Ad-hoc Faculty Committee to study Open Access. “Recommended Open Access Policy.” 24 Mar 2011. Web. 7 Mar 2012.

“Protest launched against journal publisher.” University Times, University of Pittsburgh. 9 Feb 2012. Web. 7 Mar 2012.

“Revised Policy on Enhancing Public Access to Archived Publications Resulting from NIH-Funded Research.” National Institutes of Health. 11 Jan 2008. Web. 8 Mar 2012.

Suber, Peter. “Open Access Overview.” 21 Jun 2004, last updated 3 Mar 2012. Web. 8 Mar 2012.


Quantifying digital labor (my remarks at CCCC)

In a panel at CCCC organized by Madeleine Sorapure and with Joanna Wolfe, I offered some of my thoughts (and personal data) on my current obsession: quantifying time. Specifically, I wanted to figure out how long it took for me to learn some software to support a potential pedagogical project. If you do digital composition, you know it takes a long time to learn programs and support interesting projects. But if you don’t, it’s hard for you to know. I’m trying to figure out how to communicate that a little better.

Before I go into the program and project, I’ll say: I think it’s important to communicate this time for a lot of reasons. On a purely selfish level, I want to be able to tell my tenure committee that I am investing a lot of time to do the digital pedagogy that my department values. From a departmental/composition program level, I think it’s important for directors to see how much time it takes to do this work so they can factor it into any call for program-wide digital pedagogy imperatives. At the level of the field(s) of digital pedagogy, I think we need to be a bit more circumspect about diving into new projects until we know that they will be supported. As Stuart Selber writes in Multiliteracies for a Digital Age, “high-quality programs in computer literacy cannot be built or sustained on the backs of unsupported teachers” (224). Good digital pedagogy requires systematic support for teachers, and “should account for the fact that technology adds real layers of complexity with any project, pedagogical or otherwise” (226). Selber goes on: this support consists of not just equipment, but incentives, valuation of the work (beyond just thanks), training, and support from key stakeholders. Most important, at least for my work here, is that sufficient time be allotted for the labor. For many of us who teach with technology, this is a labor of love. But that doesn’t make it free.

We need to recognize the human labor of digital pedagogy—what resources we draw upon to work into our syllabi these exciting new digital ways of representing information, communicating, and participating. But recognition needs to go beyond just a call to attention. So in a little pilot study, I’ve attempted to quantify some of my own labor in digital pedagogy to attempt to peel back some of those “real layers of complexity” as well as articulate what “sufficient time” might look like. I ask: What resources does it take (for me, in this instance) to do digital pedagogy? And, how might we understand and communicate what those resources are?

Screenshot of the AfterEffects Interface

First, I chose a project and piece of software. I decided to teach myself to use Adobe After Effects to do some kinetic typography. [Here's an example with a lovely Ira Glass quote on creativity, and a fun example featuring Nicki Minaj's Superbass.] I thought it might be fun to do that in a class sometime, and if I wanted to do it, I needed a better sense of the program and its capabilities to write up an assignment and support students in the work.

While I taught myself the program and the project, I tried to quantify everything I could–mostly, my time and other resources. I wrote down all of the things I could think of that helped give me a leg up on learning the software. (Well, I didn’t write down everything: my literacy, my flexible job, my luck in having good health, etc.) I kept track of all of the time I spent watching videos and learning the interface and working on a scratch kinetic typography piece.

By my count, it took me just over 22 hours to learn the program and project well enough to begin to feel comfortable using it in a class. (That counts the 2h it took me to procure the software, at a cost of $210 of my university-given research budget, because that time and money is also a resource.) I drew on a lot of things I already knew about complex interfaces, sound and image editing, timeline paradigms, font and design, and key terms to help me search the web when I got stuck. Someone who knew Final Cut Pro better than I did would have gotten to that level much more quickly; someone who had never used Photoshop might be banging their head against a wall for a lot longer than I did. Also, I still have a lot to learn about the program and the project of kinetic typography. I stopped at the point where I felt sufficiently competent, which isn’t to say that I know this stuff well or am any good at it. (At the bottom of this post, I’ve provided a lot more detail about my own learning process, for any of my hardcore fans out there.)
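For the quantitatively curious, the tally itself was nothing fancy. Here is a minimal sketch of it in Python; the category names and hour splits below are hypothetical stand-ins, with only the totals (roughly 22 hours, including the 2h and $210 spent procuring the software) taken from my actual log.

```python
# A minimal sketch of a lifelogging time tally. The categories and splits
# are hypothetical stand-ins; only the overall totals come from the post.

time_log = [
    ("procure software", 2.0),                  # also cost $210 in research funds
    ("watch tutorial videos", 6.0),             # hypothetical split
    ("learn the interface", 5.5),               # hypothetical split
    ("scratch kinetic typography piece", 9.0),  # hypothetical split
]

total_hours = sum(hours for _, hours in time_log)
print(f"total: {total_hours} hours")
```

Even a crude log like this makes the labor legible in a way that “it took a while” never does.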

Here was what I could do after 15 hours, my first attempt at kinetic typography:

And here’s what I could do after 22.5 total hours (I chose a clip from Marshall McLuhan talking about The Medium is the Massage):

As you can see, I’m still not that good at it. But it’s competent work, and sufficient for me to support students’ exploration of a similar project.

So, what does this rather navel-gazing pilot project suggest about learning the technologies that we teach?

Support for digital work in the classroom takes more time than the teaching of traditional textual writing, which we already know takes a lot of time to do well. It took me over 20 hours to learn a program well enough to feel comfortable writing up and trying out a new assignment in one of my classes. Importantly, there was no way for me to track the time it took to amass the resources I already had, which let me get up to speed in the program in that timeframe. Additionally, instructors need time to maintain and update any learning they do with digital software. Like students in first year writing courses, we cannot expect instructors to have a one-and-done model for learning to support digital pedagogy.

Working in digital spaces means that we must also be willing to be bad at something for a long period of time—and, as Ira Glass says (in this kinetic typography example), to keep working when we know we’re bad at it, and to work through that. Digital pedagogy takes a lot of trial and error and willingness to learn from your students or be an incomplete expert with them. Digital pedagogy, then, is not only a labor of time; it is also emotional labor.

Traditionally, the work of digital pedagogy has been done by those who enjoy it and who elect to do it–who spend a lot of their free time thinking about, learning and practicing digital composition for themselves. I’m one of them, and it’s work I like to do. But instructors of digital work also have other things we do with our non-work time—spending time with kids, house maintenance, travel, normal people things. Nearly every digital instructor I know feels crunched for time to learn and support this kind of teaching, and poaches from their non-work time in order to do it. That stress is widely acknowledged and shared among those of us who do this work, but is not readily apparent to those who don’t.

So here's my point: We need to better communicate the kind of labor and human resources it takes to bring digital assignments into the classroom. To make this kind of digital labor more visible, we need good methods to catalogue and quantify it. The pilot lifelogging self-study that I've done here is one way to think about what those methods might look like. More carefully, rigorously, and quantitatively cataloguing our labor will help those of us who work in digital pedagogy to articulate our work in the contexts of administration and constrained budgets. This articulation of labor will be especially critical as we consider programs that scale up digital pedagogy to include instructors who do not already do this work and who do not have the wealth of knowledge already gained through digital hobbies. I fully support digital pedagogy, and scaling it up beyond just the few instructors who already do it. But we must better understand the time and resources it takes to implement, and I argue that we must track that time and those resources in order to understand them better.
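As a concrete (and purely illustrative) sketch of what quantifying this labor might look like, here is a short Python snippet that tallies lifelog entries by activity category. The categories and minutes below are hypothetical examples, not my actual log.

```python
from collections import defaultdict

# Hypothetical lifelog entries: (activity category, minutes spent).
# These numbers are illustrative, not an actual record.
entries = [
    ("watching tutorials", 180),
    ("hands-on practice", 240),
    ("troubleshooting", 90),
    ("watching tutorials", 60),
    ("writing up the assignment", 120),
]

def tally(entries):
    """Sum minutes per activity category and convert to hours."""
    totals = defaultdict(int)
    for activity, minutes in entries:
        totals[activity] += minutes
    return {activity: minutes / 60 for activity, minutes in totals.items()}

for activity, hours in sorted(tally(entries).items()):
    print(f"{activity}: {hours:.1f}h")
```

Even a simple tally like this, kept over a semester, would let us report how instructor time divides among tutorial-watching, practice, and troubleshooting, rather than as a single undifferentiated total.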

More detail than you really want about my learning process:

Before I started this project, I knew these things:

  • That kinetic typography existed and that tutorials were available online to help me learn how to do it
  • How to record myself in Audacity (which I already had installed) and use a mic (which I already own)
  • The basics of Adobe Illustrator
  • Some aspects of timelines from other time-based composition programs like Audacity and Final Cut Pro
  • Some experience with design: fonts, arrangement, colors, theories (not practice) of animation

By the end of my first attempt, I had learned some rudimentary things about After Effects, including details about the interface. I learned how to:

  • play sound while scrolling through the timeline to sync up words
  • import items (although still some troubleshooting there)
  • type in the interface and change position, font, color and other attributes of type
  • make objects 3D and move them along the X, Y, and Z axes
  • add a camera to move focus through the piece
  • control the camera's position along three axes and its rotation (though not well)
  • preview and render the video with both video and sound functioning

In my first attempt, I ran into trouble in these areas:

  • I couldn’t import a layered Illustrator file with layers intact
  • Sound wouldn’t play while I scrubbed the timeline
  • Camera wouldn’t recognize the 3D words I’d made

And I did some troubleshooting to isolate and fix issues:

  • Googling keywords like: troubleshoot, import, render, scrub, two-node camera, point of interest, timeline, audio
  • Opened new project (started over)
  • Created new, simpler Illustrator file
  • Played with different settings
  • Asked husband the computer programmer (didn’t work)
  • Watched videos very closely to see which settings were being used

Some of the troubleshooting worked, some didn’t. Those problems I just worked around. For instance, I didn’t use an Illustrator file to design the layout of the first attempt because I still couldn’t get it to import correctly.

In my second attempt (the McLuhan video), I ran into more trouble as I ramped up the complexity of the project, and I used similar troubleshooting strategies. I learned a few more things in the process, too.

Additional resources I drew on for my second attempt at kinetic typography:

  • Time (7.5h beyond the first attempt)
  • My knowledge of websites where I could get images and sounds to mix in (flickr creative commons search and
  • Math and coordinate geometry
  • My knowledge of Photoshop
  • Recording and editing in Audacity

Here are a few of the additional things I learned from the second attempt:

  • How to pre-render and pre-compose to manage more complex composition
  • Effects panels
  • Improved work and understanding of keyframes (a common paradigm in time-based media)
  • Offsetting time
  • Updating source material in my project when I’ve edited it in another program (Audacity or Photoshop)

Things I ran into trouble with:

  • Increased complexity of animations loaded down my computer and forced me to think about workflow more
  • Program crash and lost work
  • Rendering issues
  • Getting After Effects to recognize transparency in images
  • Animating two layers together using a third null layer
  • Bouncing camera paths (not quite resolved)
  • Changing animation settings on preset effects

I made a few observations about the differences in my knowledge-gathering approaches from my very first attempt to my second. For instance, in my first attempt, I spent a lot more time watching general videos introducing me to what the software could do, because I didn't know its scope. In my second attempt, I spent more time targeting particular problems I had, seeking out videos and explanations that responded to my own goals and needs. I also played around more with the interface and effects without reading directions, because I'd gotten more comfortable doing so. This process is not unusual: many education curricula are structured around a model of initial lessons followed by independent work.

Posted in Uncategorized | 3 Comments

Distant Worlds Converge

Recently, Mr. N—- and I went to see Distant Worlds: Music from Final Fantasy at the Benedum Center in Pittsburgh. Fun for the whole family! Video games for Mr. N—-, and getting out of the house for me.

The event was an elaborate fan-service ritual: a full orchestra playing themes from the Final Fantasy video game series, with a huge screen behind them projecting game footage and cutscenes from the series. The conductor (Arnie Roth, who is obviously a huge FF fan, if for no other reason than that it pays his bills) yukked it up between the pieces by calling for audience participation, paying homage to the composers, and alluding to everyone's favorite moments in the series. Even for me–someone less familiar with the details of the series–it was awesome.

What made it awesome, however, wasn't the orchestra (it was fine), the venue (its 4700lb chandelier!), the pieces (which are beautiful and epic), or even the company (although N. was great, as always). It was the history of game animation that the performance projected. Juxtaposed with this spectacularly restored 1928 theater were the pixelly figures that captured the imagination of thousands of young Japanese and Americans beginning over 20 years ago.

As the images were projected thematically rather than linearly, we saw the technical feats of special effects, landscape, face, and hair animation shift. Because Final Fantasy is such a long-running series, the orchestra could draw on over two decades of video game animation history. It's hard to believe that any other era or genre of art experienced that much change in a 20-year span.





A new exhibit opening this weekend at the Smithsonian American Art Museum takes up that evolution: “The Art of Video Games.” I was pleased to see that my old favorite C64 game, Sid Meier's Pirates!, is featured. Indeed, those were some beautiful blue seas! There are two representatives from the Final Fantasy series: Final Fantasy VII and Final Fantasy Tactics. (The full list of featured games, from the Atari to the Wii to the PlayStation 3, is on the Smithsonian's exhibit page.)


Pirates! in 1987
Pirates! in 2004

As beautiful as they are, it's not clear to me that the later images are superior. As the older pixellated characters were juxtaposed with the newer ones, with their flowing hair and hipster-emo outfits, it struck me that the older images were often more evocative than many of the newer ones. It may be my nostalgia for early games (although I never played the Final Fantasy series), but I think it was something more: the earlier images leave a lot to the imagination. The hipster-emo characters from more recent installments in the series look dramatic and distant, too cool and too young for me to want to spend much time with.

too hip for me (FFXII, image from IGN)

At any rate, Mr. N—- and I will have to make a trip to see the exhibit and marvel again at the changes in visual representation in games.

Posted in Uncategorized | Comments Off