I’d just settle for a decent title generator

When attempting to describe a video game or other media object, it seems there are (at least) three potential choices in orientation to take at the outset—one could choose to focus on what the object is, what it means, or what it does.

Focusing on what the object is leads towards an object-oriented perspective. We might follow Bogost and dissect Atari machines to try to achieve some understanding of what, really, is going on inside a video game console, or what it’s like to be a computer. Focusing on what the object means requires making a pronouncement on the significance of presenting or publishing a software object in a given social context. Wardrip-Fruin makes a strong case for focusing on, or at least starting with, the question of what an object does.

He frames this question in the very pragmatic terms of operational logics. Understanding individual operational logics does not require the ability to produce or read code, nor does it require that we understand what is happening when electrons, flashes of light, or what have you race around the silicon inside of a computer. Operational logics are the basic categories of how a program communicates or expresses itself to humans. Conveniently, it seems that most gamers already make use of this category to talk about games (“WoW doesn’t have ‘collision detection,’ but it does have a ‘reputation system,’” etc.). Without knowing much about code or computers, I can say quite a lot about the operational logics games use for expression. This is really a theory of expressive processes as media, something that mediates between the silicon and human worlds. It is a theory of how processes strike a blow upon the human.

As a result this seems like a very useful tool for game designers or critics. They can break down games into their constitutive operational logics—and perhaps come up with new ones. His discussion of the various Eliza, SimCity, and Tale-Spin effects also offers a compelling theory of good versus bad game design. The focus on these individual logics is useful, but it may also have some limitations.

I’m not usually one to drink the Burke Kool-Aid, but I couldn’t help but imagine the essay he would write in response to this book. Wardrip-Fruin is focused almost entirely on “Agency,” or the how of the actual process. Consider his discussion and critique of the failings of the quest tree process in KOTOR. I submit that most gamers were not overly put off by the narrative incoherence of the side quest line on Dantooine. A good bit of the pull of a game like KOTOR comes from participation in the “Scene” of the Star Wars universe. For the Star Wars fan who just wants to explore a bit more of the galaxy, it’s not critically important whether the side quest makes narrative sense; really, they just want the sense that they are wandering around and acting in the “frontier” environment of Dantooine. Personally, I was much more bothered when KOTOR’s light side/dark side “system” didn’t match my expectation of Star Wars cosmology (i.e., you acquire light side points by going far out of your way to help everyone you meet, when of course such behavior demonstrates excessive attachment, which we should all know is really the path to the Dark Side). The game process is entirely obvious, internally consistent, and accessible to the user, but still at times unsatisfying due to a mismatch with Scene. Could we extend operational logics beyond the software itself to describe how a game interacts with external elements?

In addition, multiplayer games are almost entirely absent from Wardrip-Fruin’s account. (He does describe the experience in an MMO of having an enemy mob he has just killed reappear, destroying narrative coherence, which, of the many complaints I’ve heard from MMO gamers, has never been one of them.) Adding other human players to a game certainly has to change the equation. What can expressive processes and operational logics tell us about human-to-human interactions mediated by software, rather than human-to-software interactions?

Finally, on a relatively unrelated note, I couldn’t help but speculate on the relevance of this book’s terms outside of game and software studies. I haven’t had much time to develop this idea, but if we take the Eliza, Tale-Spin, and SimCity effects and map them onto real-world regimes of knowledge organization, what would we get? The SimCity effect corresponds with Newtonian physics: a set of rules, perhaps mysterious at first, but readily available through experimentation or “play.” The Tale-Spin effect corresponds to statistics and probability: it completely obscures the material causes and the actual physical interactions of the things it measures, and reports only the surface values of correlation. And what could the Eliza effect be? Considering its reliance on surface over depth, it seems like the closest match is rhetoric—or at least the commonplace negative view of our venerable discipline. What does it mean? Probably nothing, but I thought I’d leave it here anyway.

Fiction out of context

I must admit that, as it is Tuesday afternoon, I’ve only read the first third of Noah Wardrip-Fruin’s Expressive Processing (2009). Thus far, I’ve attempted to absorb some definitions (“operational processing” and “ideology machine,” for example) and developed a better understanding of Joseph Weizenbaum’s ELIZA, the psychotherapist chatbot that Ian Bogost prompted me to engage with a few weeks ago. I realized, through Wardrip-Fruin’s explanation of the Eliza effect, that I’m the type of user who provokes nonsense from Eliza. In other words, I’m an uncooperative collaborator. I want to coax her into unwittingly revealing her machinic, non-comprehending self. However, I’ve also realized that I underestimated Eliza. Reading about the Eliza effect and its inherent weaknesses gave me a greater appreciation for Weizenbaum’s machine. As the author reveals, the machine’s flaws actually provide insights into the underlying system processes (as described on p. 38).

Implementation, Nick Montfort and Scott Rettberg


Conversely, I was particularly intrigued by how digital fictions confront the issue of the Eliza effect by explicitly revealing underlying mechanisms. Rather than trying to perpetuate the illusion of seamless cognition and appropriate response (Eliza), these fictions embrace the notion of exposing the messy processes that occur during the creation and delivery of narrative. I was reminded of so-called cell phone novels when I read about Nick Montfort and Scott Rettberg’s Implementation project. As excerpts are printed and posted in random locations, the story unfolds in a compelling way…yet the intrinsic processes (of printing and pasting, for example) are very straightforward and immediately recognizable. Interestingly, the project has now been published as a book (2012) with a linear narrative that was intentionally absent from the original novel.

Anyway, let’s return to the cell phone novel, or “keitai shosetsu,” popularized in Japan. This kind of novel, frequently distributed in text messages, represents an intimate type of digital fiction that, similar to Montfort and Rettberg’s project, is serialized and encountered without context. Just as models of the world are represented in video games, and Turing’s “imitation game” is still discussed in this regard (as reiterated by Wardrip-Fruin), cell phone novels are often like folktales in that they convey familiar narratives in new ways (I found the article “I *heart* Novels” in the New Yorker to be helpful in understanding this genre of writing). As we discussed last week, some aspects of this new use of cell phones…or this skewing of our media ideologies…may seem threatening to the “traditional” author, as these novels are often published anonymously or under pseudonyms. However, these novels are seen as an extension of oral storytelling rather than an intrusion on literature.

Perhaps I’ve strayed too far from computer games, but when I read Wardrip-Fruin’s call to action regarding moving beyond the rigidness of quest flags and dialogue trees, the cell phone novel struck me as one medium that seems to continually captivate users. However, these users are not necessarily responding to the novel itself–I’d be curious to learn more about how readers’ suggestions are incorporated into new installments of cell phone novels.

I must finish the rest of the book, but I will also just mention this lecture that Wardrip-Fruin gave at UC Santa Barbara, because it is helping me better understand certain concepts as I read about them in the book: Saying It with Systems.

Better Late than Never: Semiotic vs. Thermodynamic Rhetoric

Brown’s piece draws some interesting links between rhetoric and the OOO theorists Bogost, Harman, and Bryant. I’d like to add to this a little and also pose some questions about the possible relationship between a (newly copious) rhetoric and ontology.

For Harman, something about an object always withdraws from relations with any other object. In fact, this is what makes it an object as such. Harman has a long-running dispute with Bruno Latour (and other, smaller worker ANTs) on this very point—for Latour, an object is nothing but the sum of its relations. Harman replies that such a conception leads to absurd results: in a universe where everything was merely the sum of its relations, nothing could grow, change, or even move, because every object would be caught in and overcoded by the thick web of relations. An object has to “withdraw,” or hold something in reserve from what it deploys in its observable local manifestations, to explain a dynamic universe.

Harman loves to tarry with Heidegger’s workshop full of tools. The craftsman interacts with a hammer but is only interested in the wooden handle and the metal head’s ability to pound things. The nail only interacts with the hardness of the metal head. If the workshop caught on fire, the fire would appreciate a very different aspect of the hammer’s being, namely the capacity of the wooden handle to serve as fuel. A spider might experience an idle hammer for its sturdiness as a web anchor. Objects are interesting for their potential to reach into their withdrawn reserve and inflict a new cut or blow upon reality.

Taking withdrawal seriously, we might say that all of these actants approach the hammer “rhetorically,” i.e. they “persuade” the hammer to engage in various relations. Since the hammer always holds something of its nature back, we might say it is persuaded into various relations.

In Onto-Cartography, Levi Bryant is keen to mark a difference between semiotic and thermodynamic politics. The archetype of semiotic politics is to protest a corporation’s activities with traditional forms of rhetoric—marching, making signs, crafting arguments, etc. A thermodynamic politics would recognize that a corporation is not a machine readily persuadable by semiotic meanings and human rhetoric—it is fundamentally not human, but a machinic amalgam of humans and non-humans that doesn’t care about anyone’s protest in the least. We might say, “corporations do not understand human language.” Thermodynamic politics would seek to attack (or persuade, if you like) the corporation at the level of inputs and outputs that are legible to it, such as labor and revenue. A strike or a boycott is an archetypal example of thermodynamic politics when directed against the corporation—each translates the human perspective into something the corporation can understand.

Computer code brings us to an interesting case—in Bryant’s dichotomy, is it semiotic or thermodynamic? On one hand, as we have discussed, we have to admit that it is a language. On the other hand, it indisputably makes things happen in the world and inflicts direct cuts upon reality. Code can tell a missile to launch or a robotic drill to operate. Computers seem to lie right at the borderland of this division. The OOO sympathizer in me wants to ask what exactly happens when you press enter and run your code. We could answer that question in terms of the Python language, or we could ask what is really happening under the hood with whatever blinks of light or electrons race around the silicon (I really haven’t a clue what is going on in there). Semiotic or thermodynamic machine?
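One partial, concrete answer sits a single layer under the hood: before anything touches the silicon, Python compiles source text into bytecode, and the standard library’s dis module will show you that intermediate form. A minimal sketch (the function name is just an illustration, not any real missile-launching API):

```python
import dis

def launch_missile():
    # A "semiotic" line of source text...
    return 2 + 2

# ...rendered as the bytecode instructions the interpreter
# actually executes, one step closer to the electrons.
dis.dis(launch_missile)
```

Even here we have not left language behind: bytecode is still a notation, which is perhaps Bryant’s borderland in miniature.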

Processing All the Way Down

Though it may be true that “Rhetoricians never tire of defining rhetoric” (Brown 496), my limited experience participant-observing in this field has me wondering whether rhetoricians ever tire of defining (and re-defining) any word in existence that might be integrated into this difficult-to-define discipline. This semester alone, we’ve spent significant time pondering nuances of meaning in terms from coding to computation, procedure to process, machines, and more. As I think through how such terms are being reframed and redefined across readings, I am, more practically, trying to utilize them productively in several projects I am working on right now. (Though after reading about Erasmus, I was tempted to pen adoxographies in lieu of a substantive entry.)

I’ll offer this as an opening artifact:

This project was made with several obvious sets of constraints – it is my response to an assignment at the Digital Media Composition Institute, where I was asked to define/illustrate a concept using sixty seconds of video footage. Pedagogical procedure is inherent, and up until the very end, when “This video is processing” is the notification received in the last step of uploading a video online, the terms surrounding computation are evident. (I am also resisting the urge to make a pun of “In yolking [sic] rhetoric to print” on page 5 of the Brown and Vee draft, which in all seriousness is a useful introductory essay I feel lucky to have gotten a sneak preview of; the balance of serious and not-so-serious wasn’t only a struggle in Erasmus’s day.)

Besides Brown replacing Geertz’s turtles with machines (and I’m still digesting his ideas on animal/machine interchangeability via Derrida), Losh’s work with Malinowski offers another moment of anthropological nostalgia. With her comment on how “contemporary users who send and receive digital files must be mindful of unintended audiences, unanticipated purposes…” (2), I almost expected more discussion of Malinowski’s infamous Diary (never intended for the public but published posthumously, it was a blow to his reputation), but nevertheless.

When thinking of procedures, after all, the first thing that comes to mind is the intensive methods training I had to undergo as an undergraduate anthropology major. I’ve been revisiting this most recently while conducting research for a poster I’ll be presenting at the International Writing Centers Association Conference (in Pittsburgh next week!). The lengthy conversations I’ve been having with local directors and coordinators of various writing centers have all followed the gist of a script. To make this even more literally “machines all the way down,” the interviews themselves were about mechanisms of operation – from training procedures to best practices for a writing center to run smoothly (like code?). The similarities in language kept bringing me back to computational foundations. Especially given the emphases on collecting numerical data and on assistive technologies such as text-to-speech software, directors collaborate with machines to run effective centers. The staff at Carlow University’s center recently received an award for developing an innovative interactive digital training module for tutors. How machines are used to aid in learning and foster even more robust face-to-face relationships is fascinating to me. Furthermore, the majority of schools have some sort of online tutoring option, whether synchronous or asynchronous, written or videotaped – there are many variations on writing center procedures worth studying (and nearly ten hours of interviews to process, which have rendered me behind on just about everything else).

Moving from anthropology and administration to art, Losh writes compellingly about multimodal installations such as Mark Jeffery’s The Operature and Caitlin Fisher’s Circle – using such evocative phrasing as when “reading machines consume text on human skin” in her depiction of Jeffery’s piece (17-8). In Steve Carr’s Close and Distant Reading seminar last spring, I attempted another procedural project, my own stab at iPad art, and reading this reminded me to revisit my progress. Specifically, I engineered (a liberal use of the word) a series of erasure “poems” from academic articles zoomed fully in on an iPad and photographed with a phone camera. In its own way, this was a play between registration and representation (to allude to the scanner vs. camera issue), and an act of close and distant reading merged together. Here are random screens:

[images: erasure_2, erasure_1]

The actual project-in-progress was more carefully arranged before I unceremoniously abandoned ship, but the concept involves machines forced to “participate in producing…literary experience” (to take Pressman, qtd. in Losh 15, out of context; I meant to bring in Maher and ethics somewhere in here…). Of other art that has been left behind, I was disheartened to discover that Marina Abramovic’s chair hasn’t tweeted since March. Of erasure, I have Jen Bervin in mind – and Silk Poems deserves a place somewhere in this discussion.

To pull threads together: “While analysis of and writing about objects is a kind of making and is an engagement with the materiality of language, making, breaking, and designing objects offers a different way of engaging extrahuman rhetorical relations” (Brown 510). I find myself reinvigorated when reminded of Bogost’s call for carpentry (as recalled in Brown 511), “in attempt to see, smell, taste, touch, or hear the world differently” (511). I have always been more drawn to “making, breaking, and designing objects” than to what might be considered traditional forms of composing, so this sort of work is compelling to me. Breaking forms – figures, objects – leads to invention, new arrangements, and pedagogical experiments – and lots and lots of processing.

(ETA: There was discussion of automatons at some point so I intended to write about the animal-machines of Digesting Duck and gastrobots, but I accidentally left this behind at some stage of haphazard revision so consider this a placeholder for follow-up.)

nothingbot

I chose to make what I expected to be a fairly simple bot in the style of the Poe example provided for the class. I thought I would use text from John Cage’s “Lecture on Nothing,” which is composed of short, artful lines that I thought would lend themselves well to tweet form. I set up my Twitter account and app and pulled the text I needed from the Internet Archive, feeling perhaps overly confident about pulling together a project that would be easy(ish) to execute and have a fairly interesting outcome. That is of course a rookie move, and I quickly realized what I had chosen to do wasn’t as simple as I had thought. It is fitting and ironic that I selected a text that ends:

“All I know about method is that when I am not working I sometimes think I know something, but when I am working, it is quite clear that I know nothing.”

“Lecture on Nothing” is written out as a musical composition, in measures with musical spacing and repetition. On the page, it looks like this:

[image: page1image]

The same content, in a plain text file, looks like this:

[image: page1text]

So far, so good! But the order dissolves almost immediately afterward.

[image: page1image2]

A little further down the page, the above image translates to the following text:

[image: page1text2]

It looks like a fairly standard OCR problem. I found it sort of funny and fitting that in trying to create a bot, I had inadvertently picked a text that a machine had essentially already altered for me. My file ended up creating a nonsensical word-jumble effect that seems fairly common to bots, without my actually coding it to do that. So I ended up with some very typical bot tweets:

[images: tweet1, tweet2, tweet3]

While working my way through the initial process of creating the bot, I spent some time reading through additional tutorials and documentation, and thought about bots as compared to human-generated outreach projects on Twitter, like NPR’s Today in 1963, or historical societies that have tweeted diaries from their collections. I’m interested in the impact of these projects, and in how the use of bots in these settings can automate the workflow of posting; and yet curatorial decisions made by humans are at the root of these accounts. Additional searching also turned up no “Lecture on Nothing” text that wasn’t similarly garbled by the OCR process. The part of me that would follow a John Cage bot wonders if creating that text by hand is the only way to generate an accurate machine-readable version of the lecture, and if I am the person who will end up obsessively taking the time to type it out…
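For what it’s worth, the core of a bot like this is small. Below is a hedged sketch of the non-Twitter half of the workflow, assuming the lecture lives in a plain-text file (the filename and the commented-out posting step are illustrative, not my actual setup): it filters the OCR’d lines down to tweet-length candidates and picks one at random.

```python
import random

MAX_LEN = 140  # Twitter's character limit at the time

def tweetable_lines(raw_text):
    """Keep nonempty lines short enough to tweet."""
    lines = (line.strip() for line in raw_text.splitlines())
    return [line for line in lines if line and len(line) <= MAX_LEN]

def pick_tweet(raw_text, rng=random):
    """Choose one tweet-sized line at random, or None if there are none."""
    candidates = tweetable_lines(raw_text)
    return rng.choice(candidates) if candidates else None

# The actual posting step would go through a library like tweepy
# with real credentials, e.g. (hypothetical):
# with open("lecture_on_nothing.txt") as f:
#     api.update_status(pick_tweet(f.read()))
```

Of course, nothing in this sketch fixes the OCR jumble; it faithfully tweets whatever garbled lines the file contains, which is exactly the accidental aesthetic described above.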

Drowning in Spam

Like Modulus, I was fascinated with the question of what Twitterbots are for. I also had trouble making my bot functional (although I did have it posting snippets from a body of text by the end of the workshop)! I blame that on my own failure to grasp everything in Codecademy (…or finish it, to be honest), not Matt’s awesome teaching. That said, I’ve learned enough about code to perceive that making a Twitterbot is a relatively easy task for those with basic coding knowledge (or just a lot of patience). Constrained by our limited fluency with Python, our Twitterbots were pretty simple—we were tweeting from preexisting text files or responding to hashtags, primarily. We were spammers, albeit ones with a sense of humor.

I’d like to think more about how bot-generated spam text drives and circumscribes online communication. I recently read a pretty good book by Sarah Jeong called The Internet of Garbage. She explained that spam text is as old as the Internet; people would troll the early IRC chat rooms by pasting in endless lines from Monty Python’s Spam sketch. Since screens could only show a few lines of the chat at once, this would effectively shut down conversation. Since then, meaningless “garbage text” has proliferated all over the Internet, and any web company has to devote considerable resources to getting rid of it so they can provide a functional service.

I’m not sure exactly how Twitter polices bots, but it’s easy to see how a determined team of programmers could completely shut down conversation around a particular term by overwhelming Twitter with bot-generated responses. Spam also has a chilling effect on human conversations—e.g., I avoid mentioning Viagra or Xanax in emails because those words automatically get a message relegated to the recipient’s spam folder. Right now, our anti-spambots are just as unsophisticated as our spambots. But I can completely understand why! Essentially, they fight spam by using very rudimentary strategies to conduct a reverse Turing test. This makes me wonder what a perfect anti-spambot might look like. Is it possible to use computation to tell whether text comes from a bot? For that matter, is it always possible for humans to tell? What do you guys think?
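To make those “rudimentary strategies” concrete, here is a toy sketch of my own invention (not any real filter’s logic): a keyword blocklist plus a repetition check, the latter in the spirit of the Monty Python flood. This is the whole sophistication level I mean when I say reverse Turing test.

```python
from collections import Counter
import string

BLOCKLIST = {"viagra", "xanax"}  # stand-ins for a real filter's word list

def looks_like_spam(text, repeat_threshold=0.3):
    """A crude reverse Turing test: flag blocklisted words,
    or heavy repetition (think "spam spam spam spam...")."""
    words = [w.strip(string.punctuation).lower() for w in text.split()]
    words = [w for w in words if w]
    if not words:
        return False
    if BLOCKLIST & set(words):
        return True
    most_common = max(Counter(words).values())
    return len(words) >= 5 and most_common / len(words) > repeat_threshold
```

Both heuristics are trivially gamed (“v1agra,” synonym shuffling), which is exactly why the arms race pushes toward something closer to a real Turing test.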

Human Procedure: [I Initially Forgot a Title]

An interpretation of the Brown, Maher, and Losh readings suggests a deconstruction of the human/machine (human/animal-machine) binary, as they argue that both entities operate – or have the potential to operate – along the same procedural lines, whether rhetorically or ethically. Brown makes this claim through the figure of the “robot rhetor,” Losh through the idea of the computer as both shaper of and audience to human rhetoric, and Maher through the possibility of artificial moral agents (AMAs). Essentially, while computers generate output based on their programmed input, humans communicate or make decisions based on their own “input” (information systems based on ideologies, past experiences, beliefs, drives, affects, etc.). As the two continue to shape one another’s products and modes of production, the characteristics that are “natural” to each become blurred.

I had a bit of trouble envisioning the “humanity” (for lack of a better term) of machines until I was able to conceptualize the mechanical nature of human procedures, something that became uncomfortably lucid through an analysis of my own pedagogical experiences. Though this revelation (which I will flesh out) was rather grim, Brown does note that “rigid machines are ‘only one type of machine among many other types of machines’” (508). Thus, as is often the case in structured systems, the possibility of resistance, creativity, or innovation lies within the manipulation and interpretation of boundaries from within the system. In accordance with the arguments presented in these texts, I’d like to briefly sketch out two personal examples of humans performing in a mechanical manner due to both human- and machine-enforced limitations, and then open up the possibility of exploring flexibility within these limitations, specifically in the case of human and computational procedural rhetoric.

In the fall of 2014, I taught a remedial composition course at a community college that serves a rural demographic. I was presented with a textbook, another instructor’s syllabus, and directions to teach the students how to perform to the standards of college writing. We started with parts of speech, worked our way through appropriate sentence structure, and ended on the five-paragraph essay. Articulating the fears of many rhetoricians, Brown notes that “describing rhetoric in terms of procedures might be seen as reducing the art to mere rules” (496). For me, teaching writing became a procedure of “programming” these rules into my students. When I explained my misgivings to a professor at my undergraduate university (“But grammar is a tool of the oppressor! Am I merely another gatekeeper?”), she explained that it was a necessary evil; they needed these skills to function in the real world. Thus, I became a teaching machine limited by a series of other social systems. If I did not teach my students how to write a five-paragraph essay, they would fail at the procedure of the college English course progression and, in turn, fail at the procedure of functioning in the “real world.”

I was fortunate enough to be granted the opportunity to see this pedagogical procedure from another angle – that of the enforcer. I briefly held the position of a standardized-test-scoring drone for a major educational products and services company (hint: not ETS). Here, I was instructed to read student essays and score them on a numerical scale of 0 through 5. These scores were determined by a series of formulaic standards not all that different from Brown’s aforementioned “rules.” I could not stray from these rules due to random validity checks: essays that had already been assigned scores that I was required to match. In accordance with Losh’s account of the responses to Miller’s attempt to employ AutoSpeech-EasyTM (a computer that reads and reviews student English assignments), I became a scoring machine. Losh states that “the primary theme that she observed in the responses that she collected was skepticism that a computer could ever appreciate the nuances of human affect and recognize ‘creativity, appropriateness to context, the expression of emotion, and individual and cultural differences’” (8). Likewise, due to the established standards for scoring and the computerized enforcement of the validity test, even if I appreciated human “affect,” “creativity,” “emotion,” or “cultural difference,” I could not score according to it. Thus, my performance became no different from that of a computer grader. Losh goes on to explain, “The secondary theme…was anxiety about the potential loss of a dynamic public sphere in which audiences participate in complex and messy feedback loops of communication in which power relationships can be challenged” (8). Again, based on this anxiety, I fulfilled the role of a computer. Because the scoring mechanism was limited to the assignation of a numerical score, I was unable to communicate with the student or give any sort of constructive feedback. Additionally, because the student was unable to revise or explain, I always held the dominant role in the power relation (or at least acted as an extension of the formulaic standards’ dominant role).

At times, then, scenarios that detail humans working in accordance with procedures and machines reveal the mechanical nature of human performance. However, modes of resistance to these limitations exist within the systems themselves both in the model of human as machine and that of machine as human. I have a few ideas pertaining to the pedagogical implications of both models, but also many questions.

Citing Bogost, Brown explains, “what we typically think of as ‘breaking procedure’ is actually just the process of crafting and implementation of a new procedure. In this sense, all rhetorical action…is machinic” (498). We see this “crafting and implementation of new procedure” occurring in composition classrooms all the time – even within school- and socially-sanctioned requirements. While I was required to teach my students the mechanics of writing and essay structure, I had the possibility of allowing them to demonstrate their knowledge through a variety of procedures. For instance, instead of having them take a grammar final, I scored them based on their performance in a grammar Jeopardy game. Additionally, final papers took many different forms: students created blogs, newsletters, texts with visuals, and other multimodal projects to demonstrate their understanding of writing and argument. While these alternatives may appear to break the procedure of a traditional composition classroom, they are really just a series of new or adapted procedures that fall within the limitations of the same system. They do, however, demonstrate the flexibility of the teacher machine.

With the machine-as-human model, I’m still having a bit of trouble conceptualizing possibilities, though these, of course, lie within the flexibility of software (perhaps, even, software that has yet to be written). In terms of my experience as a test scorer, software was definitely a limiting factor, as it did not allow me to score for creativity or provide feedback. Vee and Brown cite Leblanc’s argument that “writing teachers should write their own software not only because of the constraints that programs put on composition but also because of the deeply intertwined relationship between writing code and the writing of human language” (5). This idea of the limitations of software is reiterated throughout the other articles as well, so I’d like to finish by posing a few questions about the possibility of software design to a pedagogical end.

What might software designed by writing teachers look like?

What possibilities do the limitations of current writing software prevent students and writers from achieving?

If students were asked to articulate their frustrations with the current composition classroom, what software could they craft with their teachers to combat these issues?

Would a coding language designed by rhetoricians look different than those written by computer scientists? How?

Mr. Roboto only wants you to make the grade

Ian Bogost’s opening move in the first chapter of Persuasive Games, “Procedural Rhetoric,” is to appeal to a game called Tenure. I was both surprised and pleased to see teaching positioned as a game so prominently in a book about games. As a reminder, since this already feels like a while ago, Bogost describes Tenure as a training device meant to simulate the first year of teaching, with the aim of earning a contract for the second year (1). Through its multiple-choice decision-making events, the game’s procedural argument is that teaching requires a complexity of decision making that a teacher must navigate, involving contradictory and multidimensional conditions related to classroom management, collegiality with other teachers, student advocacy, and career advancement (to name a few) that can have dynamic results, many of which may or may not be in the best interest of any one party at any one moment. Bogost homes in on the institutional politics of the game, but I’d argue that even within the confines of determining the best decision to make as a teacher with a duty toward students, complexity abounds in any decision made.

Where there is complexity, there is also, I am increasingly finding, a beckoning for potential assistance from computation. If we can explore the possibility of mechanical ethicists, as Maher does in “Artificial Rhetorical Agents and the Computing of Phronesis,” then why not mechanical teachers? Teaching, after all, can be rule-bound and is also heavily invested in making decisions. In a way, it is a series of moments of dealing with oughts and ought nots, much like what Tenure procedurally displays. Maher’s description of Beliefs, Desires, and Intentions as creating an architecture that “attends to the fact that action most often occurs in a space of ‘competing alternatives’ that must be weighed before deciding one” suggests that the machine’s potential ability to deliberate about a moral decision is similar to the Tenure player’s deliberation about whether or not to ignore a student’s tardiness (15). But let’s imagine what could happen if we remove the player and insert the machine as player. What would it be like to have a robot teacher?

random robot with 2 plus 2

Apparently, from Ohio to South Korea, robot teachers are already in action. However, these robots are controlled by humans and not (at least not totally) by code. I’m interested in something like Maher’s AMA (which would require an ARA). To be very creative, I guess we will call it the Artificial Teacher Agent (ATA) for now. If we accept that teachers deal with oughts, then perhaps it wouldn’t be too much of a stretch to go from moral systems to teaching decisions. After getting through the promise and problematics of Kantian and utilitarian ethics as models for machines to run, Maher spends a moment on Aristotelian virtue ethics and case-based ethics, providing an example of Marcello Guarini’s method that “uses a series of cases to train artificial ‘neural network models’ so that any kind of abstract moral rules rise organically from the situation rather than deontologically” (10). The machine would be educated in morals through this process of experiencing case-based ethics of sorts. If we maintain the assumption that teaching can be considered a series of oughts, we might also consider the dynamism and contingency that the situatedness of teaching provides: situations that maybe can be planned for, but perhaps never completely (situations that might somehow violate the rules and models we hold).

Some Teaching Oughts that I thought of (this is furious free association, so it could be a little suspect):

This student ought to spend time doing this kind of writing

I ought to spend more time in class working with students on their understanding of the assignment

I ought to grant an extension because the student has been dealing with a personal issue

I ought to reach out to this student to have a meeting about generating ideas for writing

I ought to be more stern today to send a message that the class or assignment will be difficult

I ought to be more friendly today to send a message that this is a space for experimentation

I ought to say very little today and do more listening to see where they are at

I read a book this summer called After Pedagogy: The Experience of Teaching by Paul Lynch. Lynch’s project is a response to a movement in composition studies called “postpedagogy.” A brief (and probably sloppy) attempt to define this movement would go something like this: work in the field has positioned developments in pedagogical theory as an impossible aim because, as Lynch writes, teaching is “too complex, too particular, too situated to be rendered in any repeatable and therefore portable way” (xiv). In other words, pedagogy cannot be relied upon as an a priori in order to reliably make good decisions as teachers. Lynch’s response to the movement is not a rebuttal or a counterargument but, drawing from John Dewey’s conception of experience (one that is very difficult to pin down), a move to develop a philosophy of experience in teaching writing that accounts for both the “raw data of everyday living” and “our methods of reflecting, repurposing, and learning from everyday living,” so that compositionists can make a sort of yin and yang of pedagogy and contingency (xix).

One of Lynch’s heavy investments in applying a philosophy of experience to teaching is the moral system of casuistry. Essentially, casuistry “asks whether and when circumstances change the ways in which we judge moral action….casuists check their judgments against paradigm and analogy and frame their decisions for the particular case at hand….a judgment holds only for the given case” (Lynch 104-105). For an application to classroom ethics, Lynch provides the following example:

“A student asks for an extension. ‘I am swamped in my other classes,’ she says. This claim (which students invariably fail to understand insults the teacher to whom they are talking) might not win much sympathy, until we consider that the student (a) has never missed a deadline, (b) has been stellar all semester long, (c) is holding down a full-time job, (d) is raising three kids alone, and (e) is a member of the honor society. Given these circumstances, teachers might be inclined to bend the rules a bit. This is a basic casuistic situation, in which circumstances seem to demand some deviation from the usual procedure” (111-112).

There is so much information available and imagined at any given moment in the classroom and in preparing for class. If we imagine an ATA like an AMA, then how great would it be (and possibly this would be a requirement, as it is for the AMA) if it were also an ARA, that is, if it also had to explain the decisions it made in the classroom. Lynch’s example is complex, but the classroom certainly has more potential for complexity. The words I choose as a teacher, the activities I design, the way I arrange the room, the questions I ask, the way I speak to students at any given moment, how I talk to one student vs. another (say, a student with low self-esteem and another with, well, plenty of esteem to go around): all of these are decisions made for one reason or another, and they may lead to myriad outcomes. Because of the situatedness of teaching, I might have no time or foresight to prepare to make some of these decisions (or adjustments to decisions already made). If an ATA had some sort of case-based learning ability, it might make some interesting and effective moves as a teacher.

Perhaps it wouldn’t work, but maybe, like Tenure, there could at least be some training value. Maher also seems to retreat to this sort of possibility about the potential to learn about morality and possibly update it for a digitized world (30). Teaching teachers, via intentional professionalization or more routine observations by administrators or other teachers, has long been held to be a difficult and onerous task, especially when implications for one’s career are implicitly or explicitly involved. Maybe having a machine report out the decisions it would make in your classroom could be an outstanding way to consider something like Lynch’s argument for a balance between pedagogy and contingency: considering and reflecting on what we’ve done in order to learn how to (loosely) teach tomorrow (only to follow the same pattern again). Having an ATA (that is also, necessarily, an ARA) explain its decisions to a teacher (novice or seasoned) might be promising for the future professionalization and training of teachers. This would ostensibly remove the interpersonal anxiety that sometimes manifests between and among teachers; the lack of a holistic subject with whom one has an ongoing social relationship (or, at least, could have one) might allow someone to let their guard down when considering the ATA’s case analysis. This might be something like the Rogerian ELIZA, which proved to be a great conversation partner. Rather than right or wrong answers, the ATA could provide options in decision making that the teacher may not have considered, and that information might inform future decisions (pedagogically a priori, or ad-libbed in the moment of teaching in the future).

Or it could be a terrible intervention that only encourages neoliberal surveillance and quests for “teacher effectiveness.” But I think it is interesting to imagine as a useful possibility.

Searching for a Better Title

The twitter-bot tutorial was great. I had no knowledge of what a twitter-bot was before that lesson, as I had never worked with twitter and had never looked into it very far either. The range of possible uses for twitter bots fascinates me: you can essentially make them do whatever you want, as long as you have the coding skills to pull it off. That was, for me, one of the minor negatives of the workshop. Despite having gone through the majority of the Codecademy tutorial and having learned a decent amount of coding, I still found that I did not know nearly enough to make a bot that would function anywhere near the way I would want it to.

The possibilities that I found most intriguing about the bot itself were not so much in posting to twitter as in its ability to do a significant amount of research for a user with very little effort. The search function that can be programmed into a bot, which I attempted to toy with a bit, was extremely intriguing. It could be a very useful and easy way to immediately search for and collect data on certain topics as they appear in tweets, such as frequency, positive or negative connotation, etc. I am sure that there are plenty of people out there currently exploring these functions in interesting ways, but on a personal level it would be worthwhile to be able to manipulate and use bots in this way.
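To make that concrete, here is a minimal sketch of what such a research bot might look like. It assumes the classic tweepy 3.x interface (where `api.search(q=...)` returns a list of statuses); the word lists and the `tally` helper are invented for illustration, and the actual twitter calls are left commented out since they need credentials:

```python
# Crude research-bot sketch. The POSITIVE/NEGATIVE word lists and tally()
# are hypothetical; the commented lines assume classic tweepy 3.x.

POSITIVE = {"love", "great", "awesome", "good"}
NEGATIVE = {"hate", "awful", "terrible", "bad"}

def tally(texts, positive=POSITIVE, negative=NEGATIVE):
    """Count how many texts lean positive, negative, or neither,
    by naive word matching."""
    counts = {"positive": 0, "negative": 0, "neutral": 0}
    for text in texts:
        words = set(text.lower().split())
        if words & positive and not words & negative:
            counts["positive"] += 1
        elif words & negative and not words & positive:
            counts["negative"] += 1
        else:
            counts["neutral"] += 1
    return counts

# With credentials set up as in the workshop:
# import tweepy
# auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
# auth.set_access_token(ACCESS_TOKEN, ACCESS_SECRET)
# api = tweepy.API(auth)
# texts = [status.text for status in api.search(q="#composition", count=100)]
# print(len(texts), tally(texts))   # frequency plus a rough connotation count
```

The `len(texts)` here stands in for the frequency measure mentioned above; anything more serious about connotation would of course need a real sentiment tool rather than word lists.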

Difficulty Tapping into the Stream

Twitterbots! It was really cool to get programming in Python “out in the wild” and actually see things happening as a result. There’s some childlike glee in seeing lines of code transformed into action, and that seems to be amplified when the output is coming from twitter rather than a simple printout. In addition to getting the bot to walk through lines of text, I was also able to get it to post photos randomly selected from some I’d downloaded to my computer, which took a little more finagling. What I would really love to do is get it set up so that it’s watching the twitter stream and responds whenever somebody tweets at it or uses a certain hashtag, but that turned out to be a whole new can of worms when I tried to get started during our workshop, especially since my hasty google search didn’t turn up any useful implementations of tweepy along those lines.

Overall, I think that’s the one thing that really stuck out to me: the frustrations in sifting through tweepy’s documentation trying to figure out how to do things. I’m thankful to have a library to take care of calls to the twitter API for me, but nothing sucks quite so much as knowing there’s a function defined in a class and not being able to figure out how to call it. Another option would be to just write a program that searches for the things I’m trying to respond to whenever I run it, but that’s not nearly as fun as having it actively monitor the stream live. I’ll probably play around with both implementations a bit and see if I can get either one working; all of the ideas I could come up with for bots were pretty lame, but I definitely like the idea of having it be responsive rather than just chunking out lines from a text file. Here’s hoping that instinctive excitement at writing a program and watching it tweet helps me power through the torrent of error messages that are sure to be coming my way…
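In case it helps anyone else hitting the same wall, the stream-watching version seems to go through tweepy’s `StreamListener`/`Stream` classes (the tweepy 3.x names; later versions renamed them). The listener class, hashtag, and reply text below are all hypothetical, and the reply logic is kept in a plain function so it can be poked at before going live:

```python
def build_reply(screen_name, text, limit=140):
    """Compose a reply tweet, truncated to the old 140-character limit.
    The reply text is a placeholder."""
    reply = "@{} thanks for the mention!".format(screen_name)
    return reply[:limit]

# import tweepy
#
# class MentionListener(tweepy.StreamListener):
#     def on_status(self, status):
#         # Called for every tweet matching the filter below.
#         api.update_status(build_reply(status.user.screen_name, status.text),
#                           in_reply_to_status_id=status.id)
#
# stream = tweepy.Stream(auth, MentionListener())
# stream.filter(track=["#mybothashtag"])  # blocks, watching the live stream
```

The run-it-when-you-feel-like-it alternative mentioned above would just swap the listener for a plain `api.search` call and loop over the results.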