I apologize if this post is a bit late. I was hoping I would be able to program a bot to complete this blogpost for me, along with all of my other assignments and duties. Alas, it was not to be. I sit here actually typing this message with my fingers like a common 20th-century chump… or am I? Perhaps I am really a computational creation of the greatest bot-maker ever, pushing the Turing test to new limits even as we speak.
But seriously: observing the difficulty and hard work that go into making an interesting Twitterbot, even a simple one, made me think a bit. If computational media classes like this one take off and become part of the undergraduate curriculum one day, then perhaps a common assignment will be to have students make Twitterbots or other simple chatbots. I was imagining the hilarious situation in which a student, unable or unwilling to really learn how to code a bot, stays up day and night typing inane chatter into Twitter, pretending to be a bot in order to fool the teacher into thinking the programming assignment was completed successfully. The instructor would have to conduct some kind of reverse Turing test to determine the bot's authenticity. If machines can emulate humans, can humans also emulate machines? I mentioned this thought to a friend, who replied that a popular Twitterbot had recently been exposed as a human. What new species of intellectual misconduct is this? Passing something off as NOT your original work!
I suppose the best chatbot would actually be a human-machine symbiote, with a program amalgamating all the relevant Twitter and headline chatter and suggesting possible humorous mashups, but with the human getting final review. If Twitter chatter is to be the new form of aesthetic expression, then let’s push it to the limit!
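The symbiote workflow described above could be sketched in a few lines of Python. Everything here is hypothetical and invented for illustration (the function names, the sample chatter): the program mechanically crosses pooled headlines and tweets into candidate mashups, and a human gets the final review, keeping only the ones that actually land.

```python
import itertools


def suggest_mashups(headlines, tweets, limit=5):
    """Mechanically cross headlines with tweets to produce
    candidate mashups for a human to review."""
    candidates = []
    for head, tweet in itertools.product(headlines, tweets):
        candidates.append(f"{head} -- {tweet}")
        if len(candidates) >= limit:
            break
    return candidates


def human_review(candidates, approved_indices):
    """The human half of the symbiote: keep only the mashups
    a person actually found funny (indices chosen by hand)."""
    return [candidates[i] for i in approved_indices]


# Hypothetical sample chatter, standing in for scraped feeds.
headlines = ["Markets tumble", "Robot wins poetry prize"]
tweets = ["my cat disagrees", "still waiting for the bus"]

suggestions = suggest_mashups(headlines, tweets)
keepers = human_review(suggestions, [1])  # the human picks one
```

The machine supplies brute combinatorial volume; the human supplies taste. That division of labor is the whole design.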
This post was supposed to end brilliantly, but my chat program still seems to experience bugs while winding its random musings into a succinct conclusion. Perhaps one day someth
>>> 10 * (1/0)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ZeroDivisionError: integer division or modulo by zero
>>> 4 + spam*3
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
NameError: name 'spam' is not defined
>>> '2' + 2
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: cannot concatenate 'str' and 'int' objects
I was struck by the way in which he did several things at once. Sure, he helped us follow a functional arc of code rooted in libraries and the sine qua non of digitalia, copy/paste. And boy did he define a host of terms, such as API and ASCII and Unicode, while leaving some opaque (such as “web scraping”). But he also unfolded bits of “brogrammer” lore (his word) and touched upon points of cultural critique, such as the corporate desire to index our online avatars to our consumptive bodies.
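For anyone to whom "web scraping" remained opaque, here is a minimal sketch using only Python's standard library. The idea is simply to pull structured bits of text (say, headlines) out of raw HTML; the sample page below is invented, and in practice the HTML would arrive via an HTTP request rather than a string literal.

```python
from html.parser import HTMLParser


class HeadlineScraper(HTMLParser):
    """Collects the text inside <h2> tags -- the kind of
    extraction a headline-mashing Twitterbot might run."""

    def __init__(self):
        super().__init__()
        self.in_h2 = False
        self.headlines = []

    def handle_starttag(self, tag, attrs):
        if tag == "h2":
            self.in_h2 = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self.in_h2 = False

    def handle_data(self, data):
        if self.in_h2 and data.strip():
            self.headlines.append(data.strip())


# Hypothetical stand-in for a fetched news page.
page = ("<html><body><h2>Bot Exposed as Human</h2>"
        "<p>story text</p><h2>Markets Tumble</h2></body></html>")
scraper = HeadlineScraper()
scraper.feed(page)
```

Real scrapers usually lean on third-party libraries and have to cope with messy markup, but the core move is the same: parse the page, keep the pieces you want, ignore the rest.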
He reminded me of a foreign language instructor with whom I am currently taking a class (beginning Turkish). She shows us with infinite patience how a difficult yet extremely regular grammar functions at a code-like level, and intersperses this work with cultural asides in rapid-fire frames. In both cases I see a translation from semiotic domain to semiotic domain, to use a term from Gee’s What Video Games Have to Teach Us About Learning and Literacy. Critical learning, Gee claims, is a matter of both internal and external design grammars — both the content within the design space afforded by a given domain, and the ways of thinking, acting, and being that constitute that domain in the social sense (which many of us felt was spottily addressed by Bogost’s Persuasive Games).
instruction, the chance to ask questions, the comestible goldfish. Even so, I felt able to grasp some of the contours of what we were doing. I wasn’t just copying & pasting blindly, but instead thinking about the shareable nature of Python code, the value of open source libraries, and how so much of the latticed software constituting my humble bots is out there, above the virtual