I apologize if this post is a bit late. I was hoping I would be able to program a bot to complete this blogpost for me, along with all of my other assignments and duties. Alas, it was not to be. I sit here actually typing this message with my fingers like a common 20th century chump… or am I? Perhaps I am really a computational creation of the greatest bot-maker ever, pushing the Turing test to new limits even as we speak.
But seriously: observing the difficulty and hard work that go into making an interesting Twitterbot, even a simple one, made me think a bit. If computational media classes like this one take off and become part of the undergraduate curriculum one day, then perhaps a common assignment will be to have students make Twitterbots or other simple chatbots. I was imagining the hilarious situation where a student, unable or unwilling to really learn how to code a bot, stays up day and night typing inane chatter into Twitter, pretending to be a bot in order to fool the teacher into thinking the programming assignment was completed successfully. The instructor would have to engage in some kind of reverse Turing test to determine the authenticity of the bot. If machines can emulate humans, can humans also emulate machines? I mentioned this thought to a friend, who replied that a popular Twitterbot was recently exposed as a human. What new species of intellectual misconduct is this? Passing something off as NOT your original work!
I suppose the best chatbot would actually be a human-machine symbiote, with a program amalgamating all the relevant Twitter and headline chatter and suggesting possible humorous mashups, but with the human getting final review. If Twitter chatter is to be the new form of aesthetic expression, then let’s push it to the limit!
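To make the symbiote idea concrete, here is a minimal sketch of what that division of labor might look like. Everything here is invented for illustration: the function names (`suggest_mashups`, `human_review`) and the crude splicing rule (first half of a tweet grafted onto the second half of a headline) are hypothetical stand-ins for a real amalgamation step, with the human editor keeping final say.

```python
import random

def suggest_mashups(tweets, headlines, n=3, seed=None):
    """The machine half of the symbiote: splice the first half of a random
    tweet onto the second half of a random headline and return n candidate
    mashups for a human editor to look over."""
    rng = random.Random(seed)  # seedable for reproducible suggestions
    candidates = []
    for _ in range(n):
        t = rng.choice(tweets).split()
        h = rng.choice(headlines).split()
        # Crude amalgamation: front of the tweet + back of the headline.
        candidates.append(" ".join(t[: len(t) // 2] + h[len(h) // 2 :]))
    return candidates

def human_review(candidates, approve):
    """The human half: only mashups the editor approves survive."""
    return [c for c in candidates if approve(c)]

if __name__ == "__main__":
    tweets = ["just had the best coffee of my life honestly"]
    headlines = ["Markets Tumble As Robots Demand Vacation Days"]
    for mashup in human_review(suggest_mashups(tweets, headlines),
                               approve=lambda c: "Robots" in c):
        print(mashup)
```

A real version would pull candidate text from the Twitter and news APIs and use something smarter than midpoint splicing (a Markov model, say), but the shape stays the same: the program proposes, the human disposes.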
This post was supposed to end brilliantly, but my chat program still seems to experience bugs while winding its random musings into a succinct conclusion. Perhaps one day someth
>>> 10 * (1/0)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ZeroDivisionError: integer division or modulo by zero
>>> 4 + spam*3
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
NameError: name 'spam' is not defined
>>> '2' + 2
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: cannot concatenate 'str' and 'int' objects
Your point about passing deliberately crafted tweets off as a bot in future coursework is one that I think will only become more important, and it reminded me of the story of horse_ebooks a few years back (http://mashable.com/2013/09/24/rip-horse-ebooks/#b0HnpUqSvkk9). The account had been the subject of so much speculation (positive and negative) prior to the reveal, and it was interesting to see how many people were disappointed that the text had been generated by humans rather than by some kind of bot.