I can’t make it to CCCC2013, sadly! But here’s a video and transcript to represent me there.
Last Wednesday, in my Uses of Literacy class, we performed a social writing experiment. I’ve done similar forms of activities in the past, and it’s always interesting, so I thought I’d share. I posted a response to my students on the course blog, but I wanted to reiterate it here for any teachers interested in doing something like this in another writing class.
A quick background on the class: it’s an upper-level composition class that draws a lot of students looking to get into Pitt’s Master’s in Teaching program, along with returning students and a few seniors looking to fill the writing requirement who aren’t necessarily interested in a literature course. We don’t have composition majors at Pitt, but some students are lit or creative writing majors. I teach the course as a kind of intro to literacy studies, with a lot of focus on pedagogy. I tend to be straight with them about why I ask them to do certain kinds of writing or reading, or why we’re discussing what we’re discussing, because I want them to see teaching from the inside, just a little bit, before they begin to do it themselves. They do literacy narratives, interviews, mini-ethnographies, and blogging in class. Later, we do a digital project of some persuasion. I’ve taught the course twice before, and this semester I have just 13 students–a dream for me and the students! The syllabus is here [pdf] and the blog for the course is here.
Last Monday, we discussed Deborah Brandt’s “Remembering Reading, Remembering Writing,” and talk swirled around ideas of writing as individualistic and reading as social. Students lamented the fact that writing was often portrayed as something to do in isolation, and something for which they are often judged. Perhaps because many of them aspire to careers in teaching, they wanted to fix the problem (although I kept pressing them to understand what the “problem” was before jumping in to judge and fix!).
So I decided to run a little social writing experiment in our next class on Wednesday. It’s a wonderful class and they were good sports about it. I told them that they weren’t being graded and nothing was going to come of the writing, so they could feel free to treat it as a genuine experiment, subject to success, failure, or some grey area in between. I wanted to see and have them see what social writing looks and feels like. I set up four laptops (checked out for the day from our IT support center) and had them split up into groups of three (my ideal group size, always), one group to a laptop. Each computer had a Google doc open with a question: What is literacy good for? What is literacy? What do social theories of literacy help us to understand? What open questions do you have? Each group spent 10–15 minutes on each question. I asked that they not just make lists, but compose. That meant that they had to write, agree on sentence choices, settle on some form of organization, etc. They were writing together in their group, but also creating a palimpsest of answers and inquiries with other groups.
Results? They said, when we wrapped up at the end, that the experiment was a success. It helped them review the theories of literacy we had encountered so far and ask questions about what they didn’t yet understand. One student mentioned it might have been good preparation for a test, had I been inclined to give one–which I am not! (This class is assessed via portfolio, rather than exams. As I pointed out in class, professors give exams in order to get students to study and learn the material–not because they like to read or grade exams! Since students already did the work of review, an exam would be a waste of all of our time.) Students had to negotiate a space of shared writing–and many felt anxiety about changing or deleting the work of previous groups (although that did happen!). They also noted that it made class go by faster, because it was fun and they were conferring and conversing the whole time.
I think there is a lot of smart synthesis represented in the docs they composed in class. Obviously, they’re not polished papers. But that wasn’t the point. There’s a lot I heard in the discussions that isn’t captured in the text. Interestingly, I saw each group approach the problem differently. Some groups all huddled around the computer viewing the screen together. Others had one person read or summarize the work of previous groups (which was noted in discussion afterward to be sometimes difficult to follow). Some got so caught up in the debate that they didn’t or couldn’t write down most of what was discussed. Others carefully composed polished sentences to sum up ideas and provocations in the readings. As students are discovering when we share writing in class, there are many different ways to approach writing events. In this group of highly literate college juniors and seniors, there appears to be no one “right” way to write.
Here are links to the documents they produced in class (will open a Google doc for you to view, but not edit):
Has anyone else tried a similar experiment with social writing? What did you do, and how did it go?
I hate grammatical diatribes and so I hesitate to write one. (So stereotypical for an English professor to do this! Might as well get a suit jacket with elbow patches, etc.) But this diatribe—unlike all others before it—is important. It’s about hacking and making and reclaiming an excellent word from misuse.
I have often used this formula to evaluate the degree to which academic prose is overwrought:
instances of “utilize” / instances of “use” + instances of “always already” = overwrought
OK, not really, but I think it would work. That is to say, when people use the word utilize, they generally just mean use, but want to add weight to their sentence. Example: “Scholars can utilize theories of ubermenschenism to cogitate on the prodigious output of members of the canine species.” Because weightiness is what academic prose is all about!
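For fun, here’s what the joke formula might look like as an actual script. This is a toy sketch in Python; the `overwroughtness` function and its word lists are my own invention, not a real metric:

```python
import re

def overwroughtness(text):
    """Tongue-in-cheek score: ratio of 'utilize' forms to 'use' forms,
    plus occurrences of 'always already'."""
    words = re.findall(r"[a-z']+", text.lower())
    utilize = sum(1 for w in words if w.startswith("utiliz"))
    use = sum(1 for w in words if w in ("use", "uses", "used", "using"))
    always_already = len(re.findall(r"always already", text.lower()))
    return utilize / max(use, 1) + always_already

sample = ("Scholars can utilize theories of ubermenschenism to cogitate "
          "on the prodigious output of members of the canine species. "
          "We always already use such prose.")
print(overwroughtness(sample))  # scores a weighty 2.0
```

Run it on your own prose at your peril.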
But utilize is not a synonym for use. In fact, it’s far better than that! Here’s the Oxford English Dictionary definition:
To make or render useful; to convert to use, turn to account.
The key here—to me, at least—is the making and rendering. You can use something readymade, but if you have to do something to it to render it useful for your purposes, you must utilize it. Here’s the website Editage with a nice, concise explanation:
utilize is not simply a synonym for use but suggests a less common alternative or deployment for a different purpose: to utilize ordinary ink for staining, to utilize a dew drop for magnification, or to utilize sand particles as the means of increasing friction, for instance.
Which means that the tragically misused word utilize is not just a convenient proxy for academic BS. When used correctly, utilize is a readymade term for hacking! Just think of the possibilities:
- I utilized the coasters and boxes to make a standing desk. (true story—utilizing them right now!)
- The kids utilized the couch, tent canvas and appliance boxes to build their fort.
- The massive data trails we leave online are ripe for digital humanists to utilize.
Which brings me to an important question: can you utilize an Arduino? That is, can you utilize something whose uses are deliberately left unprescribed? Perhaps if you use the Arduino as a coaster? (except for the awesome RFID/Arduino wine-bottle-labeling coaster below)
At any rate, I say we reclaim the term from the depths to which it’s been rigorously plummeted, er, sunk.
With the recent uptick in the “everyone should code” movement, it seems that everyone’s now talking about computer programming as a new form of literacy. The terms by which people refer to the concept vary, but the central idea is shared: computational literacy; computational thinking; procedural literacy; proceduracy; computer literacy; iteracy. I’ve been working in this area for a few years now from the perspective of literacy studies, and I thought it might be a good time to share an annotated list of resources that I’ve found helpful in thinking through computer programming as a literacy. Chris Lindgren assembled a bibliography before me, and there’s a lot of overlap here. I’m inclined to say that the overlap points toward a burgeoning canon, although that recognition comes with the requisite wincing about a lack of gender/race diversity here.
I’ve listed just online or print texts, and the list tends toward the academic and historical. My Diigo library, assembled over the last few years with the tag “proceduracy”, is a better resource for public discussions about computer programming as a literacy.
I decided to list these in rough order of importance, which is incredibly subjective. I’ve broken the central sources up into a few categories: Really Important Stuff; Blogs & Online Writings; Dissertations; Work in English Studies. This is not to claim that there aren’t overlaps (e.g., something can be important and online!) but just to organize it a bit. After the central list of sources for programming and literacy, I’ve included a list of related work that people might want to read in computer history, pop books, code studies, and composition & rhetoric.
Of course, the whole list is partial and biased! I welcome additions and reactions in the comments or via other contact media.
Here’s the full document, available through Scribd. Below that, I’ve pasted just the bibliographic information. [Edit 6/7: added a couple more sources.]
Really Important Stuff
Papert, Seymour. Mindstorms: Children, Computers, and Powerful Ideas. New York: Basic Books, Inc., 1980. Print.
Maurice Black’s “The Art of Code” is an excellent dissertation that is, unfortunately, very hard to get because it only exists in a single, print-copy form that must be requested through the University of Pennsylvania library. (I don’t think the author has ever published from it, but please alert me if so! I think he might have left academia.) I’ve been meaning to upload these notes for ages in order to provide a bit broader circulation to the ideas. So here you go! Available on Scribd.
Incidentally, Nick Montfort’s notes on the text are great (and are what persuaded me to read the dissertation).
Black, Maurice. “The Art of Code.” University of Pennsylvania, Department of English, 2002. Print.
David Rieder (with a little help from me) organized a Town Hall focused on programming at the Computers and Writing conference at North Carolina State University this last week (May 19). The topic was: “Program or be Programmed: Do We Need Computational Literacy in Computers and Writing?” and the panelists were David Rieder, me, Mark Sample, Alexandria Lockett, Karl Stolley, and Liz Losh as respondent.
From the questions, the Twitter backchannel [edit: see Mark Sample's backchannel archive of #cwcon], and comments I got from the audience after the Town Hall, it appears to have been a success. For those not already thinking about this question, we got people thinking about it. For those already thinking about this question (which was most of the audience, I think), we said some controversial things, some anxiety-producing things, and some things that elicited lots of head-nods.
I’ve pasted my comments on “Coding Values” below. You can find the text of the other panelists’ comments here:
- David Rieder’s “Programming is the New Ground of Writing” [pdf]
- Mark Sample’s “5 BASIC statements on computational literacy”
- Alexandria Lockett’s “I am not a Programmer”
- Karl Stolley’s “Source Literacy: A Vision of Craft”
- Liz Losh’s response
Today I want to talk about good code. Experienced programmers often think about what good code is. But they rarely agree.
And here’s what I want to say: they don’t agree on what good code is because there is no good code. Or, rather, there is no Platonic Ideal of Good Code. Like writing, there is no good code without context.
Unfortunately, when good code is talked about, it is often talked about as if there’s no rhetorical dimension to code. It’s talked about as though the context of software engineering were the only context in which anyone could ever write code. As if digital humanists, biologists, web hackers, and sociologists couldn’t possibly bring their own values to code.
I’ll give you just a couple of examples of how this happens, and what this means for us in computers and writing.
One of the earlier articulations of the supposed Platonic Ideal of Good Code was Edsger Dijkstra’s infamous “GOTO considered harmful” dictum, from 1968.
This article railed against unstructured programming and the GOTO command. Now, many of us first learned the joy of coding through languages that used the GOTO command. But Dijkstra’s statement suggests that the context of the software engineering workplace should override all other possible values for code. This is fine—as far as it goes, which is software engineering and computer science. But this kind of statement of values is often taken outside of those contexts and applied in other places where code operates. When that happens, the values of hacking for fun or for other fields are devalued in favor of the best practices of software engineering—that is, proper planning, careful modularity, and unit testing.
Ender7 is replying here to a thread about a recent Scientific American story that suggested scientists were reluctant to release the code they used to reach their conclusions, in part because they were “embarrassed by the ‘ugly’ code they write for their own research.” According to Ender7, they *should* be ashamed of their code. Ender7 goes on to say:
Why is academic code an “unmitigated nightmare” to Ender7? Because it’s not properly following the rules of software engineering. Again, the rules of software engineering presumably work well for them. I’m not qualified to comment on that. But that doesn’t mean that those values work for other contexts as well, such as biology.
So, in this example, software engineering’s values of modularity, security, and maintainability might be completely irrelevant to the scientist writing code for an experiment. If scientists take care to accommodate these irrelevant values, they may never finish the experiment, and therefore never contribute to the knowledge base of their own field. The question, then, isn’t about having good values in code; it’s about which values matter.
We often hear how important it is to have proper grammar and good writing skills, as if these practices had no rhetorical dimension, as if they existed in a right or wrong space. But we know from writing studies that context matters.
Put another way: like grammar, code is also rhetorical. What counts as good code and what counts as bad code should be judged by the context in which the code operates. Just as rhetorical concepts of grammar and writing help us to think about the different exigencies and contexts of different populations of writers, a rhetorical concept of code can help us think about the different values for code and different kinds of coders.
And this is how coding values are relevant to us in computers and writing. The contingencies and contexts for what constitutes good code aren’t always apparent to someone just beginning to learn to code, in part because the voices of people like Ender7 can be so loud and so insistent. We know from studies on teaching grammar and writing that the overcorrective tyranny of the red pen can shut writers down. Empirical studies indicate it’s no different with code. Sure, there are certain ways of writing code that won’t properly communicate with the computer. But the circle of valid expressions for the computer is much, much larger than Ender7 or Dijkstra insist upon.
To close, I want to share with you a bit of what might be considered very ugly code, a small Logo program I call, tongue-in-cheek, “codewell”:
This is bad code because:
- it is uncommented and hard to read
- it’s in an old, seldom-used language
- it is baggy and has repeated statements that should be rewritten as functions
- it is not modular or reusable
- it’s an “unmitigated nightmare”

But it’s also good code because:

- it could be used to teach people some things about functions and code
- it’s a start for a Logo library of letters that might be kind of cool
- it does what I want it to do, namely, make my argument in code form.
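The Logo listing itself isn’t reproduced here, but to give a flavor of what I mean, here’s a sketch in the same deliberately unpolished spirit (Python standing in for Logo; the letter shapes are illustrative, not the original program):

```python
# Deliberately "bad" code in the spirit of codewell: repeated,
# unfactored statements that a software engineer would insist on
# refactoring into a loop or a letter-drawing function.
rows = []
rows.append("CCC  OOO  DD   EEE")
rows.append("C    O O  D D  E  ")
rows.append("C    O O  D D  EE ")
rows.append("C    O O  D D  E  ")
rows.append("CCC  OOO  DD   EEE")
for row in rows:
    print(row)
```

It works, it says what it needs to say, and by the standards above it is both terrible and perfectly good.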
Let’s imagine a world where coding is more accessible, where more people are able to use code to contribute to public discourse or solve their own problems, or just say what they want to say. For that to happen, we need to widen the values associated with the practice of coding. To Edsger Dijkstra, I’d say: coding values that ignore rhetorical contexts and insist on inflexible best practices or platonic ideals of code should be CONSIDERED HARMFUL – at least to computers and writing.
The CCCC Intellectual Property Committee publishes an annual review of interesting IP developments geared especially toward composition scholars. You can download a copy of the IP Annual here [pdf], under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States license.
Here’s the table of contents:
- Introduction: Copyright and Intellectual Property in 2011 (Clancy Ratliff, Editor)
- The Defeat of the Research Works Act and Its Implications (Mike Edwards)
- Open Access Initiatives (Annette Vee)
- One Step Forward, Two Steps Back: What Golan v. Holder means for the Future of the Public Domain (Traci Zimmerman)
- “Sentence First—Verdict Afterwards”: The Protect IP and the Stop Online Piracy Acts (Kim D. Gainer)
- A Dark Day on the Internet Leads to a Sea Change in Copyright Policy (Laurie Cubbison)
- Occupy Trademark: Branding a Political Movement (Timothy R. Amidon)
The best thing about Creative Commons licensed and open access work is that we can distribute it more widely. In that spirit, I’m pasting my contribution to the IP Annual below (modified slightly: more linkified than in the pdf version). I wrote this as a short but relatively comprehensive review of the current status of open access initiatives in universities: what they are; why they’re happening; and what they mean for us as scholars. At the end, I include some good resources for understanding and negotiating open access scholarship.
Open Access Initiatives
by Annette Vee, excerpted from the IP Annual, published by CCCC IP Committee [pdf]
In September 2011, the Princeton University Faculty Senate approved an “open
access” policy for faculty research, adding the university’s name to a growing
list of research institutions opting for such policies. Harvard University adopted
a similar policy in 2008 (the first of its kind in the United States) and MIT did
in 2009. Following the lead of these elite institutions, many others have adopted
or are considering adopting open access policies, including University of
Pittsburgh, Columbia University, and Emory University. These initiatives aren’t
limited to the United States, either: University of Glasgow (Scotland), University
of Latvia, and University of Khartoum (Sudan) all have participated in open
access discussions and initiatives on campus (“Open Access Call”). A dramatic
graph of the increased numbers in open access initiatives can be seen at the
Registry of Open Access Repositories Mandatory Archiving Policies (ROARMAP).
The move in “open access” from buzzword to policy affects the
publication, circulation, and readership of our scholarship. These effects are
largely positive for writing researchers: greater circulation for our work;
enlarged rights and control over our scholarship; and new venues and formats
for publication. This brief report outlines trends in open access initiatives, some
of their recent precedents, and a few of the most salient implications for our
scholarship.
What Is Open Access?
Open access (OA) literature is freely available online and has fewer restrictions
on its use. According to Peter Suber, the Director of Harvard’s Open Access
Project, “OA removes price barriers (subscriptions, licensing fees, pay-per-view
fees) and permission barriers (most copyright and licensing restrictions).” OA
policies are often explained in terms of the labor, funding, and distribution of
scholarship: faculty contribute the bulk of labor for journals through their
writing and editing; faculty work is generally funded by universities and public
institutions; and free access to this work allows for greater distribution of
scholarship as well as some return to the public for funding its production. OA
scholarship is compatible with peer review: although scholars can make their
research available on blogs or institutional repositories without peer review, the
paradigm of OA policies is traditional, peer-reviewed scholarship.
Two major forces are currently moving scholarship towards OA. The first
originates from faculty or universities, and Princeton’s, Harvard’s, and MIT’s
open access policies for faculty research are examples. The second originates
from publication venues such as journals; examples are Springer Open, and the
journals Kairos, Enculturation, and Digital Humanities Quarterly, which publish
scholarship online without paywalls or logins. Working in concert with both of
these forces are repositories for OA scholarship such as BioMed and ERIC.
Faculty OA Policies
The background copyright policy of most research universities assigns copyright
ownership in scholarship to the faculty who produce it. This copyright
ownership assignation distinguishes university faculty from most other kinds of
employees, whose “work for hire” basis means that their employers own the
copyright in their work. As copyright owners in their work, university faculty
are then at liberty to assign their copyrights to whomever they choose. Through
a Copyright Transfer Agreement, journal publishers often request copyright
ownership in exchange for publication of scholarship. Publishers may then
license back to the author limited distribution or reuse rights.
OA policies such as those at Harvard, MIT and Princeton are designed to
help faculty either reclaim some of those rights from publishers or to better
position them to bargain for retaining their copyright. Princeton’s policy states:
Each Faculty member hereby grants to The Trustees of Princeton University a
nonexclusive, irrevocable, worldwide license to exercise any and all copyrights
in his or her scholarly articles published in any medium, whether now known or
later invented, provided the articles are not sold by the University for a profit,
and to authorize others to do the same. […]The University hereby authorizes
each member of the faculty to exercise any and all copyrights in his or her
scholarly articles […]. (“Recommended open access policy” [pdf]).
Under this policy (which echoes Harvard’s), the author and the university can
both exercise copyrights; both have rights to distribute the work as long as they
do so without making a profit from it.
Faculty-driven OA policies can be classified as “opt-in” or “opt-out.” An
“opt-out” policy (such as the one adopted by Harvard, MIT and Princeton) is
more powerful—it is in force unless a faculty member requests to opt-out of it,
whereas the “opt-in” policy (adopted by Nebraska, Emory, and Michigan) is only
activated if a faculty member opts in. Because opting-out of the policy is made
relatively easy for faculty—for instance, Harvard offers an online waiver request
form—one might suspect the policy to be of less force in practice. However, as
the Princeton Faculty Committee explains, universities can use an “open-access
policy of this kind (even with waivers) to lean on the journals to adjust their
standard contracts so that waivers are not required, or with a limited waiver that
simply delays open-access for a few months.” Additionally, while faculty under
an “opt-out” policy can assign their copyright to a publisher, they cannot sign
away their university’s right, which means that the university can still freely
distribute that work, generally in an institutional repository.
Faculty OA policies also differ in terms of their deposit requirement—that
is, where the scholarship must be deposited to comply with the OA policy.
Harvard’s policy requires that faculty deposit their work in their OA repository,
DASH (http://dash.harvard.edu/). Princeton has no such repository (although the
faculty recommended the development of one when they approved the OA
policy) and does not require deposit. At Princeton, faculty can elect to deposit
their work in a repository specific to their field (e.g., PubMed or arXiv). Many
universities who do not yet have an official OA policy for faculty provide online
repositories for faculty to publish their work, for example: University of
Pittsburgh’s D-Scholarship@Pitt (http://d-scholarship.pitt.edu/), and University
of Illinois’ IDEALS (http://www.ideals.illinois.edu/).
Along with the trend in faculty-driven OA policies, a number of OA journals
have cropped up in the last few years. Most prominent are the Public Library of
Science journals (PLoS One, PLoS Biology, etc., http://www.plos.org/), which
publish print articles alongside digital versions. To cover costs, PLoS charges
authors’ sponsoring institutions for publication. Recently-launched humanities
journals such as the International Journal of Learning and Media (http://ijlm.net/)
and the International Journal of Communication (http://ijoc.org/) are sponsored
by hosting universities (MIT and USC, respectively) and grants. The rhetoric and
technology journal Kairos (http://www.technorhetoric.net/), operating as an
online open access journal since 1996, relies on grant support as well as support
from editors’ institutions.
The OA journals mentioned above are peer-reviewed and have editorial
boards comprised of leading scholars in their fields, proving that OA publishing
can be just as competitive and prestigious as publishing behind paywalls.
Why the Recent Trend in OA Initiatives?
OA has been driven by shrinking university budgets, better software platforms
for distribution, and faculty’s increasing recognition that wider distribution and
publicity means higher citation counts and better reputation. As it has become
easier and more accepted to do so, more and more faculty distribute their work
on public archives, blogs, or personal websites, and OA initiatives echo that trend.
While university budgets have been cut worldwide, the cost of journal
subscriptions has risen. Libraries are forced to make difficult choices about what
to cut, yet the major commercial journal publishers have relatively high profit
margins. These financial concerns have become political concerns as well: why is
university research, much of it publicly funded, not freely available to the
public? University of Pittsburgh math professor Thomas Hales quips, “We
researchers create the content of the journals. We conduct the research, write
the articles, referee the papers and staff the editorial boards. We do this for free
every morning and buy the publications back again in the evening” (“Protest
Launched“). In a recent Inside Higher Ed editorial, provosts of eleven large,
publicly-funded research universities wrote in support of OA scholarship: “we
believe that open access to such federally-funded research reports facilitates
scholarly collaboration, accelerates progress, and reinforces our government’s
accountability to taxpayers and commitment to promoting an informed citizenry
essential to the enduring stability of our democracy.” With shrinking public
funding, faculty researchers are realizing that we are not isolated from
economics and politics. The push for OA scholarship is, in some ways, a response
to the economic and political forces of corporatization and anti-intellectualism.
These economic and political concerns about scholarship are
underscoring shifts in scholarship itself—moves toward digital scholarship in the
humanities and full, published datasets in the sciences. The Internet allows for
more complex scholarship to be published; slowly, that scholarship is being
done, and journals are publishing it. A wave of books about the crisis of the book
—notably Ted Striphas’s Late Age of Print, and Kathleen Fitzpatrick’s Planned
Obsolescence (which specifically addresses the scholarly monograph)—have
highlighted the fact that our traditional, print-based and commercially outsourced
publishing model is untenable. Recently developed institutions and
technologies offer excellent support systems for OA publishing; these include
Creative Commons Licensing (http://creativecommons.org/), Open Journal
Systems (http://pkp.sfu.ca/?q=ojs), and SPARC (http://www.arl.org/sparc/).
Changes in publishing, politics, budgets, and technology have all
contributed to this trend toward OA scholarship. However, recent OA initiatives
have a rich lineage. The dominant repository for math, statistics and physics,
arXiv (http://arxiv.org/) was started in 1991, and its first web interface was
installed in 1994. While not peer-reviewed, this repository is the definitive record
for those fields, due in part to its comprehensiveness and its affordance of rapid
publication. The wide acceptance of the repository has enabled researchers in
these fields to negotiate with publishers for distribution rights to their work. Out
of a December 2001 meeting of the Open Society Institute (OSI) grew the Budapest
Open Access Initiative. This influential initiative strove to accelerate
progress in the international effort to make research articles in all academic
fields freely available on the internet. In 2008, the National Institutes of
Health (NIH) began mandating that research it funded must be made publicly available
within a year of publication. Because so much medical research is at least
partially supported by the NIH, this mandate instantiated a de facto OA policy
for the field of medicine.
More specific targeting of commercial publishing has put a finer—and
more political—point on OA initiatives. In 2003, the Turing Award-winning
computer scientist Donald Knuth led a widely-publicized revolt against Elsevier,
the publisher for the Journal of Algorithms, which he had edited since 1980. In a
comprehensively researched letter to the JoA board [pdf], he outlined the paradox of
Elsevier’s decrease in publication costs and its increasing price for the journal.
Knuth (the originator of TeX, the popular typesetting system for math and
computer science) notes that in 1980 the publisher performed the typesetting,
keyboarding, and proofreading, “[b]ut now, the authors have taken over most of
that work,” and yet the price of the journal kept rising. Moreover, he was
skeptical of Elsevier’s claim to need exclusive publication rights to avoid
apocryphal publications and make the scientific record “clear and unambiguous”
(Knuth 8). He called for a straw poll of the editorial board to decide whether to
stick with Elsevier. As a result, the Editorial Board resigned en masse in 2004 to
found the journal Transactions on Algorithms, published by the professional
organization ACM. Ironically, Knuth closed his letter by stating, “I’m
emphatically not a revolutionary. I just want to do the right thing.”
Another accidental revolutionary, Fields Medal-winner Tim Gowers,
launched a highly publicized action against Elsevier in early 2012, declaring a
position he had long held: he would
no longer review for or publish in Elsevier journals. He cited their high prices,
unorthodox practices of “bundling” journals and their support of the Research
Works Act (H.R. 3699), which threatened to undo some of the work NIH’s OA
mandate had done. His post was a spark in dry tinder: a commenter on his blog
responded by setting up a website, “The Cost of Knowledge”
(http://thecostofknowledge.com/), to collect signatures for other scholars
interested in taking a public stand against Elsevier. The successful protest drove
Elsevier to drop its support of the Research Works Act and has raised awareness
among faculty about the predatory business practices of Elsevier and other
commercial publishers.
As a result of all of these forces encouraging OA scholarship, next year’s
IP annual report is likely to list quite a few more schools and journals committed
to open access.
What Does “Open Access” Mean for Our Scholarship?
OA policies often allow for greater authorial control in publications, as they
permit researchers to retain their copyright. With copyright ownership,
researchers are free to distribute their work on personal websites and in
institutional and collective repositories, where it is indexed by finding tools such as
Google Scholar. Greater dissemination of scholarly work could lead to better,
more well-informed research. A 2001 article in Nature Debates was the first to
recognize that OA scholarship is more frequently cited (Lawrence), but this
finding has been confirmed through subsequent studies (for a more complete list
of articles charting dissemination of research in OA, see here:
http://opcit.eprints.org/oacitation-biblio.html). Moreover, OA scholarship is
available to independent researchers or those associated with less affluent
institutions. As research institutions in developing countries are growing
stronger, and as faculty positions associated with elite institutions with vast
libraries become more rare, the greater availability of scholarship may help to
erase some of the resource disparities between research institutions worldwide.
PLoS argues that the benefits of OA scholarship are:
Accelerated discovery. With open access, researchers can read and build on the findings of others without restriction.
Public enrichment. Much scientific and medical research is paid
for with public funds. Open access allows taxpayers to see the
results of their investment.
Improved education. Open access means that teachers and their
students have access to the latest research findings throughout
the world.
As the PLoS argument suggests, OA has implications for our teaching as well as
our research. Students, under financial pressure from a retracting economy and
tuition hikes, can access OA scholarship more easily and cheaply than work
behind paywalls. Additionally, OA education initiatives such as free online
courses at MIT and Stanford are in line with the trend in OA scholarship. The
OA repository Open.Michigan strives to make course materials available not
only to members of their university community, but also to the public at large.
Although OA scholarship is clearly able to maintain high quality
standards, it is unclear whether it is compatible with the commercial journal
publishing system over the long run. Financing of journal publishing may be
taken up more by public grants and universities, which may lead to some painful
transitions in journal quality and budgets. Yet sanguine OA advocates claim
these risks are worth taking because OA promises so much for democracy,
education, and public knowledge.
Sherpa/Romeo allows people to check the copyright policies of journals and rates them according to their policies on open access: http://www.sherpa.ac.uk/romeo/
DSpace is a turnkey, open source software platform for establishing institutional repositories: http://www.dspace.org/
OJS (Open Journal System) is an open source journal management and publishing platform sponsored by the Public Knowledge Project: http://pkp.sfu.ca/?q=ojs
The Directory of Open Access Repositories registers OA repositories worldwide: http://www.opendoar.org/
SPARC (Scholarly Publishing and Academic Resources Coalition) provides an author addendum to add to copyright transfer agreements: http://www.arl.org/sparc/
Director of Harvard Open Access Project and SPARC Senior Researcher Peter Suber’s Open Access Overview: http://www.earlham.edu/~peters/fos/overview.htm
Harvard’s Model Open Access Policy for institutions: http://osc.hul.harvard.edu/sites/default/files/model-policy-annotated_0.pdf
The Open Citation Project – Reference Linking and Citation Analysis for Open Archives, catalogues the research on citation impact for OA scholarship: http://opcit.eprints.org/oacitation-biblio.html
Research Provosts. “Values in Scholarship.” Inside Higher Ed. 23 Feb 2012. Web. 8 Mar 2012. http://www.insidehighered.com/views/2012/02/23/essay-open-access-scholarship
“Budapest Open Access Initiative.” Open Society Foundations. n.d. Web. 7 Mar 2012. http://www.soros.org/openaccess
“The Case for Open Access.” Public Library of Science. n.d. Web. 7 Mar 2012. http://www.plos.org/about/open-access/
Gowers, Tim. “Elsevier—My part in its downfall.” Gowers’s Weblog. 21 Jan 2012. Web. 29 Jan 2012. http://gowers.wordpress.com/2012/01/21/elsevier-my-part-in-its-downfall/
Knuth, Donald. Letter to Editorial Board, Journal of Algorithms. 25 Oct 2003. Web. 7 Mar 2012. http://www-cs-faculty.stanford.edu/~uno/joalet.pdf
Lawrence, Steve. “Free Online Availability Substantially Increases a Paper’s Impact.” Nature Web Debates. 31 May 2001. Web. 7 Mar 2012. http://www.nature.com/nature/debates/e-access/Articles/lawrence.html
“Open Access Call for Proposals.” EIFL. 29 Feb 2012. Web. 8 Mar 2012. http://www.eifl.net/news/call-proposals-open-access-advocacy-campaig-0
[Princeton University] Ad-hoc Faculty Committee to study Open Access. “Recommended Open Access Policy.” 24 Mar 2011. Web. 7 Mar 2012. http://www.cs.princeton.edu/~appel/open-access-report.pdf
“Protest launched against journal publisher.” University Times, University of Pittsburgh. 9 Feb 2012. Web. 7 Mar 2012. http://www.utimes.pitt.edu/?p=19679
“Revised Policy on Enhancing Public Access to Archived Publications Resulting from NIH-Funded Research.” National Institutes of Health. 11 Jan 2008. Web. 8 Mar 2012. http://grants.nih.gov/grants/guide/notice-files/NOT-OD-08-033.html
Suber, Peter. “Open Access Overview.” 21 Jun 2004, last updated 3 Mar 2012. Web. 8 Mar 2012. http://www.earlham.edu/~peters/fos/overview.htm
In a panel at CCCC organized by Madeleine Sorapure, alongside Joanna Wolfe, I offered some of my thoughts (and personal data) on my current obsession: quantifying time. Specifically, I wanted to figure out how long it took for me to learn some software to support a potential pedagogical project. If you do digital composition, you know it takes a long time to learn programs and support interesting projects. But if you don’t, it’s hard for you to know. I’m trying to figure out how to communicate that a little better.
Before I go into the program and project, I’ll say: I think it’s important to communicate this time for a lot of reasons. On a purely selfish level, I want to be able to tell my tenure committee that I am investing a lot of time to do the digital pedagogy that my department values. From a departmental/composition program level, I think it’s important for directors to see how much time it takes to do this work so they can factor it into any call for program-wide digital pedagogy imperatives. At the level of the field(s) of digital pedagogy, I think we need to be a bit more circumspect about diving into new projects until we know that they will be supported. As Stuart Selber writes in Multiliteracies for a Digital Age, “high-quality programs in computer literacy cannot be built or sustained on the backs of unsupported teachers” (224). Good digital pedagogy requires systematic support for teachers, and “should account for the fact that technology adds real layers of complexity with any project, pedagogical or otherwise” (226). Selber goes on: this support consists not just of equipment, but incentives, valuation of the work (beyond just thanks), training, and support from key stakeholders. Most important, at least for my work here, is that sufficient time be allotted for the labor. For many of us who teach with technology, this is a labor of love. But that doesn’t make it free.
We need to recognize the human labor of digital pedagogy—what resources we draw upon to work into our syllabi these exciting new digital ways of representing information, communicating, and participating. But recognition needs to go beyond just a call to attention. So in a little pilot study, I’ve attempted to quantify some of my own labor in digital pedagogy to attempt to peel back some of those “real layers of complexity” as well as articulate what “sufficient time” might look like. I ask: What resources does it take (for me, in this instance) to do digital pedagogy? And, how might we understand and communicate what those resources are?
First, I chose a project and piece of software. I decided to teach myself to use Adobe AfterEffects to do some kinetic typography. [Here's an example with a lovely Ira Glass quote on creativity, and a fun example featuring Nicki Minaj's Superbass.] I thought it might be fun to do that in a class sometime, and if I wanted to do it, I needed a better sense of the program and its capabilities to write up an assignment and support students in the work.
While I taught myself the program and the project, I tried to quantify everything I could–mostly, my time and other resources. I wrote down all of the things I could think of that helped give me a leg up on learning the software. (Well, I didn’t write down everything: my literacy, my flexible job, my luck in having good health, etc.) I kept track of all of the time I spent watching videos and learning the interface and working on a scratch kinetic typography piece.
By my count, it took me just over 22 hours to learn the program and project well enough to begin to feel comfortable using it in a class. (That counts the 2h it took me to procure the software, at a cost of $210 of my university-given research budget, because that time and money is also a resource.) I drew on a lot of things I already knew about complex interfaces, sound and image editing, timeline paradigms, font and design, and key terms to help me search the web when I got stuck. Someone who knew Final Cut Pro better than I did would have gotten to that level much more quickly; someone who had never used Photoshop might be banging their head against a wall for a lot longer than I did. Also, I still have a lot to learn about the program and the project of kinetic typography. I stopped at the point where I felt sufficiently competent, which isn’t to say that I know this stuff well or am any good at it. (At the bottom of this post, I’ve provided a lot more detail about my own learning process, for any of my hardcore fans out there.)
Here was what I could do after 15 hours, my first attempt at kinetic typography:
And here’s what I could do after 22.5 total hours (I chose a clip from Marshall McLuhan talking about The Medium is the Massage):
As you can see, I’m still not that good at it. But it’s competent work, and sufficient for me to support students’ exploration of a similar project.
So, what does this rather navel-gazing pilot project suggest about learning the technologies that we teach?
Support for digital work in the classroom takes more time than the teaching of traditional textual writing, which we already know takes a lot of time to do well. It took me over 20 hours to learn a program well enough to feel comfortable writing up and trying out a new assignment in one of my classes. Importantly, there is no way for me to have kept track of the time it took me to amass the resources I had already to get up to speed in the program in that timeframe. Additionally, instructors need time to maintain and update any learning they do with digital software. Like students in first year writing courses, we cannot expect instructors to have a one-and-done model for learning to support digital pedagogy.
Working in digital spaces means that we must also be willing to be bad at something for a long period of time—and, as Ira Glass says (in this kinetic typography example), to be bad at something when you know you’re bad at it, and to work through that. Digital pedagogy takes a lot of trial and error, and a willingness to learn from your students or to be an incomplete expert with them. Digital pedagogy, then, is not only a labor of time; it is also emotional labor.
Traditionally, the work of digital pedagogy has been done by those who enjoy it and who elect to do it–who spend a lot of their free time thinking about, learning, and practicing digital composition for themselves. I’m one of them, and it’s work I like to do. But those of us who teach digital work also have other things to do with our non-work time—spending time with kids, house maintenance, travel, normal-people things. Nearly every digital instructor I know feels crunched for time to learn and support this kind of teaching, and poaches from their non-work time in order to do it. That stress is widely acknowledged and shared among those of us who do this work, but it is not readily apparent to those who don’t.
So here’s my point: We need to better communicate the kind of labor and human resources it takes to bring digital assignments into the classroom. To make this kind of digital labor more visible, we need good methods to catalogue and quantify it. The pilot lifelogging self-study that I’ve done here is one way to think about what those methods might look like. More carefully, rigorously, and quantitatively cataloging our labor will help those of us who work in digital pedagogy to articulate our work in the contexts of administration and constrained budgets. This articulation of labor will be especially critical as we consider programs that scale up digital pedagogy to include instructors who do not already do this work and who do not already have the wealth of knowledge gained through digital hobbies. I fully support digital pedagogy, and scaling it up beyond just the few instructors who already do it. But we must better understand the time and resources it takes to implement, and I argue that we must track that time and those resources in order to understand them better.
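As a gesture toward such methods, here is a minimal sketch of the kind of tally a lifelogging self-study might produce. The log entries below are invented for illustration (they are not my actual data), though I’ve made them sum to the 22.5 hours I reported above:

```python
from collections import defaultdict

# Hypothetical time log for learning a new program: (activity, hours).
# These entries are invented for illustration, not my actual data.
log = [
    ("procuring software", 2.0),
    ("watching tutorial videos", 6.5),
    ("learning the interface", 4.0),
    ("first kinetic typography attempt", 2.5),
    ("troubleshooting", 3.0),
    ("second kinetic typography attempt", 4.5),
]

# Tally hours by activity, then report a simple breakdown.
totals = defaultdict(float)
for activity, hours in log:
    totals[activity] += hours
total_hours = sum(totals.values())

for activity, hours in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{activity:35s} {hours:5.1f}h")
print(f"{'total':35s} {total_hours:5.1f}h")  # 22.5h
```

Even a crude breakdown like this makes the labor legible to someone (a chair, a committee) who never sees the hours themselves.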
More detail than you really want about my learning process:
Before I started this project, I knew these things:
- That kinetic typography existed and that tutorials were available online to help me learn how to do it
- How to record myself in Audacity (which I already had installed) and use a mic (which I already own)
- The basics of Adobe Illustrator
- Some aspects of timelines from other time-based composition programs like Audacity and Final Cut Pro
- Some experience with design: fonts, arrangement, colors, theories (not practice) of animation
By the end of my first attempt, I had learned some rudimentary things about AfterEffects, including details about the interface. I learned how to:
- play sound while scrolling through the timeline to sync up words
- import items (although still some troubleshooting there)
- type in the interface and change position, font, color and other attributes of type
- make objects 3D, and move them along the X, Y, and Z axes
- add a camera to move focus through the piece
- control the camera’s position along 3 axes and rotation (though not well)
- preview and render the video with both video and sound functioning
In my first attempt, I ran into trouble in these areas:
- I couldn’t import a layered Illustrator file with layers intact
- Sound wouldn’t play while I scrubbed the timeline
- Camera wouldn’t recognize the 3D words I’d made
And I did some troubleshooting to isolate and fix issues:
- Googling keywords like: troubleshoot, import, render, scrub, two-node camera, point of interest, timeline, audio
- Opened new project (started over)
- Created new, simpler Illustrator file
- Played with different settings
- Asked my husband, the computer programmer (didn’t work)
- Watched videos very closely to see which settings were being used
Some of the troubleshooting worked, some didn’t. Those problems I just worked around. For instance, I didn’t use an Illustrator file to design the layout of the first attempt because I still couldn’t get it to import correctly.
In my second attempt (the McLuhan video), I ran into more trouble as I ramped up the complexity of the project, and I used similar troubleshooting strategies. I learned a few more things in the process, too.
Additional resources I drew on for my second attempt at kinetic typography:
- Time (7.5h beyond the first attempt)
- My knowledge of websites where I could get images and sounds to mix in (flickr creative commons search and freesounds.org)
- Math and coordinate geometry
- My knowledge of Photoshop
- Recording and editing in Audacity
Here are a few of the additional things I learned from the second attempt:
- How to pre-render and pre-compose to manage more complex composition
- Effects panels
- Improved work and understanding of keyframes (a common paradigm in time-based media)
- Offsetting time
- Updating source material in my project when I’ve edited it in another program (Audacity or Photoshop)
Things I ran into trouble with
- Increased complexity of animations loaded down my computer and forced me to think about workflow more
- Program crash and lost work
- Rendering issues
- Getting AfterEffects to recognize transparency in images
- Animating two layers together using a third null layer
- Bouncing camera paths (not quite resolved)
- Changing animation settings on preset effects
I made a few observations about the differences in my knowledge-gathering approaches from my very first attempt to my second. For instance, in my first attempt, I spent a lot more time watching general videos introducing me to what the software could do because I didn’t know the scope of it. In my second attempt, I spent more time targeting particular problems I had, seeking out videos and explanations that would respond to my own goals and needs. I also played around more with the interface and effects without reading directions because I’d gotten more comfortable doing so. This process is not unusual—many education curricula are structured around the model of initial lessons, followed by independent work.
Recently, Mr. N—- and I went to see Distant Worlds: Music from Final Fantasy at the Benedum Center in Pittsburgh. Fun for the whole family! Video games for Mr. N—-, and getting out of the house for me.
The event was an elaborate fan service ritual: a full orchestra playing the themes from the Final Fantasy video game series with a huge screen behind them projecting game footage and cutscenes from the series. The conductor (Arnie Roth, who is obviously a huge FF fan, if for no other reason than that it pays his bills) yukked it up between the pieces by calling for audience participation, paying homage to the composers, and alluding to everyone’s favorite moments in the series. Even for me–someone less familiar with the details of the series–it was awesome.
What made it awesome, however, wasn’t the orchestra (it was fine), the venue (its 4,700 lb chandelier!), the pieces (which are beautiful and epic), or even the company (although N. was great, as always). It was the history of game animation that the performance projected. Juxtaposed with this spectacularly restored 1928 theater were the pixelly figures that captured the imagination of thousands of young Japanese and Americans beginning over 20 years ago.
As the images were projected thematically rather than linearly, we saw the technical feats of special effects, landscape, face and hair animation shift. Because Final Fantasy is such a long running series, the orchestra could draw on over two decades of video game animation history. It’s hard to believe that any other era or genre of art experienced that much change in a 20 year time span.
A new exhibit opening this weekend at the Smithsonian American Art Museum takes up that evolution: “The Art of Video Games.” I was pleased to see my old favorite C64 game Sid Meier’s Pirates! is featured. Indeed, those were some beautiful blue seas! There are two representatives from the Final Fantasy series–Final Fantasy VII and Final Fantasy Tactics. (The full list of featured games, from the Atari to the Wii to the PlayStation 3, is on the Smithsonian’s exhibit page.)
As beautiful as they are, it’s not clear to me that the later images are superior. As the older pixellated characters were juxtaposed with the newer ones, with flowing hair and hipster-emo outfits, it struck me that the older images were often more evocative than many of the newer ones. It may be my nostalgia for early games (although I never played the Final Fantasy series). But I think it was something more: the earlier images leave a lot to the imagination. The hipster-emo characters from more recent installments in the series look dramatic and distant and too cool and too young for me to want to spend much time with.
At any rate, Mr. N—- and I will have to make a trip to see the exhibit and marvel again at the changes in visual representation in games.
Lately, I’ve been dabbling with becoming a “quantified self.” That is, I’m trying to keep closer tabs on the time and resources I use to do the things I do. I’ve started to use a program called RescueTime to log my time spent on writing, websurfing, and emailing. So far, the results are mixed. I feel guiltier when I’m on Facebook, but I haven’t figured out how to make the data RescueTime collects particularly useful. For instance, I’d like it to communicate with my schedule to see when I’m teaching and in meetings, and be able to break down blocks of teaching, research and service (all of which involve emailing and word-processing, which RescueTime keeps track of). I’m working on it.
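To make that concrete, here is a toy sketch of the schedule-aware breakdown I have in mind. This is hypothetical code, not a real RescueTime integration; the calendar blocks, app log, and categories are all invented for illustration:

```python
# Hypothetical sketch: fold application-level time logs into the
# teaching/research/service blocks that a calendar could supply.
# All entries and mappings here are invented for illustration.

# (start_hour, end_hour, calendar block) for one day
calendar = [(9, 11, "teaching"), (11, 13, "research"), (14, 15, "service")]

# (hour of day, application, minutes) from a time logger
app_log = [(9, "email", 20), (10, "word processor", 40),
           (12, "word processor", 50), (14, "email", 30)]

def block_for(hour):
    """Return the calendar block an hour falls in, else 'unscheduled'."""
    for start, end, block in calendar:
        if start <= hour < end:
            return block
    return "unscheduled"

# Sum logged minutes per calendar block, regardless of which app was open.
minutes = {}
for hour, app, mins in app_log:
    block = block_for(hour)
    minutes[block] = minutes.get(block, 0) + mins

print(minutes)  # prints {'teaching': 60, 'research': 50, 'service': 30}
```

The point is that the same hour of emailing counts as teaching or service depending on the calendar, which is exactly the distinction the raw application log can’t make on its own.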
I’m presenting at CCCC on a related topic–the time and resources it takes to do digital pedagogy–in a session with Madeleine Sorapure and Joanna Wolfe. And Stephen Wolfram’s recent post on his quantified self, cataloguing email volume, paper scraps, and time spent since 1989, is an inspiration, if an intimidating and somewhat scary one.
My forays in quantification, plus some recent academic service work and my upcoming 3rd-year review, have got me thinking: what if academics had quantified vitas? The vita is already a document of quantity–a list of publications, degrees, organizations, conferences, and classes, arranged by categories and dates. Quality and time values are implicit: when reading a CV, we can gauge that a book likely took longer to write than a conference presentation, and that this publication was in a more influential journal than that one.
But can you tell from my courselist that I spend a great deal of time and attention on my feedback to students? Perhaps student evaluations offer an oblique measure of this, but we all know how those are affected by levels of courses, types of students, and curricular necessities. And student evals are generally left off of the CV anyway.
Even more difficult to assess is service. Two people can be on one committee and devote wildly different amounts of time to it. Time ≠ productivity or effectiveness, of course. But doesn’t it mean something? On my own CV, I could point to the in-name-only committees– those that only met once or didn’t do much. I could also point to the committees that met often and for which I (think I) did sustained and effective work. For my “honors,” I could indicate the awards that weren’t particularly competitive, and those that were. But I lived my own vita, and I don’t have that level of access to others.
I know there are significant limits to a quantified vita. I’m leery of the quantification-über-alles value that’s implicit here. We can’t count everything. For instance, in the sciences, a journal’s impact factor is an attempt to measure influence, but this practice is viewed askance in the humanities because of issues of interdisciplinarity, circulation, and a general skepticism about numbers as a form of representation. And although I can catalogue the time and resources I spend teaching my students, it’s impossible to count the ways I love them. The best parts of my life are unquantified, and I’d like to keep them that way.
But what value might there be to counting certain things in certain ways? Certainly, the Digital Humanities is working on answering that question. Distant reading doesn’t replace close reading, but it still tells us some interesting things about texts (e.g., Witmore and Hope’s work on Shakespeare’s corpus, or Google’s Ngram viewer). And statistics on salary and promotion disparities between faculty of different races and sexes can give us tools to fix real problems. The often-mentioned challenges of women faculty in service work and work-life balance are, in part, an issue of quantity: how much time is spent by whom?
CVs are already standardized information delivery systems, albeit imperfect ones. To repeat a mantra of computer programmers, might the best remedy for low information be more information?