Here’s the opening line from the first email I saw this morning.
Using smart phones for word processing in the English classroom is becoming more and more common in the 21st Century.
Oh, god. Really?
Forget the unwieldy noun phrase. Forget also the tired tech-rhetorical appeal to what century we’re in (got it, thanks) and what’s “becoming more and more” (a nice turn on my favorite phrase to hate, our increasingly digital X …).
Let’s focus instead on the perverse image of the classroom this email invites us to imagine. It’s a classroom filled with students equipped with smartphones. And of all the interesting, innovative things that those students could be doing with their devices, they are using them for word processing—a term which, if Wikipedia is to be believed, The New York Times labeled a buzz word. In 1974.
I’m drowning in email this semester. And it’s only the second week.
In his book The Laws of Simplicity, John Maeda describes the familiar Pareto Principle, also known as the 80/20 rule: “in any given bin of data, generally 80% can be managed at a lower priority and 20% requires the highest level,” adding that “Everything is important, but knowing where to start is the critical first step” (14).
This comes in the middle of his discussion of SLIP: Sort, Label, Integrate, Prioritize.
Following that paragraph is a photo illustration of Maeda’s own system of prioritization, including the labels Focus, Base, and Next. Focus being those things immediately at hand; Next being the (duh) next set of tasks; and Base being the great unwashed slop pile that thwarts all attempts at having free time for oneself.
I’ve decided to get rid of my usual email prioritization scheme (the stuff in my inbox, and the purgatory of a mythical todo label) and give Focus, Base, and Next a try. This afternoon I’ve already gone through the remaining email in my inbox and todo label, and sorted everything into the three new categories.
Pro Tip: I’m a Gmail user, but many email clients will sort labels that begin with hyphens to the top of any folder/tag list. So, my labels are actually
----focus, the alphabetically next
----next, and the three-hyphen and thus alphanumerically and conceptually lesser
---base.
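For the curious, the hyphen trick can be sanity-checked in a couple lines of Ruby, whose default string sort (by byte value, hyphens before letters) mirrors the alphanumeric ordering described above. The label names are from my scheme; "projects" stands in for any ordinary label:

```ruby
# Hyphens (ASCII 45) sort before letters, and more hyphens beat fewer,
# so four-hyphen labels float above the three-hyphen one.
labels = ["projects", "---base", "----next", "----focus"]
p labels.sort
# => ["----focus", "----next", "---base", "projects"]
```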
Things already feel more manageable. But then—like a new shelving system or a new productivity app, isn’t that always the feeling? Let’s see if this lasts.
I wanted to take to my own blog to talk about my choices behind building Git repositories for each of my courses. I also want to offer a few different approaches instructors might take for using GitHub to share syllabi. I’ll also preview how I’m planning to use GitHub to run a course this fall (rather than just hosting a repository of course materials there). I plan to detail that process in a future post, for those instructors looking to do the same.
In his questions about the practice of posting syllabi to GitHub, Mark Sample claimed that GitHub only understands plain text, and that it’s a laborious task to convert one’s syllabi into plain text form. That’s a somewhat incomplete description of GitHub, and Git itself. And because of that, putting syllabi under Git control may not be all that laborious.
It’s more accurate to say that Git/GitHub best understands any format that exists as plain text. That includes, of course, the vanilla plaintext file,
.txt, but also other forms including Markdown, HTML, and so on. (Pretty much any markup and programming language is plain text; its file extension—
.html, etc.—only serves to cue software as to how to interpret the file. Making matters somewhat more complicated, Windows and Mac OS both hide file extensions for known file types in their respective file/folder views.) You can put Word documents, PDFs, and other binary-format files under Git control. But what you lose are diffs: the changes a particular commit introduces relative to the previous commit, or the changes between any two commits along a repository’s timeline of development. Those diffs help to confirm and illustrate the change described in a commit message.
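Here is a minimal sketch of that diff benefit in action, assuming Git is installed; the file name syllabus.md and its contents are hypothetical stand-ins for a real syllabus:

```shell
# Put a plaintext (Markdown) syllabus under Git control, then view a diff.
repo=$(mktemp -d)
cd "$repo"
git init -q
echo "# Course Syllabus" > syllabus.md
git add syllabus.md
git -c user.name=demo -c user.email=demo@example.com commit -q -m "Add syllabus"
# Make a change, then ask Git exactly what changed:
echo "## Week 1: Introductions" >> syllabus.md
git diff -- syllabus.md   # shows the added line, prefixed with +
```

Had syllabus.md been a Word document instead, that final command could only report that the binary file changed, not what changed.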
But I would argue that if your syllabi are not in some plaintext form, in a flat file (versus a database entry in WordPress or Blackboard or something), they really should be.
I can make that argument from my own experience. I came to rely on GitHub for my syllabi and course-website needs out of frustration with the wikis I had been using to run course websites. Every semester, I would create a new instance of WikkaWiki (which is a really nice and relatively lightweight wiki, if you’re truly in need of a wiki). But then would come the inevitable copying and pasting of materials over from previous semesters. Not to mention the problem of how to archive the old wiki (all of my past wiki-driven courses are currently inaccessible for exactly that reason; and I’m too paranoid to just leave them running, in case an exploit should be discovered that compromises not just the wikis themselves but the entire server that runs them).
So I considered what needs I had that wikis were filling, and whether there wasn’t some other way to address those needs while moving to something without the overhead and security issues of a full-on per-class wiki installation. I’m developing this list in retrospect, but basically:
- Fast editing. As an instructor, it was nice that I was able to jump in and change something on the calendar or a project description at a moment’s notice, without all of the overhead of an SFTP client or whatever.
- Open editing. It was equally great to have a system that students could edit themselves, minus things like the course policy statement (which I used access control locks in WikkaWiki to prevent).
- Versioning. I liked the version history of individual pages that WikkaWiki provided (although if my memory serves me correctly, the version history only went about ten revisions deep).
Git and GitHub meet all three of those needs (on top of my long-standing demand never to keep a course’s materials behind some private wall or Blackboard-like roach motel). Better still, I’d meet my needs without the overhead and risks of a database-driven system (anyone who’s run a WordPress installation has probably encountered the pharma hack and other ills). And best of all, I needed some kind of external motivation to keep improving my own skills with Git, and to expand into what I thought were some imaginative ways of using Git and GitHub.
So, here’s how I’ve approached posting course materials on GitHub, and some new ways that I intend to try out this fall:
I’ve been amped up all afternoon after having read this blog post, “6 Things Your Dissertation Director Wishes You Knew,” on the Chronicle.
Although there are some nuggets of somewhat useful advice to be had there, there are two serious problems in the post’s approach.
First, the title is presumptuous. When I consider my own dissertation advisees who are most likely to read this, they are already the most nervous and uncertain group to begin with—conscientious and already worried about my time, etc. So no, advisees of mine: I don’t wish that you knew these six things. You know my parameters and expectations, and I’m always willing to clarify them.
Second is the problem of “wishing,” which pairs nicely with mind-reading as a pedagogical strategy. If any piece of information is so vital that students absolutely must know it, then it deserves a better treatment than being the contents of a wish.
I don’t have anything I wish my advisees knew. I was the first person in my family to go to graduate school and I can say that 1) nobody is born knowing how to be a graduate student, let alone a dissertation writer, and 2) I remain grateful to this day for my own mentors’ explicit setting of boundaries and expectations in getting me through the program.
Instead of wishing things, I have those things that I need to teach my advisees. And I do my best to do just that (granted, sometimes they need gentle reminders, and granted, sometimes I could be better at it).
So what do I tell my advisees? What do I try to teach them? Here’s the main list. If you’re a capital-A academic, some of it may offend your sensibilities. Scandalize you. I’m a pragmatist, though. And I’ll be frank: most dissertations suck, and aren’t worth the anxiety and effort required to make them “great”—especially when doing so delays graduation by a matter of months or years. There’s nothing great about that. But maybe you will find this more helpful than the Nike-like “Just do it” regarding writing the dissertation:
- It’s just a dissertation. All of this talk about “original contributions,” sometimes even “significant original contributions,” whether in lore or in graduate handbooks, just builds anxiety. It’s just a dissertation. It’s not your life’s work, and it’s certainly not your best life’s work. It’s not even your first work as a professional; it’s your last work as a student. It has to be your own, of course, in that sense of “original”—but you are not going to revolutionize your field of study with your dissertation.
- If the topic seems too big, that’s because it is. You will never hear a (reasonable) dissertation adviser say, “There just isn’t enough material to make this topic viable.” The number-one thing I spend time working on with my advisees is paring down and focusing. Again: it’s just a dissertation. You aren’t going to say everything there is to say about your topic, let alone a stack of related sub-topics. Keep stripping down your topic until you can express it in a single commaless sentence. That becomes your guiding star for judging the reading and writing you’re doing.
- You don’t need to write your dissertation, or that chapter. You just need to write something toward your degree today. Most of the paralysis I feel in my own struggles to write, and that I detect in my own students, comes from the temptation to think in units bigger than the lived experience of writing: nobody really “writes” chapters, dissertations, or books. The only unit we really control in writing is time, the moment-by-moment composition of sentences that build into paragraphs. Be gentle enough with yourself to accept that some days are better than others.
- Share work early and often. This is a hard one. I still struggle to let things go in early form. I don’t want people to think I’m dumb, and the last thing dissertation advisees want is to be seen as dumb. But submitting early and often has a two-fold benefit: as an adviser, I’m much more capable of quickly responding to smaller chunks than entire chapters. And for my advisee, I can catch problems and encourage good work earlier on, before there’s an entire chapter (or more) to rework and rewrite—one that an advisee has already spent far too much time massaging and polishing.
- Mindless work is still work. Keep up with citations, formatting, all of that stuff. Some days, the writing just doesn’t come. Take the time to do something with your diss, even if it’s the thankless work of conforming with university style guides or APA or whichever. If you’re lucky, dispassionately going through the text looking for formatting problems will cause some bit to catch your eye, stir some thought—and then you’ll be writing again. Even if just for a moment. And even if you’re not so lucky, that work has to be done. You really don’t want the frustration of fighting with Microsoft Word as the final act before depositing your diss.
- A great diss, a good diss, and a passable diss all get you the same degree. I’d love for my students to write great dissertations, or even good dissertations. But that ultimately doesn’t matter to me. I want done dissertations. My own dissertation was not very good (it’s posted here). It was never destined to be a book; and frankly, once I made my final edits after my defense, that was it. Its only purpose was to springboard me into the research I’d do after leaving graduate school. What mattered wasn’t the diss as an artifact, but the diss as a bed for ideas that I could talk about in job letters and interviews and pursue, in different forms, as a faculty member. Put simply, it’s better to be done and to get onto your next project than to think you have to write something that’s ready to be a book—when you yourself are still an apprentice researcher. I have plenty of friends who did turn their dissertations into books, but I don’t think they had any easier a time with their books than I did writing mine from scratch.
That question has been a long-term puzzlement for me.
Although the last thing that the world needs is another definition for literacy, mine is fairly simple: Literacy is the manipulation of symbols. If we want to talk about mathematical literacy, computer literacy, or a traditional alphabetic literacy, I think that my definition holds up. “Manipulation” is a broad enough word to cover reception (reading) and invention (writing), as well as rearrangement (remixing, if you’d like).
But “manipulation” in my simple definition also implies artful, studied symbolic action (hat-tip to Kenneth Burke). The New Oxford American Dictionary on my Mac here adds the qualifier “typically in a skillful manner” in its definition of manipulate.
This semester I’m teaching a course called Humanizing Technology. One of my opening remarks to the class at its first meeting last Tuesday night was that the course’s title is kind of a lie: Technology, specifically the digital/computer technology that is the focus of the class, isn’t waiting for us to humanize it; it’s already a bit too humanized, in so far as technology reflects human imperfection, as all symbol systems do, as well as a human desire to control and perfect—or what Burke describes in his “Definition of Man” (which students are reading this week) as both being “goaded by the spirit of hierarchy” and “rotten with perfection.”
Symbolicity rules the terms of engagement with digital technologies. And digital technologies, no matter how magical, are both the product and the enabling force of specific symbol systems: computer languages.
Of all the challenges that I will be putting in front of students this semester, working through Bruce Tate’s Seven Languages in Seven Weeks: A Pragmatic Guide to Learning Programming Languages (7L7W for short) is the one that already has students most on edge (judging by the number of emails I’ve received this first week of class alone).
As I’ve reassured a number of students this week, 7L7W is not on the reading list because I think it will (or even can) make anyone a programmer; it’s there because I want to achieve two goals.
The first is to demystify programming languages, and the learning of them, by simple (over)exposure. Seven languages is a lot by any measure, especially spread out over seven weeks (more like ten, because of how the class is structured). Especially in a course offered in the humanities department. (For some students in the course, particularly my undergraduates coming in from the Information Technology and Management program, that demystification may not be so profound. Though I’m hoping the kind of learning that Tate encourages in the book will be.)
The second and, to my mind, more important goal is to help students come to see programming languages (as well as other symbol systems, from number systems like hex and octal right down to binary and ASCII and its modern-day Unicode supersets) as designed things. Just as it’s tempting to think that spoken and written languages came down directly from the gods, so too is it tempting to think that computer languages have their own origins in divinity, or something outside of human invention and cooperation.
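To make that designed-ness concrete, here is a quick Ruby illustration (Ruby being one of the 7L7W languages) of a single character passed through several of the symbol systems just mentioned. None of these notations is more “natural” than another; each is a human design decision:

```ruby
# One character, several designed symbol systems.
ch = "A"
puts ch.ord           # 65      — decimal code point (ASCII, and Unicode)
puts ch.ord.to_s(16)  # 41      — hexadecimal
puts ch.ord.to_s(8)   # 101     — octal
puts ch.ord.to_s(2)   # 1000001 — binary
# And a character beyond ASCII, as the UTF-8 bytes of its Unicode superset:
p "é".bytes           # [195, 169]
```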
One of the features that sold me on the 7L7W book (I’d also considered Chris Pine’s excellent Learn to Program) was the interviews that Tate arranged with the creators or lead developers of the languages in the book. It’s one thing to slog through the syntax and application of, say, Ruby; it’s another to do so alongside the words of Ruby’s creator, Yukihiro “Matz” Matsumoto, who recalled:
Right after I started playing with computers, I got interested in programming languages. They are the means of programming but also enhancers for your mind…
The idea of moving from play to an interest in programming languages, I believe, is unusual. Matz further observed that “the primary motivation” for designing Ruby was “to amuse myself.”
And this is where I return to my original question: Why don’t writers gravitate toward code? Writers, and not necessarily even good ones, all share a certain love for how amusing written language is. Otherwise, we wouldn’t have puns, double entendre, and other forms of word play. We likewise wouldn’t have style, or at least a sense of it. And less obviously, we probably wouldn’t have writing, period, unless it were well funded (Boswell’s Life of Johnson is quotable here: “No man but a blockhead ever wrote, except for money.”)
There are few writers I can think of, though, who would find much amusing about HTML5 or Ruby. And that is unfortunate, especially when there is so little to lose in being exposed (as my students will be) to computer languages. I’m not crazy enough to believe that all writers will become programmers and developers. But some might. And quite possibly to good effect. Although at first glance a computer language like Ruby is lacking in the flexibility and ambiguity of a natural human language, it is nevertheless full of subtlety and elegance—and has a self-consciousness and even humor about itself that is absent in natural human language (minus maybe a fake one like Pig Latin).
Put another way, the distance between writing and the digital-material conditions and technologies that support writing should be of far more concern than past distances between, say, writers and typists, or scribes and makers of parchment or stone tablets.
Why? Because the digital-material conditions that make Facebook or Tumblr or even Microsoft Word possible are grounded in symbolicity that, like traditional alphabetic writing or even media-based writing (images, film), has a certain grounding in language itself. There is a continuity between communication through and programming of digital technologies, despite the assumption that writers/communicators and other lute-playing humanistic wood nymphs comprise one camp, and cold, calculating, logical (a- or anti-humanistic) programmers comprise another. It’s much more complicated than that. And I still don’t have an answer for my question.
Fascinating article that looks at the current outer limits of digital storage at the atomic level, including some of the implications for quantum computing.
In the Words of Steve Jobs:
I think great artists and great engineers are similar, in that they both have a desire to express themselves. In fact some of the best people working on the original Mac were poets and musicians on the side. In the seventies computers became a way for people to express their creativity. Great artists like Leonardo da Vinci and Michelangelo were also great at science. Michelangelo knew a lot about how to quarry stone, not just how to be a sculptor.
One of the wickedly fun things about anonymous, academic peer review is what I have come to think of as the reviewer confessional.
More than a few times I’ve submitted conference proposals or academic articles that somehow are taken very personally by the nameless reviewer on the other end, who then makes a sort of confessional digression from the actual review of my work.
Often this looks like someone taking something way too personally. For one of my articles, I remember something to the effect of, “Well, I would never personally want to learn how to write code…” Which is strange to me. When I review others’ research, I’d never write “Well, I would never personally want to…” followed by some component part of the research, or even some ancillary detail. What difference does it make? And yet still—that my work would be taken so personally, to the point that anonymous reviewers feel the need to defend themselves, or somehow say that my research is good for everyone else, but really just not applicable to the reviewer him/herself: that’s fascinating.
So I suppose I wasn’t surprised when this happened with a workshop I proposed for Computers and Writing 2012 that, like most of the things I propose for conferences, is a bit of a stretch. Titled “End-to-End Agile Web Application Development from Basically Nothing,” it’s a day-long workshop that hits both the front- and back-end of web application development in a whirlwind introduction to everything from version control to HTML5 to Node.js.
As I rhetorically (sort of) asked in the body of the proposal, “Is this a tall order for a day’s work(shop)? Absolutely. But the goal here is to provide participants with a full, end-to-end overview of the art of web application development, and make, by demonstration and example, the argument that we need to invite more of our computers and writing colleagues to work in these areas.”
Anyway, so as to the moment of confession from one of the reviewers. He/she remarked, “Personally, I would not be best served from this kind of workshop since I do not really have the freedom to use any CMS I want or house applications or programs on my office computer, but I also believe that many in the field can benefit from this type of workshop.”
I want to weep for this reviewer. First, because the reviewer has so little control over the computing resources that, presumably, surround his/her teaching and research. That’s just sad. But second, I weep because of the defeatism here. In that second clause, it’s other people in the field who can benefit. Not the reviewer, who (presumably) cannot possibly be served by (meaning, what? learn something?) this workshop because of a heavy-handed set of campus IT policies, or something.
Perhaps I’m reading way too much into this one-off confessional quote (the reviewer did recommend acceptance of the workshop, and indeed the workshop was officially accepted), but the implication here—I don’t control my own stuff, I can’t make my own choices, and therefore I don’t have to learn this, but perhaps others do—is one that I find really troubling. And it’s also the kind of statement that perhaps can only be aired under the cloak of anonymity, particularly in an area like computers and writing.
In the Words of Me:
No design idea is ever so great that accessibility shouldn’t be a primary factor in the design itself.
My field is rhetoric and writing, and particularly the digital side of that field.
As I’m thinking about how to talk about digital craft, it occurs to me (obvious though it might be to others) that when it comes to making digital things, how and why you make them is quite different from the mere fact that you make them.
There is, for example, no shortage of people who create websites within rhetoric and writing. Or who assign website creation to their students. Yet that those websites are being made is really no longer interesting. Or even important. Can those makers talk about the how and the why, in a way that gets at the materiality of what they’ve made? (E.g., specific choices made at the level of source code, versus a WYSIWYG—which is in a lot of ways a technology that lets you create rudimentary websites with a pre/overdetermined how, realized through arbitrary menus and dialog boxes.)
I really want to start blogging again. Truly.
So, now I’m set up with Tumblr (still using someone else’s template, but that’ll change), which my blog.karlstolley.com subdomain now points to.
In the Words of Malcolm McCullough, Abstracting Craft: The Practiced Digital Hand, p. 193:
When the tools are complex, when the artifacts produced are abstract, or when tools provide the only means of access to the medium (all conditions in high technology), it can be difficult to say where a tool ends and a medium begins.