Blog Post

Basically Clueless

A graduate student and I are attempting to teach ourselves Objective-C and iOS development this semester.

It’s an important and frustrating exercise to be cast back into the role of a beginner. While I have a fair amount of skill in programming (especially Ruby and JavaScript, and PHP if I absolutely have to), my experience is mostly limited to languages for web development and building the occasional command-line application.

Which is just to say that, while I’m not an absolute beginner (I can tell you all about general programming concepts, constructs, and paradigms), it sure feels like it. In Ruby or JavaScript, there’s not really an IDE that compares with Apple’s Xcode. And all of the languages I’ve written in so far are interpreted, so there’s no compile step to contend with, unlike Objective-C (and yes, Objective-C is hyphenated).

For our first week of learning, my intrepid student and I worked through the intro tutorial and some supporting docs (mostly UI guidelines) from Apple’s iOS Developer Library.

Here are a few lessons I learned or relearned this week:

  • I have a deep distrust of GUIs that generate code. Working through Apple’s intro material, it was difficult (though not impossible—see the next lesson in this list) to keep straight the code it would kick out and the code I’d be responsible for writing myself. And I do think that the Xcode GUI made it seem worse; I am totally cool with using the rails generate commands in Rails, which do effectively the same thing. But add a pointer device and drop-downs and things I have to click, and I’m pretty sure I’m learning the kindergarten version of iOS development.
  • Git is an indispensable learning tool. Yes, professional developers use source-management tools like Git to collaborate and to make it easier to change their minds, but when you’re learning, Git makes it just as easy to run back and forth through the code you’re writing. And when you put your trust in any code-generating mechanism (Xcode, or Rails, or anything else), Git provides an unvarnished view of exactly what those interfaces are doing to your source code. Make a commit at each step of what you’re doing, each thing you click or command you run, and all it takes is running git diff to see what’s changed, or git status to see the new files that have been added.
  • I want to learn the code, not the IDE. I knew going in that Objective-C is the language of choice for iOS as well as Mac development. I also knew that there are tight integrations between Xcode and Objective-C. But both my graduate student and I wanted a whole lot more code and instruction in Objective-C right from the get-go. The intro tutorials felt too much like a Disneyland tour of Xcode (then again, this student and I are rare and hardcore birds when it comes to wanting to wallow in and wrangle with source code).

So this week, we’re focusing our efforts on a few areas that seem obvious (to us, at least at the moment). The first is that rather than blindly going through Apple’s tutorials, my graduate student and I are working on our own app ideas to guide our learning. I’ve always emphasized this in the classes I teach on digital design and development; turns out I tend to benefit when I follow my own pedagogical advice. Have a project in mind, and work to realize that vision. And don’t compromise.

The second is that we’re diving head-long into Objective-C as a language. The two of us did some research and agreed that we’d start by reading the second edition of Aaron Hillegass’s Objective-C Programming: The Big Nerd Ranch Guide.

It’s amazing to me the lessons I have to relearn: years ago I embarked on a similar plan to learn Rails, and thought I’d dive right in without spending any time getting to know rudimentary Ruby. Big mistake. It turns out that having some foundations in Ruby was a big help toward conceptualizing what Rails was doing, let alone being able to work with the different domain-specific languages that Rails uses for models, views, and controllers.

The end goal of all of this learning, for me, is to one day be able to offer a course in iOS development. But as I don’t have the kind of pressing, research- or career-aligned need to build iOS apps the way I do web design and development work, I’m grateful to my graduate student for helping me stay motivated and committed to learning this. I’m going to try to blog about the experience each week, and really reflect on the feelings of cluelessness and helplessness my students have when I teach this sort of thing.

Blog Post

God Willing and the Bits Don’t Rot

Johanna Drucker’s Pixel Dust: Illusions of Innovation in Scholarly Publishing, at the Los Angeles Review of Books, notes that “the fate of the humanities is being influenced by a campaign of misinformation.” Turns out, she’s participating.

There’s plenty to challenge in Drucker’s piece, but I want to focus on one part that was pointed out by my friend Bob Stein, who posted it on Facebook asking about the veracity of this specific claim by Drucker:

the fact is that every use of a file degrades and changes it, that “bit rot” sets in as soon as a file is made, and that no two copies of any file are ever precisely the same as another.

What follows is an expanded version of what I posted in response to Bob on Facebook:

Drucker misunderstands the nature of digital representation (as well as internet jokes; I’ll get to the latter in a moment). The actual bits that make up a file—take something like a text file in 8-bit ASCII—do not spontaneously rot or change from routine access and therefore become somehow different from one copy to the next. A file can be corrupted, for sure, but it does not simply “rot.” And there are checks against corruption, which I discuss below.

You can download hex editors that will let you inspect the actual bits and bytes that make up something like an ASCII text file (eight 0s and 1s per character, or one byte per character, when ASCII is stored in 8-bit bytes). And with painstaking work, you can compare those actual bit representations and you will see no difference, file to file or copy to copy over time. You can also compute a checksum on a file: the result of a mathematical formula that helps ensure a file’s integrity in a transfer from Point A to Point B, or that serves as a quick means to ensure that a file has not been tampered with. Authors of open-source software typically publish the computed hash for a given file or set of files, so that users of the software or fellow hackers can check its integrity. If spontaneous bit rot were possible, then something like a checksum would be useless for anything other than saying, “Yes, that’s probably a copy of the file. Probably. Maybe.”
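
Here’s a minimal sketch of that kind of check in Ruby, using the standard library’s Digest module (the file name below is just a placeholder):

    require "digest"

    # Compute a SHA-256 checksum for a file (the file name is hypothetical).
    # Run this today and again in a year: if the bytes haven't changed,
    # neither will the hash.
    checksum = Digest::SHA256.file("syllabus.txt").hexdigest
    puts checksum

    # Verifying integrity is just a matter of comparing a freshly computed
    # hash against the published one.
    puts Digest::SHA256.file("syllabus.txt").hexdigest == checksum  # => true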

What “bit rot” probably refers to, as Drucker is using it, is the physical degradation of storage media. But it’s not really the “bits” that rot; it’s the media itself (the magnetic surface of a hard disk platter; the polycarbonate and foil layers of a CD-ROM). And because the density of bits on modern media is so extreme, when media fails, it’s far more than an individual bit here or there that gets lost: whole swaths of data go with it. Thus your garden-variety hard-disk failure, etc.

But as far as a file’s bits changing upon access and Drucker’s claim that “no two copies of any file are ever precisely the same”: that’s simply not true (at least in binary computing; quantum computing is a different matter). Files may get moved from place to place over the course of their lives on a single drive (especially in the old days of defragmenting drives), but there’s nothing in regular read/write operations to change a 0 to a 1 or vice versa under normal usage. A miswritten bit is a read/write error, not “rot” in any organic or material sense.

The problem of ensuring the integrity of digital data—that is, checking against read/write errors—is practically as old as digital computing itself. One of the earliest forms of checksum computation was the parity bit.

Fun fact: the 1963 ASCII specification was for a 7-bit character set. It would have been theoretically possible for an 8-bit set at that time, but for one problem: transmission of digital data often took place on paper tape. And paper tape machines were engineered to support things like the simpler 5-bit Baudot codes (Google it).

To ensure the integrity of transmitted data, the tape machines actually supported six bits: the five bits of data, plus a parity bit. The parity bit was a rudimentary check on the number of 1s transmitted (holes punched) in each byte (line of holes) in the data. If a machine were checking for even parity, the parity bit would be punched on every line with an odd number of 1s (and remain unpunched on lines with an even number of 1s). If a received line/byte ever added up to an odd number of 1s, the machine operator knew immediately that there was corruption in the data.
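
In code, the whole scheme fits in a few lines. Here’s a rough sketch of even parity in Ruby (the five-bit value below is just an example):

    # Even parity: the parity bit makes the total count of 1s come out even.
    data = 0b10110                                     # example 5-bit value (three 1s)
    parity_bit = data.to_s(2).count("1").odd? ? 1 : 0  # => 1

    transmitted = (data << 1) | parity_bit             # the data plus its parity bit
    # On the receiving end, an odd count of 1s signals corruption:
    puts transmitted.to_s(2).count("1").odd? ? "corrupted" : "checks out"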

So why did paper tape force a 7-bit ASCII set (and thus marginalize even other Western alphabets with more than 26 letters)? The answer is forehead-slappingly funny: paper-tape engineers determined that the tape could accommodate 7 bits plus a parity bit, for a total of 8. Trying to cram 8 data bits plus a parity bit onto the tape would have resulted in the paper tearing. So seven bits reigned supreme.

As data transmission and storage have grown more sophisticated, so too have the specifications for data checking. For some light beach reading, check out the IETF’s RFC for computing Internet checksums on TCP/IP and other protocols.
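
The core idea behind that checksum is simple enough to sketch in a few lines of Ruby: sum the data as 16-bit words, fold any overflow back into the sum, and take the one’s complement. This is just an illustration, not a drop-in protocol implementation:

    # Rough sketch of the Internet checksum: a 16-bit one's-complement sum.
    def internet_checksum(bytes)
      bytes += [0] if bytes.length.odd?                     # pad to an even byte count
      sum = bytes.each_slice(2).sum { |hi, lo| (hi << 8) | lo }
      sum = (sum & 0xFFFF) + (sum >> 16) while sum > 0xFFFF # fold carries back in
      ~sum & 0xFFFF                                         # take the one's complement
    end

    puts internet_checksum("bits don't rot".bytes).to_s(16)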

Again: bits don’t rot. They can become corrupted, typically with the loss of the entire file (and more), but individual bits do not spontaneously change state from 0 to 1 or from 1 to 0.

Drucker’s Wikipedia page lists her positions and honors, including various prestigious digital humanities fellowships. Her faculty page at UCLA notes her expertise in a single word: Preservation.

The word “digital” refers to a mode of representation. In binary computing, it refers to representation as 0s and 1s. Whether those 0s and 1s are stored in an old-school punch card or in the cloud or on an M-DISC (the latter promises data storage lasting up to a thousand years), bits don’t rot. Someone with Drucker’s caliber of expertise in DH and preservation should know this.

But don’t take my word for it: the US Library of Congress maintains an entire website on digital preservation at digitalpreservation.gov/. Had Professor Drucker run a simple Google search for “bit rot” limited to the digitalpreservation.gov domain, she would have found numerous white papers and PDF’d presentation slides discounting the notion of “bit rot.” There’s this presentation by Cory Snavely, Library IT Core Services manager at the University of Michigan, who describes “bit rot” as a “non-issue.”

There is even a white paper authored by none other than Clay Shirky, who observes even more directly that “In Internet lore, [there is a] problem…called ‘bit rot,’ a humorous and notional condition in which digital data is said to decay with time.”

Humorous and notional. “Bit rot” is actually a joke, one used by software developers and programmers to blame old, no-longer-functioning software on a phenomenon that is pure fiction. The digital equivalent of “My dog ate it.”

Blog Post

Meet the new device, same as the old device

Here’s the opening line from the first email I saw this morning.

Using smart phones for word processing in the English classroom is becoming more and more common in the 21st Century.

Oh, god. Really?

Forget the unwieldy noun phrase. Forget also the tired tech-rhetorical appeal to what century we’re in (got it, thanks) and what’s “becoming more and more” (a nice turn on my favorite phrase to hate, our increasingly digital X).

Let’s focus instead on the perverse image of the classroom this email invites us to imagine. It’s a classroom filled with students equipped with smartphones. And of all the interesting, innovative things that those students could be doing with their devices, they are using them for word processing—a term which, if Wikipedia is to be believed, The New York Times labeled a buzz word. In 1974.

Read on...

Blog Post

Prioritizing Email: Focus, Next, Base

I’m drowning in email this semester. And it’s only the second week.

In his book The Laws of Simplicity, John Maeda describes the familiar Pareto Principle, also known as the 80/20 rule: “in any given bin of data, generally 80% can be managed at a lower priority and 20% requires the highest level,” adding that “Everything is important, but knowing where to start is the critical first step” (14).

This comes in the middle of his discussion of SLIP: Sort, Label, Integrate, Prioritize.

Following that paragraph is a photo illustration of Maeda’s own system of prioritization, including the labels Focus, Base, and Next. Focus being those things immediately at hand; Next being the (duh) next set of tasks; and Base being the great unwashed slop pile that thwarts all attempts at having free time for oneself.

I’ve decided to get rid of my usual email prioritization scheme (the stuff in my inbox, plus the purgatory of a mythical todo label) and give Focus, Base, and Next a try. This afternoon I’ve already gone through my remaining email in the inbox and the todo label, and sorted everything into the three new categories.

Pro Tip: I’m a Gmail user, but many email clients will sort labels that begin with hyphens to the top of any folder/label list. So my labels are actually ----focus, the alphabetically next ----next, and the three-hyphen and thus alphanumerically and conceptually lesser ---base.

Things already feel more manageable. But then—like a new shelving system or a new productivity app, isn’t that always the feeling? Let’s see if this lasts.

Blog Post

Why and how I put syllabi on GitHub

I’ve gotten some shout-outs on ProfHacker regarding posting syllabi on GitHub.

I want to take to my own blog to talk about my choices behind building Git repositories for each of my courses. I also want to offer a few different approaches instructors might take to using GitHub for sharing syllabi. And I’ll preview how I’m planning to use GitHub to run a course this fall (rather than just hosting a repository of course materials there). I plan to detail that process in a future post, for those instructors looking to do the same.

In his questions about the practice of posting syllabi to GitHub, Mark Sample claimed that GitHub only understands plain text, and that it’s a laborious task to convert one’s syllabi into plain-text form. That’s a somewhat incomplete description of GitHub, and of Git itself. And because of that, putting syllabi under Git control may not be all that laborious.

It’s more accurate to say that Git/GitHub best understands any format that exists as plain text. That includes, of course, the vanilla plaintext file, .txt, but also other forms including Markdown, HTML, and so on. (Pretty much any markup or programming language file is plain text; its file extension—.txt, .md, .html, etc.—only serves to cue software as to how to interpret the file. Making matters somewhat more complicated, Windows and Mac OS both hide file extensions for known file types in their respective file/folder views.) You can put Word documents and PDF files and other binary-format files under Git control, too. But what you lose are diffs: the changes that a particular commit represents relative to the previous commit, or between any two commits along a repository’s timeline of development. Those diffs help to confirm and illustrate the change as described in a commit message.

But I would argue that if your syllabi are not in some plaintext form, in a flat file (versus a database entry in WordPress or Blackboard or something), they really should be.

I can make that argument from my own experience. I came to rely on GitHub for my syllabi and course-website needs out of frustration with the wikis I had been using to run course websites. Every semester, I would create a new instance of WikkaWiki (which is a really nice and relatively lightweight wiki, if you’re truly in need of a wiki). But then would come the inevitable copying and pasting of materials over from previous semesters. Not to mention the problem of how to archive the old wiki (all of my past wiki-driven courses are currently inaccessible, for exactly that reason; and I’m too paranoid to just leave the things running in case an exploit should be discovered that compromises not just the wikis themselves, but the entire server that runs them).

So I considered what needs I had that wikis were filling, and whether there wasn’t some other way to address those needs while moving to something without the overhead and security issues of a full-on per-class wiki installation. I’m developing this list in retrospect, but basically:

  • Fast editing. As an instructor, it was nice that I was able to jump in and change something on the calendar or a project description at a moment’s notice, without all of the overhead of an SFTP client or whatever.
  • Open editing. It was equally great to have a system that students could edit themselves, minus things like the course policy statement (which I locked down with WikkaWiki’s access controls).
  • Versioning. I liked the version history of individual pages that WikkaWiki provided (although if my memory serves me correctly, the version history only went about ten revisions deep).

Git and GitHub meet all three of those needs (on top of my long-standing insistence on never keeping a course’s materials behind some private wall or Blackboard-like roach motel). Better still, I’d meet my needs without the overhead and risks of a database-driven system (anyone who’s run a WordPress installation has probably experienced the pharma hack and other ills). And best of all, I needed some kind of external motivation to continue improving my own skills with Git, and to expand into what I thought were some imaginative ways of using Git and GitHub.

So, here’s how I’ve approached posting course materials on GitHub, along with some new approaches I already intend to try out this fall:

Read on...

Blog Post

Beyond Wishing: What I Try to Teach My Dissertators

I’ve been amped up all afternoon after having read this blog post, “6 Things Your Dissertation Director Wishes You Knew,” on the Chronicle.

Although there are some nuggets of somewhat useful advice to be had in the post, there are two serious problems with its approach.

First, the title is presumptuous. The dissertation advisees of mine who are most likely to read this are already the most nervous and uncertain of the bunch: conscientious, and worried about taking up my time. So no, advisees of mine: I don’t wish that you knew these six things. You know my parameters and expectations, and I’m always willing to clarify them.

Second is the problem of “wishing,” which pairs nicely with mind-reading as a pedagogical strategy. If any piece of information is so vital that students absolutely must know it, then it deserves a better treatment than being the contents of a wish.

I don’t have anything I wish my advisees knew. I was the first person in my family to go to graduate school and I can say that 1) nobody is born knowing how to be a graduate student, let alone a dissertation writer, and 2) I remain grateful to this day for my own mentors’ explicit setting of boundaries and expectations in getting me through the program.

Instead of wishing things, I have those things that I need to teach my advisees. And I do my best to do just that (granted, sometimes they need gentle reminders, and granted, sometimes I could be better at it).

So what do I tell my advisees? What do I try to teach them? Here’s the main list. If you’re a capital-A academic, some of it may offend your sensibilities. Scandalize you. I’m a pragmatist, though, and I’ll be frank: most dissertations suck, and they aren’t worth the anxiety and effort it takes to make them “great”—especially when doing so delays graduation by months or years. There’s nothing great about that. But maybe you will find this more helpful than a Nike-like “Just do it” approach to writing the dissertation:

  1. It’s just a dissertation. All of this talk about “original contributions,” sometimes even “significant original contributions” either in lore or sometimes even in graduate handbooks just builds anxiety. It’s just a dissertation. It’s not your life’s work, and it’s certainly not your best life’s work. It’s not even your first work as a professional; it’s your last work as a student. It has to be your own, of course, in that sense of “original”—but you are not going to revolutionize your field of study with your dissertation.
  2. If the topic seems too big, that’s because it is. You will never hear a (reasonable) dissertation adviser say, “There just isn’t enough material to make this topic viable.” The number-one thing I spend time working on with my advisees is paring down and focusing. Again: it’s just a dissertation. You aren’t going to say everything there is to say about your topic, let alone about a stack of related sub-topics. Keep stripping down your topic until you can express it in a single commaless sentence. That becomes your guiding star for judging the reading and writing you’re doing.
  3. You don’t need to write your dissertation, or that chapter. You just need to write something toward your degree today. Most of the paralysis I feel in my own struggles to write, and that I detect in my own students, comes from the temptation to think in units that are bigger than the lived experience of writing: nobody really “writes” chapters, dissertations, or books. The only unit we really control in writing is time, the moment-by-moment composition of sentences, then paragraphs; be gentle enough with yourself to accept that some days are better than others.
  4. Share work early and often. This is a hard one. I still struggle to let things go in early form. I don’t want people to think I’m dumb, and the last thing dissertation advisees want is to be seen as dumb. But submitting early and often has a twofold benefit: as an adviser, I’m much more capable of quickly responding to smaller chunks than to entire chapters. And for my advisee, I can catch problems and encourage good work earlier on, before there’s an entire chapter (or more) to rework and rewrite—one that an advisee has already spent far too much time massaging and polishing.
  5. Mindless work is still work. Keep up with citations, formatting, all of that stuff. Some days, the writing just doesn’t come. Take the time to do something with your diss, even if it’s the thankless work of conforming with university style guides or APA or whichever. If you’re lucky, dispassionately going through the text looking for formatting problems will cause some bit to catch your eye, stir some thought—and then you’ll be writing again. Even if just for a moment. And even if you’re not so lucky, that work has to be done. You really don’t want the frustration of fighting with Microsoft Word as the final act before depositing your diss.
  6. A great diss, a good diss, and a passable diss all get you the same degree. I’d love for my students to write great dissertations, or even good dissertations. But that ultimately doesn’t matter to me. I want done dissertations. My own dissertation was not very good (it’s posted here). It was never destined to be a book; and frankly, once I made my final edits after my defense, that was it. Its only purpose was to springboard me into the research I’d do after leaving graduate school. What mattered wasn’t the diss as an artifact, but the diss as a bed for ideas that I could talk about in job letters and interviews and pursue, in different forms, as a faculty member. Put simply, it’s better to be done and to get on to your next project than to think you have to write something that’s ready to be a book—when you yourself are still an apprentice researcher. I have plenty of friends who did turn their dissertations into books, but I don’t think they had any easier a time with their books than I did writing mine from scratch.

Blog Post

Why don’t writers gravitate toward code?

That question has been a long-term puzzlement for me.

Although the last thing that the world needs is another definition for literacy, mine is fairly simple: Literacy is the manipulation of symbols. If we want to talk about mathematical literacy, computer literacy, or a traditional alphabetic literacy, I think that my definition holds up. “Manipulation” is a broad enough word to cover reception (reading) and invention (writing), as well as rearrangement (remixing, if you’d like).

But “manipulation” in my simple definition also implies artful, studied symbolic action (hat-tip to Kenneth Burke). The New Oxford American Dictionary on my Mac here adds the qualifier “typically in a skillful manner” in its definition of manipulate.

This semester I’m teaching a course called Humanizing Technology. One of my opening remarks to the class at its first meeting last Tuesday night was that the course’s title is kind of a lie: Technology, specifically the digital/computer technology that is the focus of the class, isn’t waiting for us to humanize it; it’s already a bit too humanized, in so far as technology reflects human imperfection, as all symbol systems do, as well as a human desire to control and perfect—or what Burke describes in his “Definition of Man” (which students are reading this week) as both being “goaded by the spirit of hierarchy” and “rotten with perfection.”

Symbolicity rules the terms of engagement with digital technologies. And digital technologies, no matter how magical, are both the product and the enabling force of specific symbol systems: computer languages.

Of all the challenges that I will be putting in front of students this semester, working through Bruce Tate’s Seven Languages in Seven Weeks: A Pragmatic Guide to Learning Programming Languages (7L7W for short) is the one that already has students most on edge (judging by the number of emails I’ve received this first week of class alone).

As I’ve reassured a number of students this week, 7L7W is not on the reading list because I think it will (or even can) make anyone a programmer; it’s there because I want to achieve two goals.

The first is to demystify programming languages, and the learning of them, by simple (over)exposure. Seven languages is a lot by any measure, especially spread out over seven weeks (more like ten, because of how the class is structured). Especially in a course offered in the humanities department. (For some students in the course, particularly my undergraduates coming in from the Information Technology and Management program, that demystification may not be so profound. Though I’m hoping the approach to learning that Tate encourages in the book will be.)

The second and, to my mind, more important goal is to help students come to see programming languages (as well as other symbol systems, from number systems like hex and octal right down to binary and ASCII and its modern-day Unicode supersets) as designed things. Just as it’s tempting to think that spoken and written languages came down directly from the gods, so too is it tempting to think that computer languages have their own origins in divinity, or in something outside of human invention and cooperation.
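
A few lines in irb make the point that these are just different notations riding on the same underlying values (the number 255 and the letter A below are arbitrary examples):

    255.to_s(2)   # => "11111111" (binary)
    255.to_s(8)   # => "377" (octal)
    255.to_s(16)  # => "ff" (hexadecimal)
    "A".ord       # => 65, the code point ASCII assigns to the letter A
    65.chr        # => "A"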

One of the features that sold me on the 7L7W book (I’d also considered Chris Pine’s excellent Learn to Program) was the interviews that Tate arranged with the creators or lead developers of the languages in the book. It’s one thing to slog through the syntax and application of, say, Ruby; it’s another to do so alongside the words of Ruby’s creator, Yukihiro “Matz” Matsumoto, who recalled:

Right after I started playing with computers, I got interested in programming languages. They are the means of programming but also enhancers for your mind…

The idea of moving from play to an interest in programming languages, I believe, is unusual. Matz further observed that “the primary motivation” to designing Ruby was “to amuse myself.”

And this is where I return to my original question: Why don’t writers gravitate toward code? Writers, and not necessarily even good ones, all share a certain love for how amusing written language is. Otherwise, we wouldn’t have puns, double entendres, and other forms of word play. We likewise wouldn’t have style, or at least a sense of it. And less obviously, we probably wouldn’t have writing, period, unless it were well funded (Boswell’s Life of Johnson is quotable here: “No man but a blockhead ever wrote, except for money.”)

There are few writers I can think of, though, who would find much amusing about HTML5 or Ruby. And that is unfortunate, especially when there is so little to lose in being exposed (as my students will be) to computer languages. I’m not crazy enough to believe that all writers will become programmers and developers. But some might. And quite possibly to good effect. Although at first glance a computer language like Ruby is lacking in the flexibility and ambiguity of a natural human language, it is nevertheless full of subtlety and elegance—and has a self-consciousness and even humor about itself that is absent in natural human language (minus maybe a fake one like Pig Latin).
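
To give a small taste of what I mean (a trivial example of my own, not something from Tate’s book), even a line or two of Ruby reads almost like English:

    # Ruby reads almost like a sentence, and is happy to be playful about it.
    3.times { puts "Writers, meet code." }

    # Predicate methods even end in a question mark, as though the language
    # itself were asking politely.
    puts "See? Not so scary." unless "code".empty?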

Put another way, the distance between writing and the digital-material conditions and technologies that support writing should be of far more concern than past distances between, say, writers and typists, or scribes and makers of parchment or stone tablets.

Why? Because the digital-material conditions that make Facebook or Tumblr or even Microsoft Word possible are grounded in symbolicity that, like traditional alphabetic writing or even media-based writing (images, film), has a certain grounding in language itself. There is a continuity between communicating through digital technologies and programming them, despite the assumption that writers/communicators and other lute-playing humanistic wood nymphs comprise one camp, and that cold, calculating, logical (a- or anti-humanistic) programmers comprise another. It’s much more complicated than that. And I still don’t have an answer for my question.

Quote

In the Words of Steve Jobs:

I think great artists and great engineers are similar, in that they both have a desire to express themselves. In fact some of the best people working on the original Mac were poets and musicians on the side. In the seventies computers became a way for people to express their creativity. Great artists like Leonardo da Vinci and Michelangelo were also great at science. Michelangelo knew a lot about how to quarry stone, not just how to be a sculptor.

Blog Post

The Anonymous Review as Confessional

One of the wickedly fun things about anonymous, academic peer review is what I have come to think of as the reviewer confessional.

More than a few times I’ve submitted conference proposals or academic articles that somehow are taken very personally by the nameless reviewer on the other end, who then makes a sort of confessional digression from the actual review of my work.

Often this looks like someone taking something way too personally. For one of my articles, I remember something to the effect of, “Well, I would never personally want to learn how to write code…” Which is strange to me. When I review others’ research, I’d never write “Well, I would never personally want to…” followed by some component of the research, or even some ancillary detail. What difference does it make? And yet still—that my work would be taken so personally, to the point that anonymous reviewers feel the need to defend themselves, or to say that my research is good for everyone else but just not applicable to the reviewer him/herself: that’s fascinating.

So I suppose I wasn’t surprised when this happened with a workshop I proposed for Computers and Writing 2012 that, like most of the things I propose for conferences, is a bit of a stretch. Titled “End-to-End Agile Web Application Development from Basically Nothing,” it’s a day-long workshop that hits both the front- and back-end of web application development in a whirlwind introduction to everything from version control to HTML5 to Node.js.

As I rhetorically (sort of) asked in the body of the proposal, “Is this a tall order for a day’s work(shop)? Absolutely. But the goal here is to provide participants with a full, end-to-end overview of the art of web application development, and make, by demonstration and example, the argument that we need to invite more of our computers and writing colleagues to work in these areas.”

Anyway, on to the moment of confession from one of the reviewers. He/she remarked, “Personally, I would not be best served from this kind of workshop since I do not really have the freedom to use any CMS I want or house applications or programs on my office computer, but I also believe that many in the field can benefit from this type of workshop.”

I want to weep for this reviewer. First, because the reviewer has so little control over the computing resources that, presumably, surround his/her teaching and research. That’s just sad. But second, I weep because of the defeatism here. In that second clause, it’s other people in the field who can benefit. Not the reviewer, who (presumably) cannot possibly be served by (meaning, what? learn something?) this workshop because of a heavy-handed set of campus IT policies, or something.

Perhaps I’m reading way too much into this one-off confessional quote (the reviewer did recommend acceptance of the workshop, and indeed the workshop was officially accepted), but the implication here—I don’t control my own stuff, I can’t make my own choices and so therefore, I don’t have to learn this, but perhaps others do—is one that I find really troubling. And it’s also the kind of statement that perhaps only can be aired under the cloak of anonymity, particularly in an area like computers and writing.

Quote

In the Words of Me:

No design idea is ever so great that accessibility shouldn’t be a primary factor in the design itself.

Blog Post

That you do, versus how you do

My field is rhetoric and writing, and particularly the digital side of that field.

As I’m thinking about how to talk about digital craft, it occurs to me (obvious though it might be to others) that when it comes to making digital things, how and why you do them is quite different from the mere fact that you do them.

There is, for example, no shortage of people who create websites within rhetoric and writing. Or who assign website creation to their students. Yet that those websites are being made is really no longer interesting. Or even important. Can those makers talk about the how and the why, in a way that gets at the materiality of what they’ve made? (E.g., specific choices made at the level of source code, versus in a WYSIWYG editor…which is in a lot of ways a technology that lets you create rudimentary websites with a pre/overdetermined how realized through arbitrary menus and dialog boxes).

Blog Post

Another New Blog

I really want to start blogging again. Truly.

So, now I’m set up with Tumblr (still using someone else’s template, but that’ll change), which my blog.karlstolley.com subdomain now points to.