Operation: Read Everything By Philip K. Dick in Chronological Order

I’ve had Radio Free Albemuth sitting on my nightstand for a while now. I went to pick it up to read it recently, when I had the harebrained idea that I should instead work my way up to it, by reading everything that Philip K. Dick wrote in chronological order. Now, I’ve read a pretty fair bit of PKD’s work already, and not in any kind of sensible order. But I’ve done this chronological thing before: when our first child was born (and I was spending a lot of time sitting on the sofa, being held down by a sleeping infant) I started reading all of the Spenser novels by Robert B. Parker in publication order. It took me about 3 years to finish… or actually maybe 5, since his final novel came out in 2011. It took 3 years to catch up with the novels then currently published, anyway. (And if you’re interested, the best one is A Catskill Eagle, IMO.) So anyway, I tweeted about this PKD-in-chronological-order idea, got some amusing feedback, and then of course felt like I’d committed myself. So here, for the record, are the rules of engagement that I’ve decided on… after-the-fact, as I’ve been going along, but that I’ve decided to stick to from here on out.

  1. I will read novels in the order in which they were written, not the order in which they were published. Unfortunately, this started me off with Gather Yourselves Together (written 1950, published 1994, finished it last week) and Voices from the Street (written 1952, published 2007, reading it now), neither of which is very good. So, not an auspicious beginning. I’m really looking forward to PKD’s first novel-length foray into actual science fiction, Solar Lottery.
  2. I will read short stories in sets defined by the years in which they were written. I haven’t been able to find a source that tells me in what precise order short stories were written… and maybe no one knows. So I’ll just lump all short stories written in a particular year together, and read them in some arbitrary order. Probably I’ll do it alphabetically, just because that’s how they’re listed on the Philip K. Dick bibliography page on Wikipedia.
  3. I will read the novels written in a particular year first, followed by the short stories written in that year. Why novels then short stories, in that order? It was a completely arbitrary decision.
  4. I will use Wikipedia as the authoritative source for the order in which I should read works. I wanted to use PKD’s official site, but unfortunately that site lists his novels in order of publication date, not when they were written.
  5. I will not read the entire PKD corpus without break. First of all, I have too many other books on my nightstand. Second, I think reading nothing but PKD for, how long would it take, a year? more? really would make me insane. So this project may take as long or longer than the Spenser project. Don’t hold your breath, beloved audience.

It’s going to be a long time before I allow myself to read Radio Free Albemuth (written 1976).

Update: I stand corrected. It is known in what precise order short stories were written. I will now use this source (replacing Wikipedia) as the authoritative source for the order in which I should read works: it’s probably less reliable than Only Apparently Real, but it’s more complete. It’s also clear that I’ve skipped a few works from the very early days of PKD’s career, which I now need to double back to, before I move on.


Parsing WordPress URLs for fun and profit

I’m writing this post at Yvonne’s urging, as she suggested that this might be of broader interest than just me feeling pleased with myself and bragging to her.

What monumental achievement have I achieved? Only this: I’ve just made my grading easier, by figuring out how to parse WordPress URLs.

Oh BTW, by way of background: I use WordPress as my courseware platform, almost exclusively. About the only thing I don’t do in WP is give grades: for that I use Sakai, and provide a link on the course WP site to the course Sakai site. I’d like to use WP for grades, and in fact the admin of the campus WordPress instance and I spent some time last year trying to get KB Gradebook working, but we were never able to resolve a server-side permissions problem. But that’s neither here nor there. Point is: WordPress = course platform.

I have an assignment in my Digital Libraries course that I call Environmental Scanning: students are required to post to the course WP site at least once a week, on any topic having to do with DLs. This is actually kind of a gimme assignment, since once you start paying attention, almost everything in the ILS news (and half of the stuff in the mainstream news) has to do with DLs (or at least with collections of digital stuff, and the management thereof). But the purpose of the assignment is to get everyone contributing interesting stuff to the Great Conversation that is the course, and over the past several semesters in which I’ve used this assignment, I’ve been pleased by how well it works. I can only pay attention to so many news sources, blogs, Twitter feeds, etc. Distributing the effort brings in so much more interesting stuff than I could find on my own. In fact, if I have a criticism of this assignment, it’s that it works too well: students’ posts come in so thick and fast that I can barely keep up. And if I can barely keep up, I’m sure that many students are just ignoring many posts, and thus missing out on some interesting stuff. I’m thinking of changing the assignment to require bi-weekly posts instead of weekly. But anyway…

Now that it’s the end of the semester and it’s all grading all the time, I’m faced with the problem of having to figure out whether each of 24 students posted an Environmental Scan post at least once per week. (Yes, I should have been keeping up with this weekly all semester, and mostly I was… but I still need to check for the past few weeks, when I’ve let it slip a bit. Plus because I’m compulsive I feel the need to double-check the whole semester.) Of course the WP Dashboard gives you a list of users and the number of posts each has made. So I could just go through the list one student at a time, and count posts week by week. But really, let’s face it, that’s a total pain in the tuchus.

So I thought: There must be a better way. And so I decided to try to figure out how to parse WordPress URLs, to see if I could figure out a way to identify: posts by a specific student, within a specific week, tagged with a specific tag. And in fact there are two ways to do this: from the WP front end, and via the Dashboard. Like this:

http://{blog}/author/{username}/?tag={tag}&y={year}&w={week_number}

https://{blog}/wp-admin/edit.php?tag={tag}&author={author_number}&y={year}&w={week_number}

To provide a specific example: My DL course site is inls740.web.unc.edu (INLS 740 is the course number — not a very creative site name, but hey, at least it’s unique). Say I want all the posts by user jpom (me), made in the week of February 6 (the week of the iConference), with the tag iConference. Here are the URLs for that query:

http://inls740.web.unc.edu/author/jpom/?tag=iconference&y=2012&w=6

https://inls740.web.unc.edu/wp-admin/edit.php?author=308&tag=iconference&y=2012&w=6

Though of course you won’t be able to resolve that second URL if you’re not a member of the course site, since you have to be logged into web.unc to get access to the Dashboard.

That aside, let me break those URLs down. My username is jpom, & username is used on the front end. In the Dashboard, author number is used instead, & I’m number 308. (I don’t know why it’s different on the front end & the Dashboard, but it is.) The tag in question is iConference. Year is 2012. Week is 6; that is, the 6th week of the year. (In the Settings, I have Week Starts On set to Monday; I haven’t experimented to see if w= changes according to this setting.)

As an aside, you can specify month using m=#, for something like this: …&y=2012&m=2 or more simply …&m=201202. To use categories instead of tags: replace tag={tag} with category_name={category}.

So now I just insert the usernames of my students, the tag environmental-scanning, and the appropriate week of the semester in the URL and voilà. Grading just got that little bit easier.
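And for what it’s worth, generating the whole batch of URLs is trivially scriptable. Here’s a minimal sketch in Python: the site and tag are the real ones from this post, but the usernames and week numbers are hypothetical stand-ins for your own roster and semester.

# A minimal sketch for generating the grading URLs described above.
# The site and tag are from this post; the usernames and weeks are
# hypothetical stand-ins for a real roster and semester.
site = "inls740.web.unc.edu"
tag = "environmental-scanning"
usernames = ["student1", "student2", "student3"]  # your roster here
weeks = range(2, 17)  # week-of-year numbers spanning the semester

for user in usernames:
    for week in weeks:
        print("http://{0}/author/{1}/?tag={2}&y=2012&w={3}".format(
            site, user, tag, week))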


Udacity Certificate

Bear with me for a moment while I’m insufferable. Because…

Highest Distinction, baby! Hellz yeah!

So ok, seriously now. These are the levels of certificates for CS101, as articulated in the announcement email:

Certificate of Completion: you completed the class, and demonstrated that by either getting at least one question correct on the final exam, or solving at least 3 questions correctly on Homework 6.

Certificate of Accomplishment: you solved at least 3 questions correctly on the Final.

Certificate of Accomplishment with High Distinction: you solved at least 9 questions correctly on the Final.

Certificate of Accomplishment with Highest Distinction: you solved all 11 questions (including the 3 starred questions) correctly on the final, or you correctly answered over 80% of all the homework questions and got at least 6 questions correct on the final.

I did not get all 11 questions correct on the final; in fact, I got 88% correct (7 out of 8 questions) on the main part of the final, and 67% (2 of 3) on the starred questions. (The starred questions were the extra-challenging ones. Starred questions were a standard feature of the homeworks throughout the course.) But I did average an 85% on the homeworks. So there you go.

Ok, I was feeling all insufferably smug there for a while, until I actually ran those numbers, and realized that I’m a solid B+ student. Oh well.

Anyway… I don’t have any of my actual degrees hanging on my office wall. But I’m sure as hell going to hang the Udacity certificate.


Redesigning the Reference course

Please spread this post far and wide (I ask of the 4 people who are reading this)… I’d like to get feedback on this from as many corners as possible.

I’m one of the two faculty instructors for INLS 501, the Reference course in the School of Information and Library Science. That’s not to say that only the two of us ever teach the course; the School has several other instructors, but they’re all adjuncts: PhD students and librarians from the various campus libraries. I say that not to be disparaging to adjuncts (quite the opposite: I’d like to see more professional librarians teaching courses in library school), but just in the interest of clarity. And in fact, I haven’t taught 501 in a long time: since Spring 2010, to be precise.

But now it’s caught up with me: I’m on the slate to teach 501 in the Fall 2012 semester. Spring 2010 is long enough ago that I need to rethink my approach to the course. And anyway, librarianship in general and reference in particular are changing so much that what was a timely and relevant course 2 years ago would look pretty stale now. I mean, when I taught 501 last, social Q&A was a big deal, and I had an assignment to match. Now, Google has killed Aardvark, and I’m not sure social Q&A is as interesting from a reference standpoint as we (or maybe just I) thought it would be. That’s just an example; there are many others. I’m going to restrain myself from trying to list them, because that’s kind of the point here: I want your input on what’s interesting from a reference standpoint; I don’t want to potentially bias your input by telling you what I think is interesting.

So anyway, here’s the point of this post: in a few months I’ll be teaching Reference for the first time in a few years. I’ve thought for a long time that INLS 501, and Reference courses in general, need dramatic revamping… and if that was true two years ago, it’s even more so now. (Whether Reference should any longer be a required core course in LS programs is another issue entirely, one about which I have strong opinions, but which I will not address here. Another rant for another time.) So now’s my big chance: I have the summer to completely redesign 501. The problem is, I’m not on the front lines of reference and other customer-facing services in a library these days. So, gentle readers, I need your help.

Do you work on a reference desk as any part of your job? Do you do any form of reference-like work? E.g., liaison librarianship, research consultations, etc. I’m sure there are other reference-like things I’m not thinking of… which is, of course, the point here: I need help in identifying what the State of the Art is for reference and reference-like activities.

I want to teach a reference course that will prepare students to go out and do that kind of work in real environments, and be aware of the issues and trends that will face them over, say, the next 5 years. What I don’t want is to teach the same course that I’ve taught before. In my own defense, I make some changes to every course I teach every time I teach it… but in the case of the reference course, that feels like tweaking around the edges (not to say rearranging deck chairs). My syllabus for 501 is still fundamentally the same framework as the course I’ve been teaching for years. I want to break the frame, really seriously re-envision what a reference course can be and should be. And for that I need your help.

Speaking of my syllabus, here it is. As it says right at the top: Please note that this syllabus is under development. In particular, I do not plan to use all of those assignments. I’m considering using each of those assignments, or some variation on them, but not all of them. I think that would be too much, both for the students (who are, after all, taking more courses than just mine) and for me (who, I admit it, tend to be slow in evaluating and grading student work). I’m also thinking of dropping Bopp & Smith as the course text… maybe using selected chapters, but not requiring that students drop almost $50 for it.

Also, here’s the course schedule: a link to the Google Calendar (back it up to Spring 2010, remember) and to a PDF export of same. The readings and other notes for each class session are in the Description field. I apologize for the poor readability of that field in the PDF.

In terms of the structure of the course, here are some things I’m thinking about. I’m fond of project-based courses: witness my Digital Libraries and Library Assessment courses. I’m thinking of making Reference project-based as well, though maybe not a semester-long über-project like those two other courses, but smaller projects, like organizing a street reference event. Thoughts? If reference can’t be taught by apprenticeship (which, honestly, I believe would be the best approach), perhaps an active learning / action learning approach would be second-best.

Further, let’s talk about case-based education. As usual, Kevin Smith totally nails it: in his recent LJ article he makes a case for law school-like case-based education in library school. I’ve had that thought myself, but have always been stymied by a dramatic lack of existing cases to use, which means I’d have to write them all myself. Thoughts on what would make good cases for a reference course? Anyone want to write one?

So: Lay it on me. Do your worst. Topics, order of topics, assignments, texts, basic structure, you name it. I want feedback, suggestions, ideas, proposals on all of it. I’ll acknowledge all input on the syllabus. I’m also thinking of contributing my syllabus to GitHub.


Another post about Udacity CS101, post-exam but pre-final grade

It’s been several weeks since I posted anything about my Udacity CS101 course… mostly because over the past several weeks I’ve been spending every evening working on the course and not blogging about it. But now I’ve completed all 7 units and the final exam; the grading robots are hard at work, and I’m waiting on my exam grade.

While I haven’t been blogging about CS101, I have been making notes. So now let me try to stitch my notes together into a semi-coherent post.

First of all, there’s been a hell of a lot written about Udacity, MITx, and disruptive innovation in higher ed generally, both in the higher ed press (The Chronicle, Inside Higher Ed, etc.) and the mainstream press (Wired, the NY Times, etc.). I won’t even try to sum it all up. But if you’re in higher ed and you’re not keeping up on these developments, well, all I can say is, you’re part of the problem.

There are two pieces that I will mention, however: One, Yvonne wrote a post on the CIT blog about her experience in MITx 6.002x: Circuits & Electronics, MITx: A view from the inside. Two, for my money, the most interesting piece on disruptive innovation in higher ed that I’ve read so far is Kevin Carey’s piece in The New Republic, The Higher Education Monopoly is Crumbling As We Speak.

Now on to CS101.

It was in about week 5 that I started to feel like a real student: the self-assessments were getting difficult, and I was actually concerned about getting the correct answers on the homeworks. Even though I enrolled in the course largely to experience their instructional design, I very quickly came to actually care about doing well in the course. I was also in it for the Python, of course, and so my geek pride demanded that I do well on that front. And, on that front, I have learned a good deal about Python. Evans, the instructor, kept it as simple as possible, and as a student I appreciate that, and as an instructor I understand why he did so. But even so, we still learned a lot of Python, even if we only scratched the surface.

And now I’m a total Python convert. For the homeworks we often had to write code, and of course for the final exam. I tried as much as possible to use only the Python that we had learned in the course. Even though it was clear that not all students were doing that, as evidenced on the discussion boards, and even according to a comment that Evans and his TA Peter made in one of the office hours videos. But at certain points I thought it would be easier to just “cheat” and look up Python functions that we hadn’t learned in the course and use those. But honestly, I don’t really consider that cheating: if (part of) the point of the course was to learn Python, then teaching myself more Python surely isn’t cheating. But I’m telling you this as a way of explaining why I’m a Python convert: because every time I thought, “hm, maybe there’s a built-in function that will allow me to do X,” I went looking, and lo and behold, there is such a function! I read a comment in some discussion board post to the effect that “Python makes the difficult simple, the impossible feasible.” I totally buy it.
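To give a flavor of what I mean, here’s an example of my own (not one from the course): counting word frequencies the build-it-yourself way, using only basic constructs like those from early CS101, versus with one of those lo-and-behold built-ins.

# My own illustration, not an example from the course.
words = "the quick brown fox jumps over the lazy dog the end".split()

# The build-it-yourself way, using only basic constructs:
counts = {}
for word in words:
    if word in counts:
        counts[word] = counts[word] + 1
    else:
        counts[word] = 1
print(counts)

# The "lo and behold, there is such a function" way:
from collections import Counter
print(Counter(words))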

And in that vein… I found myself really enjoying programming again. It’s been yonks since I’ve done any programming of significance, anything more complicated than a one-line Unix script. I mean, who has the time? But I was really enjoying spending significant amounts of time in the evenings getting my head into coding again. I’d almost forgotten what that headspace felt like — where I used to live so much of the time, in a previous previous life — where I could lose time coding, and even after I stopped, some part of my mind was always churning on the algorithm I was working on. I really enjoy that, and now I think I need to find reasons to do more development as part of my research, so I can have some of that back.

But anyway, on to the instructional design.

Each Unit has an associated set of notes, which is easily accessible throughout the entire Unit: below the video / Python interpreter block (they occupy the same real estate in the browser), there are 2 tabs, Instructor Comments and Supplementary Material. A link to the unit notes is always in the supplementary material. Anyway, I found it incredibly helpful to have these to refer to. For a while I had the notes from Units 1, 2, & 3 in 3 different browser tabs, just for reference. In Unit 4, I think, they added a Python Reference document, which contains all of the functions that we’d covered up to that point. With each new Unit, they updated that Reference document. I kept that document open all the time, and that was incredibly helpful.

As the course progressed, I found that I backed up videos more. In the first few units, I watched videos once and moved on. But as the content got more challenging for me, I found that I had to re-watch bits. I rarely re-watched a whole video, though I did do that from time to time, but I frequently re-watched parts of videos. I found that these re-watchings fell into 2 categories:

  1. Backing up a video to make sure I understood some point, because I tuned out momentarily or didn’t fully understand something; or
  2. While I was working on a quiz or a Python exercise, re-watching part of some previous video that explained some salient point for completing the assessment.

I also found myself pausing videos to mentally replay steps to make sure I got something, and then backing the video up to watch that bit again.

Anyway, not that I was skeptical about the pedagogical value of short instructional videos before, but I’m really sold on it now. I’ve read in several places that one of the most useful things about video instructional content is the ability to back up and replay something. It’s one thing to read that as an instructor; it’s quite another to experience it as a student.

On the subject of videos, however, I do have one issue. The video solutions to the homeworks were posted after their due date, which was usually a day or 3 after I finished them. By which time I’d usually started on the next unit. So the details of what I was thinking, and how I solved the homework problems from the previous unit were no longer fresh in my mind. So I found it a bit disorienting to go back and look at the homework video solutions. I found myself wishing they were available immediately.

Finally, I found that as the units progressed & got harder, I was looking at the discussion boards more frequently for help on the self-assessments and homework. I kept up with the course, but there were students who were clearly devoting way more time to the course than I was: by the time I got to the discussion boards for any given issue, there were always several threads on the topic. Yvonne is finding the same thing for her MITx course. On both the Udacity & MITx discussion boards, posters can earn badges for various things (Good Question, Good Answer, Civic Duty, Editor, etc.), which I think is a nice touch. To be honest, I was a complete free rider on the discussion boards: I read threads, but I posted no questions or answers. Maybe if I had more time to devote to the course I would have participated; I don’t know. I treated the discussion boards more or less the same way I treated the many sites on the Intertubes that contain Python Q&A and code snippets, and the Python Software Foundation’s documentation: as a resource to fulfill my information needs. I suppose this is very selfish of me, and very not community-minded (Wikipedian? Crowdsource-ish?). But there it is.

As an instructor, I need to think about whether it’s important to try to draw out lurkers who could potentially be valuable contributors (I suppose I’m flattering myself that I would have been a valuable contributor). In a course on the scale of Udacity, with thousands of students, you only really need a smallish percentage of those students to actively participate in discussions for the boards to be useful to us lurkers. And I suppose, like everything else in human information-related behavior, there’s always going to be your basic long-tailed distribution of participation. Of course, to get students to do anything, you make it a graded assignment… not to be cynical, but it’s true. That was of course not the model for grading in CS101, and it might not be feasible on that scale anyway. (And there’s another question: is there a way to semi-automate evaluation of contributions to online discussions?) But for my teaching online, for small (compared to Udacity and MITx) courses, it is an issue. I suppose the question is: is it important to try to draw out lurkers? At what course size is it important, at what course size does it become unnecessary?


More thoughts on Udacity CS101, upon completing Unit 1

I wrote my last post when I was about a third of the way through Unit 1 in CS101, and before the homework for Unit 1 was posted (which happened last Thursday morning). Now that I’ve finished Unit 1, both content and homework, I thought it was a good time to take stock & post again.

First, I have an issue. The instructor, David Evans, made his videos using some technology I’m not familiar with, but it’s clearly some kind of smartboard- or tablet-and-smartpen combination. The way it appears in the video is like this: he writes on a white surface using this groovy pen, and what he writes shows up in various colors, just like on a smartboard. The weird part is that what he writes seems to be floating above the pen and his hand, rather than appearing below it. That is, not above in the Y axis on the plane of the screen, but hovering above on the Z axis, so that the writing appears to be closer to the viewer’s eye than the pen. It’s kind of distracting. Actually I’ve more or less gotten used to it, and I can tune it out now, but it was very distracting for the first few videos.

This of course runs up against Rule #1 for the use of technology in teaching: never let the technology overshadow the actual pedagogical purpose. And I wouldn’t go so far as to say that this tech overshadows the content. But it is distracting.

And this raises a question for me, for the videos I make for my courses: handwriting or Powerpoint? Evans has so far made all the videos for CS101 using his own handwriting. (Except the bits in the Python interpreter, of course.) The videos that I’ve seen from Sebastian Thrun’s AI course last semester were all done with handwriting. Khan Academy videos are all Sal Khan’s handwriting. (At least I assume it’s his handwriting.) Obviously there’s a trend here: making educational videos using handwriting. And I can see the advantage of that: it makes it feel personal, like the instructor is writing just for you, like you went to David Evans’ office hours and he’s jotting on his whiteboard while you talk. I get that.

But here’s my problem: my handwriting sucks, and these videos are edited heavily. Why edited? To fast-forward in time, so you don’t have to watch Evans form every single letter. Which brings me to Powerpoint. I made a few videos for my Digital Library course using Powerpoint: I wrote a script and created a slide deck to accompany it, more or less in tandem (the script usually slightly preceded the slides), then I used Powerpoint’s Record Slide Show feature to record the timing of the slides and my narration, and exported that to YouTube. It was super-easy. Powerpoint is definitely the low-bar way of creating videos. And for getting time-consuming stuff done, I tend to prefer low-bar. Good enough is good enough. But I do fear that Calibri is a poor alternative to handwriting. Does using a font make the video feel less personal? Or can the voiceover compensate for a lack of handwriting? If anyone is reading this, I’d welcome your feedback on this weighty issue.

My second issue is that some of the videos feel slightly pedantic, especially on the quiz reviews. Evans proceeds through each quiz option in somewhat excruciating detail. Which is, of course, better than the opposite. And I understand why he does it this way: these materials are being prepared for 70,000 (or more) students. As in any course, the instructor has to make things as clear as possible, which often means going slower than more advanced students would prefer. I figure I’ll probably mind this less as the content gets more advanced, and I stop being an advanced student. And, I have to think, this is probably what half the students in my courses feel like a good bit of the time. I need to be careful of that in my classroom teaching. But for making videos, I think Evans makes the right call: better to err on the side of being slightly pedantic than to lose your audience within the span of a 3 minute video.

That’s one gripe and one sort-of gripe. Now the good stuff: I’m finding the course so compelling that I want to work on it all the time. I had a hard time this week stopping once I’d started. I even found myself wanting to work on the course during the day while I was at work, and the one time I actually succumbed to that temptation, it took a student walking into my office to make me veer off.

The downside of the course being so very compelling is that I found myself rushing through it: I would watch video after video, and spend some significant time working on the quizzes and Python exercises. On the one hand, this is good, because, well, education should be compelling. But on the other hand, by Wednesday I found myself concerned that I’d finish Unit 1 too quickly and lose the thread before Unit 2 was posted. In the end, that didn’t happen, as I should have known it would not: the homework was posted on Thursday, and that took me several hours, plus the simple fact of having a real job and a family slowed me down sufficiently.

But this did make me realize that this is no different than “traditional” classroom-based courses, where there are sometimes several days between class sessions. I teach a Monday/Wednesday course this semester, so my students have a 4 day gap between class sessions. Hopefully they’re doing work for the course during those 4 days, but I don’t really have a way to force that to happen. In an asynchronous course like CS101, there’s no way to force it either. I’m a big believer in project-based courses (both of the courses I’m teaching this semester are based around semester-long projects), so I have to assume that my students (most of them, anyway) have their heads in the game during non-class days, otherwise they’d never get their project deliverables finished by the due dates. But CS101 has made me appreciate the value of homework and other small self-assessments, which I tend not to use much in my courses. Something to reconsider for next semester.

And on the subject of self-assessments… the quizzes and Python exercises. These are automatically evaluated. It’s not clear what this looks like on the back end, though I imagine they’re fairly simple algorithms. It seems like it would be quite easy to automate evaluation of a multiple choice quiz. As for the Python, if the value of such-and-such variable (and Evans tells us what to name the important variable) equals the correct value, then the exercise is evaluated as correct. It’s not clear to me, at this stage, if the code that gets to that point is evaluated.
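I can only speculate about what that looks like on the back end, but the check-the-named-variable approach is simple enough to sketch. This is my guess at the shape of it, not Udacity’s actual code, and the exercise is invented:

# My guess at the shape of variable-checking autograding -- speculation,
# not Udacity's actual implementation.
def grade(student_code, variable_name, expected_value):
    """Run the student's code, then check the value of a named variable."""
    namespace = {}
    try:
        exec(student_code, namespace)
    except Exception:
        return False  # code that doesn't run can't be correct
    return namespace.get(variable_name) == expected_value

# Hypothetical exercise: set the variable 'speed_of_light' correctly.
submission = "speed_of_light = 299792458"
print(grade(submission, "speed_of_light", 299792458))  # True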

But my point is, it’s difficult to imagine assignments in Information Science that could be automatically evaluated. I know of some instructors in this field who use multiple choice exams in their courses, though I’m not one of them, and in fact I have a hard time even imagining what a good multiple choice question would look like in the courses I teach. Though maybe that’s a failure of imagination on my part. I can think of one or two assignments that I could use in my courses that could be automatically evaluated, and in fact I plan to set up one such assignment for the next time I teach my Digital Libraries course. (Assignment: Set up an OAI-compliant metadata repository. I’d have to create a harvester. If the harvester successfully harvests the student’s metadata records, then the student has successfully completed the assignment.) But the point is, I can only think of one or two assignments that could be automatically evaluated that I can use in my courses. I’ll probably think of more as time goes on. But I have a hard time imagining that I’ll ever be able to come up with enough such assignments to cover a whole course.
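For what it’s worth, here’s a rough sketch of what that harvester check might look like. The student repository URL is hypothetical, and a real grading script would want to be much more forgiving, but the OAI-PMH request itself really is this simple:

# A rough sketch of the harvester check described above. The student
# repository URL is hypothetical.
import urllib.request
import xml.etree.ElementTree as ET

OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"

def count_records(base_url):
    """Harvest a repository and return the number of records found."""
    url = base_url + "?verb=ListRecords&metadataPrefix=oai_dc"
    try:
        with urllib.request.urlopen(url, timeout=30) as response:
            tree = ET.parse(response)
    except Exception:
        return 0  # no response, or not valid OAI-PMH XML
    return len(tree.getroot().findall(".//" + OAI_NS + "record"))

# Hypothetical student repository:
if count_records("http://example.edu/student-repo/oai") > 0:
    print("Assignment complete: repository harvested successfully.")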

A lack of assignments that can be automatically evaluated means that my courses (any courses in ILS? any courses in the social sciences?) cannot scale to 70,000 or 160,000 students, or whatever. Without the ability to automate evaluation of all, and I mean all assignments, that scalability is just impossible. Because no automation = a human grading 70,000 assignments. And by “a human” I mean “me.” Now, I don’t expect that 70,000 students are suddenly going to rush out and take my Digital Libraries course online. (I should be so lucky.) But if I make my videos available, & I make whatever assessments I create available, there’s no reason why a student not enrolled in SILS shouldn’t be able to use them. And am I going to evaluate that student’s performance? No I am not. And so I feel that my teaching — and maybe the entire field of ILS, maybe the social sciences generally — hits a wall fairly quickly, in terms of the scalability of online courses. Some courses can probably be automated better than others. But fundamentally, there will probably always be some courses for which evaluation cannot be fully automated (or automated at all?). And so I feel a bit stuck. Again, maybe this is a failure of imagination on my part. If anyone is reading this, I’d like to hear your thoughts on this.


My experience so far with Udacity CS101

I’m really fascinated by the (fairly) recent boom in new models of online education: Khan Academy, MITx, Udacity, etc. And in fact I registered for, and have started Udacity’s CS 101, Building a Search Engine. Actually I already know how to build a simple search engine, though I’m sure I’ll learn more. Really I’m registered to see how they manage a course on that scale, how these models can impact my teaching, the future (or lack thereof) of the higher ed system, etc. I’m sure lots of others are registered for the same reason. Phil Edwards, for example, is registered for the first MITx course, 6.002x, Circuits and Electronics, for similar reasons.

I plan to report back on my experience with Udacity’s CS 101 here. I have not given myself a set of ground rules like Phil has… instead, I’ll just write. I’ve found that if I give myself too much structure, or try to write lengthy posts, I never write anything on this blog. So I’ll just write as inspiration strikes. So here we go.

I’m about halfway through Unit 1 of CS 101. (For those of you enrolled, I just watched the video on Grace Hopper and took the first variables quiz.) And so far I’m not disappointed. Far from it, I’m fascinated and compelled. Here are my thoughts so far.

One. I haven’t learned much that’s new to me so far, except that Grace Hopper carried around nanosticks, and a bit about Python syntax. I learned to write Hello World-style algorithms at age 11, and CS 101 is starting at about that same level. Which is good, since the goal of the course is to introduce CS concepts, and, though the instructor David Evans doesn’t say it in so many words, computational thinking. And that’s fine; I didn’t enroll in this course because it’s all new material to me… I enrolled in it because it isn’t all new to me, and I can therefore pay attention to the mechanisms of the course more closely. I am learning about Python, though, which is a language I’ve thought for a while that I should learn. So that’s good.

Two. The longest video so far has been 5 minutes. Most are 1-3 minutes. That feels about right to me… the 4 & 5 minute-long videos feel long to me. I know that sounds strange, but for whatever reason, some sort of time dilation happens when you watch videos online. Less is more. This is an especially important observation for me today, as I’ve given the students in my Digital Libraries course this video to watch as today’s “reading”… a video that weighs in at a whopping 45 minutes. When I mentioned the 1-3 minutes thing to Yvonne, her comment was that most faculty would say that they can’t say anything in 3 minutes… to which the only possible response is, really? Try. David Evans has thus far done a very good job of chunking the content into 1-5 minute segments. Are those segments lacking in some way because they’re not longer? No.

This is more or less what I want to do with some of the content in my Digital Libraries course. For the course last summer, which was entirely online, I made a bunch of videos (very amateurishly) and collected a bunch more into a YouTube playlist. I’m pretty happy with the content of the videos I made, and have plans for more, but I’m not happy with the videos themselves. They’re too long, for a start, weighing in at mostly 8-10 minutes. For another, they’re too sprawling; I try to cover too much in each video. I suppose this is an argument for more chunking. Clearly I should take some tips from the Udacity instructional designers.

Three. A word on assessments. Assessments of various types are embedded in the course site, in the same space that the video displays. Assessments take two forms: quizzes and writing Python code. The quizzes are single- or multiple-answer multiple choice questions using radio buttons or checkboxes. These are not factored into the student’s final grade; they’re just self-assessments. While, as I’ve said, the content is so far not new to me, I’m still finding these useful for slowing things down & making me articulate what I know, even if only to myself. As for writing Python code: again, in the same space that the video displays, a Python interpreter appears and you have to write one simple program per assignment, run it, and when you’re satisfied with the result, submit it. It’s then automatically evaluated, which I assume means some algorithm checks that you got the right result. It’s not clear to me if the code itself is checked. Maybe that will become clear later as the assignments get more complex.

Anyway, embedding the assessments in the course site in the same space that the video displays is a neat trick. It makes the experience of the course very clean and seamless, since it’s all right in the same screen real estate. And I’m really curious to know how they embed quizzes into the videos. Yvonne tells me that there are several tools that can do this, including Camtasia Studio, which I have a license for. So I’ll have to experiment with that.

I’ll post this now, before I run out of steam. Stay tuned, gentle reader, for the further adventures of me in CS 101.


Ripping books

I have ripped my first book: In The Age Of The Smart Machine: The Future Of Work And Power, by Shoshana Zuboff.

Why In The Age Of The Smart Machine, in particular? Well, first of all, it’s not really my book, or at least I didn’t buy it: it was bought by Robin Peek, my colleague, mentor, and friend from Simmons College (and my ripped version retains her name written in it on the flyleaf), and I inherited it when she cleaned out her office probably over a decade ago. So I didn’t have the emotional attachment to it that I might if I’d bought it with my own money. Plus, while I didn’t (and didn’t feel I had to) run it by Robin to get her permission, I think she’d approve. Also, it seemed appropriate: what could possibly make for an age of smarter machines than large collections of digitized texts?

Ripping a phonebook

I’ve wanted to rip my books since I first read this post, in which Alex Halavais describes ripping his own books (to an audience at an AAUP conference, brave man!), in the interest of saving space at home and having his materials available on the road. Genius, I thought… why didn’t I think of that? And then I was reminded of my grand ambitions when I saw Jason Griffey’s recent post, Ripping your books.

My early forays into the paperless office were not books, however, but papers in my file cabinet. That task is now mostly completed, thanks to my (no doubt very bored) GAs over the past 2 academic years. I’ve also wanted to build the DIY book scanner for a long time, and I even got as far as some semi-serious discussions on the topic with Cristóbal Palmer… but we ran up against the fairly basic problem of simply not having anywhere to build the thing.

And so it’s taken me this long to get around to ripping my books, using more or less the Alex Halavais method. And here’s where I talk about my technique.

First I literally ripped the book. The copy of In The Age Of The Smart Machine that I have is a paperback, with just glue holding the spine together. I broke the spine, and tore the book into chunks. The book is about 470 pages long, and I tore the book into chunks of about 20 pages. Why 20 pages? Because that’s about as thick as the paper cutter in the SILS library will accommodate. I then sliced the inside edge off the chunks, to remove the glue of the binding. That took less than a quarter-inch off the width of the page. Reassemble the book in page order, and stick on desk for a few weeks.

Eventually I got around to the actual format-shifting. Here’s how that worked. I took a chunk of pages and stuck them on our office photocopier. Each stack was around 100 pages, because that’s as thick a stack as the copier could handle (the grade of paper is fairly thick, thicker than printer paper). Oh and BTW, I had to turn each page individually before putting the stack on the copier, because there was some residual glue connecting some pages. Those pages pulled apart easily… but easier to do that pre-scanning than to go back and re-scan later. Because the cut edge of the book was a little rough, I had to put the stack on the copier upside down, so it fed the outside edge of the book in first. That just prevented the copier from jamming on the rough edge.

Our office copier will scan to PDF, and email the PDF. I love that feature. So I scanned the whole book in chunks, and received a bunch of PDF attachments to emails from the copier. I discovered that, slightly annoyingly, the maximum length of a PDF that the copier will send is 50 pages, so for most batches I received 2 or 3 files. Download the attachments. Open in Acrobat.

Because I scanned the book upside down, I had to rotate the pages 180°. Easy enough. I combined the (many) PDF files from the copier using Insert Pages. When the whole book was one big PDF file, I ran Acrobat’s built-in OCR on it. Unexpectedly, the file size shrank after OCRing, from around 16MB to around 13.5MB.

So there you have it, folks. I now have a fully searchable, PDF version of a book. And it was a big book too, so I think it makes for a good proof of concept. I wish it were in a more flexible format, rather than PDF, but I’ll take what I can get. And proof of concept it is too: I’m now faced with the decision of which of my books (that I paid for with my own money this time) get the axe first.

I now have a stack of paper that used to be a bound book sitting on my desk. I’ve considered just dropping it in the recycle bin, but I just can’t bring myself to do that. I think I’ll put it on the table in the SILS lobby. Surely some student will want it. If not, then the bin. And that should tell us something about the value of the print format.


What Then Must We Do? (Part 1)

I was recently asked to talk to the UNC library staff about open access & copyright issues. Why was I, who am neither a lawyer nor a scholar whose research focuses on publishing and OA, invited to talk about this? Because of my recent run-in with Taylor & Francis. Truly, I have achieved internet fame through my blog: I am famous to 15 people. Also, apparently the way to get invited speaking gigs is to have war stories.

Anyway, in an effort to appear as if I actually had a clue when I talked to the library staff, I started writing not one but two posts, in an effort to get my thoughts in order on the topics of copyright, OA, and academic libraries. Specifically, I wanted to be able to answer the question that I believe at least partially motivated my invitation to speak: librarians asking themselves, What Is To Be Done?

So this post is part one of two. Mostly it was written before my talk, as I said, in an effort to get my thoughts in order. But I’ve updated it some now that the talk is over, and I post it here for your edification or amusement. I suppose I’d be happy with either.

It seems to me that this topic has two sides to it: libraries’ collections and subscriptions, and faculty publication habits. Both of these are large, complicated, and thorny issues, which is why I decided to address them in two posts. In this post, I’ll discuss the former.

The position I started from is a belief that I’ve held for some time: I think it may be impossible for academic libraries to extricate themselves from the full-nelson stranglehold that commercial journal publishers have them in. Why? Because until faculty stop publishing in, reading, citing, and assigning articles from commercially-published journals, libraries have to subscribe to them, or else cease to fulfill their mission to support the research and teaching at their institutions. And by “subscribe,” I mean either or both: purchasing to own and have on the shelves, or renting online access. These are not the same, of course, and have different implications for the library doing one or the other, but I’m going to ignore that for now.

(As an aside: Is the full-nelson a stranglehold? I should probably avoid using sports metaphors, since I have no idea what I’m talking about.)

Of course, libraries have been dropping subscriptions to journals, have for years. The serials crisis, yada yada, nothing new to report here. Academic libraries simply can’t afford to subscribe to everything, so as subscription costs rise, libraries drop titles. (To step up on my evaluation soapbox for a moment: one hopes that these are the titles that receive the lowest use locally, that libraries actually use usage data to make these decisions. But that’s another rant.)

So academic libraries, in order to fulfill their missions to support research and teaching, must subscribe to commercially-published journals… but they’re already falling down on that job by dropping subscriptions. Which is mostly fine, because the subscriptions they’ve dropped are mostly not missed anyway (cf. previous comment about lowest use locally)… and besides, we have ILL networks, so can get those materials, as long as you’re willing to wait a few days.

Now, I’ve been reading about academic libraries dropping their subscriptions to the Big Deals offered by publishers — “offered” is a word which here means, “forced upon” — and instead subscribing to journals one by one. This is, I believe, a step in the right direction. Commercial publishers have academic libraries in a stranglehold, but — to really do violence to this metaphor — ditching the Big Deal is analogous to libraries saying, “You thought you had me by the throat? Ha! That’s my leg!” Libraries don’t need the Big Deal; they’ve just been sold a bill of goods by publishers, in the guise of convenience, because libraries have had neither the data nor the negotiating skills to ask for anything different. As I say, I think this is the right direction to take. Moving away from the Big Deal demonstrates that what’s important for libraries is materials that get used (books are for use), not materials for their own sake. I’ve said it many times: libraries need to (a) get better evaluation data, and (b) grow a pair where negotiating with vendors is concerned.

The problem with the ditching-the-Big-Deal trend is this: commercial publishers aren’t stupid. (Malicious, maybe, but not stupid.) Publishers didn’t get the better of academic libraries by accident; it was the result of careful market analysis and manipulation, over the span of decades. So, for the sake of argument, let’s say that many or most academic libraries drop the Big Deal, and subscribe to titles individually, over the next few years, so that this becomes the norm. Over the span of years, publishers will be perfectly capable of doing data analysis on their sales & usage data, and identifying which individual titles are the most used. And then we’ll be back to more or less the same situation we’re in now: publishers will charge astronomical subscription rates for those individual titles, to milk every last dollar they can out of the library market. Of course what the high-use titles are will vary somewhat by institution… but publishers are perfectly capable of identifying those market differences, and have. And much of the time, publishers have better data about libraries’ usage of online titles than the libraries themselves do. And publishers have demonstrated that they are perfectly willing to use non-disclosure agreements to hide the details of their contracts with individual libraries or consortia, so other libraries cannot benefit from that information. So, in other words, publishers hold all the cards in negotiations with libraries. Publishers are essentially monopolies (in that they are the exclusive purveyors of the specific scholarly content they publish), so it should hardly be surprising that they engage in monopolistic behaviors. The one card that libraries hold here is, of course, the money card. Libraries can vote with their wallets, so to speak. That is, of course, the most important card in the deck (to overextend yet another metaphor). But, as consumers, libraries are operating from an informational disadvantage: they only see their own, and maybe a few others’ consumption behavior, but the vendor benefits from aggregate data.

Let’s recap: subscribing to journals in sets isn’t working for libraries, and subscribing to journals individually won’t work in the long run. Commercial scholarly publishers have demonstrated that they are bad actors. So the only conclusion I can come to is this: libraries have to stop doing business with commercial scholarly publishers altogether. Which, unfortunately, brings us back to libraries not being able to fulfill their mission to support the research and teaching at their institutions.

Of course, libraries will never do this, precisely because doing so would hinder their ability to fulfill their mission. And if libraries cannot fulfill their mission, they risk irrelevance, which means loss of users and loss of funding. If libraries don’t subscribe to the journals that faculty and students at the institution use, then the faculty and students will get those articles from other sources: Google Scholar or other commercial services. At which point, the college or university administration would be perfectly justified in asking, Why are we pouring all this money into supporting the library again? Of course, to be fair, without journal subscriptions, the library’s budget would be quite a bit smaller.

Which brings me to an idea for a study: A library should identify every article that’s downloaded from all journal subscriptions from the publishers’ sites (that is, not from a third-party database), and calculate what it would cost if the library didn’t have those subscriptions and each article had to be purchased individually. Then compare that cost to the cost of all of those subscriptions. My hypothesis is that the cost of buying articles individually at the paywall would be lower — quite a bit lower — than the cost of maintaining all of those subscriptions.
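To make the hypothesis concrete, here’s the shape of the arithmetic. Every number below is made up for illustration; the actual study would plug in a library’s real download counts, paywall prices, and subscription spend.

# Back-of-the-envelope version of the study proposed above. Every
# number here is invented for illustration only.
downloads_per_year = 50000        # articles downloaded from publishers' sites
price_at_paywall = 30.00          # per-article paywall price, USD
subscription_spend = 2000000.00   # annual spend on those subscriptions

pay_per_article_cost = downloads_per_year * price_at_paywall
print("Pay-per-article total: $%.2f" % pay_per_article_cost)  # $1500000.00
print("Subscription total:    $%.2f" % subscription_spend)    # $2000000.00
print("Hypothesis holds:", pay_per_article_cost < subscription_spend)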

Assuming that this is true, I see a new role for academic libraries (and one that librarians will almost certainly hate): as a funding source for purchasing individual articles at paywalls. What I envision is this: some clever setup on the campus network, so that any time anyone hits a paywall for an article, there would be an option, Do you want to purchase this article? Click yes, and it would be purchased and downloaded, and the library fund debited. Make it as easy as online shopping. I challenge some library somewhere to do this.

There are, of course, 2 problems with this: First is that, again, if this were to become the norm for libraries, publishers would jack up the cost at the paywall, and we’d be back to square one. Second is that any barrier to access, even if the user is not the one paying, would probably decrease the number of articles accessed, and so we’re back to libraries not fulfilling their mission to support research and teaching.

I’ve started writing the conclusion of this post several times, and I keep finding myself slipping into the realm of faculty publishing behaviors. But I’m going to set that aside for my next post. For now, let me try to focus on answering the original question I set for myself: What can academic libraries do in the arena of copyright and OA, in light of collections and subscriptions?

The position I’ve been advocating for throughout this post is this: drop all subscriptions from commercial publishers of scholarly content. Subscribing to third-party databases of scholarly content might still be viable, I’m not sure. I haven’t thought that one through very extensively. (Note to self: make that the topic of a future post.) “Collect,” by which I mean link to in the OPAC and via any other mechanism, any and all OA publication venues that might be relevant to the local community.

Of course, no library would do this (except for the linking to OA pubs in the OPAC part, which many libraries do in fact do), for the reason stated above: failure to then fulfill their mission. Doing so would essentially kill the function of the academic library. And so I’m back to where I started: impossible for academic libraries to extricate themselves from their completely dysfunctional relationship with commercial journal publishers.

The only hope that I see for academic libraries is actually almost completely out of libraries’ hands: for faculty and other scholars to radically change their publishing behaviors, and start only publishing in OA venues. And I mean only: if any commercially-published journals are left standing, we’re back to square one for libraries. As I wrote at the start of this post, I’ll deal with publishing behavior in my next post.

Am I being overly dramatic or doom-and-gloom about this? Maybe. Maybe I’m just having a failure of imagination. But I just don’t see any way to get from here to there without burning the village.


In which Pomerantz responds to his loyal fans

I’ve gotten a lot of love and kudos from the interwebs for my recent post, in which I document in nauseating detail my taking a principled stand on retaining copyright to an article that a colleague and I wrote, and ultimately telling the publisher Taylor & Francis to kiss my shiny metal ass. I haven’t done any data analysis to back up this claim, but my sense is that this was my most-retweeted tweet and most commented-upon blog post ever. I thank you all for your support.

But I’ve also gotten some flak, mostly in the comments on that post. That criticism falls, more or less, into 3 categories:

  1. Why don’t you publish your paper in an OA journal?
  2. Why don’t you put your paper in your universities’ institutional repositories?
  3. T&F and all publishers have more generous contracts in their back pocket, if only you know to ask.

Let me respond to each of those in turn. Because I think those are all fair criticisms, and all get to important issues.

Why not publish our paper in an OA journal?

Well, for one thing, my co-author Diane and I were pretty well sick of dealing with journal editors and publishers by the end of our saga, so we didn’t really want to start all over again from square one. Second, as I wrote, neither of us needed that paper to be published for professional reasons. UNC (my institution) has post-tenure review, and Duke (Diane’s institution) has performance reviews, and publication record is part of those review processes. So another publication would be useful professionally. But this one article won’t make or break either of us.

When I got tenure I seriously considered taking a vow (though to whom, I’m not sure) to only publish in OA journals. (The only reason I ever got involved with T&F in the first place was because my friend and colleague Lorri asked me to write something for her special issue… that will teach me.) But I realized very quickly that taking an OA-only stance in this field is almost completely untenable. There are simply not enough A-list OA journals to choose from. And I apologize if you’re the editor of an OA journal in ILS… nothing personal. Obviously yours is one of the great ones.

Still, despite my protestations, there are plenty of OA journals out there to choose from in ILS. And some of them do have good reputations. And, let’s face it, the only way a B-list journal becomes an A-list journal is if good scholarship is published in it. This is a slow process, like most things in academia: one good article in a second-tier journal won’t do the trick, it takes lots of good articles over the span of years. Which probably means a concerted effort on the part of the editor(s) to raise the journal’s profile. Still, this can be done, and I’ve seen it done, with at least two journals in ILS. (To give credit where it is so abundantly due, the editors who took on this probably pretty thankless reputation-building task for their respective journals are Candy Schwartz, and the team of Michelle Kazmer and Kathy Burnett.)

So: I hereby vow (though I’m still not sure to whom) to only publish in OA journals from now on… to the extent that is possible. I reserve the right to publish in commercially-published journals, in the event that a friend and/or colleague asks me to contribute to their special issue, or something like that. (Lorri will, I feel certain, never ask me to contribute to anything ever again, now that she knows what a pain in the backside I am.) Anyway, that is my vow: to the extent possible, publish only in OA venues. And you, dear reader, should do the same. Be the change you want to see in the world.

Why don’t you put your paper in your universities’ institutional repositories?

Yes, we should do that. And, when I’m less sick of thinking about this paper, I will do that. (I don’t want to speak for my co-author.)

But here’s my problem with IRs: putting a paper in an IR gets me nothing more than self-hosting it, and self-hosting it is easier. Not that it’s difficult to put a paper in an IR, but it’s just so much easier to put it up on my own site. In the case of our paper on liaison librarianship, all I did was make the Google Doc public, and link to it here. Easy peasy.

As for the “gets me nothing more than self-hosting”: don’t get me wrong, I think IRs are a good idea, in principle. But I have to say it, IRs are a perfect example of one major thing that’s wrong with many digital library projects. (And I think I get to say this, given that my research is, in large part, on DLs, and I teach the DL course in my School, so I therefore spend a lot of time thinking about DLs.) This one major thing is this: IRs are, by and large, hidden silos. Part of the point of OA publication is that the publication is freely accessible to the reader, but equally important is that it’s discoverable. Freely accessible without discoverability is, quite frankly, close to useless. The problem with most IRs is that their contents are not discoverable through Google. Try this sample search. For whatever reason, someone thought that it would be a good idea for the Carolina Digital Repository to exclude search engine bots. So the only way to know that my paper is in the CDR is to search in the CDR, or for me or someone else to link directly to it. This is of course an easy fix: The CDR could simply be opened up to Google. But it isn’t currently. While my site is. And so, obviously, is Google Docs. So if I put our paper in the CDR, it’s in principle publicly available, but in practice invisible.
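(If you want to check this sort of thing for any repository, Python’s standard library will do it. A sketch: the robots.txt address below is my guess at the CDR’s, and the record URL is hypothetical.)

# Checking whether a repository's robots.txt blocks crawlers. The
# robots.txt address is my guess at the CDR's; the record URL is
# hypothetical.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://cdr.lib.unc.edu/robots.txt")
rp.read()
# Would Google's crawler be allowed to fetch a record page?
print(rp.can_fetch("Googlebot", "https://cdr.lib.unc.edu/record/uuid:1234"))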

Publishers have more generous contracts in their back pocket, if only you know to ask.

Well, I know that now. And now, so do you. Others have very generously told their stories in the comments on my previous post: see those to learn about what to ask for.

But look, it’s completely ridiculous that authors have to ask at all. (And, in my case, browbeat.) The ALA has two copyright agreements, copyright assignment and copyright license, and they give you the option up front, no hassle. Why is that not standard practice?

Others have said it before, many times, but let me say it again… Authors donate content for free to publishers. Reviewers and editors donate their time for free to publishers, to add value to that content. Commercial scholarly publishers take that content, typeset it, and sell it back to us for insanely high profit margins. It’s exploitation, pure and simple. And while they’re at it, they want to own that content forever, in every medium currently known or that will ever be invented between now and the heat death of the universe, and prevent authors from ever using it again? Oh, but no, don’t exaggerate, Pomerantz… We’ll give you your copyrights, of course we will. The forms are in the bottom of this locked filing cabinet stuck in a disused lavatory with a sign on the door saying “Beware of The Leopard.” What’s the problem? Well, I’m sorry, but fuck that. Play nice or get out of my house. Commercial publishers, what have you done for me lately?
