It’s About Time

I’ve been thinking a lot about time lately, and trying to change my view of what is possible for me to accomplish in a day. I say this as someone who occasionally wakes up in a panic, completely sure I can’t possibly finish what I need to within the deadlines I have. This is also on my mind because just today I created a new Google+ Community for my virtual writing group. The group is based on the accountability models proposed by Kerry Ann Rockquemore’s National Center for Faculty Development and Diversity and the coaching company Academic Coaching & Writing. These organizations contend that of the three areas on which most tenure-track faculty are judged (service, teaching, and research), the first two have external accountability measures built in. That is, if you’re on a committee, other people are counting on you to pull your weight: you will receive emails reminding you of meetings, and deadlines are typically part of the job. Teaching also has high external accountability—you have to show up 2-3 times per week and give class about something, so you are forced to prepare, assign grades, and so on. Research, however, has little external accountability, except for the BIG AND SCARY (but vague) deadline of tenure review. So we must build communities and measures to create that accountability ourselves. In my group, we set daily, weekly, and semester goals and check in with each other to report our progress. It’s a little nudge that often does the trick. If I know my friend is going to ask me, “Did you finish editing your White Whale paper? It’s not going to write itself, you know!” then I’m more likely to do it.

But what about the minutes and hours I have to fill doing this work that I alternately love and fear? I have so far tried lots of tips and tricks to get myself through the work day having accomplished more than just accumulating likes on Facebook, retweets on Twitter, and photos of the awesome snacks I make as procrastination projects, I mean, treats for myself for a job well done.

Behold, the Monkey Jitters blended beverage! (banana, coffee ice cubes, soy milk, agave nectar)

These include setting a timer, which often works to motivate me. Timers I have tried: the Tomato Timer; Pomodairo; and just today, Toggl (which is more of a system than just a timer; more on that later). On my phone: Apple’s built-in timer for iPhone; and Seconds Pro. Some of these timers have been great, but the reason I keep trying new things is just me being human, I guess—I’m always looking for that magic bullet that will really motivate me to stay on target.
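
(If you’re curious, most of these timers boil down to more or less the same loop. Here’s a bare-bones sketch in Python of a Pomodoro-style work/break cycle, using the classic 25-and-5-minute split; the messages are my own invention, not lifted from any of the apps above.)

import time

def pomodoro(work_minutes=25, break_minutes=5, cycles=4):
    """A bare-bones work/break cycle, roughly what these timer apps do."""
    for cycle in range(1, cycles + 1):
        print(f"Cycle {cycle}: work for {work_minutes} minutes. Stay on target!")
        time.sleep(work_minutes * 60)   # in a real app, a chime goes off here
        print(f"Cycle {cycle}: break for {break_minutes} minutes. Snack time?")
        time.sleep(break_minutes * 60)
    print("All cycles done. Go reward yourself.")

if __name__ == "__main__":
    pomodoro()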

Because I don’t trust myself, I also use an extension for Chrome called StayFocusd, which I set to limit my Facebook and Twitter time to 30 minutes total between the hours of 9 and 5 on weekdays. Of course, there’s the fact that I could just go get my iPad, but let’s face it, I’m lazy, and all I really need is that temptation of “just five minutes on Facebook and then I’ll really get down to business…” to be taken away from me. StayFocusd does just that, though I may curse it as it cuts me off from my precious, precious social media fix.

The main reason I haven’t tried one of these new all-encompassing stay-on-task systems, like RescueTime and Toggl, is that I’m afraid of what I will discover. These apps track your time on your computer at varying levels, depending on the features you select and whether or not you pay for the premium versions, which allow you to do things like track your time when you’re away from your computer. Toggl allows me to time myself and to name the task I say I’m working on. I can organize these into projects, so my day today looks like this:

[Screenshot: my Toggl projects and time entries for the day]

Toggl will also give you nifty reports and a timeline (if you download the desktop version) so you can see the peaks and valleys in your day, work-wise. So, when I checked my stats on Toggl right before clicking over to WordPress to write this entry, I saw that I had put in 3 hours and 50 minutes of work activities today. Not terrible, but also not terribly impressive. Where did the rest of my time go? I know 30 minutes was sacrificed to the social media gods, because I got locked out by StayFocusd before noon today (yikes). I did make the delicious snack pictured above, and I took an hour for lunch. That accounts for about an hour and 40 minutes of my day, which would bring me up to 5.5 hours. Aaaand, my first entry of the day, “Book, chapter 3” began at 9:23am, and I know I was basically surfing the web for those missing 23 minutes. That brings me up to nearly 6 hours. I’m fairly sure some of the unaccounted-for time was spent on MOAR EMAIL, and the balance on looking up whether Britney Spears is playing Vegas in July (she isn’t), when I’ll be there for my sister’s bachelorette party, and looking up prices for competing acts Boyz II Men and Mariah Carey.
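
(For the spreadsheet-minded, here is that back-of-the-envelope accounting redone in a few lines of Python. The entries are my own reconstruction from the numbers above, not an export from Toggl, and the splits are approximate.)

from datetime import timedelta

# My reconstruction of where the day went; only the totals come from the post,
# so the split below is approximate, not anything exported from Toggl.
accounted_for = {
    "work, per Toggl":                    timedelta(hours=3, minutes=50),
    "social media + snack + lunch":       timedelta(hours=1, minutes=40),
    "morning web surfing (9:00 to 9:23)": timedelta(minutes=23),
}

total = sum(accounted_for.values(), timedelta())
workday = timedelta(hours=8)  # a nominal 9-to-5

print(f"Accounted for:   {total}")            # 5:53:00, i.e. nearly 6 hours
print(f"Unaccounted for: {workday - total}")  # email and Britney research, presumably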

NOT IN JULY. Because life is unfair.

I’m not quite done for the day yet–it’s conceivable I could do a bit more editing after I finish this post and go back to my White Whale of a paper that simply REFUSES TO WRITE ITSELF (so unreasonable). But if I’m honest, it’s unlikely.

So where does that leave me? I’m close to spending about 40 minutes on this blog entry, which counts as productivity in my book, so I’ll finish the day with about 4.5 “on” hours. Is that enough? Is that typical of your average worker who has to put in time at an office? If you know, please tell me—as long as the answer puts me at “average” or better. Otherwise, I think I’d rather not know.

ETA: Here’s my full day on Toggl, all 4 hours and 47 minutes of it. Turns out I spent nearly an hour on that post! Sheesh.

I spent more time on Research than Service. (Yay!) That email count isn’t accurate though because I forgot to start the timer a few times. (Boo!)

Inventing New Media

This week’s readings for my New Media Seminar explore how we understand time (according to McCloud’s Understanding Comics, which I am proud to say I read way back in 1993 when it came out in graphic novel form).

Those who know me well know that I have a very complicated calendaring system using Google Calendar across all of my screens—laptop, iPad, and iPhone. I also have a paper calendar, but pretty much the only time I look at it anymore is when I’m in the kitchen writing out a check for my babysitter. Anyway, the newest iOS has finally matched how I have always thought about calendar time in my mind. For as long as I can remember, I have visualized time, and myself in time, as existing on a limitless continuum of months—just as they are represented in Calendar on my iPhone 5s. Of course, there are some differences: I visualize frames around each date (similar to how McCloud describes the frames around comic panels), and I have always visualized the months stretching out in an unbroken horizontal line, rather than vertically, as Calendar has them; but these differences are relatively unimportant. At any rate, I always have a sense of the months that have gone before as the portion of the present year already used up, and of the months to come as relevant chunks of time. I’ve also always worked on an academic calendar, so I think in chunks of time such as quarters, semesters, summers, winter breaks, etc. When I have the option, as on my laptop or iPad, or by turning my phone horizontally, I choose to view my particular place in time in “weekly” view.

I fear this is getting boring, so let me amuse you a bit more (or perhaps not) with a glance at a typical week in my calendar:

[Screenshot: a typical week in my Google Calendar, viewed on the iPad]
Gosh, I look busy, don’t I? And (to my eternal chagrin) the iPad picture doesn’t show the fancy color-coding I use when accessing Google Calendar via a web browser. I use color-coded chunks of time so that I can tell at a glance what proportion of my week is dedicated to research (Book, BPS, CP Paper, meetings with various RAs, etc.), service (attending a thesis defense), and personal commitments (school events, rallies attended, etc.).
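
(If I ever wanted to turn that at-a-glance color-coding into actual numbers, it would amount to something like this little Python sketch. The events and durations below are invented for illustration, not pulled from my real calendar.)

from collections import defaultdict

# A made-up week of color-coded calendar blocks: (category, hours).
week = [
    ("research", 2.0),   # Book
    ("research", 1.5),   # BPS
    ("research", 1.0),   # meeting with an RA
    ("service",  1.5),   # thesis defense
    ("personal", 1.0),   # school event
]

totals = defaultdict(float)
for category, hours in week:
    totals[category] += hours

all_hours = sum(totals.values())
for category, hours in sorted(totals.items()):
    print(f"{category:>8}: {hours:4.1f} h ({hours / all_hours:.0%} of scheduled time)")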

I’d be lying if I said I didn’t check my calendar at least once per hour. But it keeps me on track, helps me get things done, and since I set my to-dos in the “all day” portion (so they appear at the top), I’m also able to remember things like calling my grandma on her 94th birthday (Hi, Grandma!).

So what does it mean? Has this digital format changed my conception of time? Probably. I think of time in chunks even more than I used to—down to the 15-minute chunk I allotted on Wednesday for picking my dog Appa up from the groomer. I can’t say that it’s better or worse, but it works for me. And I can even fit in a pickling workshop.


Illich and the Deschooled Society

As a sociologist of education, I often assign students to read Ivan Illich’s “Deschooling Society,” and it inevitably starts the same discussion: how would a large and complex society such as the United States function without formal schooling? Of course, my university students—beneficiaries of and true believers in schooling structures, for the most part—may put it a little more bluntly: Is this guy crazy or what?

Illich makes some excellent points about the darker effects of schooling. Schooling is socialization; schooling routinizes education and teacher-child relationships; schooling often stifles children’s natural creativity; schooling becomes a stylized system that shapes children into students. When these conversations happen with my students, my job is to lead them away from a very understandable rejection of Illich’s hyperbole, and to guide them into thinking about what we can learn from Illich, and how we can apply it to the task of—if not demolishing the educational system—making schools more welcoming for children, particularly those whom society is more likely to deem “in need” of socialization. In different schools and different places, these groups may include racial/ethnic minorities, children from low-income families, immigrant children, or any other groups that have historically been denied full birthright into the “American dream.”

So what have I learned from Illich? Is there any school that Illich would deem worthy of keeping? From his description of what learning should entail, it seems like Montessori might fit the bill, though this particular reading does not seem to admit that any formalized structure would do. As a mother, would I pull my children out of school and "deschool" them? Much has been made of the tiny minority of parents who choose to "unschool" their children, but I would not choose this for myself—I have a job! Responsibilities! Bills to pay! In most cases, unschooling/deschooling remains an option for the privileged, as it all but requires a stay-at-home parent. Even if we cannot jump into Illich’s deschooled utopia with both feet, there are still many opportunities to see how his vision is gaining a foothold in some areas of life.

To take one example that is both a “reference service to educational objects” and a “skills exchange,” I put forward the Oakland, CA library system’s Tool Lending Library. With a 5-star average across 28 reviews on Yelp, the Temescal Tool Lending Library (located in a rapidly gentrifying neighborhood of Oakland) is described as “everything you imagine it to be!” Says one rapturous user, “The staff is friendly and helpful – need them to hold something? Call! Not sure what tool for the job? Ask! The TLL is an amazing resource for when you don’t really want to hire a professional for lack of tools or buy tools yourself. They have high quality tools and are very helpful.” Raves another, “I’ve gotten everything from wrenches and paint scrapers to heat guns and sawzalls to a concrete breaker and chainsaw (chainsaw is a Friday-only thing–note).”

Chainsaws on Fridays only, people!

Of course, the new "sharing economy" also boasts dozens of sites like Skillsbox, an online community where, with a few clicks, you can trade your talents for credits with which to "buy" other kinds of knowledge and skills. Recent swaps include basketball lessons, bricklaying, tutoring, and web design.

I myself was an active swapper on the old Swaptree site, swapping videogames and CDs. We estimated we saved well over $200 swapping items we no longer wanted or needed. But Swaptree became Swap.com (to swap clothing and the like) and then became the NEW Swap.com, an "online consignment" site where I can "make sales" and "get payouts," much like eBay with a little swapping on the side. Innovations that "democratize" services like taxis (Lyft, Uber) and room-renting (Airbnb) often come under fire for capitalizing on the sharing economy. The gloomy conclusion is that the almighty dollar still creeps in where it’s least wanted, leaving those previously marginalized by the powers-that-be in little better position than where they started, as others have documented.

So what’s the upshot? Does it always have to be so gloomy? I argue that the answer is no. There is plenty we can take from Illich, even if we aren’t ready to give schools the old heave-ho. The explosion of new media and the world wide web leaves open endless possibilities, and though we in the higher education community are certainly suffering MOOC fatigue, my final, hopeful example, Khan Academy, promises to teach you a little of anything “For free. For everyone. Forever.” For someone like me who missed out on high school physics, that’s a pretty good deal.


Khan Academy video: a basic primer on Newton’s first law of motion

https://www.khanacademy.org/embed_video?v=5-ZFOhHQS68

P.S. I had to choose either physics or AP Biology because of scheduling conflicts. Illich would laugh, or maybe cry.


The Whole Action: Videogames vs. Movies

In case this is shocking to anyone, I’ll put this up front: my spouse and I are gamers. Granted, our enthusiasm has waned over the years, especially since having children. We also tend to enjoy different types of videogames now than in the past. While I might once have delighted in solving complex and sometimes nonsensical puzzles, my patience for them is greatly reduced now that I have maybe 90 minutes of free time between the kids’ bedtime and my own.

I have to toss this canned juice down a garbage chute to dislodge the Old Man Coin in Silent Hill 2? Kay…

So, I’m more likely to enjoy, say, Legend of Zelda: Windwaker (a game with an immersive, cinematic world and multiple discrete tasks) than Legend of Zelda: Link to the Past (complex dungeons, difficult-to-beat bosses, and you can’t save any half-completed dungeons—if you quit, you have to start the dungeon all over).

Link to the Past: Get the chickens!!!
Windwaker: Immersive world

Which brings me to my next point: rather than playing any game at all, Eric and I are MOST likely to just watch a movie or TV show, and the reason is what Aristotle (and Laurel, in my reading for this week) defined as the magnitude of the “whole action,” or plot. If one is forced by circumstance to devote only 90 minutes per day to a game that might take more than 10 hours to complete (20 hours and up for the kind of games Eric used to favor, role-playing games or RPGs), and there are constant interruptions like parent-teacher night or gymnastics lessons, then the narrative thread of a videogame is easily lost. Many times I’ve started up and thought, Now what was I doing again? Where am I supposed to go next? What is happening here? This doesn’t happen to me with movies or most TV shows (though Boyhood, at nearly 3 hours, did start to make me ask those kinds of questions; The Wire, with its complex multidimensional plot, is an exception to the TV rule).

So for me, the answer to this week’s query is that the magnitude of the whole action is what matters most, as far as enjoyment of a human-computer (or human-media) interaction is concerned. That, and whether or not the controls are simple enough to let me eat popcorn whilst slaying monsters.


Viola and the Triumphant Porcupine

Bill Viola’s essay, “Will there be condominiums in data space?” examines the sacred and the profane in…well, in data space, in visual art, in electronic art and media, in mantra, in life as a whole.

We were challenged to reflect on Viola’s enigmatic parable of the porcupine trapped and defensive in a car’s headlights (more on that in a minute) but also to think about how data space could be diagrammed or envisioned.

This brought me back to seventh grade, and our year-long assignment to illustrate history from the perspective of an animal. At the conclusion of each unit–I remember Greek and Roman history the best–we had to write our chosen critter into history, using their perspective to explain and interpret the events we learned about. I chose the housefly as my animal, for obvious reasons. While my fellow classmates struggled to work in how kangaroos and bears might peek in on Caesar Augustus, my fly was able to get in anywhere (though not always with a warm welcome). At the end of the year, we were to put all of the pieces together, and my brilliant mother, who had something of a Macintosh addiction, came up with the idea of teaching me to use the brand-new HyperCard application and to turn in a HyperCard “stack” as my final project. This was really a precursor to the way the modern World Wide Web works: you view a static image representing an index card, but it can include pictures, words, or other elements that, when clicked, play a sound, take you to a new card, or make something else pop up or change. Links!

To plan out this epic project, my mom had me plot out the stack and how it would branch using actual index cards on the living room floor. It really took me a while to “get” what this was doing and why it was interesting. Now, this seems intuitive–I click on something online, something happens. But it was not at all clear to me then how or why anyone would do this or why I should care.
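
(For anyone who never saw HyperCard, the basic idea of a stack is easy to sketch in a few lines of Python: cards, plus clickable links between them. The fly-themed cards below are reinvented from memory, not recovered from that floppy.)

# A toy model of a HyperCard-style stack: each card has some text and
# named buttons that link to other cards (my hypothetical fly project).
stack = {
    "home": {"text": "A fly's-eye view of history. Where to?",
             "links": {"Rome": "augustus", "Greece": "athens"}},
    "augustus": {"text": "Bzzz. Caesar Augustus swats at me during a banquet.",
                 "links": {"Back": "home"}},
    "athens": {"text": "I land on a philosopher's fig. Nobody notices.",
               "links": {"Back": "home"}},
}

def visit(card_name):
    """'Click' through the stack by printing a card and the links it offers."""
    card = stack[card_name]
    print(card["text"])
    for label, target in card["links"].items():
        print(f"  [{label}] -> {target}")

visit("home")
visit("augustus")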

While I am sure a copy of my fly-on-the-wall stack exists on a floppy in my parents’ house somewhere, there are other, more famous “stacks” if you care to take a look. I am pretty sure we owned a copy of the famous The Manhole game for children (I know, this does not sound like it is for children–go tell the developers), though I do not remember if I played it to completion. Another famous example is the game Myst, which I also owned but never completed myself, though my roommate and this guy she was crushing on stayed up all night to finish it back in 1996. Unfortunately, they had to use my behemoth desktop Mac clone to do so, which made it hard for me to sleep. Ah, the things we do for love (and roommates)!

Alas, Apple withdrew HyperCard in 2004, because, basically, “What was this thing?” venture entrepreneur and coder Tim Oren wrote. “Programming and user interface design tool? Lightweight database and hypertext document management system? Multimedia authoring environment? Apple never answered that question.” HyperCard’s inventor lamented that he’d missed the mark:

If only he had figured out that stacks could be linked through cyberspace, and not just installed on a particular desktop, things would have been different…”I grew up in a box-centric culture at Apple. If I’d grown up in a network-centric culture, like Sun, HyperCard might have been the first Web browser. My blind spot at Apple prevented me from making HyperCard the first Web browser.”

Read more about HyperCard and try out some data viz circa 1991! 

I feel like I got a little off-track, so back to the porcupine: the poor porcupine, dazzled and defensive, has her view of the world as full of startling monsters and sudden attacks. The driver has his or her point of view–seemingly omniscient, but still lacking in perspective no matter what method s/he tries. And that whole condominium thing? Must have been some leftover animus about little boxes. That’s all I got.




The Message is Stronger than the Medium

Reading Marshall McLuhan this week has caused me to reflect on the media forms that have had the biggest impact on our world. Though it’s hard to beat the printing press for sheer revolutionary impact, the first thing that popped into my mind was social media. Of course, the distractive properties of Twitter and Facebook are well known, and each probably eats up way too many hours in my day. But each has also revolutionized the way we know and interact with friends. It has been observed that today’s “digital natives” will not have the experience I did of “rediscovering” long-lost friends on Facebook–they will grow up never having lost contact with their childhood friends in the first place. Their entire lives, they will know what their friends are doing in real time (if they follow them on Instagram, SnapChat, Twitter or whatever) from the moment they meet them and exchange screen names.

[Image: social media icons]

The other difference I note in myself, based on about five years of daily Facebook use and one year of intermittent Twitter participation, is how these media have changed me. Each has decreased my tolerance for media that take longer than a minute or two to engage with. If a friend links to a New York Times opinion piece, that’s probably the gold standard of getting me to actually read and comment on something. I’ll feel like I’ve learned something, and it takes little time to read. But if a friend tweets a link to a three-minute-long video? I’m not watching that thing, that’s way too long! I have things to do, do you know how long my feed is? I’ll confess this too–the reason my use of Twitter is so intermittent is that I get frustrated with my inability to keep up. I can’t seem to shake my completist mentality, formed in my early months with Facebook, when it was possible to “finish” catching up on my feed within a few minutes’ time.

Now I’m rambling, so I’ll quit, but you get the gist. Twitter and even Facebook have well-known abilities to increase engagement, make us feel connected, and reduce the social distance between us and our idols, whether they be star academics (in my case) or pop stars. Twitter has been credited with assisting real-time organization amid revolution, such as during the Arab Spring and the Ferguson protests. In the more mundane day-to-day, however, social media also take up increasing amounts of time, while leaving us feeling like we’ve got little to show for it. I can finish a book, a newspaper, or a TV show, but my Twitter feed is a perpetual hamster wheel of news creation and opportunities for engagement. The potential for liberation is there, but on any given normal day, it can feel like a trap.


Computer/Lib

This week I stumbled through Ted Nelson’s 1970s imaginary “Computer/Lib,” and was struck by the ongoing tension between ease-of-use and complexification. And as I pondered, I considered: is the Apple Watch, or iWatch, a contender in each category? As a watch, it’s needlessly complex. It’s also a needless machine (don’t we all have phones to attend to?), but possibly a needful machine (yet one more thing to interact with). No one needs one, but neither did we need iPads until the need was created.


Isn’t it pretty, though?


Augmenting the Human

I just read some excerpts of Engelbart’s 1962 report, “Augmenting Human Intellect: A Conceptual Framework.” While the piece was interesting, I found it kind of a hard slog at the same time. Maybe it’s because I was distracted by trying to read while hanging out with my friend’s new baby, but I think part of it is that it’s just difficult for me to imagine this framework outside of the context of the present world. Engelbart describes concepts that became today’s cut-and-paste, today’s computer mouse, today’s networks, and today’s hyperlinks, but I find it difficult to parse descriptions of these notions in a pure theoretical form, without imagining their current digital or physical form.

I’m not a true digital native; in immigration/migration parlance you might call me a member of the 1.5 generation. My mom bought a Commodore 64 when I was about 8 years old, joined a Commodore users group, and taught herself how to code. I played computer games from a young age, and wrote a report using a word processor on the Commodore (which by then had an early version of Windows-like software, which I navigated with a joystick) when I was in the seventh grade. I never composed any piece of writing by hand after that, apart from a brief try at poetry in high school. From the age of 12, then, I’ve relied on the kinds of writing practices Engelbart dreamed of:

I found rather quickly that the job of extracting, rearranging, editing and copying new statements into the cards which were to represent the current set of product statements in each grouping was rather tedious. This brought me to appreciate the value of some sort of copying device with which I could transfer specified strings of words from one card to another, thus composing new statements from fragments of existing ones. This type of device should not be too hard to develop and produce for a price that a professional man could justify paying, and it would certainly facilitate some valuable symbol-structuring processes.

Although I maintain an affection for the printed word—I dislike reading books on electronic devices, for example—it boggles my mind to think of writing a journal article, let alone a book or a dissertation, without the benefit of cut-and-paste. This ability to piece thoughts together and rearrange them on the fly is integral to my writing process. So is my intelligence augmented? I guess Engelbart would say yes.
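
(Engelbart’s imagined copying device, which composes new statements from fragments of existing ones, is basically what every cut-and-paste does now. Here is a minimal sketch, with note cards and fragments of my own invention.)

# Composing a "new statement" from specified strings of existing ones,
# in the spirit of Engelbart's note cards (the card contents are made up).
cards = {
    "card_1": "Schooling routinizes education.",
    "card_2": "New media leave open endless possibilities.",
}

def clip(card, start, end):
    """Copy a specified string of words from a card (Engelbart's 'transfer')."""
    words = cards[card].split()
    return " ".join(words[start:end])

new_statement = clip("card_1", 0, 2) + ", yet " + clip("card_2", 0, 3).lower() + "..."
print(new_statement)  # "Schooling routinizes, yet new media leave..."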


Thinking Machines, Creative Machines

This week I read Turing’s “Computing Machinery and Intelligence,” featuring Turing’s arguments against resistance to the idea of thinking machines, as well as his predictions for what digital computers of the future would be able to do.

These days, many humans still have the same objections as those Turing argued against. We find thinking machines to be threatening, as in the anguish in some quarters when Deep Blue beat Kasparov in chess in 1997 (though this win has been contested as caused by a bug in the software). After the supercomputer nicknamed Watson beat out two human contestants to win at Jeopardy!, Conan O’Brien jokingly “hired” Watson to replace Andy Richter as his sidekick and announcer, much to Richter’s horror.

Determining whether a machine is truly “thinking” also invites us to explore the contours of humanity. Objections Turing dismisses, such as that machines cannot “enjoy strawberries and cream,” “have a sense of humor,” or “make someone fall in love with it,” made me immediately think of the thousands of humans out there, with or without disabilities, who are unable to meet these requirements. Humans on the autism spectrum, for example, may demonstrate limited capacity to “use words properly,” and many of us—those of us writing grant proposals or academic research articles, for example—ponder whether we can actually “do something really new.” But does this mean that some humans are less than human, in the same way a machine might be? Or that some machines are more “fully” human than some humans?

Turing’s extended section on “informality of behavior” in particular reminded me of an episode of This American Life that reported on the experiences of one David Finch, diagnosed with Asperger’s Syndrome as an adult, and his struggles to relate to his wife appropriately. To cope with the difficult-for-him task of exhibiting empathic behavior, he created an elaborate “Journal of Best Practices” (now published as a memoir) that constituted essentially a “definite set of rules of conduct by which he regulated his life,” making him, in the argument Turing entertains (and rejects), “no better than a machine” in the eyes of some. The blurb on Finch’s book describes his process this way:

His methods for improving his marriage involve excessive note-taking, performance reviews, and most of all, the Journal of Best Practices: a collection of hundreds of maxims and hard-won epiphanies, including “Don’t change the radio station when she’s singing along” and “Apologies do not count when you shout them.” Over the course of two years, David transforms himself from the world’s most trying husband to the husband who tries the hardest. He becomes the husband he’d always meant to be.

In becoming the “husband he’d always meant to be,” he transformed himself, essentially, into a learning machine. Training himself to be empathic required a ruthlessly logical and machine-like process, with the end result that Finch became more human. Finch explained the revelation of his diagnosis this way to Ira Glass on This American Life:

I mean, it was as if somebody finally handed me a user manual for myself. You know, here’s how you operate, and if you read this manual, everything that was difficult in life before is going to be a lot easier now, because it makes sense and you can learn how to control certain things.

So where does the line between human and machine truly lie? I think we still don’t know.


“Man” and Machine

I’ve just finished reading Vannevar Bush’s 1945 article in the Atlantic Monthly, “As We May Think,” which makes some prescient predictions about the future of computing, science and industry in the wake of the massively destructive bombs brought to life during World War II by some of the brightest scientists of that generation.

A great many passages struck me as I read, including Bush’s description of a conceptual machine called the Supersecretary, which would “take dictation, type it automatically and even talk back if the author wanted to review what he had just said.” The concept drawing looked like this:

[Image: the Supersecretary concept drawing]

I thought to myself, well, I used such a fantastical device just a few months ago to record a theoretical hook for the academic book I was writing a proposal for. The hook occurred to me all of a sudden, while driving (as these things do) and lest I lose the thought or run over an undergraduate student, I used my amazing device to capture a voice recording and have it read back to me.

Okay, okay, maybe you don’t want to watch the video (even though it’s short) so I’ll clue you in. It was my iPhone 4s, the first with Siri and the ability to accurately convert speech to text. Well, mostly. Here’s what the uncorrected text looked like:

The structure of the economy has changed. The job market is unstable. Low income and entry-level workers are seen a disposable. Individuals do not keep jobs of the life course anymore. So it should not surprise is that Highridge occasion as being you differently and our conception of marriage kitchen is changing in Butte these other structural changes the economy, jobs, our ideas of class-based appointment.

Unless I’m much mistaken, marriage kitchens in Butte are much the same as always. Still, it’s amazing technology in the palm of my hand, and I did manage to preserve—and correct—the actual idea.

But on to the second thing that struck me, as a gender scholar, about Bush’s piece: the prodigious use of the pronoun “he” to describe these future inventions to benefit “men,” replacing the “girls” of the day who, for example, operated a stenotype by “strok[ing] its keys languidly.” Men created, “girls” operated. And in many fields it is still that way: 89% of practicing engineers are men; 9 out of 10 contributors to Wikipedia—a Memex beyond Bush’s wildest dreams—are also men. We’ve made some progress, certainly. In Bush’s day, half of the world’s population could not even bring their pronouns to the table, while today we at least lament the state of affairs. Of course, we too often flirt with victim-blaming in our search for the causes of the imbalance. Still, hope springs eternal, and here I am, not-so-languidly touch-typing on a Bluetooth keyboard synced to my iPad Air, “establishing useful trails through the enormous mass of the common record” (Bush, 1945).
