Author Archives: Rachael Cayley

Putting it in Your Own Words

My six-year-old son loves this blog. Well, not the actual blog itself, but the stats page. And he’s merciless about slow days. On Sunday mornings, he will often report, in a tone of morbid satisfaction, “Only nine views! You are doing terrible today!” (I hope in your head you can hear the word ‘terrible’ being stretched out for maximum emphasis.) He understands that the spikes in activity—which are the whole point, to his quantitative way of thinking—are caused by new posts, so his suggestion is usually that I should sit right down and write something. Unfortunately, his desire for me to write is in direct conflict with his desire for me to play endless games of Monopoly. Even more unfortunately, I don’t have the heart to tell him that blogging is actually way more fun than Monopoly.

Given all this, you can imagine his pleasure when I told him that I might be able to use a new joke he’d repeated to me in the blog. Here goes:

Q: How can you fit a 10-page article on milk into 5 pages?

A: You condense it!

Hilarious, right? Welcome to my summer vacation!

As summer vacation slowly turns into preparation for fall, I’ve been mulling over how to improve the way I teach one of my least favourite topics: paraphrasing. I’m sure my discomfort with this topic is connected to the fact that paraphrasing necessarily brings up issues of plagiarism, a topic that we all feel anxious about. The immediate stakes are high for students when I talk about effective paraphrasing, in a way that isn’t the case with discussions of transitions, semicolons, or sentences. If I’m wrong about those topics or even if I just do a poor job explaining my intent, the implications aren’t particularly significant. But if I handle paraphrasing badly in the classroom, a student might go on to provide a weak paraphrase in their own writing, an act that can have consequences.

It is also the case that explaining a good paraphrase can be pretty hard to do; even if you ‘know one when you see one’, it can be hard to craft enduring principles to use in future writing situations. The classroom conversation often ends up centred around whether sufficient changes have been made. This is a legitimate issue for students to worry about, but I think that the notion of ‘sufficient changes’ is ultimately a problematic one. Conceiving of any writing task as a matter of making sufficient changes to someone else’s text seems risky to me.

To address this risk, I like to shift the focus away from the whole notion of making sufficient changes. Of course, the idea of putting something into your own words is a commonplace in academic writing, but I think the resultant emphasis on changing words can lead students to feel that they are engaging in a meaningless technical task. What’s worse, students who don’t write in English as their first language often feel that they end up with something less elegant and effective than the original formulation. But what if we were to think less about word or phrase replacement and more about how we can effectively use someone else’s ideas in our research; that is, not so much just ‘put it in your own words’ as ‘reframe the idea in your own words so that it helps you to explain your research aims’?

For sound advice on paraphrasing, you can try the OWL at Purdue site or the Writing at U of T site. In keeping with my advice to think about paraphrasing as part of the broader issue of talking about the literature, I also suggest looking at the Academic Phrasebank from the University of Manchester. This site is one of my favourites, and I will return to a discussion of its many merits another time. For now, I point to it because it provides a helpful range of ways to talk about the scholarly literature. Asking ourselves why we are talking about another person’s work is often the first step to deciding how to phrase our explication of that work. The Phrasebank, by offering a range of sentence templates, can help us to decide whether we are interested in a text because of its author, its methodology, its topic, its time frame, etc. Those decisions can help us with the broader issue of how to structure a lit review, but they can also help us talk effectively about other people’s work at the sentence level.

Communication and Content

This interesting article by Jacques Berlinerblau in The Chronicle of Higher Education discusses the future of the humanities. This is well-covered territory, obviously, but I was interested in the way he discusses the role of communication skills. His argument is that the humanities can be ‘saved’ by greater engagement with the general public and thus by a greater emphasis on communication skills. Berlinerblau suggests that we ought to “impart critical communication skills to our master’s and doctoral students. That means teaching them how to teach, how to write, how to speak in public.”

Needless to say, I am in complete agreement with that sentiment. However, I am puzzled by the next step in his argument: “this plan will result in far less time for the trainee to be immersed in seminars, bibliographies, and archives. That this will retard the absorption of deep knowledge at an early stage of one’s career is undeniable.” While I understand that this may be a strategic concession designed to allow him to get back to defending his core idea, I do not understand why he allows that time spent on communication skills necessarily has a deleterious effect on disciplinary knowledge. To be clear, Berlinerblau is definitely saying that this trade-off—greater capacity for communication, diminished grasp of content—is worth it. But why is he so sure that this is a trade-off? I see no reason to believe that graduate students who devote time to improving writing or speaking skills are actually taking time away from their disciplinary studies.

There can be no doubt that we all feel that way at times; we all feel that we must try to balance time spent on process with the more urgent demands for production. This sentiment can be particularly pronounced in graduate students. I often hear from students that they would like to visit the writing centre, but they just don’t have enough time. And I am not denying that—on any given day—putting writing on hold in favour of visiting the writing centre may not get you tangibly closer to the goal of a finished piece of writing. But graduate students can and must think in longer increments of time: over the course of their degree, they genuinely do have time to improve their communication skills.

More importantly, formulating this relationship between communication and content as a trade-off contributes to the problematic notion that our communication skills are somehow distinct from our disciplinary knowledge. I would argue that the two are in fact closely intertwined. Effective communication is not valuable only to the recipient; as we improve our capacity for communication, we necessarily improve our own understanding of the topics about which we are communicating. The better we communicate, the more we engage others; the more we engage others, the more we learn from them. And when we strive to explain ourselves better, we inevitably come to a better understanding of what we thought we knew. The artificial division of content knowledge and communication skills needs to be resisted. Knowing what to say and knowing how to say it aren’t distinct. Graduate students who address themselves to the crucial matter of communication aren’t diminishing their content knowledge, they are enhancing it.

Earlier in the summer, I had the great pleasure of participating in the GradHacker podcast. I spoke with one of the hosts, Alex Galarza, about this blog and about academic writing more generally. The audio is a bit wonky in places near the end, but I hope you’ll listen and I hope you’ll return to the GradHacker blog and podcast; they are both great resources for graduate students. You can find the podcast on their site or you can subscribe in iTunes.

Finally, Rob J. Hyndman from Monash University has created a helpful list of research blogs (in which he kindly included this blog). Not only did he create this list, he set it up so you can subscribe to all of these blogs as a bundle: one stop shopping for enhanced insight into many facets of the research process!

My links posts are a discussion of things (articles, news items, or blog posts) that I have recently found interesting. I choose things that are connected—sometimes closely, sometimes only tangentially—to academic writing. Responding to other people’s ideas allows me to clarify my own thoughts and to draw your attention to other approaches to the issues central to this blog.

My Very Own Blog

I think it’s probably a bad sign when your own blogging delinquency becomes your subject matter for a post. As I’ve said before, I understand that neither apologies nor explanations of the worthiness of my non-blogging activities are of any interest. But I do actually have something to say about blogging and not blogging.

Patrick Dunleavy and Chris Gilson, in an interview in the Impact of Social Sciences blog, said recently that we ought to move past the idea of single-authored blogs. They make a good argument for the value of the multi-authored blog: the group blog benefits from the emphasis on collaboration and from the sharing of the responsibility among many writers. The collaborative approach, they suggest, also benefits the audience since a multi-authored blog is much more likely to be updated regularly. According to estimates that Dunleavy and Gilson mention, eighty per cent of single-author blogs are either inactive or rarely updated; they call these ‘desert blogs’, which is such a sad phrase (I tried to find out if this was their coinage or an established term, but all I found were a lot of gorgeous dessert blogs!). Reading their stark assessment of single-authored blogs made me want to defend my little solo project, but I was constrained by my awareness of the stale post that had been sitting on my site for over a month.

What does all this mean for my neglected single-author blog? I assume that nobody is wasting their time checking my blog for new content; you all have better things to do and anyone who cares to can receive some form of notification (all the notification options can be found in the left sidebar: email, RSS, Facebook, Twitter). Given the rhythms of my work life, I accept that this blog will need to be a sometimes-more-than-other-times proposition. Despite the inevitable fallow times, I know that this blog benefits from having a single author.

A blog that offers an approach to writing needs authorial consistency to give readers a chance to assess its usefulness for them. I think you’d be crazy to show up here out of the blue and accept what I say. Taking the writing process seriously means not accepting one-size-fits-all ‘writing tips’. You need to find a source of writing insight that addresses your general writing situation and that resonates with your specific approach to writing. I’m not saying that a group blog on writing can’t be valuable, of course; the Lingua Franca blog is a great example of a blog that is consistently updated and consistently excellent. But the multi-author model doesn’t encourage the same sustained interaction between a single author and the audience. Given the nature of my project, that sustained interaction is important to me.

All of which is to say that I’m committed to this blogging format. But this ‘commitment’ raises an obvious question: why don’t I write here more often? The reason isn’t a lack of enjoyment—writing here is one of my very favourite things to do. And the enjoyment isn’t just derived from the creative process; I love knowing that the posts are read, shared, and used across a wide range of networks. The act of writing these posts is also very helpful to me as a teacher; crystallizing my thoughts about writing allows me to teach these topics more effectively in the classroom.

This rosy picture makes blogging sound like a winning proposition all around. But there is, unfortunately, a predictable impediment: this blog is all mine and thus I’m not responsible to anyone else for what happens here. Everything else in my professional life involves some degree of obligation to other people: preparing to teach; looking at student writing; handling administrative responsibilities; meeting with colleagues; making presentations. These are all things that I genuinely enjoy doing, but the manner in which I do them means that a failure to do so would negatively affect someone else. Blogging—or not blogging, as the case may be—is something that matters only to me. As such, it is the first thing to go when I am busy. Sound familiar? Where does writing fit into the have to/would really like to split in your own life?

The problem for many graduate students is that they have to write, but their lives end up organized in such a way that writing is neglected in favour of other things that feel more urgent. That sense of urgency is real, of course; the non-writing activities of graduate students aren’t just hobbies. Often those activities generate essential income or develop key professional competencies. But successful writers will find a way to place writing within the confines of their essential activities. I realize this sounds like a very self-serving conclusion: it’s okay if I can’t find time to blog, but it’s not okay if you don’t find time to write your thesis. Convenient, perhaps, but also true. The more you can think of writing as an obligation, the more progress you will make towards the goal of a completed thesis.

Finally, I know I said I wouldn’t bore you with what I’ve been doing while neglecting the blog, but I can’t resist sharing a few photos from my recent trip to Savannah. I was there for the International Writing Across the Curriculum conference, and the conference and the city were both delightful!

Links: Time Enough, What Not to Do, Writing Across Boundaries

This advice from the GradHacker blog is excellent: wanting and needing more time aren’t necessarily the same thing. We almost always feel that we could use more time with our academic writing projects; when there is no objective way to determine that something is finished, we are often reliant upon our own assessment of quality. Since our assessment of our own writing is frequently negative, we end up feeling that we need more time. But it is important to ask ourselves hard questions about our writing and to be aware of the potential for diminishing returns on time spent with that writing. What is the piece of writing? Who will read it? What sort of expectations will that reader have? In other words, what is ‘good enough’ in each case? We may lack the ability to make this judgement because we habitually only stop tinkering with a piece of writing when we are out of time. But think of the value of being able to determine sufficient quality on our own. With a developed sense of what is sufficient in each case, we are more likely to devote the appropriate amount of time to each of our writing projects, without getting stuck in a cycle of unproductive revisions.

Here is a great post from The Thesis Whisperer blog on mistakes that rookie researchers make. (This post is a few months old, and I would urge everyone to read her current post on thesis writing despair, too.) I think her specific advice here is very good, but I particularly like the overall approach of the ‘reverse list’: “I like a reverse list because it highlights the problem more than the suggested solutions, leaving you free to choose your own.” The beauty of being told what not to do is that you get forceful advice that doesn’t assume it has all the answers. In other words, bossiness that doesn’t tell people what to do. Getting a PhD, needless to say, has as many ‘right ways’ to do it as there are doctoral students; while this is true, thesis writers—in their periodic moments of quiet writing desperation—aren’t looking for relativism or anodyne truisms. They want real advice, and I think the reverse list is a great way to deliver that advice: it doesn’t presume to tell you what to do, but it does use the benefit of experience to highlight some consistently unproductive paths.

The Writing Across Boundaries project from Durham University offers reflections on academic writing from established academics. These reflections—varied and extremely interesting—give novice writers a chance to see that polished and assured academic writing is still accompanied by struggle and self-doubt. These reflections all come from social science writers, but I think their insights are applicable to the broader academic writing community.

My links posts are a discussion of things (articles, news items, or blog posts) that I have recently found interesting. I choose things that are connected—sometimes closely, sometimes only tangentially—to academic writing. Responding to other people’s ideas allows me to clarify my own thoughts and to draw your attention to other approaches to the issues central to this blog.

Best Laid Plans

I’ve talked a lot in this space about the importance of extensive revision. Today I’d like to go a bit deeper into one of the tensions that can emerge during that revision process. As I go through a piece of writing with a student, we often find significant discrepancies between the plan articulated at the outset and the subsequent text. Obviously, such discrepancies are common, especially if we are liberal in our use of explicit signposting in our early drafts. But this observation leads to an interesting question: when the plan and the actual text start to diverge, what should we do?

Let’s take a generic example. Imagine an introductory passage of this sort:

Our discussion of this issue will revolve around three key themes. We will begin by discussing X. This treatment of X will lead us into a consideration of the importance of Y. The obvious tension between X and Y will necessitate a discussion of a third theme, Z.

This piece of writing will now head into a discussion of X. Everything will run smoothly until X doesn’t in fact lead into a consideration of Y. Instead, it may lead into a discussion of W. This introduction of W then leads away from the notion of a tension between X and Y and necessitates a discussion of the way W and X affect our central issue. Once editing begins, we’ll have to choose between our roadmap and our actual text.

Depending on the state of our editing abilities, we will either register this disjunction consciously or just feel a general discomfort with the text. If you tend to fall in the latter camp, try something like the reverse outline to help you figure out what might be triggering your discomfort.

Once you have sorted out that a discrepancy exists, the next step isn’t necessarily clear. Should the plan be changed to reflect the ideas that emerged through the writing or should the text itself be changed to reflect the original plan? Since each case will be different, I have no across-the-board answer to this question. However, I do think it is worth giving some thought to a general understanding of the way this tension manifests itself in our writing. For some writers, the writing itself is generally more significant than the plan. This emphasis on allowing ideas to emerge through writing is in line with my general emphasis on writing as a form of thinking. But there are some writers whose writing process simply takes them too far afield; given a free hand, these writers can end up so far from where they started that the text can no longer fulfil its intended function.

If you are such a writer, you may wish to approach the reconciliation of plan and text somewhat differently. In fact, you may wish to take steps to avoid a dramatic discrepancy. One technique is to transform the original plan into a series of in-text directions to yourself. Once you have laid out that business about X, Y, and Z, write yourself a few brief sentences (or sub-heads) that will serve as a reminder to remain within certain parameters as you write. It isn’t that you shouldn’t stray, but if straying is your natural mode of writing, you may be struggling with scattered texts. If that is the case, it can be helpful to put some tangible reminders of the original plan in place. In other words, take steps to make it harder for you to take unanticipated directions in your text.

The key here is coming to an understanding of your own writing practices: do your drafts naturally evolve beyond your early planning or do they need that early planning to keep them on track? Once you have a sense of that, you can decide how to position yourself in relation to the provisional plans that guide your early drafts.

Letting Go

In two different contexts recently, I had reason to discuss the challenge of deleting material from our own writing. In both cases, I noticed that students appeared to identify strongly with what I was saying: there was a great deal of nodding and grimacing. For lots of writers, writing is so hard that throwing away ‘perfectly good writing’—i.e., writing that is both finished and marginally coherent—is difficult to do. This attachment to our own writing often means that there are elements in a draft that are left in just because we can’t bear to part with them or can’t bear to see a document shrink instead of grow. But it can be very hard to take a draft to the next level when we haven’t expunged the parts that aren’t working. Editing, especially at the early stages, requires a great willingness to jettison material. However, if you found it hard to put the words on paper in the first place, deleting them can be genuinely painful.

One response to this pain—one that, admittedly, gets me some sceptical, easy-for-you-to-say looks from my students—is to think more broadly about the purposes of writing. We don’t write just to satisfy a certain word count or page limit: at a deeper level, we write to sort out what we need to say. That beautiful paragraph you agonized over may have been written for you, not for your reader: you needed to formulate those ideas in proper sentences to understand them properly but the reader may be satisfied with nothing more than a brief mention of what you sorted out. Accepting this broader purpose of writing can lessen our attachment to particular sentences and paragraphs.

If we do come to the realization that a certain passage is no longer serving a purpose in our text, we still need to decide what to do with it. The delete key is too extreme a response for most of us. It’s like a game of Love It or Hate It: faced with a stark binary choice, many of us choose to ‘love’ our first drafts. My solution is to create a place to put all the things that I am not sure of, a place where I can save bits of text that have outlived their usefulness. Saving them means that I might have the chance to use them in some other context. Truth be told, I’m not sure I’ve ever gone back to these old writing fragments, but knowing that they are there gives me the courage to be a more ruthless editor. Having a good system for managing subsequent drafts is also a good way of increasing your editorial resolve (the ProfHacker blog has a great post on version control that may help you with this). In the end, your writing will thank you for developing the habit of letting go.

This ability to let go can also help with writing efficiency. If we are somewhat steely during our early structural edits—if you don’t know how to start that process, try a reverse outline—we can avoid unnecessary fine editing of material that we might have to remove later. Indeed, the sunk cost of premature fine editing is one of the things that causes us to hang on to text that we no longer need. Having devoted time to improving a particular passage, rather than to thinking about how it serves the broader text, we can find ourselves unwilling to remove that passage.

In sum, remaining alert to the potential benefits of removing passages from our texts can help us to avoid wasted editorial efforts and can leave us with a document that is ultimately stronger and more cohesive. Finally, this brief post from the GradHacker blog talks in a similar vein about the need to delete the stuff that isn’t working for us.

Links: Attrition and Writing Support, Effective Job Talks, Understanding Journal Boycotts

Here is a recent piece from The University of Venus blog on graduate students and attrition. The author, Anamaria Dutceac Segesten, begins by allowing that some attrition is probably beneficial: some people will inevitably decide that graduate study isn’t right for them. But she argues that even those who are in the right place would benefit from additional support from sources outside their departments. She divides that support into two types of ‘services’: psychological support and research and writing advice. This notion of additional support is great, and Segesten provides a helpful list of suggestions for managing the writing process. But I think it is worth noting the implications of treating writing as a problem in need of a solution. In this framework, writing is treated as a problem—akin to other life or organizational problems—to be solved rather than as an activity at the heart of the academic enterprise. Treating writing difficulties as mere matters of organization (or approach or determination) can lead students to feel that their difficulties ought to be more manageable than they are. When writing is treated more as a life skill than an academic skill, a student can be left in a difficult position: their weakness is characterized as minor but their experience of that weakness can be extremely unpleasant. Being a weak writer is rarely a ‘minor’ problem for a graduate student, and the solutions to such difficulties are rarely simple.

This post from The Professor Is In blog discusses delivering effective job talks. Kelsky’s post is full of great advice, all of which would be helpful to anyone preparing for an important talk. In particular, I wanted to highlight her discussion of the text necessary to support an effective talk. Her advice is ‘read but don’t read’, and most people can only achieve that apparent paradox with a well-designed written text. Nothing gives polish to a formal talk better than a prepared text: most speakers cannot achieve the necessary level of articulacy off the cuff (especially in a high-stakes situation when nerves are more likely to be an issue). At the same time, nothing weakens a talk more than seeing nothing but the top of the presenter’s head as a paper is read word-for-word from the page. As hard as it sounds, we all need to find a perfect blend of textual support (to avoid inarticulacy) and rehearsed confident delivery (that doesn’t appear to rely on a written text). Here is an earlier post that suggests some ways to create a text that will support a sophisticated and fluent talk without the appearance of reading.

We all know that we can’t read everything and that we can’t follow every story that comes along. When a story is new, we all make decisions about whether a story warrants immediate engagement or not. Sometimes, inevitably, we guess wrong and end up feeling as if we’ll never grasp all the nuances of a particular story. I thought (or maybe just hoped) that the boycott of Elsevier was one of those stories that I could ignore. Then, of course, it wasn’t! So I was very happy to find this helpful post from Barbara Fister writing at Inside Higher Ed. She starts at the beginning, documents the important steps along the way, and draws valuable conclusions. The comments on her post are also surprisingly constructive and interesting.

Every other week, this space is devoted to a discussion of things (articles, news items, or blog posts) that I have recently found interesting. I choose things that are connected—sometimes closely, sometimes only tangentially—to academic writing. Responding to other people’s ideas allows me to clarify my own thoughts and to draw your attention to other approaches to the issues central to this blog.

Commas and Relative Clauses

Our task for today is to understand how we punctuate relative clauses. In the simplest terms, a relative clause is a clause that begins with a relative pronoun (which, that, who, whom, whose). Let’s begin by looking at this example of a sentence with a relative clause:

CNCP patients, whose complaints of pain are not adequately addressed, start to display aberrant drug-related behaviours that are mistaken for addiction.

This sentence—taken directly from student writing—is not incorrect as written, but it doesn’t say what the author intended. Here is what the author meant to say:

CNCP patients whose complaints of pain are not adequately addressed start to display aberrant drug-related behaviours that are mistaken for addiction.

The difference? The second version of the sentence shows that it is about a subgroup of CNCP patients ‘whose complaints of pain are not adequately addressed’. There are many CNCP patients in the world and only some of them suffer in this manner. The first version says that all CNCP patients have complaints of pain that are not adequately addressed. Because of the commas, we have to read the relative clause as supplementary information about all CNCP patients. Technically, our first sentence could be reworded as follows:

CNCP patients [all of them] have complaints of pain that are not adequately addressed. CNCP patients start to display aberrant drug-related behaviours that are mistaken for addiction.

Rewording the sentence in this way reflects the fact that the original sentence portrayed the relative clause as supplementary. But the author’s intention was not to provide extra information about this group of patients; instead, the author wanted to define a particular group of patients under discussion. The lack of commas in our revised version indicates that the information following the relative pronoun is integral to the antecedent noun:

CNCP patients whose complaints of pain are not adequately addressed start to display aberrant drug-related behaviours that are mistaken for addiction.

Here, the integration of the relative clause is conveyed to the reader by the absence of commas. When we do use commas, we are telling the reader that we are providing supplementary information.

I chose this example because it is easy to see—even without being familiar with the subject matter—that the punctuation in the original sentence was probably misleading. One of the great difficulties in explaining how to punctuate relative clauses is that context matters. I always tell students to take whatever I have said about punctuating relative clauses home with them: only in applying those principles to their own sentences—sentences that they themselves fully grasp—will they come to understand whether a relative clause is integral or supplementary.

If you are familiar with this topic, you will notice that I am not using the traditional terminology (restrictive and nonrestrictive) or the usual variants (defining and non-defining, essential and inessential, identifying and non-identifying). It is possible, of course, to explain what is meant by these terms, but I have never found the common terminology to be particularly intuitive. More recently, I have noticed people using the terms integral relative clause and supplementary relative clause. I find these terms to be more intuitive, which is why I have started to use them in my classroom teaching. I would be interested to know if anyone has thoughts about whether this different terminology is helpful or just confusing.

Now let’s look at some more examples to reinforce the distinction between integral and supplementary relative clauses.

There are many narratives that can be used to illuminate the psychological concept of extraversion.

The relative clause is integral to the meaning of ‘narratives’. The sentence isn’t just telling us that there are many narratives. It is telling us that there are many narratives that can be used in a particular fashion.

The philosophical approach that is articulated by Rorty will set the tone for the proceedings at the conference.

This sentence is telling us what will set the tone for this conference. And it isn’t just any philosophical approach: it is the philosophical approach that is articulated by Rorty. Again, the relative clause is integral to the meaning of ‘philosophical approach’. Now let’s look at some examples of supplementary relative clauses:

Given the educational conditions in Malawi, which is located in eastern Africa, creative teacher training programs are essential.

Using transactional memory, which requires special hardware or software support, will address the problems associated with using locks.

Theorists argue that gender equity, which is defined here in economic terms, is a crucial component in any attempt to address the global AIDS crisis.

In each of these cases, the antecedent of the relative clause is completely sufficient without the relative clause. A country is a useful example since it is easy to see that you don’t need any additional information to know what is meant by Malawi. Its location within its continent is obviously supplementary information. Likewise, ‘transactional memory’ is a fully defined term: the fact that it requires special hardware or software support is extra information. Take that information away and the term itself is just as informative. In the third example, even though the relative clause claims to be defining ‘gender equity’, it is doing so in a supplementary way. The sentence is telling us that gender equity is crucial and it is also clarifying what gender equity means in this context.

Here is a final example, one that gives three different versions of the same sentence:

The articles, which stem from the 1970s and the early 1980s, show Lefort intent on persuading the reading public about the totalitarian nature of the Soviet Union and the countries of the Eastern bloc.

The articles that stem from the 1970s and the early 1980s show Lefort intent on persuading the reading public about the totalitarian nature of the Soviet Union and the countries of the Eastern bloc.

The articles which stem from the 1970s and the early 1980s show Lefort intent on persuading the reading public about the totalitarian nature of the Soviet Union and the countries of the Eastern bloc.

The first two sentences follow the pattern I have been discussing. I chose this example because it shows how easily ambiguity can arise when we’re not clear about the punctuation we need. The first sentence is discussing a group of articles and using its relative clause to give us extra information about when they were written. The second sentence, on the other hand, is using its relative clause to identify a particular subset of articles. The implication of the first sentence is that all the articles were written in the 70s and early 80s. The implication of the second sentence is that there is a broader group of articles (presumably spanning a broader time frame); the author is drawing your attention to a subset of that broader group. Needless to say, it is important for the author to clarify which is meant. In my own experience, the decision about how to punctuate relative clauses often helps me to clarify my own meaning. Similarly, in discussing this issue with students, it often emerges that they aren’t quite sure what they were hoping to convey through their punctuation choices.

But what of the third sentence? Is it the same as the second sentence or is it different? In other words, is it okay to use ‘which’ to introduce an integral relative clause? Yes, it is. But while I would love to leave it at that, I feel I should say something about how I view this issue. The good news is that we have already covered the important part: you must signal your intention to your reader through your use of commas. If the information is integral, skip the commas; if, on the other hand, the information is supplementary, show that with your use of commas. Simple enough. But you do need to choose a relative pronoun and, for many, that decision raises a certain anxiety. When I ask students about their habits in this regard, I get a range of replies (often involving something a high school English teacher once said): guessing and then feeling bad; turn taking (first ‘which’, then ‘that’); using ‘which’ because it is more formal; never thinking about it. For a fairly typical prescriptive discussion, see this post from APA Style. For a more nuanced, historical view, try Stan Carey’s excellent post on this topic (as usual, Carey also provides a very helpful roundup of what others have said about this issue).

Given the general uncertainty this topic engenders, what should we do? My own preference—and that is all it is, a preference—is to use ‘that’ without commas and ‘which’ with commas. The first part of this practice is unexceptionable: nobody uses ‘that’ to introduce supplementary information. It is the second part that causes heartache. Look at this simple table:

        integral    supplementary
that    YES         NO
which   ??          YES

My preference is to replace those two question marks with a ‘NO’. Not, to repeat, because I think this use of ‘which’ is wrong, but only because I like the clarity and simplicity of reserving ‘which’ and ‘that’ for different uses. I start with the important question—do I need commas or not?—and then use that as the basis for my decision about what relative pronoun to use. I explain this to students in just these terms: once they have sorted out the important issue of how to punctuate, they are free to choose their relative pronouns however they wish. But I do stress that this distinction is often treated in more absolute terms in advice on scientific writing. Whether or not this is true across the board, I do suggest that students preparing scientific papers consider reserving ‘which’ for instances in which they are using commas to convey supplementarity. For me it all comes down to this principle: if our audience might find a particular usage to be ambiguous—even if we know that it is perfectly acceptable—it can make sense to avoid that usage.

There is much more that could be said, but this post is already far longer than a blog post should be! If there is anything that you see as needing further explanation or elaboration, I would love to hear about it in the comments.

This post is the fourth in a series of posts on comma use. The first post dealt with commas and coordinating conjunctions. The second dealt with non-standard commas and punctuating for length. The third dealt with the importance of knowing when you need a pair of commas.

Impactful Pet Peeves

Everywhere I’ve been over the past week, people have been sharing this list of ‘grammar mistakes’. You don’t need to click on the link to know the sort of thing: a list of errors that are terribly egregious despite the fact that everyone makes them all the time. I am fascinated by the mindset that is unmoved by the prevalence of such ‘errors’. The pleasure of being right when everyone else is wrong seems to be so great that it obscures any sense that we should view the prevalence of a particular practice as relevant.

I generally try to avoid linking to things that I find as unhelpful as this list; you surely don’t need my help finding shoddy advice on the Internet. But I went ahead and did so because I want to point to two key issues with this list. First, very little on this list is grammar (and the bits that are grammar are either wrong or dismally explained). This observation is more than just a quibble. The perception among students that their writing problems primarily involve grammar means that they often view their path to improvement as both narrow and fundamentally uninteresting. This is not to say that grammar is actually uninteresting (obviously!) but rather that students might engage more readily with the task of improving their writing if they conceived of the task as having a broader intellectual basis. Improving your writing isn’t just fiddling with technicalities and arcane rules; it is a matter of thinking deeply about your ideas and your communicative intent. Calling it all grammar can be both dismissive and uninspiring.

The second—and more important—issue is the reasoning that underlies this list. A list like this says ‘all educated people should know these things, so avoid these errors lest you seem uneducated’. This edict misses an opportunity to talk about better reasons for avoiding certain usage patterns. For example, should you say ‘impactful’? It is meaningless to say that it isn’t a word: it is so obviously a word (if you aren’t sure, contrast it with ‘xsxsjwcrt’ and you’ll see the difference). But that doesn’t mean the world needs more instances of ‘impactful’. Use it at your own risk: most people find it icky and its presence in your writing may make them think unkind thoughts about you. Moreover, if something is having an impact on something else, you can likely convey that more effectively with a clear subject and a strong verb. Your writing will improve much more decisively if you disregard unnecessary discussions of legitimacy and instead think more about why certain usage patterns are so widely disliked.

After I had written this, I found a great roundup on this topic from Stan Carey. He discusses a range of these sorts of lists and provides his usual insightful response. He concludes with an excellent warning about grammar pet peeve lists: “Read them, if you must, with extreme caution, a policy of fact-checking, an awareness of what grammar isn’t, and a healthy disrespect for the authority they assume.”

Lastly, I really enjoyed the inaugural episode of the new language podcast from Slate, Lexicon Valley. The highly entertaining and wide-ranging conversation about dangling prepositions ends with an amusing discussion of Paul McCartney’s famous double preposition. A preposition at the end of a sentence is generally permissible, but it is probably best not to split the difference in this fashion: “But if this ever-changing world in which we live in/Makes you give in and cry/Say live and let die”.

Fear of Error

Before the holidays, I wrote a brief post commenting on something Stan Carey had written in the Macmillan Dictionary blog about adopting a forgiving attitude towards mistakes. I concluded that post by saying that “Better writing will come not from the fear of error but from the appreciation of the power of great prose.” Although I now wish I had been a bit less pompous, that is an accurate reflection of how I feel. At least it is what I tell others they should feel. But I had an interesting moment of further reflection recently that made me wonder how well I practice what I preach. I was reading the Facebook comments on a Huffington Post article. Early on in the comments, someone pointed out two ‘errors’ in Lisa Belkin’s article (a misused hyphen and a case of improper capitalization). Belkin graciously acknowledged both errors, thanked the person who had caught them, and tried to shift the conversation back to the topic at hand. But the allure of discussing editorial fallibility was too great. People began piling on and soon someone asked whether HuffPo was without editors (you can imagine the tone in which that question was asked). To her great credit, Belkin pointed out that they do indeed have editors and that they also have hundreds of extra editors, a system that worked pretty effectively in this case. Mistakes were made, mistakes were identified (by those elusive fresh eyes that editing demands and that are in such short supply), mistakes were eliminated. A happy ending, unless you believe that someone somewhere dies a little bit every time a mistake is seen by the public.

I was so impressed by the sanity of this response. Rather than wishing nobody had ever seen her mistakes, she was glad that someone caught them. I wish I could adopt such a sanguine attitude about the possibility of error in my own writing. I have to keep reminding myself that errors aren’t ultimately what matter; reception and engagement are what matter. If we are read by lots of people, there is more chance that our words will have an impact and more chance that those people will come back to us with interesting and challenging reactions. And there is more chance that at least one smarty pants will come along and happily point out our mistakes. In this vein, I love reading the New York Times’ After Deadline blog, in which an editor discusses all the stuff that got past their editorial staff. I’m always amazed by how much their editorial staff care about all this and by the fact that this impressive commitment in no way prevents them from missing all sorts of problems. I think devoting a blog to the acknowledgement, correction, and dissection of those errors is a great way to handle them. This sort of treatment shows that mistakes are inevitable, fixable, and often very interesting.

I was hoping that this post was going to be about the use of commas in restrictive and nonrestrictive clauses, but that just didn’t happen. Maybe next week will be more conducive to thinking deeply about commas!