Not exactly, but it's the same old industry perspective, this time in the context of evaluation. A recent article in the Chronicle of Higher Education states that standardized testing and other forms of assessment are not effective measures of a graduate's skills. It's the term "skills" that I find interesting here. The article reports that employers believe, in general, that new graduates need to improve in the areas of global knowledge, self-direction, and writing.
I know there's a long-running debate within Composition and Technical Communication about what exactly it is we're teaching students in the classroom (art, analysis, critical thinking, mechanical skill, etc.), but when I look at those three areas of improvement, I'm thinking "one of these is not like the others."
From an instructional perspective, I know that global knowledge and self-direction can be taught in a multitude of ways within a writing course. But how does one go about evaluating them? The CHE article dismisses multiple-choice testing as inappropriate ("this isn't a multiple choice world"), but identifies project portfolios and essays as some of the most useful tools for assessing a potential candidate. So if the emphasis is on the textual (yes, I'm privileging the textual here), could poor writing "skills" mask a graduate's knowledge of any subject (let alone an ability to self-direct)? What I'm getting at is this: if candidate evaluations are based primarily on textual artifacts, should we not be emphasizing the effective authoring and management of those artifacts? Should we not be teaching students the specific mechanical skills they need to author the documents and essays they will use to transition into professional roles?
I've muddied my own thinking here, I know. But there's a nugget in all this that has to do with where we spend our time in the writing classroom, and with what we're asking our students to consider as both academics and pre-professionals.