Today’s post is a Guest Post from a faithful reader and client on the tenure track, and also on the job market, who discovered some interesting points about “assessment” while she was at some interviews this year. More and more often, candidates find themselves being asked about assessment, and indeed, just today I was assisting with a postdoc app that also required the applicant to discuss assessment strategies in their teaching. I recommend that all job candidates familiarize themselves with some of the ideas and terminology surrounding this increasingly common term, reflecting as they do the pressure the academy is under to “rationalize” its practices and “prove” its legitimacy and effectiveness. Readers, please add your own experiences with this interview theme in the comment thread below.
I recently had two phone interviews with major universities, and one campus visit. Questions about assessment came up in both phone interviews, and in the campus visit. My guess is that assessment is on the minds of search committee members as more accrediting agencies emphasize assessment in the review process.
The first question in the phone interviews asked how I would assess a course. The second asked how I would incorporate assessment in curriculum development. I’m reproducing my responses below because I think that it would be helpful to blog readers to have a response ready should similar questions be asked of them.
In response to the first question, I answered that I use several assessment strategies in my courses:
— Scoring and instructional rubrics to help students to focus on content and to guide them in developing presentations and written and oral reports
— Concept maps in order to help students to understand the big picture
— Cooperative learning assessment to encourage peer-to-peer learning
I also use multiple assessment tools. Assessment tools that are common to my courses include:
— Concept Tests
— Oral presentations
— Written reports
— Peer review
— Research projects and papers
— In project-based courses, performance assessment
In response to the second question, about incorporating assessment into curriculum development, I would argue that curriculum development initiatives should incorporate a combination of formative and summative assessment throughout the process.
— Formative assessment activities are used to provide feedback, evaluating learning progress in order to motivate students to higher levels.
— Summative assessment activities are used to judge final products for completion, competency, and/or demonstrated improvement.
Formative assessment can be used during the planning and implementation of courses, through tools like surveys and student focus groups, in order to ensure that individual course and curricular objectives are being met.
Upon completion of individual courses and/or a program of study, a combination of formative and summative assessment can be used to evaluate competency and solicit feedback in order to ascertain whether goals and objectives are being met.
My UK experience: As a candidate, I have never been asked explicitly about assessment in an interview. The closest question I consistently got was a version of "this institution has an increasing number of international students; how would you make learning more inclusive for such diverse cohorts?" This is possibly the most difficult question I ever got, because it is very general and has many hidden dimensions.

After stumbling on this question a couple of times, I planned an answer where, instead of focusing on specific solutions, I took a more conceptual approach, using knowledge from my field to articulate my answer. I spoke about the need to understand the underlying assumptions about learning as well as cultural approaches to learning. I also spoke about developing strategies that challenged otherness in the classroom and promoted discussions that fostered the use of situated knowledge and sense-making to aid group learning. I also positioned this as part of an institutional approach rather than a one-scholar battle, and took that opportunity to mention any relevant institutional initiative I had come across. In one case, they had a UG mentoring programme where third-years would mentor first-years, and I spoke of developing this at course level as part of groupwork.

It felt like a politician's speech (topical but so unbelievably general!), but it actually went down well the times I used it. I asked someone who was on the appointing panel for my current post about this, and they said it stood out positively because my answer was not generic and gave the impression that my teaching was clearly informed by my research. I won't say that I planned it to that extent; I honestly was a bit desperate and simply thought about how to tackle a question of this sort in a way that would allow me to talk about it confidently.
My two cents, as it were, are as follows…
I think I’ve been asked about assessment in interviews, but I can’t remember what I said, which is somewhat of a shame (not in the sense that I might be able to provide some blinding insight, but because my memory is getting worse and worse). The suggestions in the post above are clear and detailed, but I think that the answer might be a little dry (e.g. summative, formative, etc.). As a result of moving institution and country, I’ve had to think about these sorts of things a lot more than usual recently, and I have come to a particular way of thinking that might be helpful.
I’ve come round to the perspective that it does not necessarily matter what we do as teachers; it matters what we get the students to do as learners. Hence, assessments are about what we ask students to do and why we are asking them to do specific things: we need to be clear about the rationale for getting them to carry out tasks (e.g. essay writing, presentations) and make sure each task has a purpose that helps student understanding (e.g. creating cogent arguments, practising verbal presentation).
I’m not going to claim credit for this view, since I picked it up off some Danish videos about teaching, which are available free on YouTube (and in English): http://www.youtube.com/watch?v=iMZA80XpP6Y
Once you get past the ever-so-slightly cheesy direction and voice over, the video actually starts to make sense. Students have different abilities, for sure, but they can also learn how to do similar things. We can’t expect every one of them to learn the same way or to the same standard, but they can at least pick up some elements of the underlying activities.
Not sure if that helps at all….
(PS Karen, I really enjoy the website, by the way. A great service for graduates, and one I’m trying to tell everyone I know about. What about a post on how to teach? Would be very helpful)
A word of advice, especially about formative assessment and especially if you are in the sciences:
If you don’t know about clickers or who Eric Mazur is, you should learn now. He began asking multiple choice conceptual questions in his physics class at Harvard as a part of his lecture, and with the advent of personal response devices (like clickers) this can become instantaneous feedback about the status of the class’s understanding of a particular topic. Coupling this technique with peer instruction can provide even further learning gains as evidenced by this article.
Because of my own focus in physics education research, talking about using clickers and peer instruction in class is a part of my teaching philosophy. It is also a common technique used in physics classrooms (especially introductory physics classrooms) all around the country. These techniques are so ubiquitous in the sciences that if they weren’t mentioned in a teaching philosophy for a science TT position, my advisor would put that application in the “reject” pile when he is on the search committee.
Time is not a constant says:
We ask about assessment in every single interview we do. First, assessment is required for accreditation purposes. Second, it is vital to the teaching process: you cannot teach effectively if you have no tools to test your effectiveness.
Third (and possibly most importantly), we are a teaching college. Individuals who focus on research and those freshly out of school don’t do assessment and/or don’t understand how to do it effectively. They often get it confused with evaluation, which is completely different.
As a result, the assessment question tests your knowledge of actual classroom practices, not just general knowledge about your field. For this same reason, we also give classroom behavior scenarios.
As a note, this is doubly important for individuals applying for a dean or discipline coordinator position. If they don’t understand assessment, we do not hire them.