
Tuesday, January 8, 2013

Tomorrow's Professor: The Three Most Time-Efficient Teaching Practices

http://derekbruff.org/blogs/tomprof/2013/01/07/tp-msg-1218-the-three-most-time-efficient-teaching-practices/



The Three Most Time-Efficient Teaching Practices

A recent study (Bentley & Kyvik, 2012) found that faculty in the United States spend on average over 50 hours per week on the job, and of those hours, over 20 are spent in teaching activities. These hours can be much higher for faculty at certain stages of their career or at certain kinds of institutions, but regardless, we spend a lot of time at our work. But more isn't necessarily better; we don't measure productivity in academia in terms of hours logged. What are we gaining by the time spent? And are we finding the time we spend meaningful and rewarding?

 

What constitutes productivity in teaching is a point of debate, of course, but many of us agree that we want to facilitate student learning. When faculty are challenged to change traditional teaching practices to promote better student success, all we may see looming before us is additional class preparation time. The best kept secret, however, is how much more time-efficient some of these touted teaching practices are. Below I discuss three of these best practices and the positive impact they can have on the way we spend our time teaching.

 

Begin with the end in mind.

 

The principle of backward course design (Wiggins & McTighe, 1998) echoes one of the late Stephen Covey's Seven Habits of Highly Effective People (1989). In this model, course design begins by determining what it is that we really want students to be able to do or feel or think long after the final exam is over. Then we make every other aspect of the course serve those goals. Once we have articulated our goals, whether lofty or pragmatic, our next step is to determine how students will demonstrate to us that they have indeed achieved the kind of learning we want of them (assessment). Lastly we turn our attention to the class format and activities that would facilitate that achievement. Aligning these three facets of course design (goals, assessments, and activities) builds coherence and synergy into the course, creating greater opportunities for students to learn what we want them to learn.

 

Clear course goals that communicate the nature of our disciplinary work to students tend to take the form of "At the end of this course students will be able to: explain concepts such as..., develop a thesis (or hypothesis), analyze data (or texts or images), contrast points of view on issues, or write cogent arguments based on research (or analysis)." We may assess students' achievement of these goals through exam questions, papers, or projects, checking that the level of thought or skill that we want students to gain is represented in those assessments. Key to alignment, however, is making sure that we give students opportunities on lower-stakes assignments or during class activities to practice the same skills we want them to ultimately demonstrate on our assessments.

 

The backward design approach helps focus our course efforts, not only generating better chances for students to learn what we want them to learn, but also saving class preparation time in at least three ways. First, we spend less time deciding what readings and assignments to include in our course because we now have targeted criteria to use to make that determination: our course goals. Second, we design our assignments around those course goals, so we spend less time grading or responding to assignments that don't accomplish what we had hoped and are, in essence, busy work for our students and for us. And third, we are more apt to restrain ourselves from taking on too much in the course. Articulating our goals rather than masking them in a generalized descriptive statement (e.g., "In this course we will discuss the effect of global economics on world trade") helps us see more clearly the demands we are placing on the novice learners in our disciplines.

 

Generate criteria or rubrics to describe disciplinary work for students.

 

Once we have clear course goals we can use them to generate criteria or rubrics, a time-efficient approach to grading. We faculty know what quality student work is when we see it, but our students do not. Disciplinary work is a mystery for students. As faculty we may have forgotten what it was like to be a novice learner in our field (a phenomenon known as expert blind spot), or we may have been more intuitive about these processes even as students. In all likelihood, we faculty were not representative of the other students in classes with us at the time. We are a self-selected group that shares little in common with the vast and diverse array of contemporary students. Providing students with criteria or rubrics gives them a glimpse into the way that we think.

 

Sharing with students the criteria that we will use to evaluate their work both models disciplinary thinking for them and helps them develop the ability to evaluate their own work. Although we may think that these kinds of guides "give it away" and make our assignments too easy for students, rarely is this the case. Instead, these sets of criteria or rubrics can be a motivator for students. They make the assignment less of a mystery and make students' success seem more within their own control. For examples of rubrics in many disciplines see Walvoord & Anderson, 2010 (pp. 195-232).

 

Using criteria or rubrics to grade student work saves time by: helping students produce better quality work (and better quality work is both faster and more pleasurable to grade); allowing us to assign points more quickly and consistently as we grade; and providing clear criteria for us to use in talking to students about their grades. When a student comes to appeal a grade, we can ask her to explain how the work meets the criteria. So the session becomes less about faculty defending their judgment, and more about helping the student learn to evaluate work from a disciplinary perspective. Although it does take time to generate really useful sets of criteria or rubrics, we can use them over and over and adapt them to multiple purposes.

 

Embed "assessment" into assessments.

 

Generating criteria for student work also serves another purpose that is time-efficient: it helps us in our assessment of student learning outcomes for institutional purposes. Although assessment of student learning in terms of assignments, tests, and papers is second nature to faculty, "assessment" in the sense of tracking student learning outcomes is often considered a four-letter word. In reality, determining precisely what students are learning in our classes focuses our scholarly minds on our teaching. Just as we look for evidence to make arguments for our theses or hypotheses in our discipline, when we assess student learning outcomes we determine if our courses are accomplishing what we planned in terms of student learning. Based on what we learn, we can change our courses to make them more efficient in producing the outcomes we want.

 

Assessment processes have been criticized for consuming time without producing results. Because data collection often occurred at levels of the institution beyond the course (or even the department), it seemed removed from day-to-day course activities and needs (Hutchings, 2010). Some of the most meaningful assessment, however, comes from the data that we as faculty collect about what activities engage students most productively, what concepts and skills students find most challenging, and what interventions advance student progress. The key to making the assessment requirement work for us is to embed our assessment of student learning outcomes into regular class assignments, exams, papers, and activities.

 

Faculty are accustomed to assessing student work with a grade. When we think about student learning, however, a grade represents a composite accounting of all the knowledge and skills we ask students to demonstrate on a piece of work. Assessing student learning outcomes requires us to deconstruct or unpack what that grade represents. What specific kinds of knowledge and skills did students demonstrate on a graded piece of work? For example, if our goal is to develop students' critical reasoning abilities in our discipline, we may record the level of students' performance on certain test questions that are specifically directed at that goal. These questions may be multiple choice or short answer, in which case we keep track of correct student responses. Or we can examine students' performance on an essay using criteria (or a rubric) that capture the elements of critical analysis that we want students to demonstrate (see above). We then keep track of students' rubric scores to determine what aspects of analysis they have mastered and what aspects they need to improve.

 

The information gained from monitoring students' performance makes our teaching more time-efficient by directing our choices on class activities and assignments. For example, rather than lecturing on all aspects of course material, we focus class activities on those areas that students find most challenging. Likewise, we spend our preparation time designing and responding to assignments that are targeted more directly at developing key skills in students. The time we spend is more likely to produce the kind of learning we want in students.

 

Summary

 

Redesigning our teaching based on recognized effective teaching approaches does require an investment of time upfront. But that investment pays off every day, all year long. And our time is often spent in more intellectually satisfying interactions with students, increasing our sense of productivity, and making the time more meaningful and rewarding.

 

References

 

Bentley, P. J., and S. Kyvik. 2012. "Academic Work from a Comparative Perspective: A Survey of Faculty Working Time across 13 Countries." Higher Education, 63: 529-547.

 

Covey, S. 1989. The Seven Habits of Highly Effective People. New York, NY: Simon and Schuster.

 

Hutchings, P. 2010. Opening Doors to Faculty Involvement in Assessment. National Institute for Learning Outcomes Assessment Occasional Paper #4. Retrieved from www.learningoutcomesassessment.org

 

Walvoord, B., and V. J. Anderson. 2010. Effective Grading (2nd ed.). San Francisco, CA: Jossey-Bass.

 

Wiggins, G., and J. McTighe. 1998. Understanding by Design. Alexandria, VA: Association for Supervision and Curriculum Development.

 

CONTACT:

 

Linda C. Hodges

 

Associate Vice Provost for Faculty Affairs

Director, Faculty Development Center

University of Maryland, Baltimore County

1000 Hilltop Circle

Baltimore, MD 21250

