Copyright 2001 by Kelvin Seifert. A version of this paper was presented at the annual meeting of the American Educational Research Association, Seattle, WA, April, 2001. Do not quote or distribute without permission of the author.

Is There a Best "Wait Time"? (1)

by Kelvin L. Seifert,
University of Manitoba (2)

The modest project described in this paper used the basic strengths of Internet communication to help students assess the simple but important idea of "wait time" (WT)-the length of time a teacher allows for a student to respond before moving on to another student or question. The Internet, and particularly a moderated e-mail listserv, worked well for this purpose because the technology allowed observations to be tallied quickly from a diversity of classrooms, and because it encouraged dialogue for interpreting observations almost as soon as they emerged. The advantages stemmed from the intrinsically distributed character of the Internet, as well as from the suitability of that character to the study of this particular educational research problem.

The Problem: Is Longer Better? Educational psychology texts often recommend that teachers increase their WT when questioning students. While more detailed and research-based discussions give a balanced assessment of WT (Tobin, 1987; Ormrod, 1999, pp. 305-307), the emphasis in the textbook literature is more often on the advantages of longer waits (3 seconds or more). Complicating circumstances are noted, but only in passing. Conventional textbook summaries (i.e. the wisdom usually distributed to education students) imply that a single desirable WT exists, one not influenced by cultural diversity in discourse styles, the cognitive requirements of different tasks, the size of the group witnessing a WT, or the maturity of particular students. The fact that wait-time research and recommendations are based primarily on observations of secondary science classes is rarely mentioned or discussed. Complicating factors are either ignored or described explicitly as unusual, and therefore treated as relatively unimportant (e.g. Rowe, 1987; Woolfolk, 1998, pp. 491-492).

The technology-based activity described here allowed students studying educational psychology to assess for themselves whether the "long wait-time rule-of-thumb" is indeed appropriate classroom advice. It did so by taking advantage of the diversity of education students' experiences in schools, as well as their diversity as interpreters of classroom processes. When these were linked through a highly "distributed" communication tool, the Internet, the result was stronger generalizations and interpretations than students could have achieved individually.

The Procedure: Following a brief introduction in class to the idea of WT, education students were asked to observe the WTs enacted by at least one teacher during a single, particular classroom activity (e.g. small-group discussion, whole-class discussion, or a group transition time). Because of the program in which the education students were enrolled, the teachers observed all happened to teach elementary school and were relatively experienced (more than five years in the classroom). For the activity observed, each student measured WTs for at least ten interactions, simply using a wristwatch to estimate times to the nearest half-second. Although the measurements were therefore not precise, the procedure proved quite satisfactory for half-second accuracy. Students also recorded the context of each observation-the number of students participating in the activity, their grade level(s), the apparent purpose of the dialogue or activity, and any other information the observer considered relevant.
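
For readers who want a concrete picture of the data being collected, the following is a minimal sketch, in Python, of what one observation record might look like. The field names and example values are assumptions introduced only for illustration; the actual study collected the same information informally by e-mail rather than in any fixed format.

    # Minimal sketch of one student's wait-time observation record.
    # Field names and example values are illustrative assumptions, not study data.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class WaitTimeObservation:
        observer: str                 # education student submitting the record
        teacher: str                  # anonymized label for the teacher observed
        activity: str                 # e.g. "small-group discussion", "whole-class discussion"
        grade_level: str              # grade level(s) of the children involved
        group_size: int               # number of students participating in the activity
        wait_times_sec: List[float] = field(default_factory=list)  # >= 10 timings, nearest 0.5 s
        notes: str = ""               # any other context the observer considered relevant

    # Example record with made-up values:
    example = WaitTimeObservation(
        observer="student_07",
        teacher="teacher_A",
        activity="whole-class discussion",
        grade_level="Grade 4",
        group_size=24,
        wait_times_sec=[1.5, 3.0, 2.5, 4.0, 0.5, 3.5, 2.0, 5.0, 3.0, 1.0],
        notes="Review of familiar material; fast pace seemed intentional.",
    )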

The Role of Technology in the Activity: All observation results were sent via e-mail to a classroom-based moderated listserv (with me as moderator), where the results were immediately tallied and the cumulative results were sent back to all members of the class for inspection. The cumulative tally included more than the wait times as such; it also provided current overall mean and median WTs for each teacher and for the study as a whole, as well as students' comments contextualizing the observations.
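
The tallying step can be pictured with a similarly minimal sketch. The paper does not specify how the running tally was assembled, so the data layout (a teacher label paired with a list of timings) and the function below are assumptions meant only to illustrate the computation of mean and median WTs per teacher and for the study overall.

    # Minimal sketch of a running tally of mean and median wait times.
    # The data layout and function name are assumptions; the numbers are illustrative only.
    from collections import defaultdict
    from statistics import mean, median
    from typing import Dict, List, Tuple

    def tally(observations: List[Tuple[str, List[float]]]) -> Dict[str, Dict[str, float]]:
        """Return mean and median wait times (seconds) per teacher and for the study overall."""
        by_teacher: Dict[str, List[float]] = defaultdict(list)
        for teacher, times in observations:
            by_teacher[teacher].extend(times)

        all_times = [t for times in by_teacher.values() for t in times]
        summary = {
            teacher: {"mean_s": round(mean(times), 2), "median_s": round(median(times), 2)}
            for teacher, times in by_teacher.items()
        }
        summary["overall"] = {"mean_s": round(mean(all_times), 2),
                              "median_s": round(median(all_times), 2)}
        return summary

    # Illustrative use (not actual study data):
    print(tally([
        ("teacher_A", [1.5, 3.0, 2.5, 4.0, 0.5]),
        ("teacher_B", [3.5, 2.0, 5.0, 3.0, 1.0]),
    ]))

In the study itself, a tally of this kind was simply redistributed to the class by e-mail each time new observations arrived, along with the students' contextual comments.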

Students discussed the accumulating results using the same classroom-based listserv. By instructor fiat, each student contributed at least twice-though many contributed much more often. The first time, he or she proposed a thoughtful generalization or two about emerging trends, even before data were complete. The second time, the student critiqued classmates' proposals about the WT trends-pointing out complicating factors, using more recently posted data, or noting contextual information that had been overlooked. As the instructor, I participated in the listserv discussions at key points to get dialogue moving or to provide conceptual balance. I tried, for example, to counteract "me-too-ism" in students' comments ("I agree with the last 10 writers," etc.), to balance extreme relativism ("You can't make any generalizations about WT at all"), or to provide public support in case a student was criticized unfairly (though this never proved necessary). Overall, however, I participated in discussions of the results much less than the students. At no time was class time used for discussion of the WT observations-a benefit that gave me more class time for other purposes.

Results

The project yielded two kinds of information: the results found by the education students themselves, and insights about the use of technology for teaching educational psychology. The latter information-about the use of technology-was the main reason for writing this paper, but let me begin with the former-the results found by the students-in order to place the technology insights in context.

Students' Observations and Interpretations: In general, the students' observations of WT did not strongly support the research literature. About 25 students participated in the study, and the WTs they observed varied from ½ second to 19 seconds, with a median WT of just over 3 seconds. The median figure of 3 seconds, as it happens, is the minimum WT recommended by most of the research literature on this topic, and about twice as long as what the research literature has actually observed and reported (Stahl, 1994). Contrary to the usually published trends, therefore, more than half of these particular teachers did wait ample amounts of time.

The observations made by students, of course, may not have been as reliable as those previously published, since the student-observers of this project were relatively untrained. Individuals may have defined WT variously (one student reported only later, in her interpretations, that her WT began when the teacher designated a speaker rather than when the teacher finished speaking; another student reported the converse). They were also motivated to a significant extent by course requirements rather than by intrinsic interest in WT as such; as with science labs, some students may have been more concerned about turning in plausible-looking data than about finding truths about WT.

Even though students had read summaries of the research and of its recommendations about WT, they frequently departed from this "received wisdom" in interpreting their observations. Longer WTs, for example, were not necessarily regarded as better for students; some students considered long waits in large groups boring for classmates, or even humiliating if directed to a child who eventually failed to answer a question correctly. On the other hand, shorter WTs were not necessarily regarded as less desirable; when material was easy or familiar, in particular, a fast pace was regarded as more appropriate, and a WT of less than 3 seconds was sometimes praised explicitly. With smaller groups of children, it was thought, longer WTs might be more acceptable than shorter ones (because fewer people are waiting for an answer)-except that longer waits might also be less necessary in small groups, since a teacher might know how to pace herself better. And so on. The only interpretative generalization made consistently by students, in fact, was that context mattered more than WT as such. Although none said so explicitly, many students clearly implied that the canonical educational research on this topic was distinctly unhelpful because it amounted to overgeneralization.

Insights About Using Technology To Teach Wait Time: The procedures for this investigation worked well because they matched the "distributed" character of practice teaching with the distributed character of Internet communications. As educational commentators often note, teachers tend to be isolated in their work, in spite of being surrounded by students. An analogous problem plagues education students doing practice teaching: each works in a separate classroom, or even in a separate school building, and he or she faces logistical difficulties in sharing teaching experiences. Under these circumstances the Internet is well suited to bring students together: from separate terminals distributed among their schools or homes, students can create a common fund of knowledge quickly and easily, and create a common dialogue about it as well.

In this case students used the Internet to consolidate their (distributed) knowledge about WT. In general they found the concept of WT meaningful and the task of measuring it feasible. Automating the tallies on-line made it possible to amass large amounts of data rapidly even with large classes: a class of 35 students would produce 35 x 2 x 10 = 700 observations of WT! A large, diverse body of data strengthened the results and made trends, if they existed, more likely to be detected.

The interpretative e-mail discussions often included critiques not only of the merits of different WTs, but also of the research methods themselves. Some individuals argued that the wait times had been measured unreliably, or that the contextual comments were not well chosen and therefore not useful or meaningful. These concerns needed to be treated respectfully, but I did not regard them as pedagogical problems: on the contrary, critiques of research methods were signs of pedagogical success, since they suggested that students were confronting authentic problems of educational research.

But there were also four significant problems. First, large volumes of data increased the complexity and the amount of apparent "noise" or randomness in the results, and therefore increased the challenges of interpretation. Some students dealt with this problem with relish, participating in the e-mail discussions heavily, as if such discussions were their true calling in life. Most, however, soon settled into a common position-that context, and only context, determines the value of any particular WT.

Second, not all students seemed comfortable with the listserv medium. They showed discomfort by responding with only brief, cursory, or shallow answers (or "me-too's," as I termed them above). In a few cases I helped such students by corresponding with them individually by e-mail ("Your comment about X was intriguing, but I didn't understand it. Can you write more about it to the class?" etc.). This strategy worked well, but took time, like any form of individual help.

Third, as with classroom discussions in person, there was a tendency for students to address the instructor rather than each other-even though I had explicitly asked them to comment on each other's observations and interpretations. In compliance with the assignment, they did indeed comment on each other's work, but usually in the third person ("Joe found X") instead of in the second person ("Joe, you found X"). The project did not have a long enough lifespan for me to solve this problem fully.

Fourth, and perhaps the biggest problem, was knowing when and how to limit the project's scope. The data and interpretations inevitably pointed to major educational issues only indirectly related to WT. Interpretations of the WT results led, for example, to dialogues about gender roles (Do girls and boys get and/or need the same WT?), to dilemmas about integrating children with special needs (How long should you wait for a child with a significant speech problem?), and to assessment issues (How might a student's failure to respond fast enough affect a teacher's evaluation of the student?). In the end it proved impossible to deal with such large issues fully. Doing so required time and dialogue-more of it, I found, than I or my students were able to give, even using a logocentric, high-tech medium like e-mail.

In fairness to myself and my students, however, this limitation was not unique to e-mail, listservs, or the Internet: a shortage of discussion time has been a problem since long before computers were invented. To paraphrase Parkinson's law, "discussions expand to fill the available medium"-in this case, the medium of e-mail. I was left with much the same decisions about discussion management that I face every day in classes without computers: when shall I extend a line of discussion, and when shall I let it drop or even actively stop it?

References

Ormrod, J. (1999). Human learning, 3rd edition. Columbus, Ohio: Merrill.

Rowe, M. (1987). Wait-time: Slowing down may be a way of speeding up. American Educator, 11, 38-43.

Stahl, R. (1994). Using "think-time" and "wait-time" skillfully in the classroom. ERIC Digest. Bloomington, IN: ERIC Clearinghouse for Social Studies/Social Science Education. ED370885. <www.ed.gov/databases/ERIC_Digests/ed370885.html>

Tobin, K. (1987). The role of wait time in higher cognitive learning. Review of Educational Research, 57, 69-95.

Woolfolk, A. (1998). Educational psychology, 7th edition. Boston: Allyn & Bacon.

Endnotes

1. A version of this paper was originally presented at the annual meeting of the American Educational Research Association, Seattle, WA, USA, April 10th 2001. The author wishes to thank the students of 129.180, "Psychology of Learning," for their helpful comments on an earlier version of the paper.

2. Please address correspondence to Dr. Kelvin Seifert, Faculty of Education, University of Manitoba, Winnipeg, MB CANADA R3T 2N2.