Does Professional Development Make a Difference? (With Thomas Guskey)


We interview Professor Thomas Guskey, a world expert on teacher development, about how to evaluate professional development. What evidence should we use? How can we tell when it’s successful? What can go wrong in professional development? And how can teachers be at the center of the process of evaluating it?

Does Professional Development Make a Difference? Transcript

Ross Thorburn:  Professor Guskey, thank you so much for joining us. In one of your papers about evaluating professional development, at the end, you say a lot of good things are done in the name of professional development, but so are a lot of rotten things. What are some of the rotten things that get done in the name of teacher development?

Professor Thomas Guskey:  One of the things that we tend to do in education, much to our detriment, is that we tend to be innovation oriented, without careful attention to the evidence that might support any particular innovation.

Educators often go to large conferences where they hear very dynamic presenters who share their opinions about how education could be structured, without a lot of evidence to confirm that it would really lead to greater student success.

We invest in these different innovations without thinking carefully about what impact they will have, not caring so much about what evidence really supports them.

Teachers try them. They use them for a period of time. They don’t see the improvements in student learning that were promised, and so they become very frustrated. As a result of that frustration, they tend to withdraw, become distrustful of the school leaders who bring in these innovations and of the people who advocate for them, and become rather skeptical overall, because they’ve been burned before.

I think that is going to work against us. I think we're coming to recognize, at long last, that when people are suggesting that these new things should be implemented, we're asking important questions about, "What evidence do you have to support it? Has it been used in contexts similar to ours? Have they documented the impact on students in ways that we find meaningful?"

When we ask those critical questions, it leads us in a very different direction.

Ross Thorburn:  You mentioned evidence there. Can you tell us what would count as evidence, or what does evidence look like?

Professor Guskey:  I think there is a wide range of sources of evidence of student learning. One of the dilemmas, particularly here in the United States, is that people want to look very narrowly at improvements in standardized test scores.

Those standardized test scores may or may not be well aligned with the curriculum that students are taught. If they’re poorly aligned with that curriculum, we wouldn’t see a lot of improvement, because they’re testing students on things that they haven’t been taught and that weren’t part of the instructional program.

If we think more broadly about the variety of student learning outcomes, they might include those major standardized assessments, but also teacher assessments, classroom assessments, and other demonstrations of learning and performance, and even go beyond that to affective outcomes, such as how confident students feel about themselves in learning situations.

Do they feel better about their abilities in school? Are they more engaged in learning activities both in school and out of school? Are they more purposeful in the way they’re going about their learning?

There's this wide range of student learning outcomes we know contributes to their success, but we often don't think of what we turn to evaluating the impact of professional learning programs.

Ross Thorburn:  Obviously, there's a lot of different types of evidence. For me, one of the problems I have with this concept is that for someone running a teacher training session, that the time it would take to do the follow‑up and gather evidence that it worked, or it didn't work, would just take so much longer than running the training itself.

How can that be done in a manner that doesn't require an obscene amount of time?

Professor Guskey:  There's a balance must be sought here. Indeed, the kinds of evidence that we often use to evaluate programs is gathered on a very irregular basis. That tends to be pretty ineffective.

The approach that I advocate is to actually ask teachers what evidence they trust, what evidence they would believe. If this is really working for them, how would they know it? What differences would they see in their students, and what evidence could they provide to confirm that difference?

Sometimes, that kind of evidence is not easy to gather. It might be something we have to obtain through classroom observations, or student interviews, or things like that, but we could certainly build that into the program.

The second aspect of it is that information must be gained rather quickly.

A few years ago, I was being interviewed by an educational journal. I was asked during the interview, "When teachers are trying a new innovation, a new approach, how soon should they see results?"

Very good friends of mine at that time were saying, "Well, change is a slow process, and it might take us a while to get there. And you have to sustain efforts over significant periods of time." I said, "In two weeks."

The interviewer was stunned. The reason that I suggested that is that all the evidence that we were gathering was showing that teachers need to see that what they're doing is making a difference.

Because the stakes are high for teachers, they have this fear that if they persist in using these new strategies and techniques, there is a danger that their students might learn less well. They are unwilling to sacrifice their students for the sake of innovation.

That means that you must build into any particular innovation some strategy whereby teachers can get evidence pretty quickly that it is making a difference. That means you can’t wait until the end of the year to give a standardized assessment.

What you need to do is think about the evidence teachers gather on a very regular basis to find out if their students are learning the things that are important for them to learn.

Ross Thorburn:  Does that mean that we’re really looking at raising teachers’ awareness of how to evaluate new techniques and methods rather than, say, launching a big research project about how well something works overall in a school in terms of student outcomes?

Professor Guskey:  Absolutely. In fact, I advocate for that. These are your frontline warriors who are implementing these strategies. They need to see that it's making a tangible difference in the school lives of the students.

We need to be helping teachers think about what evidence they would gather that would show this has made a difference. It could be increased engagement in class. It could be the kids asking more thoughtful questions. It could be them coming to class earlier and engaging in more thoughtful discussions as part of the class. There are all sorts of things that teachers look for to find out if what they’re teaching is really coming across.

Engaging teachers in those conversations about, "How would you know that this is making a difference?" is one of the most important ways we can start.

Ross Thorburn:  We're really then looking at helping teachers evaluate what works and what doesn't work in their own classrooms.

Professor Guskey:  Yeah, that is exactly right. In fact, I believe that that should be included in any professional learning experience. If this worked, how would you know it? How would you be able to tell from your students? What evidence would you gather to show that this was making a difference?

We should build that into any professional learning experience so that teachers become more thoughtful and more evidence oriented in their approaches as well. All of those could be very, very positive benefits of any professional learning experience.

Ross Thorburn:  That's really interesting. I think that's something that's often missing from teacher training, where normally it's, "Here's this great technique, use it." Here I guess, what we're saying is, "Here's this technique, use it. And when you use it, what evidence can you collect from your classes that would show if it was working or not?"

Professor Guskey:  I agree with you. Yes, it's not something that you would see as a common element in most professional learning programs at this time.

Ross Thorburn:  Another reason that you've written about why training and teacher development often fails is because of a lack of support from schools.

If the school doesn't really support a particular professional development program, or a new practice for teachers, is it still worth trying?

Professor Guskey:  No. I think that the very best teachers are always looking for ways to get better. They are always open to ideas that might benefit their students. They are in an environment where they're willing to try those things out. They have some initial confidence that it might lead to improvement.

Again, the critical element will be allowing them to gain some evidence in a relatively short period of time, that their extra effort and their extra work are paying off, that they see a benefit.

Teachers are very, very hardworking individuals. They spend many hours in preparation outside of their regular class time. They are willing to commit even more time and resources to that if they have some confidence that it is helping their students learn better.

When I do studies on teachers, I'm always asking questions like, "What makes teaching meaningful to you? What are the benefits you derive from teaching? What makes it a good job? What makes you excited about what you're doing?"

Ninety‑nine times out of a hundred, the teachers that I interview define that in terms of their students. What they say is, "It’s when I see that these lessons are coming across, when I see them catch on to ideas, when I see that light come on."

This is the power of professional learning, to give that to teachers in a very powerful way, to give teachers the joy of what brought them to teaching in the first place, and to let them have that positive influence on their students which they really, really want to have.

Ross Thorburn:  Finally then, as an argument against measurement and evaluation, what do you think of that Einstein quote, "Not everything that counts can be counted, and not everything that can be counted counts"? Are there some important effects that just can't be measured when it comes to professional development and evidence‑based practice?

Professor Guskey:  I think there are. There are certain things that are extremely difficult to measure. I also go back to the quote from another scientist, Lord Kelvin, who said, "If it can’t be measured, it can’t be improved."

We really need to think about that. If we want to make improvements, we have to find some way to determine whether those improvements have occurred or not. That implies a process of measurement.

We might say we want students to be more joyful in approaching learning situations. Clearly, that's difficult to measure, but it doesn't mean it's impossible.

We want students to have greater confidence in themselves in learning situations. We want them to have a higher level of aspiration, to have confidence that they can achieve lifelong goals and do well in their lives, and that what we're teaching can help them in that.

Just because it's difficult to measure doesn't mean it's impossible. If we want to see improvement, then we need to be able to find some way to document those improvements, which does imply some sort of measurement, even though it could be quantitative, as well as qualitative.