
May 2, 2023

In this episode (part 4 of the series), John and Andrew continue their discussion from part 3. They talk about how to use data charting in combination with the Plan-Do-Study-Act cycle to gain the knowledge managers need to lead effectively. 

0:00:00.1 Andrew Stotz: My name is Andrew Stotz, and I'll be your host as we continue our journey into the teachings of Dr. W. Edwards Deming. Today I am continuing my discussion with John Dues, who is part of the new generation of educators striving to apply Dr. Deming's principles to unleash student joy in learning. The topic for today is Prediction is a Measure of Knowledge. And John, to you and the listeners, I have to apologize. I'm a bit froggy today, but John, take it away.


0:00:30.9 John Dues: Yeah, Andrew, it's great to be back. I thought what we could do is build off what we were talking about in the last episode. We left off with an introduction to process behavior charts and the importance of charting your data over time. And the idea this time, like you said at the outset, is that prediction is a measure of knowledge, and prediction is a big part of improvement. So I thought we'd get into that: what role prediction plays in improvement, how it factors in, and how we can use our chart in combination with another powerful tool, the Plan-Do-Study-Act cycle, to bring about improvement in our organizations.


0:01:15.1 AS: And when you say that prediction is a measure of knowledge, you're saying that prediction is a measure of how much you know about a system? Or how would you describe that in simpler terms, so that someone who may not understand it could?


0:01:31.4 JD: Yeah, it took me a while to understand this. Basically, the accuracy of your prediction about any system or process is an observable measure of knowledge. So when you make a prediction about how a system or a process, and I use those words interchangeably, is gonna perform, the closer that initial theory, that initial prediction, is to what actually happens in reality, the more you know about that system or process. So when I say prediction is a measure of knowledge, that's what I'm talking about: you make a prediction about how something's gonna perform, and the closer that prediction is to how it actually performs, the more you know about that system or process.


0:02:19.1 AS: I was just thinking about a parent who understands their kid very well can oftentimes predict their response to a situation. But if you brought a new kid into that house that the parent didn't know anything about their history, their background, the way they react, that the parent doesn't really have anything to go on to predict except maybe general knowledge of kids and specific knowledge of their own kid. How could that relate to what you're saying that prediction is a measure of knowledge?


0:02:52.3 JD: Well, I think that's a great analogy. One of the things that Dr. Deming said that it took me some time to understand was that knowledge has temporal spread. Just a few words, but they really cause some deep thinking. And I think what he meant was, your understanding, your knowledge of some topic or system or process, or your kid, has temporal spread. That understanding increases as you have increased interaction with that system or process, or in this analogy, your own kid. So when you replace a parent who knows their kid well with some other person that doesn't know that kid as well, they haven't had that same shared time together, so they don't have that same understanding. It's gonna take time for that understanding to build. I think the same thing happens when we're trying to change a system or process, or improve it, or implement a new idea in our system or process. The prediction at the outset is probably gonna be off, right? And then over time, hopefully, as we learn about that system or process, or kid in this instance, that prediction is gonna get better and better as we learn.


0:04:15.8 AS: Yeah, it's interesting, because saying the words temporal spread kind of reflects the fact that Dr. Deming was educated around 1910, 1915, in speaking, reading, writing. And he also said that his objective wasn't to just completely simplify. And I think that the messages he was bringing were difficult to simplify, but you could say that "improves over time" is what temporal spread may mean. Right? Okay. Let's keep going on this. This is interesting.


0:04:55.0 JD: Yeah, I think, maybe it'd be helpful if I share my screen and we can sort of connect the dots from last time to...


0:05:00.8 AS: Yep. And for the listeners out there, we'll walk you through what John's showing on his screen in just a moment. All right. Now we can see a chart on his screen.


0:05:11.7 JD: Yeah. So, to orient the watchers and even the listeners: the chart is a process behavior chart. That terminology can be a little bit confusing. Some people would call this a control chart, some people would call it a Shewhart chart; my preferred terminology is process behavior chart, because it's literally charting some process over time. The example I used last time was charting my own weight. So you can chart personal items, and you can also, obviously, chart things that are important in your organization. But the main thing is what we talked about last time: instead of reading numbers off a table in a spreadsheet, plot that same data over time, so you can see how it varies, naturally perhaps, or in special ways, over time. For the watchers, the blue dots are individual data points. The dates are running along the X-axis of the chart, and you can see those dots moving up and down over time as I weigh myself every morning. Then we have the green line.


0:06:30.6 AS: At the beginning of the chart, we see those individual data points hovering around maybe 179 to 180, something like that.


0:06:41.8 JD: Yeah. Bouncing around in the 180, 178, 176 range. And then...


0:06:48.8 AS: And just for the international listener, John is not 180 kilograms [laughter], he's 180 pounds. Okay. Continue.


0:06:56.8 JD: That's right, that's right. On the Y-axis, we have weight in pounds. So in addition to the blue dots, we've added a green line, which is the average over time. And then we have the last component of the process behavior chart, the red lines, which are the upper and lower natural process limits, or, as some people call them, control limits. They are the bounds of this particular system at a given point in time. And as we watch this data unfold, we can see that it does move up and down in different ways, in different patterns, but it's far more illustrative than if I was just looking at that table of numbers. So when I do this daily, I don't wanna overreact to any single data point. Instead, what I'm trying to do is get a sense of how this data is performing over time, right? So I can see this unfold over the course of days and then weeks and then months, and all along, my knowledge of my weight system is increasing.


0:08:09.7 JD: Even if you don't know anything about process behavior charts, you could do this on a simple line chart or run chart without the limits, and you'd still learn much more than you would from that table of numbers. But with the addition of the red lines, the natural process limits, what I'm saying is that, based on some simple mathematical calculations, these are the bounds of my system that I would expect, empirically, based on the actual dots on the chart. And if a point happens to fall outside of those red lines, I know something special has happened, because it's so mathematically improbable that it's not to be expected. And there are a few other patterns in the data that you can look for besides a single point outside of one of those red lines.
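For readers who want to try this themselves: the "simple mathematical calculations" John mentions are, for an individuals (XmR) process behavior chart, the average plus or minus 2.66 times the average moving range, per the standard Wheeler formulation. A minimal sketch in Python; the weight values below are made up for illustration, not John's actual data:

```python
def process_behavior_limits(data):
    """Return (average, lower limit, upper limit) for an XmR chart."""
    average = sum(data) / len(data)
    # Moving ranges: absolute differences between consecutive points.
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    avg_moving_range = sum(moving_ranges) / len(moving_ranges)
    # Natural process limits sit 2.66 average moving ranges from the mean.
    return (average,
            average - 2.66 * avg_moving_range,
            average + 2.66 * avg_moving_range)

# Illustrative daily weights in pounds.
weights = [182, 180, 176, 179, 178, 180, 177, 179, 181, 178]
avg, lnpl, unpl = process_behavior_limits(weights)
```

A spreadsheet or any plotting tool can then draw the green average line at `avg` and the red lines at `lnpl` and `unpl`.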


0:09:08.4 JD: But I'm looking for those patterns to see if something special has happened or I'm seeing if my data is sort of generally bouncing around between those red lines. And in either case, there are different approaches to trying to improve that, improve that data over time. And one other thing that I like to do, I always make my data blue, my average line green and my process, my natural process limits red. And then whenever I do this internally with data from our own organization, whether it's attendance data or test data or financial data, whatever the data is, I always use that same pattern. So people get used to seeing these colors and they associate blue with data, green with the average and red with the limits.


0:10:00.7 AS: So tell us more about, I mean, one of the things, before we even talk about PDSA: what's happening here is that the upper limit and the lower limit shift down at two points in this chart. If you didn't change the upper and lower limits, and you just had that standard one across the whole chart, then it probably starts to lose its value, because the process that you're describing goes back in time to such an extent that things were different. Tell us about why you've made this adjustment.


0:10:46.0 JD: Yeah, I'd say if the natural process limits, the red lines, stay in the same spot, so I don't see those special patterns, basically what I can assume is that nothing significant has happened in that system, despite the fact that the data is bouncing around a little bit naturally, either in terms of my weight system getting worse, or, in this case what I want, getting better. Obviously, I wanna lose a little bit of weight. If I don't see those patterns in the data, then nothing has changed. So if I'm trying something new to bring that weight down and I don't see any of those special patterns that tell me to adjust the natural process limits, that means what I'm doing is not having an effect. Right? So for one, you wanna know what reality is for whatever the thing is that you're talking about.


0:11:37.7 JD: So on the very first day, you can see, when I weighed myself, it was something like 182 pounds. And I could say I weigh 182 pounds, but that's not really reality, except that I weighed 182 pounds on the morning of November 28th when I recorded that data. But the very next day it goes down a little bit, and then it goes down quite a bit that third day, and then it bounces back up, and then back down, and then back up and then back down a little bit. And that's the real reality: I don't really weigh 182; I probably weigh somewhere closer to that average of 179 across those first two weeks or so, right? But I don't know what my reality is until I've collected 7, 8, 9, 10, 11, 12 data points.


0:12:30.0 JD: And that's why this charting is so important. It helps me understand reality in a much more accurate way. So when we're trying to improve, in this case, I decided to gather data on a daily basis. And I think that's another important consideration when you're doing improvement work and charting: you wanna gather data in a rhythm that matches whatever metric you're concerned with. But in general, more frequent is better, as long as you're not overreacting, like I said earlier, to any single data point. Instead, you wanna gather data, get those 15 or 20 data points, see the patterns, and then start to look for changes in those patterns. The three that I happen to look for are: a single point outside the natural process limits; eight consecutive points either above or below that average line; or three or four points that are closer to the red line than they are to that average line.
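The three patterns John describes can be checked mechanically once the average and limits are known. A hedged sketch in Python: the exact run-length rules vary between references on process behavior charts, and the thresholds below simply follow John's description:

```python
def special_patterns(data, average, lnpl, unpl):
    """Flag (index, rule name) for the three signals described above."""
    signals = []
    # A point is "closer to the red line than the average line" when it is
    # past the midpoint between the average and that limit.
    midpoint_upper = (average + unpl) / 2
    midpoint_lower = (average + lnpl) / 2
    # Rule 1: a single point outside the natural process limits.
    for i, x in enumerate(data):
        if x > unpl or x < lnpl:
            signals.append((i, "outside limits"))
    # Rule 2: eight consecutive points on one side of the average.
    for i in range(len(data) - 7):
        window = data[i:i + 8]
        if all(x > average for x in window) or all(x < average for x in window):
            signals.append((i, "run of eight"))
    # Rule 3: three of four consecutive points nearer a limit than the average.
    for i in range(len(data) - 3):
        window = data[i:i + 4]
        near_upper = sum(x > midpoint_upper for x in window)
        near_lower = sum(x < midpoint_lower for x in window)
        if near_upper >= 3 or near_lower >= 3:
            signals.append((i, "three of four near limit"))
    return signals

# Example: limits roughly like the chart's, one point spiking above the upper limit.
signals = special_patterns([178, 180, 186, 179], average=179, lnpl=172.5, unpl=185.5)
```

In practice you would run something like this every time a new point is added, and investigate any flagged index before deciding whether to shift the limits.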


0:13:41.4 JD: If any of those three patterns emerge, I know something has changed, and I'll go ahead and shift the limits. When I'm looking for those patterns, I wanna know why that change has happened. So sometimes when I see a pattern and I don't have an explanation for why that data shifted, even though it shifted in a way that was mathematically unexpected, I won't shift my limits. I generally will only shift when I see a pattern and I can pinpoint a reason for that shift.


0:14:18.4 AS: And when you say shift, you're saying shift your upper and lower process limit?


0:14:20.0 JD: Yeah. I shift the limits at the point where I saw one of those special patterns begin, basically.
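Shifting the limits, as John describes it, amounts to splitting the series at the point where the explained pattern began and computing fresh limits for each segment. A self-contained sketch, again using the standard 2.66 moving-range constant for XmR charts; the split index is wherever you have an explanation for the change:

```python
def xmr_limits(segment):
    """Average and natural process limits for one segment of an XmR chart."""
    avg = sum(segment) / len(segment)
    moving_ranges = [abs(b - a) for a, b in zip(segment, segment[1:])]
    spread = 2.66 * sum(moving_ranges) / len(moving_ranges)
    return avg, avg - spread, avg + spread

def shifted_limits(data, shift_point):
    """Recompute limits separately before and after an explained shift."""
    return xmr_limits(data[:shift_point]), xmr_limits(data[shift_point:])
```

On the chart this is what produces the two step-downs in the red lines: each segment gets its own average and its own bounds.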


0:14:29.6 AS: Okay. All right. Keep going. So you got PDSA on there now.


0:14:32.9 JD: Yeah. So when I think about continual improvement, there are a lot of different tools we can use, and a lot of tools that are valuable, especially when you facilitate an improvement team, a group of people working together, because those various tools can help you visualize what people are thinking. But if I had to boil continual improvement down to two tools, it'd be the process behavior chart combined with the Plan-Do-Study-Act cycle. The process behavior chart comes out of the theory of variation, and the PDSA or Plan-Do-Study-Act cycle is a key component of what Deming would call the theory of knowledge part of the System of Profound Knowledge. So you can see on my chart, I have three cycles that I've gone through so far.


0:15:24.6 JD: So I've basically run three experiments to try to bring the weight down. PDSA cycle one ran for about 30 days; then I made a slight adjustment based on what I learned and ran another Plan-Do-Study-Act cycle for another 30 days to see how it would impact my weight. And then I've started a third cycle, and I've been running that now for about 45, 50 days. So the idea is, you run a structured, simple experiment, it doesn't have to be overly complex, and then you see if what you're doing is working. In this case, it's resulted in two shifts, two patterns of data, that tell me that actual improvement has happened. Not just that I decreased my weight, but that it decreased to such an extent that it showed up as a mathematically unlikely pattern in my data.


0:16:33.3 AS: Well, I think all of us who wanna reduce our weight, kind of wonder, what did you do that caused your weight to fall and be consistently lower?


0:16:47.5 JD: Yeah, [laughter] that's a good question. I mean, pretty simply, mostly I focused on what I was eating. I cut out the typical culprits: the extra carbs, the processed food, and the sugar, and focused mainly on meat and vegetables across all three meals. And I added a little bit of exercise. There's a little more detail to it than that, but that's the gist of it. But the thing was, I wrote it down in a template, a Plan-Do-Study-Act cycle template. So I had a simple plan written down, I had the dates during which I was gonna do this, and then I was gathering the data and charting it every morning to see how the experiment was working. And then after 30 days or so, I would study it a little more closely, revise the plan, and then keep going with it. So it's certainly not rocket science, but it's a powerful method when you combine these two things. And again, you can do this for just about anything; any data that occurs over time in your organization, you can run these same experiments.


0:17:58.5 AS: So the power of the chart is that it gives you feedback to try to see if your prediction came true?


0:18:10.7 JD: Yeah. And you have the historical results. And then you can also look to see, again, if those special patterns emerge that tell you that actual improvement happened, versus an insignificant, in this case, decline in weight.


0:18:29.0 AS: And what's interesting is after PDSA number three, you've gotten your weight down to an average of let's say 172, 173, something like that.


0:18:38.1 JD: Yep. That's right.


0:18:39.6 AS: And it's just kind of bouncing around tightly, somewhat within that level.


0:18:47.0 JD: Yeah, I mean, basically what you see is three, or sometimes four, or even close to five days in a row where it's below that average line. And so you're saying, "Oh, I'm getting close to being able to shift again." And then what actually happens is the weekend. [laughter] I'm way more disciplined during the week, when I have to go to work and those types of things. But you can learn from that. You can learn that that's what's showing up in the pattern. And I've also gotten to a point where it's gonna be harder. Those first five to eight pounds are much easier. And then, from there, depending on what you wanna do goal-wise, it could be harder. It could require a slightly different plan, because PDSA one, two, and three are all variations of each other. There wasn't a lot of change between the cycles, but there was some learning that happened.


0:19:35.1 JD: Yeah, I mean, I think that's a good point to maybe go a little deeper into the PDSA cycle. For me, it took some time to understand the PDSA cycle, even though it's a relatively simple tool, and I think it's just one of those where you need to do it, and over time you're gonna learn. So first, you make a plan; you do it, you carry out the plan; you study what happened; and then at the end you act and decide what to do. And really, the most powerful part for me was the realization that during the Plan phase of the PDSA, it is absolutely imperative that you make a prediction.


0:20:29.0 JD: And if I'm doing team-based work, I have everybody on the team make their own prediction independently. We actually record those predictions in the PDSA cycle. And then during the Study phase, we take the data that was actually produced by that system or process and compare it to what we predicted. The difference between those two things is the learning that drives the next cycle. So it's an iterative process: you don't just run one PDSA cycle, you basically run it until you've brought about an amount of improvement in that system or process that's acceptable, and then you may turn your attention to some other metric in your organization that's important to you.


0:21:22.0 AS: So I think what's important about this is that what he's describing is the way to acquire knowledge within an organization. But many times we see organizations lose the knowledge that they had. And I think that brings us to the concept of training and making sure everybody understands how we're improving the system based upon the knowledge that we gain. And if you can hold that, then the next time that we wanna try to improve the system, hopefully we go to another level, and then we hold that other level through training and making sure that everybody understands the knowledge that has been acquired in the system. And once we feel comfortable with that, then we go to the next level. And let's say that we do that 10 times in a particular process. That means that we've acquired, at 10 different points in time, we've acquired additional knowledge about the system.


0:22:23.2 AS: Now since I'm a finance guy, I like to bring that into finance terms and say, and that's how you build a competitive advantage in your company. It's the acquisition of knowledge of your system and continuing to improve that. And by doing that, you start to get to a point where your competitor doesn't understand nearly as much as you do about that one area. And if you can solidify that through training, then you now are operating at a different level than your competitor. Now your competitor may be doing the same thing in another area, but you've built some competitive strength. And the end result from a business perspective is that you start to produce slightly better profitability relative to your peer until either they catch up or maybe they build some competency in another area. But I'm talking and thinking all about, business. Tell us more about how to apply this in education.


0:23:25.7 JD: Yeah, I mean, I think there are all kinds of applications, and I think you're exactly right. What most people have is streams of information coming to them in the form of various types of data every day, but very little actual knowledge about that data. Lots of information, little knowledge. What the PDSA does, in combination with the process behavior chart, is allow us to gain that knowledge. And I think having this structure is very important. You talked about building knowledge over time in your system; the fact that when you plan a PDSA, you write it down, and write it down in this structured way, just doing that is a huge advantage over how most people operate.


0:24:23.0 JD: So there is this knowledge store, this written record that someone can go back to to see what you did in whatever area you were trying to improve at that time. You have this plan, you understand how that plan was put together: who was doing what, when they were doing it, where they were doing it, how they were doing it. And then you can see how it actually worked in implementation, and you can see the actual data that came back when you tried something. And you can do these PDSA cycles on a very, very tight timeframe. When I got some training on this, some of the trainers suggested you can run a PDSA for one day, or even one hour, depending on the situation.


0:25:09.5 JD: I think in general, the PDSAs that I run are somewhere in the two- to four-week timeframe, and I try not to ever go beyond four weeks. I wanna get back some learning. It may not mean that I know everything about whatever it is I'm trying to improve, but I wanna get back some data that tells me what direction I should go next, and I'll keep going that way, learning more and more over time. I do think it's helpful to have a very simple template, which I'm happy to share with folks. In terms of the plan, I'm writing out a question or two that's most important for us to answer. And again, I'm making predictions around those questions: what do I expect to happen when we make this change?


0:26:07.6 AS: Like cutting carbs down to 30 grams per day from 60 grams per day. My prediction is that that will bring my weight down over the next week or two by one or two pounds, or something like that.


0:26:23.6 JD: That's right. That's right. If we have an attendance problem, we're gonna try some attendance intervention, and I predict that if I make this change, it's gonna improve X amount in the next three weeks. Right, it's a concrete prediction. And again, I mentioned that I have everybody on the team do that, because then you can start to see how people are thinking about these ideas, what degree of belief they have in the change idea. You see that show up in their predictions. And then, it's literally the who, what, where, when of this idea. When I do this, I get very, very specific: John will do X, Y, and Z on Monday, Wednesday, and Friday for each of the four weeks of this cycle. Catherine will do this, Ben will do this. You're very specific.


0:27:11.4 JD: So everybody knows their role, what their job is in this PDSA. Then there are two other things that are part of the plan. The first one is super important: the operational definition. Whatever the key concept or concepts under study are, we operationally define them, so it's very, very explicit how we're gonna measure those things when we start actually running the experiment. And then we have a plan for collecting data. I like to put the data right into my PDSA template, so everything's in one place. Sometimes it'll be a link to a Google Sheet and a series of charts, but a lot of times I'll just create a table and then link the charts. So you can see the data right there in the PDSA.
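The template itself can be kept as simple structured data. Here is a hypothetical sketch of a PDSA record with the elements John lists: questions, per-person predictions, the who/what/where/when, operational definitions, and a home for the data. All field names are illustrative, not John's actual template:

```python
from dataclasses import dataclass, field

@dataclass
class PDSACycle:
    question: str                                     # what we most need to answer
    predictions: dict = field(default_factory=dict)   # person -> predicted outcome
    plan: list = field(default_factory=list)          # who does what, where, and when
    operational_definitions: dict = field(default_factory=dict)  # concept -> how it is measured
    data: list = field(default_factory=list)          # collected during the Do phase
    observations: str = ""                            # deviations from the plan
    learning: str = ""                                # Study: analysis minus prediction
    decision: str = ""                                # Act: abandon, adapt, or adopt

# Example: a weight-loss cycle like the one on the chart.
cycle = PDSACycle(question="Will cutting carbs bring my weight down?")
cycle.predictions["John"] = "down 1 to 2 pounds over the next two weeks"
cycle.plan.append("Weigh in every morning and record the value on the chart")
```

The point is less the data structure than the discipline: the prediction is written down before the Do phase, so the Study phase has something concrete to compare against.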


0:27:58.5 JD: I think from there, we run the test, and we run it on a small scale. After the test, as the plan's been implemented, we're going to describe what happened. We're gonna talk about what data we collected and what observations we made. Now, in this Do phase, you're not actually doing any analysis. All you're doing is describing how the plan was implemented: when the rubber met the road, when you actually put things into action, how did implementation go compared to how you said it was gonna go in the plan? And almost every time I do this, there is some sort of aberration, some change from the plan that needs to be noted, because I didn't anticipate, or we didn't anticipate, something happening. In schools, maybe I was gonna do this for three weeks and, on the third day of the test cycle, we had a snow day that was unanticipated, or a key person in the experiment was not there. A student that we were working with on a PDSA cycle missed three days of school because of the flu. You report those things back, because they're gonna impact the data that you collected.


0:29:22.4 JD: So that's what you're doing in the Do phase: you run the test, and then you describe what happened versus the implementation plan. Then probably the most important part is the Study phase, where we're gonna analyze the results and, again, compare them back to the prediction. This is the absolutely critical part of the Plan-Do-Study-Act cycle. I almost think of it like a mathematical formula: the analysis minus the prediction equals the learning. This was a sort of aha moment for me. And this is where you...


0:29:58.6 AS: Which is where Dr. Deming said, what is it? Something like: without prediction, you don't acquire knowledge from testing. The only way to acquire knowledge is to make a prediction and then look at the results relative to it. Otherwise you're just messing around; you're not necessarily acquiring knowledge.


0:30:26.7 JD: Yeah, that's exactly right. There's no theory without prediction, there's no knowledge without prediction. That's what we're doing in the Study phase. Again, we're asking: why did what we saw happen? Why was it so different from my prediction? Or, why was I able to make that prediction? What did I understand about our system or process? So it's not a failed test. In one of the first PDSA cycles I ran with a teacher, the prediction was very different from the outcome. He said, "Oh, the test failed." And I said, no, we learned something on a very small scale. Instead of trying that same thing with one student, we could have been doing it with a whole school of students; imagine the time and the resources that that failed effort would have taken.


0:31:22.3 JD: We learned on a very small scale not to do it that way, and in PDSA cycle two, we're gonna adjust, right? That's what we're doing. So in the Study phase, we're comparing the prediction and what actually happened. And then in the Act phase, basically we take what we learned and design the next test, on this continual basis. And I always say, think about the Act as the three A's. We're either going to Abandon that change idea because it went so poorly; now, that's not gonna happen very often in the first cycle or two or three, but down the road, maybe it is, and we need to abandon this idea and let it go. But more generally, what happens, especially early on, is that you Adapt each cycle a little bit. Like I was doing in that weight example: I was sticking with the same basic diet plan and making some tweaks as I learned.


0:32:19.4 JD: So I was adapting each of those PDSA cycles. And then the third option is to Adopt. By adopt, I mean you're gonna make this a standard approach, standard work in your system: this is how we're gonna do things from now on. And generally, we're not gonna make a decision to adopt something, an intervention, until we've tested it across four, five, six, seven test cycles. And as we do these iterations, in a school example, maybe we're gonna try an attendance intervention with one student, and then, if the test with the one student went pretty well, we're gonna try it with 15 students. And I want those 15 students to vary a bit from that original student; I want this next group to have some other characteristics, whatever that may be, than that original kid. And then maybe I'm gonna test it with a whole classroom, and then maybe with a whole grade level, and then maybe with the whole school. But that's the mindset: you go up this ramp of testing, and as you go up that ramp, you increase the sample size of who is included in that test, so that if you're gonna adopt this into your system, you can be pretty sure it's gonna work broadly within that particular context.


0:33:51.5 JD: So that's the basic idea. And then, in the footer of my PDSA template and all my improvement templates, I put this phrase, which I stole from the Carnegie Foundation for the Advancement of Teaching: "probably wrong, definitely incomplete." Because that's the mindset you have to have with continual improvement in general, and definitely with the PDSA cycle. In those first cycles, you're probably gonna get a lot wrong as you're turning information into knowledge. Early on, you're not gonna know a lot about that thing; you're gonna learn over time. And so you have to adopt this as your mantra for PDSA cycles.


0:34:32.1 AS: Great. I was just looking at quotes by Dr. Deming, and the quote is, without theory, there is no learning. All right. So how would we wrap this up?


0:34:45.8 JD: Yeah, I mean, I think that's a great quote. And theory can be a little intimidating, just the word theory. I originally thought of these grand academic or scientific theories, but that's not really what he was talking about. Generally, a theory can be a hunch. A theory can be an idea you have. At one time I had a student that wasn't doing their homework, and I just said, can you do this piece of homework first? [laughter] When you have study hall, you're never doing your reading homework when you go home; can you just do this before you leave school? So that's a hunch, that's a theory, a theory to make things better.


0:35:28.1 JD: So for folks that are listening to this, just grab a PDSA template, which you can find with a simple Google search, and start plotting your data for something that's important to you. Get a bit of a baseline, 10 or 12 or 15 points, and then try to run one of these PDSA cycles. I was in this IHI Improvement Advisor Program, and they would say: okay, you've got this whole school or whole school system, this whole hospital or hospital system, you need to improve. What are you gonna do on Tuesday? What are you gonna do on Tuesday when you go back into the office or the school or the hospital? That's the idea of PDSA: do something, try something, get some change ideas going and see what that does to your data. It doesn't have to be this huge-scale thing. Try it on a small scale and see what happens. See what you'll learn.


0:36:31.1 AS: Well, there's a challenge. John on behalf of everyone at the Deming Institute, I wanna thank you again for this discussion. For listeners, remember, you can go to to continue your journey. This is your froggy host, Andrew Stotz, and I'll leave you with one of my favorite quotes from Dr. Deming. People are entitled to joy in work.