Episode_6-Kurt-Ready

Transcription

Welcome to Lean into Excellence, a Workstream Consulting podcast. I'm Liz Crescenti.

And I'm Marco Bonilla. And we will be your hosts as we embark on our continuous improvement journey.

Welcome back to another episode of Lean into Excellence. I'm your host, Liz Crescenti.

And I'm Marco Bonilla.

And I'm just going to put a disclaimer out there: Marco may or may not make sense today, because he recently had surgery and he may or may not be on pain medication. Anyway, today we have Mr. Kurt Peterson joining us. He is the founder of Total Enterprise Consulting and Coaching, and we are going to do a deep dive on measurement systems. So Kurt, welcome to the podcast.

Hey, thanks for having me today.

Yeah, absolutely. Do you mind giving a brief bio for our listeners?

Sure. My name is Kurt Peterson and I live in Michigan. I have a Bachelor of Science in Mechanical Engineering, a Master of Science in Engineering and Manufacturing Systems Engineering, and an MBA, and I'm a licensed professional engineer and a certified Lean Six Sigma Master Black Belt. I've been working in industry for about 30 years now, in automotive and mining, and I've run my own consulting business for about the last nine years.

Awesome.

Hey, Kurt, it's great having you here. I've known Kurt for the past five years, working with him side by side at Workstream Consulting. Kurt, we're going to do a great deep dive into measurement systems, and I guess we should start off by defining what measurement systems are and why they're important in our problem-solving process.

Yeah, absolutely. Just to put some bookends around this: a measurement system is the system, the process, the people, everything that goes into collecting data, information, and measurements of processes. We then extract that information and data and use it for engineering purposes, business decisions, and problem-solving. Any time we have to look at data and analyze it, the measurement system is what provides us that data. And the crux of the issue is that, oftentimes, when that data and information is collected and put into the databases, it has excessive variability or some other problem with it, and we end up using it to make misinformed decisions.

Excellent. Let me ask you something, Kurt. From all the projects you've done over the years, how often do you run into an issue where companies or individuals assume the measurement systems are accurate? In fact, they don't even discuss it; they just gloss over it.

Well, I would say 100% of the time people assume they're accurate and correct, and that's the problem. What we need to do is stop and pause, take a look at the measurement systems, and analyze and assess them to see if the data and information they're providing is in fact accurate, appropriately timed, and low enough in variability that the data is true.

Yeah, absolutely. And let me ask you this, too. What about companies or individuals who only look at the last step in the process, what we sometimes call the last mile?
They assume everything leading up to that last point is fine, and they're only focused on the final output.

Yeah, you're right. There are a lot of processes out there, a lot of factories and businesses, that only look at the final end product, doing more or less a quality check or a test, rather than actually measuring some variable that is important to the quality of the product or service being provided. And of course these processes are highly complex; if they were easy to do, everybody would be doing it, and these businesses would be all over the place. A highly complex process has lots of inputs, and every one of those inputs brings its own level of variation into the process. All along the way we need to understand how capable those sub-processes are that lead up to the final product. All of these critical inputs and critical variables need to be understood to find out which ones are causing the variation. How do we understand them? We take measures of them, and once again we're talking about the application of measurement systems.

So, Kurt, how do we assess the capability of our measurement systems?

Great question, a very important one, and certainly the topic we're talking about here. The way we assess it is, first of all, to understand what it is we're measuring and what device we're utilizing, whether a tool, a gauge, or a scale, and to understand its capabilities from the place we sourced it from, and how it traces back to NIST standards and so on. But taking all that into account, measurements come from a full system, not just a gauge or a device. So we also need to look at the people part of the process: the folks using that equipment, how they're applying it, how they're aligning it to what they're measuring, and the environment they're measuring these products in. Again, it's a full system. To assess all of that, for variable measurement systems, variable data, a pretty common approach is a gauge R&R, a gauge repeatability and reproducibility study. That's a layered set of repeated and reproduced measurements of the product we're manufacturing, used to find out statistically how much of the variation comes from the part or product itself versus from the measurement system taking those repeated and reproduced measures. And it's just behind-the-scenes math. You could do it on paper or in Excel, and any statistical software will disaggregate all that data and tell us where the variation is coming from. So gauge R&R is one of the more common ways of assessing a measurement system for variable data.
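The behind-the-scenes math Kurt alludes to fits in a few lines. Below is a minimal Python sketch of a crossed gauge R&R study, assuming a balanced design in which every operator measures every part the same number of times; the gauge_rr helper, the 5 x 3 x 2 layout, and the simulated numbers are illustrative, not from the episode.

```python
# Minimal sketch of a crossed gauge R&R via classical ANOVA variance
# components. Assumes a balanced design; data below are simulated.
import numpy as np

def gauge_rr(x):
    """x has shape (parts, operators, replicates)."""
    p, o, r = x.shape
    grand = x.mean()
    part_means = x.mean(axis=(1, 2))
    op_means = x.mean(axis=(0, 2))
    cell_means = x.mean(axis=2)              # part x operator means

    # Sums of squares for the two-factor crossed design with replication
    ss_part = o * r * ((part_means - grand) ** 2).sum()
    ss_op = p * r * ((op_means - grand) ** 2).sum()
    ss_cell = r * ((cell_means - grand) ** 2).sum()
    ss_inter = ss_cell - ss_part - ss_op
    ss_rep = ((x - grand) ** 2).sum() - ss_cell   # repeatability (equipment)

    # Mean squares
    ms_part = ss_part / (p - 1)
    ms_op = ss_op / (o - 1)
    ms_inter = ss_inter / ((p - 1) * (o - 1))
    ms_rep = ss_rep / (p * o * (r - 1))

    # Variance components (negative estimates clipped to zero)
    v_rep = ms_rep
    v_inter = max((ms_inter - ms_rep) / r, 0.0)
    v_op = max((ms_op - ms_inter) / (p * r), 0.0)
    v_part = max((ms_part - ms_inter) / (o * r), 0.0)

    v_grr = v_rep + v_op + v_inter           # total gauge R&R
    return {"%GRR": 100 * np.sqrt(v_grr / (v_grr + v_part)),
            "repeatability": v_rep,
            "reproducibility": v_op + v_inter,
            "part_to_part": v_part}

# Illustrative data: 5 parts x 3 operators x 2 replicates
rng = np.random.default_rng(42)
parts = rng.normal(10.0, 0.50, size=(5, 1, 1))   # true part-to-part spread
noise = rng.normal(0.0, 0.05, size=(5, 3, 2))    # measurement-system spread
print(gauge_rr(parts + noise))
```

A commonly cited rule of thumb from the AIAG MSA guidance treats a %GRR below roughly 10% as acceptable and above roughly 30% as unacceptable, with the range in between as marginal.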
And can we talk about attribute data?

Certainly. The key way to assess attribute-style data is typically an attribute agreement analysis. There we usually have the people who are involved in making decisions on the attribute we're looking at, so not a variable measure but something attributable: passing or failing it, gauging it as good or bad. Once again, we take a bunch of outputs of the process, some deemed good by experts and some deemed not good, or bad, by experts, and then have the folks who normally measure those come in, look at those outputs, give their verdict on whether each is good or bad, and do that multiple times in a random order. Then we go back and look at how many times they agreed with themselves, agreed with the other people, and so on. So it's like a gauge R&R study, but instead of measuring things on a scale, we're using subjective evaluation.
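A minimal sketch of the scoring Kurt describes, assuming three appraisers each rate six items twice against an expert reference; the appraisers, items, and verdicts are all illustrative.

```python
# Minimal attribute agreement analysis: several appraisers rate the same
# items good/bad more than once, and we score agreement within each
# appraiser, against the expert standard, and between appraisers.
import numpy as np

# ratings[a, t, i] = appraiser a's verdict on item i during trial t
# (1 = good, 0 = bad); `standard` holds the expert reference verdicts.
ratings = np.array([
    [[1, 0, 1, 1, 0, 1], [1, 0, 1, 1, 0, 1]],   # appraiser 1, trials 1-2
    [[1, 0, 1, 0, 0, 1], [1, 0, 1, 1, 0, 1]],   # appraiser 2
    [[1, 1, 1, 1, 0, 1], [1, 1, 1, 1, 0, 1]],   # appraiser 3
])
standard = np.array([1, 0, 1, 1, 0, 1])

# Within-appraiser (repeatability): same verdict on every trial of an item
within = (ratings == ratings[:, :1, :]).all(axis=1).mean(axis=1)
for a, score in enumerate(within, start=1):
    print(f"appraiser {a} agrees with self on {score:.0%} of items")

# Appraiser vs. standard: consistent AND matching the expert verdict
vs_std = (ratings == standard).all(axis=1).mean(axis=1)
for a, score in enumerate(vs_std, start=1):
    print(f"appraiser {a} matches the standard on {score:.0%} of items")

# Between appraisers: every appraiser, every trial, the same verdict
between = (ratings == ratings[0, 0]).all(axis=(0, 1)).mean()
print(f"all appraisers agree on {between:.0%} of items")
```

With these toy numbers, appraiser 2 disagrees with themselves on one item, appraiser 3 consistently passes an item the experts failed, and the three appraisers only fully agree on four of the six items: the same patterns the inspector example below turns up.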
So, Kurt, let's close the loop on this. Some individuals or companies will finally accept that measurement systems are an issue, that the output of the measurement systems, or the measurement systems themselves, are a problem. They decide to do some analysis, like you said, going down the path of variable data or attribute data, depending on whether they're talking about systems or people. Once they get the results, how do they close the loop and improve their measurement systems, so it's less of a question down the road?

Great question. One thing we have to remember about measurement systems, as much as we use the word systems, is that taking the measurement is also a process, and we're always talking about process improvement. Well, here's a process that painfully almost always needs improvement. So what we need to understand is the process we go through to take the measures, collect the data, get it into the database, and so on. At that point we can analyze that process for where the variation is coming from. Is it related to the gauge itself? Maybe it doesn't have very good resolution. Maybe it's related to the fact that multiple operators use the same gauge differently, or use different gauges because they're on different shifts. We break that process all down, and once we understand and discover where the sources of variation are, we work to get down to a common, low-variation process for taking measures: everything done the same way, a standard operating procedure, if you want to call it that. That's the high-level way of doing it. There are obviously a lot of other moving parts depending on the type of issue you might be having with the measurement system: resolution, discrimination, linearity, bias, variability. Each has its own way of being solved, but those are typically more related to the gauge itself.

Excellent, Kurt. Great response. So, Kurt, I'm curious: can you give us a couple of examples of past experiences with measurement systems at companies you've worked with?

Yeah, without naming any names, of course. One of the very first Black Belt projects I ever did was on a high-value quality issue related to vehicle drifts and pulls, an alignment-related issue: the customer is driving down the road and the vehicle feels like it's pulling or drifting in one direction or the other, which has to do with steering and suspension alignment. So we went to address the issue and understand what was causing it, performed the requisite gauge R&R, the measurement systems analysis, on the quality audit process, and found that those key alignment characteristics had a ton of variation relative to the tolerances they needed to hold, which led to this drift and pull issue. We really had to work hard to improve that measurement system and get that variation down, so that when the alignment process happened in the plant it was more accurate and had less variation, and they could track it as the vehicles were coming down the line. Because even the vehicles themselves have variation that causes issues with alignment and, therefore, customer complaints. Once we got that sorted out, the whole alignment process fell back into place and quality improved dramatically.

That's a great example, Kurt. You know, you triggered a thought in my head about a company I recently worked with. We did a measurement system analysis on inspectors. We took four inspectors, across two different factory areas and two different shifts, and had them inspect the same product over and over through the process. And we realized, and we weren't surprised, that there was a significant difference between the four inspectors. Now, is it the inspectors? You have to ask yourself: is it the inspectors? Is it the time of day? Is it the environment they're sitting in? This is when you start digging into the true details. And we're of the notion that we never blame the people; there were factors influencing them. We did notice that in some cases they were very conservative about things they thought were failures and would send them through a rework loop, and some percentage would be let go to the next step because they thought it was a passing unit. So our goal is to minimize from both ends and come to a center where we all agree on the same product, the same quality of evidence.

Right, absolutely. And you're right, we don't ever want to pin it on the people. Everybody goes to work every day to do the best they can with what they're given. Environmental issues are factors, as are what they're taught and how they're using the devices they're given. They're following a process that was laid out for them, or one they found that is perhaps easier or safer to do. But it's all about getting back down to the best ways of taking those measures, so that we can get the good, accurate, repeatable, low-variation data that we're making really critical business decisions on.

Yeah, absolutely. And a couple of things we figured out, though this isn't the whole story, came down to lighting and magnification. These were really small components, so the type of lighting the inspectors used, and the magnification they used, mattered; some were under the assumption they were supposed to use a certain magnification. It was about getting them on the same page: they weren't sure what the others were doing, because we had them isolated from each other. And we watched, and we learned.
So the standard practice, the operating procedures, needed to be clarified in that case, along with how to get everyone on the same page.

Absolutely. And if it's a subjective evaluation, one of the things we want to do is try to move it to a variable-style measurement system.

Yeah, anything we can do to remove the human factor, absolutely, because we're all fallible. Again, it's not done on purpose; we all try to do the best job we can. But after eight hours of doing the same inspections, you get tired. It's a long day, and you're not the same person as when you walked in.

Human error crept in.

Exactly, right. Okay, so this just popped into my mind: how often would you revisit the measurement system? Because you may think you're doing everything correctly and everything's accurate, when in reality a few things could be off and you're getting incorrect data, without understanding where it's coming from or what it's going to affect. How often would you revisit it to make sure we're still doing the right thing, still hitting what we need to hit?

Well, that's a wonderful question and a very important one, both for process improvement projects and their outcomes and for all of our business-critical KPIs, our key process indicators. We should definitely be checking, auditing, and assessing our critical measurement systems on a regular basis. Now, what does regular mean? It can very much depend on how business-critical or process-critical the system is, but without a doubt, at least on an annual basis we should go back and do some kind of assessment of those critical measurement systems. That might be a calibration check, and I'll always use the word check; I don't like to just assume we need to recalibrate, but rather see whether it's still within the calibration range. Or we may want to do another gauge R&R every so often to verify that everything is still giving us the right results. And obviously, if there's any change to the process of taking the measures, say we bring in a new person, a new inspector, a new auditor, then we would probably want to reassess at that point to make sure everything is still working to the level of capability we need. All of this can easily be added to a control plan for a project, or be part of the regular audit procedures you might have at your facility.

Yeah, I'm sensing a theme here. We've been talking about process mapping, living it, and how often to revisit it when things change, whether someone's retiring or a new person is coming on board, and how that affects the process. I'm learning something new every time we do this.

Yeah. Kurt, just one more thing. Sometimes there's a signal that something's wrong or needs to be looked at again: sometimes the process drifts in the data itself. As long as things are yielding well, no one looks at it, but no one realizes that the data is getting closer and closer to the limits. That might be important, just as a signal that something might be different.
It might not be wrong, but something has changed.

Without a doubt, exactly. If you're collecting and looking at data, say in a process control chart, doing some SPC work, and you're seeing potential drift in your data, it could very well be the process, or it could be the measurement system that's providing that data. We'd want to investigate both ends of that and figure out what's causing it. But you're right, a lot of times it's an "if it ain't broke, don't fix it" type of thing.

Right. We've got to keep our eyes open.

That's right. It's a dangerous assumption. We sit and relax, and all we see is green light, green light, green light, but we don't realize that the data is drifting up or down.

Right. This is never a one-and-done thing. People make their careers out of measurement systems and metrology, and I can say I've made a good part of my career doing this as well. With the data I use in my consulting work, I always go look at where it came from, what the source is, what kind of variation it has, and whether it's shifting, drifting, or moving around, prior to any analysis of the actual data itself.

Absolutely. Over time, everything has wear and tear. Anything with gears, just like a vehicle, wears, and you've got to readjust everything and replace parts. And this happens with people, too. Like I said, after an eight-hour day you're not the same person on the way out as on the way in; your eyes are tired. And it depends on what the measurements are: a visual measurement versus a mechanical versus an electrical one. Systems wear.

That's right. Absolutely.
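A minimal sketch of the drift check this SPC conversation points at, assuming an individuals chart with a moving-range sigma estimate and one classic run rule (eight consecutive points on the same side of the centerline), which can flag a slow shift before points breach the 3-sigma limits; the drift_signals helper and the simulated data are illustrative.

```python
# Minimal SPC drift check on an individuals (I-MR style) chart:
# flag points beyond 3-sigma limits and runs of 8 on one side of center.
import numpy as np

def drift_signals(x, run=8):
    x = np.asarray(x, dtype=float)
    center = x.mean()
    # Moving-range estimate of sigma (d2 = 1.128 for subgroups of 2)
    sigma = np.abs(np.diff(x)).mean() / 1.128
    ucl, lcl = center + 3 * sigma, center - 3 * sigma

    beyond = np.flatnonzero((x > ucl) | (x < lcl))
    side = np.sign(x - center)
    runs = [i for i in range(run - 1, len(x))
            if abs(side[i - run + 1:i + 1].sum()) == run]
    return beyond, runs

rng = np.random.default_rng(7)
stable = rng.normal(50.0, 1.0, 30)
drifting = rng.normal(50.0, 1.0, 20) + np.linspace(0.0, 4.0, 20)  # slow drift
beyond, runs = drift_signals(np.concatenate([stable, drifting]))
print("points beyond 3-sigma limits:", beyond)
print("run-rule (drift) signals end at points:", runs)
```

As Kurt notes, a signal like this says something changed; whether it is the process or the measurement system still has to be investigated.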
Kurt, let's talk about how measurement systems play a role in problem-solving, and where you should place them in the priority of things to do when you go into a project.

Yeah. One of the famous problem-solving methodologies we use all the time is the Six Sigma DMAIC methodology: Define, Measure, Analyze, Improve, and Control. Right there at the beginning of the real root of the problem-solving methodology is the Measure phase. So right away, the high priority is to understand your main output measurement, the characteristic you're looking at and trying to improve. From the get-go we want to understand the measurement system used to get that information and data, whether we have one in place already or not. Right away in problem-solving, we need to understand the output of the process and its current capabilities, how it's performing over time, by doing SPC work and things like that. Then, when we get into the Analyze phase (remember: Define, Measure, Analyze), we're looking at the inputs to the process and what's causing the variation, the problem, in that output. As we start to filter out which inputs are likely to be the most critical, once again, if we want to improve them, we've got to measure them. We've all heard the statements: if you can't measure it, you can't fix it, or you can't improve it.

So we would, once again, look at the measurement systems associated with those inputs, to make sure the data we're getting on the inputs is accurate and true. Now we can understand that transfer function, Y as a function of X, with really good, reliable data. Then, once all that's done, we've analyzed the inputs and gone through the I phase, the Improve phase, and part of closing out the problem-solving project is the Control phase. Once again, we want to look back on all those measurement systems and, as we talked about a little bit ago, have a process in place, a system in place, to re-evaluate those measurement systems post-improvement, to make sure they're still going to give us good, reliable data and are still adequate, especially now that we've hopefully reduced the variation and made changes to the process.

Yeah, that's fantastic, because it's not one-and-done. We can't just look away and hope it continues the way it's been, even after a good fix.

That's right. Absolutely.

So let me ask you this, Kurt. A lot of times, you and I and other problem-solvers in the world are brought in after the fact, after a product's been developed and a process is in place, and we're chasing our tails a little bit, when some of this could have been addressed early in the design process. We tend to look at products post-production, but how do we incorporate measurement systems in the design or evaluation of a product, to eliminate some of these issues further down the line, or to understand up front what the product measurement variation is going to look like?

I love that you asked that question; it's a question that is almost never asked, and it's a hugely important part of new product and process development. You're going into your product development phase, and maybe you're doing some DVAs, dimensional variation analyses, based on all the inputs, the parts that are coming together, the scaling and everything else. And then you're going to say, all right, we expect a certain process capability coming out of that, say a Ppk of 1.33; that's the standard everybody goes to. Well, if we don't factor the variation of the measurement systems into those final calculations, we're probably going to miss that mark by the time we actually get into production and start measuring the outputs in our factories. If we understand the normal amount of variation in our measurement systems up front, by doing these studies and having that data prepared from our previous processes and the improvements we've been making, we can apply it as part of the variation analysis simulation and build it in, factor it in. The nice thing is, if you do so, first and foremost you're going to be capable, you're going to have the measures to be able to send your product to your customer, and in fact the product is probably even better than we promised, because we've accounted for the measurement variation in it.

Right, right.
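A minimal sketch of the arithmetic behind that point, assuming what the plant observes is the sum of true process variance and gauge variance (sigma_observed squared = sigma_process squared + sigma_gauge squared); the tolerance, mean, and sigma values are illustrative.

```python
# Minimal sketch: a capability prediction that ignores gauge variation
# overstates the Ppk the plant will actually observe.
import math

def ppk(mean, sigma, lsl, usl):
    # Process performance index against the nearer spec limit
    return min(usl - mean, mean - lsl) / (3 * sigma)

lsl, usl, mean = 9.0, 11.0, 10.0
sigma_process = 0.25    # true process spread predicted by the DVA
sigma_gauge = 0.10      # measurement-system spread from a prior gauge R&R

print("Ppk ignoring the gauge:", round(ppk(mean, sigma_process, lsl, usl), 2))

# What the factory's gauges will report includes measurement variation
sigma_observed = math.sqrt(sigma_process**2 + sigma_gauge**2)
print("Ppk the plant will see:", round(ppk(mean, sigma_observed, lsl, usl), 2))
```

With these numbers, a design that hits Ppk 1.33 on paper reports only about 1.24 once gauge variation is included, which is exactly the missed mark Kurt describes.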
And I guess the other way to look at this, too: in the process of developing a product, you might not have measurement systems capable of detecting what you're trying to measure. That's a really good point in time to evaluate whether we need to increase our capital equipment, or find a better way to measure this output, the thing the customer cares about and is willing to pay for. Do it up front, instead of waiting until after you throw it over the fence to production, letting them chase their tails and then telling you, you know what, we really don't have a system that can do what you want. So it's a really good time to scope out the measurement systems alongside the product in development.

Oh, absolutely. And when your product design and process design teams are working together, versus having a wall between them and the product just getting thrown over it, you're going to have a lot more success with all this capability work: understanding how to measure it and how we're going to deal with it in the plant when the product design is handed over, and so on.

Yeah, because as you know, companies are in the habit of innovating, and they're going to be challenging and pushing the limits of science and technology, engineering, math, all the STEM fields. And besides pushing the technology and innovation on the product, we're pushing the technology on the testing equipment and the ways of measuring. So we might as well get ahead of it. It's amazing how often we treat it as an afterthought.

That's right. We just use the old systems we had in place, and they're woefully inadequate for the latest technology, like you said, or the tightest tolerances we need to hit.

Absolutely. All right, Kurt, as we're getting toward the end of our time, would you mind sharing some key takeaways that you want our audience to leave with and execute in their practices?

Sure. Like we've been talking about, all processes have variation, and the measurement systems that give us the data we make business or engineering decisions on have variation too. And as Lord Kelvin once said, if you can't measure something, you can't improve it. If we're on the path of continuous improvement, we need to be able to measure, and to trust, the data that is going to drive those decisions. So we need to be sure that our measurement systems are trustworthy, accurate, repeatable, and reproducible, and doing a measurement systems analysis is the first step in that process.

Absolutely. Well, Kurt, thank you so much for joining us today. This was a great conversation, and we really enjoyed it.

Thanks for joining us.

Yeah, and just a reminder: new episodes come out every other Wednesday. You can find us at WorkstreamConsulting.com or info at WorkstreamConsulting.com. We will see you next time. Thanks, everyone.

Thank you very much.

Can you envision a scenario in your operations where, despite your team's best efforts, there are persistent inefficiencies or bottlenecks that seem resistant to change? What kind of impact would resolving those issues have on your bottom line?
Here's where the team at Workstream Consulting can help. We are a small but mighty group of 10 Master Black Belts, each with 25 to 30 years of experience. Our consultants have seamlessly navigated all 11 GICS stock market sectors, and together we have delivered cumulative savings exceeding a billion dollars for our clients. Our approach is designed for immediate impact, without the need for lengthy hiring and onboarding processes. Contact us today so we can help you in 2024.
