Impact is a theme we circle back to again and again on the Leading Learning Podcast because we believe that any learning business needs to create impact for the learners and other stakeholders it serves. And measurement and evaluation are critical for knowing if we’re creating impact and for showing that impact.
Dr. Alaina Szlachta is author of the book Measurement and Evaluation on a Shoestring. In this episode, number 436, Alaina talks with co-host Jeff Cobb about what measurement and evaluation are (listen for her short but elegant definition). They also talk about data, ways to go beyond smile sheets and completion stats to get at long-term impact, the importance of an impact hypothesis, and automation and AI.
To tune in, listen below. To make sure you catch all future episodes, be sure to subscribe via RSS, Apple Podcasts, Spotify, Stitcher Radio, iHeartRadio, PodBean, or any podcatcher service you may use (e.g., Overcast). And, if you like the podcast, be sure to give it a tweet.
Listen to the Show
Access the Transcript
Download a PDF transcript of this episode’s audio.
Read the Show Notes
Alaina Szlachta: [00:00:00] What’s one thing different that you’re going to do because of this one-hour conversation today?
Celisa Steele: [00:00:10] I’m Celisa Steele.
Jeff Cobb: [00:00:11] I’m Jeff Cobb, and this is the Leading Learning Podcast.
Celisa Steele: [00:00:20] Impact is a theme we circle back to again and again because we believe that any learning business needs to create impact for the learners and other stakeholders it serves. And measurement and evaluation are critical for knowing if we’re creating impact and for showing that impact. That’s why we wanted to talk with Dr. Alaina Szlachta. Alaina is a self-proclaimed data nerd and author of the book Measurement and Evaluation on a Shoestring. In this episode of the Leading Learning Podcast, number 436, Alaina talks with Jeff about what measurement and evaluation are—listen for her short but elegant definition. They also talk about data, ways to go beyond smile sheets and completion stats to get at long-term impact, the importance of an impact hypothesis, and automation and AI. As you listen, I encourage you to think about one thing you might do differently as a result of hearing Jeff and Alaina’s conversation today.
Why the Focus on Measuring and Evaluating Learning?
Jeff Cobb: [00:01:27] As somebody who’s focused on data, focused on learning, focused on measuring and evaluating learning, that gets narrower and narrower. You’re in a relatively niche area. How did you get there? What was your path to what you’re doing now?
Alaina Szlachta: [00:01:42] Like many of us, I think we arrive where we are accidentally, and I never could have imagined I would be here today. But I think the journey begins with my time in Mexico. After undergraduate, I was like, “I need to leave my country to feel like I have a different perspective on the world, to figure out my place in it.” And so I went to Mexico and joined a nonprofit to teach English as a foreign language. I fell in love with teaching, so my career has two parallel throughlines. One is the education focus, and the other is the data focus. Really, the passion for education, teaching, learning and development comes from that experience in Mexico. I went on to get my master’s degree in education, a doctorate degree in education and human behavior, and then my career…. While we’re in grad school—many of you listening have done grad school—you know that it’s not just school; you’re also working. Although I had a scholarship, I was working full time too, so it was a crazy time of my life.
Alaina Szlachta: [00:02:44] But I was working and doing some really cool projects that sometimes you only have access to when you’re a graduate student. All of those projects were grant-funded, donor-funded, special interest projects that needed to prove that they were going to be effective. I was working very tightly with how do we collect the right data? What’s our hypothesis? What evidence do we need to prove it so that our program can keep going? We needed data and a research lens to give ourselves permission to keep going with our programming. All of that was the introduction of that second throughline of how do we collect the right data to essentially get permission to keep going? Ultimately, I worked in the nonprofit higher ed sector for a big chunk of my career and then decided, before I get too old, I need to go get some experience in corporate. I took a step down from a director level to an individual contributor level to work for one of the Brandon Hall Award-winning learning teams.
Alaina Szlachta: [00:03:49] I thought, “They’re going to be able to teach me something great about corporate learning. I want that experience.” And, while the team was amazing, they didn’t have the same rigor around measurement that I had grown up with in my other experiences. It led me to question, “What’s going on here? Why is it so different in this one sector?” We’re doing the same thing. We are all working on educating and developing humans to get results. Of course, those results are all different, but why is measurement not baked into corporate learning in the same way that it is in other places? During COVID—everyone has their COVID story; what did you do during COVID when things changed?—I did a study. Being the educational researcher that I am, I’m like, “Okay, there’s this clear challenge around measuring corporate learning. Let me go investigate this.”
Alaina Szlachta: [00:04:42] I interviewed 40 people all around the world. Every person I interviewed, I said, “Who else should I be talking to? I’m trying to get some diverse perspectives on this.” That’s how I got to my 40. And that study led me to a conference presentation at ATD Core4. In the audience was an editor for the ATD publishing arm, and they reached out to me and said, “We’re looking for an author for a book series, a Shoestring book series, focused on measurement and evaluation. Would you consider submitting a proposal?” And I said, “Well, I want a place to publish my research and findings.” Of course, I had done this massive literature review around models, theories, perspectives on measurement in education and corporate sectors. And I’m like, “Yes, I could do all of that and put it into a book.” And that’s where we are today.
Do We Do What We Said We Would Do?
Jeff Cobb: [00:05:29] When you say measurement and evaluation with respect to learning, what do you have in mind, one, and then, two, what’s the big deal? What compelled you to think this is something that there needs to be a book about? Obviously, the Association for Talent Development thought there needed to be a book about it.
Alaina Szlachta: [00:05:47] Yes, I can sum it up in two phrases. One, making measurement easier is something that our industry, and I think many industries, can desperately benefit from. And then, two, what I mean by measurement and evaluation is also very simple: Did we do what we said we would do? We need measurement and evaluation to prove, to collect the right data, to do the analysis and investigation, to see did we do what we said we were going to do? Why or why not? And what are we going to do about it once we get those results? I like that perspective because it’s very inclusive and broad because, let’s be honest, when we’re in the world of education and learning and people development, we’re very nuanced, and so I’m a big believer there’s not a one-size-fits-all approach to measurement and evaluation.
Alaina Szlachta: [00:06:40] At the most basic level, we need to get clear. What are we doing and why? That insight is the strategy that leads us into what are the appropriate tactics to then literally be able to say, “Yes, I did what I said I was going to do.” We want 100 percent of our target audience to complete this program. Well, did you do what you said you were going to do? We need to analyze and investigate that. We need some data to help us see it, all the way up to the business has a big, expensive problem, and we’re going to use learning as a tool to solve it. Did we help the business cut the costs of that expensive business problem? We have to go investigate that, and we need some data to help us do that investigation. Did you do what you said you were going to do? That’s the overarching theory and thinking I have around measurement and evaluation.
Partner with Tagoras
Celisa Steele: [00:07:31] At Tagoras, we partner with professional and trade associations, continuing education units, training firms, and other learning businesses to help them to understand market realities and potential, to connect better with existing customers and find new ones, and to make smart investment decisions around product development and portfolio management. Drawing on our expertise in lifelong learning, market assessment, and strategy formulation, we can help you achieve greater reach, revenue, and impact. Learn more at tagoras.com/more.
What’s an Impact Hypothesis?
Jeff Cobb: [00:08:07] Measuring that bigger stuff, that more important stuff, that feels really daunting. I’d love to hear a little bit more about how you approach making that kind of measurement feel easier, feel more manageable. You’ve already been pointing to it, but you haven’t used the term so far—you have this idea of an impact hypothesis. I know it drives everything, so I’ll throw that out to you, and let’s discuss.
Alaina Szlachta: [00:08:34] Yes. A few colleagues of mine in the world of measurement and evaluation all agree that impact is whatever it is that you want to influence. I think sometimes we have these big, lofty goals that aren’t what the business needs—or even what our stakeholders need. And so did we do what we said we would do? What impact did we have? We all want to be able to tell those stories. You’ve all probably been to a nonprofit fundraising event, and you have a speaker come who says their whole life changed because of the work of this nonprofit. Every one of us wants that kind of validation and confirmation that we’ve made a difference. The thing is, how do we unpack the difference that we’re meant to make? Sometimes it doesn’t need to be that long-term, big, lofty thing. Sometimes the business, our communities, or our peers or stakeholders need something really small and simple. Let’s unpack what do our stakeholders, our training requesters, our clients need at a basic level? I don’t think that we do a good enough upfront analysis to be able to clarify what’s the difference that we really need to make here.
Alaina Szlachta: [00:09:54] Sometimes it’s a lot less complex than we might think it is, so we sometimes overcomplicate things, and we don’t need to. I would say impact might feel like this long-term aspirational thing, but sometimes impact is just we want people to feel more confident in their role. Well, we can measure that. But, before we measure the difference in that, we also need to know what’s the thing that’s going to make them more confident, and what’s the problem that’s making them not feel confident? If we don’t have that information, we’re not going to be able to make a difference. We might offer something else, and maybe they like it, but it’s not necessarily going to make them feel more confident or address the root cause of that. I’m a big fan of data literacy, and I’d like to think that what I offer, my passion and mission in the world, is helping people feel more confident working with data and telling stories. But we have to do a better job investigating what’s the real problem so that we can prove if we made a difference solving that problem.
Jeff Cobb: [00:10:53] I’m taking that as being serious about your initial analysis. Whether you’re somebody who buys into the traditional ADDIE design process or not, that analysis part is still very important, regardless of what you’re doing. I know that you’re a believer in building the opportunity for measurement and evaluation into the instructional design in the first place. You can’t measure what you haven’t built for, basically.
Alaina Szlachta: [00:11:22] That’s right. It’s really difficult to do it.
How to Build Opportunities for Measurement and Evaluation
Jeff Cobb: [00:11:24] Yes, very difficult to do it. That’s something that a lot of organizations struggle with, though, particularly in our world, because they’re running a conference with all these different sessions, they’ve got this online stuff going, and having the discipline, the consistency to back up and say, “We’re going to do that analysis. We’re going to build in that opportunity for measurement and evaluation in the things that we’re developing.” Any tips/insights into how organizations can get their arms around that a little bit better?
Alaina Szlachta: [00:11:52] First, to the analysis part, I don’t use the ADDIE model. In fact, I didn’t even learn about the ADDIE or the SAM model until I went to the corporate world. I want to say to listeners there are a lot of other approaches to learning design and investigating solutions and how learning plays a role in community and business challenges. I think the ADDIE model focuses too much on the learning side of things, and that forces us into learning objectives that are about what knowledge people need to demonstrate they have. The analysis is about “What’s the problem that we’re trying to solve?” Not “What’s the best modality for the learner?” A lot of times we think we’re doing an analysis, but we’re actually doing a training needs analysis, which is, “What’s the best training design to make the learner feel like it’s a relevant experience?” That’s well and good. But that’s not an analysis of “What problem is training and learning supposed to solve for the individual participant and for the organization that you’ve come in, as a consultant, to be able to support?”
Alaina Szlachta: [00:13:02] And so, to your question about what are some tips here, I just had a conversation with a colleague who does cost-reduction analysis for organizations, and I asked him about his business model. They’re not in learning, but they are an organizational development tool, just like learning is an organizational development tool. This perspective is meant to help the business make sure that they’re not overspending. The value in that is we want to make sure that we have a great profit-to-loss ratio. And so he goes in, and he does this in-depth investigation. For learning business leaders, how we do our proposals, and how we decide we want to be compensated, I think there’s an opportunity for us to think differently about that. Instead of doing a proposal for “Here’s the training program,” maybe part of the proposal is, “We’re going to do an in-depth investigation of what’s the problem, and how can learning potentially be a tool for solving that problem?”
Alaina Szlachta: [00:14:06] But part of the proposal is getting access to the data. You can’t do a true analysis unless you’ve got hard data and evidence that helps you, as the consultant, to understand what’s really going on here. A lot of times, people want third-party perspectives because they’re not biased by the internal institutional history or anything internal that might bias your perspective. External expertise is valuable to help shed light on something that internal employees and leaders just can’t see the way that you can. So why not build that analysis into your proposal? Get access to the data, those key performance indicators or other data points that help you reflect as a mirror back to your client: “Here’s the problem that I see. Does this resonate with you?” And then you build into your proposal, “This is going to be a one- to three-year contract, and we’re going to be monitoring right along there with you how those KPIs are changing over time and how people’s performance and behaviors are changing over time and how training has played a role in that change.”
Alaina Szlachta: [00:15:15] We need to advocate for a different way of how we work with our clients, which is to be like, “Hey, training is not the silver bullet here. It is a sliver of a larger pie in how we help organizations. And, in order for us to help you see how we’re moving the needle, we all need to be looking at the same data together to make smart choices about what the business needs to invest in to continue optimizing and solving those problems.” The tip here is to think differently about how you do business with your clients, and don’t let lack of access to data stop you just because you’re not an internal employee. Do your proposals, do your scope of work differently. And maybe some people will push back. But here’s the thing—the ones that don’t push back, you’re going to have such wild, effective results that the business case that you then present to future clients is going to sell itself. So all you need to do is find one company who is willing to bring you in as a true partner, not just a vendor—a longer-term partner in solving business problems. You have access to the data. You’re looking at it with your client together. Over time you get one of those winners, and you talk about that case study to prospective clients. I don’t imagine anybody is going to say no because of the power of the results that you deliver.
Jeff Cobb: [00:16:40] I like that a lot. It’s very true in our world that, for example, if you’re in the education department in an association, you’re thinking about education. You’re not thinking about the broader picture necessarily of, say, membership and research and publications and other things that are going on in the organization. Or you’re in an academic continuing education unit, and you’re not thinking about how your work might align with the larger degree focus of the institution, the alumni program, all of those different things—really thinking through that. Now, you’ve mentioned data a number of times; I want to focus in on that. Before I do that, though, I’m going to do an acronym check because I broke my own rule and threw out an acronym without defining it: ADDIE. Most listeners probably know what ADDIE is, but I don’t want to assume that everybody does. Analysis, design, development, implementation, evaluation—the classic steps to an instructional design model, which, as you indicated, some people use. It’s fallen out of favor, I think, with a lot of instructional designers, but you need to know what it is.
The Role of Data and How to Know What to Do With It
Jeff Cobb: [00:17:48] How do you think about data literacy? What are the types of data that are important? How do we get better about gathering the right data? What do we then do with that data? A lot of times I encounter organizations where they’re like, “We got a lot of data. We’re not sure what to do with it. Our LMS has this stuff. What do we do with this now?”
Alaina Szlachta: [00:18:07] Yes, and that exact phrase, “We’ve got a lot of data. We don’t know what to do with it,” came out when I did my study of professionals in and around learning and asked what their biggest challenge was when it comes to telling the story of what came about because you invested in learning. Lots of people said, “We actually have a lot of data. We have access to data, but we just don’t know what to do with it.” That’s why I think we should break down data for a moment. You asked me to define measurement and evaluation at the start of this conversation, but we didn’t unpack data. There’s a false narrative around data that forces us to think about data as the quantitative, numerical side of things, and that’s a piece of the data equation. But, actually, data artifacts live in and around us all the time.
Alaina Szlachta: [00:18:59] Jeff, this conversation is a data artifact. When we have informal meetings with clients, internal employees, or peers, that’s data. We need to get our heads wrapped around what we’re doing and why. Going back to that impact hypothesis, often we don’t have a clear hypothesis that helps shed light on what data we need to prove and test it, and there’s a lot of data that’s not going to be useful. I’m going to say something that I think people need to hear more and more and more, and that is your learning management system or your learning experience platform, the data that comes out of those platforms, is usually not the data that you need to tell that story of what came about because we invested in a learning program. Oftentimes the data does exist, but it exists in ways that require us to think creatively about harnessing it.
Alaina Szlachta: [00:20:00] For example, somebody worked in a construction company, and they didn’t have any formal measurement in place, and they needed to know what are the problems that are experienced in the field, and how can learning be a part of developing people so that those problems don’t persist? One of the most useful data artifacts—and this applies not only to the evaluation side of the equation but also to the upfront analysis—is whatever we use to understand the problem. The data artifacts that we use to understand the problem are usually the same data artifacts that can help us see if we solved the problem. A lot of times, your upfront analysis is telling you where to look to see if we made a difference. Back to the construction story. Informal conversations were happening regularly in the field between the management and the field staff. They were meeting regularly, but there was no way to harness those conversations. Because in those conversations were some of the problems that were coming up, some of the solutions that were being talked about, the results of the solutions, and how we keep tweaking them. Those conversations were the best data artifact to help the learning team figure out what the problems are and how we can help with that.
Alaina Szlachta: [00:21:19] So the question becomes, “How do we collect that conversational data?” And so then the conversation becomes, “Could we have one of the foremen or one of the managers actually have a recording device?” And it’s their job every time they have an informal conversation, “Hey, let’s record this” because the organic conversation is such great insight to help the organization support the field workers. And that little tweak…. Now you can imagine, okay, yes, we have all this narrative, qualitative data, which is more relevant to the use case, but what do we do with it? Well, that’s where AI comes into play. AI is so good. We’ve had tools in the past, but the tools we have today make working with that narrative data so much easier.
Alaina Szlachta: [00:22:12] A lot of times those tools are free, or they have great free trials, so there’s no longer an excuse to say, “Hey, the data is too difficult or cumbersome.” It’s probably that you’re not figuring out how to narrow what are the most appropriate data artifacts, and then how do we harness it if it’s not already being collected? How do we creatively figure out…? And sometimes that’s like having a conversation with the manager of the team and saying, “Do you mind using this recording device and pressing record when those conversations happen? Because the business needs that insight.” And, if the manager is appreciative of the extra support, they’ll do it and without much extra effort on their part. So it’s a lot of creative thinking that I think is needed to help us use data more effectively.
The Role of AI and Automation in Measurement and Evaluation
Jeff Cobb: [00:23:00] That is a great segue into artificial intelligence, and I would love to hear more of your thinking around that. With respect to those sorts of data points, that more qualitative data, informal, conversational-type data, recently, I’ve been finding it very powerful to…. We have transcripts of so many things. A transcript will exist because of this conversation. We’ve had 500+ conversations on the Leading Learning Podcast, and we do all sorts of other meetings and that sort of thing. Within our own organization, we’ve got a lot of data from conversations that can be used. And something I’ve been doing—I’m hardly the only person doing this—is having conversations. And I use ChatGPT.
Jeff Cobb: [00:23:40] You can use one of the other large language models, but I’ll have a conversation with ChatGPT that’s starting more with what does ChatGPT know about whatever I’m trying to get to? How can I make it understand the outcomes that I’m looking for from whatever data that I’m going to give to the large language model? And then finally get to the point of uploading transcripts or whatever and saying, “Okay, based on what you know, what we’ve discussed about what we’re trying to achieve here, look at this data and analyze it, and come back with X, Y, and Z. And give it a plan.” If you’re thoughtful about how you’re engaging with the AI, with the data you have available, I’m amazed at what you can actually get out of it. That’s one example of being able to use AI, building on what you were saying, Alaina. How else do you see artificial intelligence being used, both to analyze data and then for that measurement and evaluation, that larger goal of evaluating whether learning is achieving the impact or not? How do you see AI being leveraged?
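For listeners who want to try the kind of transcript analysis Jeff describes, here is a minimal sketch in Python, assuming the OpenAI Python SDK and an API key in the environment; the file name, model, prompt, and impact hypothesis are illustrative placeholders rather than what Jeff or Alaina actually uses.

```python
# A minimal sketch of the workflow Jeff describes: establish context first,
# then hand the model a transcript (a data artifact), then ask for specific
# outputs rather than a generic summary.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; file name, model, and prompt text are illustrative.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

# Step 1: the context you would normally build up in conversation.
context = (
    "We run a learning business. Our impact hypothesis: participants who "
    "complete our flagship program and then debrief with a manager will "
    "report higher confidence in their role."
)

# Step 2: the data artifact, e.g., a transcript of a recorded conversation.
transcript = Path("meeting_transcript.txt").read_text(encoding="utf-8")

# Step 3: ask for the specific analysis and plan you need.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": context},
        {
            "role": "user",
            "content": (
                "Analyze this transcript. Return (1) recurring problems, "
                "(2) evidence for or against our impact hypothesis, and "
                "(3) a short follow-up plan.\n\n" + transcript
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

The ordering mirrors Jeff's point: the model does far better when it knows the outcomes you care about before it ever sees the data.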
Alaina Szlachta: [00:24:39] The world of AI, I think, is overpowering another AI-ish tool. It’s not AI, actually; it’s just a technology tool, and that’s automation. I think AI and automation are working hand in hand these days. Sometimes your automation is part of what makes AI possible in terms of doing more powerful analysis and support for the organization. Segueing away from AI for just one moment. If we take the 100-foot view of learning…. One of your podcast episodes I listened to before you and I had this conversation asked, “What is learning, and how do we be more effective at facilitating it?” For me, learning is happening all the time, and you all mentioned that in your podcast. I think we do a disservice by trying to measure the learning intervention or event, and we try to measure before and after this one program. Actually, what’s happening during that program is just a small piece of the entire experience that an employee or a person is having in their life or in the workplace. Learning is happening all the time.
Alaina Szlachta: [00:25:55] We could use automation to take that 100-foot perspective and say, “If we want people’s performance to change, what are all the ways that we need to support someone?” A formal learning event is a piece of that. But manager conversations, peer groups, unpacking and problem-solving together, job aids, intranets, chat bots—a lot of organizations now have AI agents that are built to sit on top of the organization’s policies, procedures, all kinds of data and content that’s basically from the intranet. You could ask questions of that AI tool. For example, “How do I fill out a form to request two weeks off?” All of that up to “What’s the way that I deal with a challenging client who wants a full refund, but they’re beyond their refund period?” There are a lot of experiences and touch points in that very large experience of learning and performance.
Alaina Szlachta: [00:27:03] If we use automation, think of an e-mail marketing campaign. Many of us have worked with marketing tools, and we can create a marketing campaign. The moment somebody goes to our Web site, and they click on this, then based upon what they clicked on, they get this other series of things, and we can build out this automated journey. If we do that same thing, thinking about the life, not the lifetime because that’s overwhelming, but a period of time in which learning is unfolding relative to a certain performance expectation, let’s look at that and figure out what are all the ways that we could automate some of those touchpoints or automate did something happen or not? A manager meeting with their employee to maybe process what they experienced in learning and what they feel like they’re going to do differently because they invested in a 60-hour training program. And then maybe there’s an accountability worksheet that gets filled out. How do we capture all these different points in time that help us to see what’s helping people develop their performance or what’s getting in the way?
Alaina Szlachta: [00:28:14] Maybe the manager doesn’t have time to meet with the employee within the first one, two, or three weeks after the program. And then, as we know, we forget—the forgetting curve—we forget some of those new things. And so how do we capture those moments where we could have done a better job of facilitating and keeping the performance and development top of mind? We can use automation to capture data points on “Did that meeting with the manager happen or not?” And then that could trigger an e-mail that says, “Hey, learning team or operations team, so-and-so manager didn’t meet with this staff member within that three-week period post-training to talk about the experience. Let’s follow up with them so that the momentum is not lost.” We can use technology to help us capture and support that longer-term process of learning that’s not in the classroom, and that’s not the formal piece of the puzzle.
Alaina Szlachta: [00:29:10] Now, Jeff, I know you’re probably thinking, “Well, what about people who are vendors, and they don’t have those same touchpoints?” Part of the proposal is helping the organization to think through and capture the most meaningful touchpoints, and, if you build the automation for them, or you help them to build it, you can get access to that data yourself and help them to make sure that those pieces are happening. Maybe you get copied on those e-mails when manager meetings don’t happen so that you get the same insight that they do, and that helps improve your own process, your service, and product. There’s a lot we can do with automation that creates good, insightful data to help us be more effective. That is not at all the AI equation, but I think we get too excited about AI, and we forget that there are other things that we can do that give us great data to help us be more effective.
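To make the automation idea concrete, here is a minimal sketch, in Python, of the kind of trigger Alaina describes: flagging a post-training manager check-in that hasn't happened within the three-week window so someone can follow up. The record format and the send_reminder helper are hypothetical placeholders; in practice the data might come from an LMS, an HRIS, or a marketing-automation platform, and the notification would be an e-mail or chat message rather than a printed line.

```python
# A sketch of a missed-check-in trigger, under the assumptions noted above.
from datetime import date, timedelta

CHECK_IN_WINDOW = timedelta(days=21)  # the three-week window Alaina mentions

# Hypothetical completion records pulled from wherever your program data lives.
completions = [
    {"employee": "A. Rivera", "manager": "J. Chen", "completed": date(2024, 5, 1), "check_in": None},
    {"employee": "B. Osei", "manager": "L. Park", "completed": date(2024, 5, 3), "check_in": date(2024, 5, 15)},
]


def send_reminder(record: dict) -> None:
    """Placeholder for the e-mail to the learning or operations team (and the vendor-partner, if agreed)."""
    print(
        f"Reminder: {record['manager']} has not met with {record['employee']} "
        f"within {CHECK_IN_WINDOW.days} days of completing training."
    )


def flag_missed_check_ins(records: list[dict], today: date) -> None:
    """Flag any record whose check-in window has closed without a check-in."""
    for record in records:
        window_closed = today - record["completed"] > CHECK_IN_WINDOW
        if record["check_in"] is None and window_closed:
            send_reminder(record)


flag_missed_check_ins(completions, today=date(2024, 6, 1))
```

The point is less the code than the milestone: one small, clearly defined data point (did the check-in happen?) is enough to keep the momentum from being lost.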
The Importance of Looking at Long-Term Impact, Even If It’s Harder to Assess
Jeff Cobb: [00:30:14] Often that long-term impact is very difficult to assess, particularly outside of the context of a single organization or a corporation. For example, if you’re serving learners who are dispersed all over the place—it’s a membership base, or it’s the students in a state or region—you don’t have the same kind of access to them. But we do talk a lot about building learner journeys, building learner pathways, which any type of learning provider can do and can use the technology and the automation that goes with the technology to say, “Okay, we’re going to take you through the steps of this career path over time,” for example. And we’ve got that programmed into some combination of the LMS and the marketing tools so we’re sending out reminders about X, Y, and Z and different opportunities and points where you might want to take an assessment or have a conversation with your manager that you record or whatever it is. With automation and technology supporting you, you can lay out, over a very long period of time, what you ideally want that learner to be engaged with, and you’re able to track it over time, and that produces a lot of data.
Jeff Cobb: [00:31:14] That can be daunting. Setting it up in the first place feels daunting, but you’ve got the tech and the automation to help you with that. As long as you know who the person is and can contact them, you’ve got that opportunity available. But then, if you’re able to take that data that you’ve accumulated over time and feed it back into an AI to help you analyze it so that you can do something with all that data you’ve got, that’s a pretty powerful equation. There are a lot of dots that have to be connected along the way there to make that work, but it’s not very far out of the reach of even smaller organizations to do that kind of thing at this point.
Alaina Szlachta: [00:31:47] Yes, and what’s great is that most of your listeners probably offer only…. Some folks might do bespoke programming and development, but I think most of us have what I call flagship programs that we do on repeat. You’re only investing maybe one time, and then you’re refining a bit over time, but you will get to a point where that upfront investment then pays off in spades. Don’t think of it like you have to be building this journey over and over again. If you build something one time, you can then duplicate that and adapt it for a different program. So you have a little bit of an upfront investment, but then it begins to pay off, and that takes the overwhelm a little bit out of it. One thing, Jeff, I wanted to say about how to take the overwhelm out of all the data that you get from those automations—I think this is something we overlook as an industry—is milestones. If we take that 1,000-foot or 100-foot perspective on the learning that’s happening all the time in an organization and how a formal training program is just one part of that, and there are other informal things happening that are facilitating the learning, what are the milestones that are essential for us to get the success that we want?
Alaina Szlachta: [00:33:09] And so I go back to that manager meeting example because I know that, when managers are supportive—and it doesn’t have to be a manager; it could be a senior peer; it could be a graduate of the program that’s checking in with the new graduates; it’s just someone that helps to process the experience and reinforces the larger formal programming—we know that’s a critical milestone to somebody continuing in their development process. Let’s say we track whether that check-in happened. That’s an important thing for us to capture. There may be other data points coming in, but what we want to know is, are those critical milestones unfolding in the way that we imagine or need them to get the results? So you’re looking at four or five critical data points. The first one is did people complete the program as it was designed? If someone only showed up to half of the programming, well, your first milestone wasn’t accomplished. What are you going to do about it? And that’s the other piece of this whole data equation. If you don’t have a plan for what you’re going to do with the data that you get, don’t collect it.
Alaina Szlachta: [00:34:21] Don’t put yourself through the hassle of building all of that if you don’t have the time, the resources, or the interest and motivation to do something with the data. If you’re not going to follow up with your clients and tell them, “Hey, folks aren’t completing the training,” or “They’re not checking in with their support person after training,” if you’re not going to be reporting that back, having the tough conversations about, “Okay, what do we do about it?” If you don’t have the bandwidth, the interest to do it, don’t collect the data. Be very honest about your capacity and capabilities around what you’re going to do with the data, and that makes everything else, again, less overwhelming and easier. Otherwise, you’re back to what you said earlier: “We have all this data. We don’t know what to do with it.” You should know what you’re going to do with that data before you even collect it because that makes it easy to streamline. What do we really need to emphasize and focus on as we build out all of these supports and structures?
Content Isn’t Always the Answer
Jeff Cobb: [00:35:15] Yes, and it goes back to your original basis: what is the impact we’re trying to achieve here, what does the story to achieve that impact look like, and what are the data points that will tell us that that story is actually happening or how that story is happening? One thing I like about this too is organizations can get so caught up in content when it comes to learning. “We’ve got to build content. We’ve got to design content. We’ve got to publish content.” When you’re talking about things like, say, a manager conversation down the road, you’re not building content there; you’re providing for substance because you’re (A) saying it needs to happen, and (B) you’re providing some questions that might be relevant to that conversation. But the content is the learner and the manager talking, and then you’re just capturing that and being able to do something with that data. If you’re thinking in terms of the longer range and the story and how data is going to support the story, you can start thinking in terms of those substantial touchpoints that don’t necessarily involve you having to invest in building a whole bunch of content and making it all feel heavy and complicated.
Alaina Szlachta: [00:36:21] It’s a little bit challenging for us as providers and learning businesses because what we can control is the content. We want to invest so much in having great, quality content. But I push back on that, and I would say your content is useful in making sure that it’s relevant and it solves the problem that you’ve discovered, but what’s more important is your investigation of “Is the context and the environment currently supportive of the investment in your program?” Because, if the environment isn’t supportive of that, it’s going to be a waste. Some of the things to consider would be asking your client how involved, historically, have managers been in supporting employees, in reinforcing and having conversations about training programs in the past? If your client says, “Oh, our managers are way too overloaded; they’ll never do that.” Red flag. The second thing, “What else do you guys have going on right now that are big initiatives? Because we want to make sure the timing is right.” If the client says, “Well, we’re rolling out this; we’re changing our technology provider, and so all the staff have to go through this other training, and it’s mandatory,” blah, blah, blah, that tells me that now is not the right time for the employees to be taking yet another program. Because cognitive overload will prevent that program from being effective.
Alaina Szlachta: [00:37:49] You’ll have people dragging their feet like, “Oh my gosh, yet another program.” Even if your content and the learning experience are designed fabulously, people will be inherently resistant to it because of cognitive overload or competing priorities in the organization. There’s some stuff we can investigate upfront to help us and our clients pick the right time and have the right supports in place to make that program effective. So my big takeaway for you all here is 80 percent of the outcomes and the effectiveness have to do with the environment and the context and its support of your program and the change that you’re trying to create through your program. I’d be investing a whole heck of a lot more time in that investigation and helping the client to support that complementary environment over developing this robust, amazing learning program.
Personal Approach to Lifelong Learning
Jeff Cobb: [00:38:48] This being a show about learning, usually the people who come on are avid lifelong learners themselves. I want to ask you about some of your own approaches, habits, and practices related to your own lifelong learning. And, since you are a measurement and evaluation person, how do you think about the effectiveness of your own learning and whether it’s having the impact that you want?
Alaina Szlachta: [00:39:12] Yes, it’s a great question. One of the things that I do for myself—and because I’m a business owner as well—is research and development. I have a newsletter, I have a meetup group, and I get access to the perspectives and the challenges that my target audience is currently experiencing. When I see a trend on the rise, I’m like, “That’s an indicator. That’s a place I should go do some research and development, see what other people have done, what other models, theories, and perspectives might be out there.” And, if I don’t find something valuable, then I’m going to go write or build it myself. One of the things I’m working on right now for my own continuing education is a pain point that a lot of us experience, and that is exactly what we talked about. It’s so funny. Full circle. I have a lot of data, but it’s not that I don’t know what to do with it; it’s that it’s really painful to do something with it because everything is manual.
Alaina Szlachta: [00:40:10] I am a solopreneur. When I get big projects, I do collaborate with other peers and contractors, but I’m the one managing my business and doing the majority of the client work. That’s why I love automation. I’m thinking a lot about automation right now to then basically have my automation be my second employee. I mentioned my meetup group. I want to be practicing what I preach, which is measuring some of the change that’s coming about for participants that come to that meetup regularly because the whole point of it is to get ideas and solutions to get you out of the weeds when it comes to measurement. If we’re being effective, folks should be being more effective themselves. One of the ways that I want to experiment—and I’ve got a partner that’s helping me in the implementation side of things—is building automations and some code to extract….
Alaina Szlachta: [00:41:07] At the end of my meetup, we talk about “What’s one thing different?” This is a great way to measure the intention and capture the ideas of what people will do because of the conversation, the class, or whatever you’re up to. I ask folks, “What’s one thing different that you’re going to do because of this one-hour conversation today?” That helps them to go, “What realistically could be that one thing that I do?” I purposefully do not use a survey tool. Now, it would be easy for me to ask folks to write in their one thing different because that would go to a Google sheet, and I could then analyze it with AI, with the themes, or dashboard it, whatever. But I believe that people share more honestly and authentically when they’re using their voice. Sometimes we process and share via conversation. I want to capture the auditory answers versus having people type. Plus, it’s at the end of the hour, and people probably don’t want to be typing, and they’re rushed, whereas sharing via conversation comes more naturally.
Alaina Szlachta: [00:42:13] So I just record the last bit of our meetup, which is that one thing different, and I’m going to push it into a quarterly report that asks people to say, “Hey, you attended these sessions in the last quarter. You said this was your one thing different. Check how many of these did you do? And, for the ones that you did do, what became possible because of those actions? What unfolded?” Getting toward that longer term, I did this one thing, and what happened? And then the second question, “For the ones that you didn’t do, what got in the way?” That gives me really good data to continue to support the participants in my meetup, to support my own writing and exploration, but also it reflects back to my participants and gives them a chance to continue processing because we know that facilitating that processing and that critical thinking helps to facilitate learning. Again, back to your point of how I’m trying to integrate measurement in learning, this is one example of doing that, and I’m trying to get automation and AI to help build that quarterly report on autopilot so I don’t have to do it manually.
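As a rough illustration of the quarterly report Alaina is building toward, here is a minimal sketch in Python that assembles each participant's "one thing different" commitments into a check-in message with her two follow-up questions. The commitments shown are hypothetical, and the transcription step (turning the recorded audio into text) is assumed to have already happened upstream.

```python
# A sketch of the quarterly "one thing different" follow-up, assuming the
# recorded answers have already been transcribed into (name, date, commitment) rows.
from collections import defaultdict

commitments = [
    ("Dana", "2024-04-10", "Draft an impact hypothesis for our flagship program"),
    ("Dana", "2024-05-08", "Ask one client for access to their KPIs"),
    ("Priya", "2024-04-10", "Stop collecting survey data we never report on"),
]

# Group each participant's commitments for the quarter.
by_participant: dict[str, list[tuple[str, str]]] = defaultdict(list)
for name, session, commitment in commitments:
    by_participant[name].append((session, commitment))

# Build the quarterly check-in message with Alaina's two follow-up questions.
for name, items in by_participant.items():
    message = [f"Hi {name}, you attended {len(items)} session(s) last quarter and said you would:"]
    message += [f"  - ({session}) {commitment}" for session, commitment in items]
    message += [
        "For the ones you did do: what became possible because of those actions?",
        "For the ones you didn't do: what got in the way?",
    ]
    print("\n".join(message) + "\n")
```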
Celisa Steele: [00:43:28] Dr. Alaina Szlachta is the author of Measurement and Evaluation on a Shoestring.
To make sure you don’t miss new episodes, we encourage you to subscribe via RSS, Apple Podcasts, Spotify, Stitcher Radio, iHeartRadio, PodBean, or any podcatcher service you may use (e.g., Overcast). Subscribing also gives us some data on the impact of the podcast.
We’d also be grateful if you would take a minute to rate us on Apple Podcasts or wherever you listen. We personally appreciate reviews and ratings, and they help us show up when people search for content on leading a learning business.
Finally, consider following us and sharing the good word about Leading Learning. You can find us on X (formerly Twitter), Facebook, and LinkedIn.