If anything, we should all be able to agree that generative artificial intelligence is a curious thing, worthy of reflection and exploration.
Julian Stodd is a researcher, an artist, an explorer, a writer, and captain and founder of Sea Salt Learning, which helps organizations set strategy and change direction. He is also a firm believer in working out loud and a previous Leading Learning Podcast guest.
In this episode, co-host Celisa Steele talks with Julian about ideas from Engines of Engagement: A Curious Book About Generative AI, which he co-wrote with Sae Schatz and Geoff Stead. Julian’s perspective on AI is both technical and philosophical, specific and expansive, and, of course, curious.
To tune in, listen below. To make sure you catch all future episodes, be sure to subscribe via RSS, Apple Podcasts, Spotify, Stitcher Radio, iHeartRadio, PodBean, or any podcatcher service you may use (e.g., Overcast). And, if you like the podcast, be sure to give it a tweet.
Read the Show Notes
Julian Stodd: [00:00:00] If you’re curious, direct your curiosity more towards your certainty than towards the edges of the system.
Celisa Steele: [00:00:11] I’m Celisa Steele.
Jeff Cobb: [00:00:13] I’m Jeff Cobb, and this is the Leading Learning Podcast.
Jeff Cobb: [00:00:21] If anything, we should all be able to agree that generative artificial intelligence is a curious thing—a thing worthy of reflection and exploration and curiosity, beyond the bounds of good or bad or right or wrong. Julian Stodd is a researcher, an artist, an explorer, and a writer, most recently of the book Engines of Engagement: A Curious Book About Generative AI with co-authors Sae Schatz and Geoff Stead. The book is available as a free download or as a beautiful for-fee physical artifact. Julian is captain and founder of Sea Salt Learning, which helps organizations set strategy and change direction. He is also a firm believer in working out loud and a previous Leading Learning Podcast guest, as is his co-author Sae Schatz. Julian posits that we’re learning, working, and living in the Social Age, which follows the Industrial and Digital Ages, and the Social Age is the focus of the prior podcast conversation. This time around, the conversation focuses on generative AI but with Julian’s particular perspective, which is both technical and philosophical. Celisa and Julian spoke in late January 2024.
Working Out Loud
Celisa Steele: [00:01:44] You embrace rather enthusiastically this principle of working out loud. Before we get to your latest book that’s out in the world, I would love to hear how you landed on working out loud and why.
Julian Stodd: [00:02:00] Yes, I’ll tell you because it’s, for me at the moment, a fantastic exercise in justifying myself because the way I landed at it was an act of desperation. It was really a term I used to prevent other people from giving me bad feedback. I was so nervous and wary when I started sharing my ideas, I thought I’d say, “Oh, I’m working out loud, so I don’t need you to tell me it’s rubbish. I already know it’s rubbish.” I started like that and ran like that for about three years, 2010 to 2013, and then I started illustrating the blog. When I look back, they’re terrible. They’re terrible illustrations.
Julian Stodd: [00:02:40] I’m not saying that out of false modesty. They’re rubbish, which is okay. You’ve got to start somewhere. But that drove this huge increase in engagement with my work. Then working out loud became something else. It became the mechanism by which I test ideas out, find language, and engage with my community. Now, latterly, I’m in the last six months of a doctorate. I failed one before. I spent six years on one that I failed to submit, so I’ve come around to it again. I knew I would eventually, and I have, and it’s on the practice of working out loud, but it’s exploring it as a way of being, really. It’s a way of generating knowledge and capability and of holding yourself in your practice. At the end of it, I hope I’ll be able to tell you that working out loud is a real thing rather than just a made-up thing.
Celisa Steele: [00:03:34] I’ve seen it said of you—and maybe you’re the one saying this—that you frequently find new ways to be wrong. I’m curious about that willingness to be wrong—and to be wrong publicly because you’re working out loud. How does that fit with your own growth, your own learning and development?
Julian Stodd: [00:03:56] Yes, I guess when I say it, it sounds funny, tongue-in-cheek, or silly, but it’s not. It’s very real for me. I’m a scientist at heart. Although I say I’m an artist, which I am, I am a scientist. My work is rigorous and evidence-based, with the caveat that much of the research in my space is, inevitably, in the social sciences. So it’s more ethnographic, social psychology, and such—understanding how people work individually. One of the books I’m working on at the moment is a learning science book. Literally, how does our brain learn? But also, socially, collectively, how do we learn? How do we change? So being wrong is part of science. That’s the point of it really—finding new ways to be wrong. But I could argue that my work is an abstraction, as all models are.
Julian Stodd: [00:04:46] It says there is this thing called the Social Age. There is a notion of social leadership. There is social collaborative learning. These are things, but the pictures that I paint of them with words or with a pen are abstractions. It’s not really real, but it’s kind of. It gives us a frame to understand something. Now, to do that, you do have to try things out. You try out vocabulary, you try out ideas, and some of those things really persist and really grow. So the Quiet Leadership work I do at the moment talks about the organization as ecosystem, and that metaphor has spread widely through my work. Looking at it, it is useful. It’s a useful abstraction. Other things don’t.
Julian Stodd: [00:05:26] I published a book a couple of years ago called The Humble Leader, and that book was about eight years in the making, and it started off as work on social justice in San Francisco. It was really interesting, and it didn’t really go anywhere. So then I tried to make it a framework for fairness, and that became recursive. It started becoming like I was making it into a framework for the sake of making a framework. It didn’t go anywhere. And then suddenly, to be honest, out of the blue, I just sat down and wrote the book. It’s very short. It’s only 3,500 words. But it wrote itself. A funny thing is I’ll never do a second edition of that book because I couldn’t find that voice again. It was obvious to me that, through the years of playing with the ideas, the ways that didn’t work let me develop both the ideas and fragments of language that just came together. So, when it did come together, it was almost poetic. The language is quite poetic. The writing is structurally different from that in something like the Engines of Engagement book. The sentence structure is different.
Julian Stodd: [00:06:42] I know that story might not make sense to other people, but it makes sense to me. Being wrong is part of my practice, and I’m perfectly okay with that. I’m comfortable with it. And the funny thing is, nearly all the time, the people I work with—when I’m running programs or sharing ideas at conferences—generally love it when I share the ways that I was wrong. I can’t really think of a time when somebody has turned around to me and said, “Well, that just proves to me that you’re an idiot.” They might think it, but they don’t generally say it. And that’s interesting because it makes me think, well, why don’t some other people think they have that space to be wrong?
Julian Stodd: [00:07:21] Of course, there are very good reasons why they don’t, because they operate in systems where people will judge them for being wrong. I’m running a program this year about navigating ambiguity, and one of the things we look at is why systems struggle to hear voices that they really need to hear. And they often do so because they’re really good systems, really well intentioned, but they view ambiguity as a threat to their efficiency and effectiveness or their structures of power and control, and so they silence it. And then people silence themselves. By talking openly about being wrong, in some ways, I create the conditions in which I can do so more safely, perhaps.
Celisa Steele: [00:08:08] We’re going to be talking more about generative AI. I’m just curious, though, while we’re talking about this notion of wrongness, do you think generative AI makes it easier or harder to be wrong? I’m thinking, if you’ve put out a thought and then you move beyond it, you come to not believe what you used to believe or what you put out, but now that’s out there. It’s being mined. It’s potentially feeding into what’s teaching AI. Any thoughts on that willingness to be wrong in public and what the impact of AI might be on that?
Julian Stodd: [00:08:46] That’s a big question. The first thing to be clear about is that my judgment of whether something is right or wrong is only one valid position. Sometimes I will say I had an idea, and it was really bad. Somebody else might think it’s good. Of course, more typically, I have an idea that I think is good, and somebody else thinks that it’s bad, but the things I’m measuring are typically, not always, but typically not absolute. They are subjective. So, in that sense, it’s good that it’s all out there. In fact, I’d probably go further than that and say I’m pretty sure that, if there are people in the world who have found value in my work—and I think some people genuinely have, which is why I put it out there, in the hope that they will—the value they have found in it is value they have put into it.
Julian Stodd: [00:09:41] I don’t particularly think my work carries a great deal of knowledge or wisdom within it. It’s not rubbish. Most of it’s not rubbish anyway. And there may be a few bits of it that are really good. But, in general, I think what it does is it provides a context and a scaffolding in which people create a truth—their truth. And sometimes they mistakenly think that I give it to them, but I don’t think I do. I don’t think that’s unique to me. I think quite often I’ll do that with other people. I’ll read their book, or I’ll see them speak, and I’ll have a moment of insight. It’s interesting to think how often is that insight that they have given me, or it’s insight that I have created, I guess, in partnership with them, even if they don’t realize it’s in partnership.
Julian Stodd: [00:10:33] That’s, of course, an interesting thing about generative AI—it allows me to be in partnership through technology rather than in partnership with a person because these systems are dialogic, and dialog is a mechanism of sense-making and learning. In answer to your question—I think I danced around it—does it matter? Probably not. Generative AI operates at a vast scale, and a few books here and there aren’t going to make much difference to it, although there is a caveat to that. One of the things I look at in my work on social movements and change is how dominant narratives form. An easy way to think of a dominant narrative is through an art form: how did garage music come about? How did the genre of gothic horror come about? Sometimes one person writes a book, and it creates a genre. In that sense, ideas can scale and act as organizing features of culture or output. So that could be at a large enough scale to influence it.
Partner with Tagoras
Jeff Cobb: [00:11:39] At Tagoras, we’re experts in the global business of lifelong learning, and we use our expertise to help clients better understand their markets, connect with new customers, make the right investment decisions, and grow their learning businesses. We achieve these goals through expert market assessment, strategy formulation, and platform selection services. If you’re looking for a partner to help your learning business achieve greater reach, revenue, and impact, learn more at tagoras.com/services.
About Engines of Engagement
Celisa Steele: [00:12:09] Let’s turn to this book Engines of Engagement: A Curious Book About Generative AI, which we’ll probably focus most of the rest of our time on. It’s a trilogue. You co-wrote it with Sae Schatz and Geoff Stead. I’m curious about that title. I said “curious” in part because there are two words in it that I would love to get your view on: “engagement” and then “curious.” What does that title mean to you and particularly those two words in it?
Julian Stodd: [00:12:39] Yes, I have to admit, I love the title. There is a metaphor in the book, of engines. In my mind, it was like the steam engine, symbolizing the Industrial Age. And we talk about that, these engines that drive something forward, the story engines and the art engines. The engines piece is about a thing that we’ve created that then serves us in some way. We invented steam engines so that they would serve us, so that they would lead to mechanization. We could pull the carriages out of the mine more effectively. We could go on a seaside holiday more easily. So they are engines in that sense. I like the idea that an engine drives something. It drives a changing culture. It drives a new way of creating.
Julian Stodd: [00:13:29] So they are engines. Engagement because they are fascinating. They have captivated us, or they have captured us. And so those were the two words that I liked. Engines of engagement is what we have, but what they are, what they do to us, what we do with them—that’s what we explore. The book is both technical and philosophical. Our guiding principle was to set out to write the book that nobody else was writing. We didn’t want to write a guide to prompt engineering, and we didn’t want to write some clever book about how the algorithms work, mainly because we couldn’t—or, at least, I couldn’t. Sae and Geoff probably could. But that’s not our expertise. We bring different perspectives. And it does tell you how things work, but, predominantly, it’s about setting a stage for debate. How will generative AI transform our world, our organizations, our notions of art, industry, and such like?
Celisa Steele: [00:14:38] And so the “curious” part, then, as I gather from having read the book, is this idea of we should approach AI with some curiosity and this idea of how might we, how can we engage with it? What might it do for us?
Julian Stodd: [00:14:52] The curious piece is important, is interesting. Normally, in my work, I quite often use a slide at the start of programs. I don’t use slides with words on them usually, but this one does have a few words. It says, “Hold your certainty lightly,” and I think that’s right. Any time you come up with something, any time in life when you hit something that is a polar opposite or a commonly held view, it’s fine to feel the comfort of believing it. But then you should question it. “Is that really true?” When I said I was writing a book on generative AI, numerous people said to me, “Oh, are you for or against it?” Well, I was like, “What a ridiculous thing to say. It’s like saying, ‘Are you for or against gravity?’ ‘Well, I don’t know. It’s kind of handy, isn’t it? It keeps my feet on the ground.’” So I’m not for or against generative AI.
Julian Stodd: [00:15:44] When people get too bogged down in their own certainty, myself included—I do this just as much as everybody else—a key capability is to spot that and think, “I do feel really certain that bias is a big issue, so should I start thinking about bias differently?” Because what if I didn’t have all the facts? Or what if somebody could change my view? Or what if there was a perspective I hadn’t considered? Or what if that used to be an issue but isn’t an issue any longer? Why am I so certain about something? Because very few things in life are truly certain. So the curiosity is important. The word is there for two reasons. It’s a curious book about generative AI because it represents our curiosity, and it’s a curious book about generative AI because the book itself is curious. I think sometimes people aren’t sure what to make of it. It’s poetic. It’s philosophical. It’s technical. It probably doesn’t fit easily into a box.
Celisa Steele: [00:16:43] Of course, it’s got lovely illustrations.
Julian Stodd: [00:16:46] The illustrations are wonderful, but that’s more credit to Sae, really. We have a whole chapter about how we created them. They are original illustrations. I feel idiotic saying it, but I should say it. We have written the book. None of it’s written by generative AI, and we have created the artwork. But what we actually did for that was Sae and Geoff used some of the art engines to create illustrations from the text in the book. I then sat down and took far too long to use those as inspiration to create entirely new hand-drawn illustrations. And Sae, as well as being an exceptional learning scientist, is a graphic designer and artist. She’s a phenomenal artist. So she then took my illustrations and added some digital textures to them. Sometimes that would mean breaking elements out and making them 3D or changing textures in the image. I would argue it’s a human collaboration, but it is a dialog with generative AI.
Celisa Steele: [00:17:53] It sounds like, from a comment you made earlier, the book that you and Sae and Geoff were originally going to write on learning science is still in the works. Is that correct?
Julian Stodd: [00:18:04] It is. Yes, it’s funny. The learning science book is a pretty good example of failure because I wrote it before. I wrote a book on learning science. It’s like 70,000 words or something. It was pretty hard work to write it, and I sent it to Sae. We’ve never worked together, but we’ve collaborated. We’ve written some academic papers, and I greatly respect her. We have a friendship, which is strong enough that she very diplomatically said to me, “Well, this is interesting. You’ve written two books somehow and smooshed them together into something that doesn’t really work in any sense whatsoever.” So I took that feedback, sulked for about 18 months, and didn’t touch it. I wanted to come back to it but was just too daunted by it. I was just at the limit of my capability. And then I had my moment of insight, which was, “Why don’t I just carefully ask Sae if she’ll write it with me? Because she’ll make it better.”
Julian Stodd: [00:19:08] And it took about 10 seconds for her to say, “Yes, let’s do that.” We’d been looking for something to do, and, frankly, I don’t know why we were so idiotic. It took us years to figure out what to do. We’d been kicking around other ideas, and I was like, “Why don’t we just write the thing that we know about? Let’s write it on learning science.” So we started that. And then I said to Sae, “Do you know Geoff?” She didn’t know Geoff. Geoff is brilliant. He circles around the learning space, but he’s kind of a product guy who typically works in high-tech startups, scaling startups. He brings a brilliant perspective on things. And so I said, “I’ll tell you what, let’s do a learning science happy hour on a Thursday evening, and I’ll bring Geoff in. Let’s all bring our gin and tonic, and we’ll just talk for an hour so you can get to know each other.” We did that, and we just kept on doing it, and we still do it. Every Thursday we meet up—without the gin and tonic. We do a learning science happy hour. We did that, and we started writing the learning science book, and we’ve made good progress on it. It’s an 18-month project.
Julian Stodd: [00:20:18] And then, in July last year, as we were six months into it and really gathering steam, I said to them, “You know what? I think we should stop and write a book on generative AI.” And they both said, absolutely, immediately, “You’re an idiot. Why would we do that? We’ve just found momentum. We’re doing really well.” And so, I said, “Okay, well, why don’t we spend two weeks just looking at it, just to see if we’ve got something to say, that we could do really fast? I want to have this book out in 2023.” And so we did two weeks’ work on it, and then we went into our happy hour session, and I was all ready to say, “You know what? You were right. That was a really daft idea.” And they piled in, saying, “Yes, we should absolutely do it.” And so we did it. It came together really fast, the illustrations took a bit longer, and we managed to get it out last year. And now we’re switching our focus back to learning science. So, yes, we’ve probably written 30 percent of it. We’re hoping to have it out at the end of 2024.
The Interplay of Learning Science and Generative AI
Celisa Steele: [00:21:30] Great. I look forward to that book when it’s out. This book, Engines of Engagement, came out of that work on learning science: generative AI kept popping into the conversation, so you paused, and you decided to pursue it. I’m wondering if it’s possible for you to briefly talk about what you see as the impact of generative AI on learning science or the interplay of learning science and generative AI.
Julian Stodd: [00:22:01] It’s interesting. When we were arranging this conversation, you sent me that question, and I thought, “Oh, I should say, ‘Let’s not do that question because I don’t know what the answer is.’” But then I thought, “Well, you know what, I think it’s okay to use this as a chance to think about it.” My instinctive reaction is to say generative AI won’t impact on learning science because learning science seeks to understand the neurochemical, biological, social context in which we learn. So a new technology won’t change, at least in the short term, how our brains work. But then, when you think about the applications—where learning science takes us into the practice of designing, delivering, and supporting learning—it’s going to change a ton of stuff. The way to think of it is this—we’re writing the learning science book, not because people need to be learning scientists, but because there is a ton of research that tells us how people learn that, with a little bit of effort, will inform how we design and deliver learning.
Julian Stodd: [00:23:11] And generative AI isn’t going to make people learn better per se but can be woven into the design, delivery, and practice of learning. It’s probably more a matter of synthesis and using learning science to inform our understanding of learning and using that understanding of learning to think about how generative AI can support the process. I’ll give you some examples of that, but, again, I won’t make you the promise that these are good examples. But, in the program I’m running at the moment, I’m doing this module on navigating ambiguity. Just today I shared something with the group. I’d asked them to do some activities on describing what ambiguity is. We took, verbatim, the words they used, and we used that to generate some artwork—the way that different cohorts described ambiguity. And one cohort has come up with a series of brutalist buildings that people are staring at. Another’s words have been interpreted by the same engine as this beautiful, mountainous landscape.
Julian Stodd: [00:24:24] When I shared that with the group today, again, I wasn’t really sure how to share it with them. I said, “This is not an analysis of your writing. View it as a reflective surface. I can show you the art that has been created in response to your words, and I can show you the art that was created in response to the other cohort’s words, and it’s clearly different. View it as being reflective, but view it as reflective in the way a shop window is when you walk past it or a muddy puddle. It’s not going to give you clarity, but what it might help you do is find new language or new ways to explore.” And that’s exactly what they did. They talked about it. They said, “Has our culture been reflected in this? Is our landscape full of these brutalist buildings and people looking lost? Has that something to do with the language we use in our organization?” That’s quite a good insight. I think that’s probably true as well. If you joined an organization, chances are I could predict the kinds of language you would use pretty rapidly.
Julian Stodd: [00:25:32] When we join, we conform to types of language and behavior. So we can use it like that. We can use it for dialog. In a previous iteration of this program, at the end of last year, I got people to do some free writing exercises. Write 150 words on how systems change. We then took those, collected them together, did the analysis through Claude (one of the generative AI tools), shared the story back to the group, and asked them to be in dialog with the story. So, again, we’re not using the generative AI to mark them, to score them. We’re not using it to design the course. We’re not using it to set learning objectives—because I never set learning objectives. What we are using it for is as a different partner for dialog. And, for me, that’s a key way of understanding generative AI. Not the only way and not the only value. But, for me, the primary value at this stage in my practice is as a dialog engine.
The Impact of Artificial Intelligence on Social Learning
Celisa Steele: [00:26:47] I want to ask about the impact of AI on social learning. I was really interested in the concept in Engines of Engagement of anti-social learning, where AI can make it feel like a dialog, feel like you’re having that social experience—except you’re not because there aren’t necessarily others involved. Talk a little bit about that potential impact of AI on social learning.
Julian Stodd: [00:27:12] Yes, it’s quite interesting. In the book, what I say is that generative AI takes something that used to be unique, which was expert dialog. You used to have to find the key people in your network you could have a high-quality conversation with, and, to some extent, it commoditizes dialog. It makes high-quality dialog a solo activity, which is crazy, and it also makes it available everywhere all the time. That’s clearly relevant in social learning because sense-making is a matter of dialog, the creation of meaning. So, clearly, generative AI impacts heavily in social learning. But the chapter that you’re referring to, I was pleased that you mentioned it, because that chapter is the most imperfect one in the book, I would say, which it shouldn’t be. It should be the strongest one. I wrote that piece on anti-social learning, which was essentially to say social learning is about social collaboration except now you can do it by yourself. So it’s anti-social learning.
Julian Stodd: [00:28:12] I wrote the whole section, and I thought it was absolutely bloomin’ marvelous, I have to say. I shared it with Geoff and Sae and also with Donald Clark, Mark Oehlert, and Marc Zao-Sanders, who’ve made contributions to our book—all of whom are real experts in learning. And, to a soul, they all turned around to me and said, “What on earth are you talking about? It makes no sense.” They’re just like, “I just don’t know where you’re going with it.” I was a bit bemused by it because I thought I absolutely knew what I meant. So I went back and rewrote it, and I shared it back out, and they were like, “No.” They said they would actually drop it. I really couldn’t figure it out. I said, “Okay, well, I’ll write it again.” I wrote something again, and I thought, “No, I’m just going to drop it. I don’t understand why I can’t articulate this idea.” And then they were like, “Oh, yes, we’ve got it.” But I still think it’s only a shadow of what it should be, and I’m still a bit bemused by why I’m struggling to explain it.
Julian Stodd: [00:29:11] But that takes me back to that conversation about working out loud. Sometimes you have to develop your vocabulary over time, and then you have to find the structures of language and art, I guess, which help you to share it. I’m clearly not quite there with it yet. Or maybe I was having a bad Friday. But it’s interesting. I have no doubt that a couple of things will happen. The first is that students and individual learners will run circles around organizations because that’s what people do. Individuals and communities find the edges of systems and the permeability of boundaries, and they work around things. Good people work around systems to do good work in more efficient ways. So I’ve no doubt that will be happening. That is happening right now. And, secondly, I have no doubt that the real impact of generative AI will be driven within a model of profit and power because that’s how things get adopted, and we may make it right down the line, but too much philosophizing about social good and purpose will be overtaken by the sheer fact of the matter that people will invent things, sell them, and make a ton of money.
Three Ways Generative AI Will Impact Individuals and Organizations
Celisa Steele: [00:30:32] In Engines of Engagement—I’ll quote a little bit—you write that “The impacts of Generative AI on both us and our organisations are probably best understood in three ways.” Maybe you could briefly touch on those three ways and, if possible, offer a brief example of what AI looks like in each of those three ways.
Julian Stodd: [00:30:52] The first way is a change within a system, the second is change that fractures a system, and the third is emergent capability. That’s a way of saying, most likely, the short-term impacts and adoption of generative AI will be within our systems as they are at the moment but in ways that make them more efficient. For example, when we started talking today, before we started recording, you said, “I’ll unleash my AI notetaker.” That’s straightforward. Neither of us questioned it. It’s easy. It just makes life easier. In fact, I think you even said, “It just makes life easier.” Clearly, that’s going to be a first stage of adoption, and that will typically be within a model of efficiency for organizations. If they can do more efficient call handling, more efficient communication, innovation, if they can make something a slightly better quality, they’ll do all that.
Julian Stodd: [00:31:52] And people who invent systems that do that will be able to monetize them or sell them, so that’s going to be our space of innovation as well. That’s change within existing systems, typically driven by efficiency, economy, and such like, so it saves us time and money, makes us better at doing what we already do. The second model is change that fractures our system. Instead of letting us do something more efficiently, it abstracts out part of the system, so it makes something within the system no longer useful. To make up an example, at the moment, if I want to remortgage my house, I can go to a mortgage broker. But, if a generative AI system can pull together the information I need, it doesn’t make the mortgage broker more efficient; it makes them redundant, so it will pull them out of the equation. There are probably plenty of examples of that.
Julian Stodd: [00:32:54] I’ve got a friend who’s an artist, who surfs on the line of making his living from art, as many people do. He doesn’t make a good living from it, but he does make a living from it. But a lot of the people he makes a living from are people who are writing their first book, putting out an album, or want to do a poster for their festival, and they pay him $300 to do something once a year. But, now, maybe they’re going to generate something themselves because they can use his artwork as a reference image if they want to. So that is going to cause some pain. But I suppose, on a larger level, change that fractures systems will start to have real impact. It’s not just about doing things more efficiently; it’s about doing them fundamentally differently. So that’s the thing a lot of people worry about—where our jobs will go, what won’t survive. And, of course, a lot of the research and the narrative is saying it won’t just be the low-paid jobs. Maybe it will be the solicitors. Maybe it will be the leaders. And so, clearly, a lot of our pain is going to be felt in that second stage.
Julian Stodd: [00:34:05] The third one is about emergent capability, which is what will we be able to do that we haven’t even conceived of doing before? And that’s quite interesting. I think I can most easily give you a historical example in a different context. One of the things I look at in my work on the Social Age is that I say we moved from an Industrial into a Digital Age and then into the Social one. Organizations used to have infrastructure; it used to be a thing that they did. They had networks. They had buildings. They had fleets. They had printers. They had fax machines. They had infrastructure, and you needed an organization to have infrastructure. But, of course, today you don’t need an organization at all to have infrastructure. Infrastructure has become divorced from organizations and is available everywhere. In fact, funnily enough, as I speak to you now, on the floor next to me are two gigantic and expensive printers, which we’ve had in our office. We’ve been moving office this week, and we all just looked at each other and said, “Why don’t we just get rid of these? There’s a print shop down the road. And, you know what, it’s just cheap and easy, and these darn printers never work. They’re forever out of ink, or they’ve lost connectivity.”
Julian Stodd: [00:35:19] It’s like infrastructure has gone from being a defining feature of capability to being an albatross around our neck. So that was a shift in mindset but driven by collaborative technologies, driven by these different things. It caused us to fundamentally shift our understanding of organizations. Now, generative AI is most likely to do that again in all sorts of interesting ways because it doesn’t just have the potential to speed up processes. It has the opportunity for us to conceive of different ways of doing things, of innovation, of communication, different ways of collectivism, different ways of learning. Most likely we’ll see this around education because education, in many ways, has already sold its soul by universities charging high course fees, and we see that markets rarely add social value to things in that sense. The making of money makes some people money and some people happy, but it’s not always a universal benefit. This type of technology has the potential to fracture that in a very real sense. You can just have a conversation with Claude, and Claude will tell you something of great value at no significant cost; it’s essentially free. That’s bound to impact on our systems. Emergence is interesting, and that’s, of course, the biggest shift, but it will come after some of those others.
Expert Generalists, Specialists, and AI
Celisa Steele: [00:37:10] One of the other concepts in Engines of Engagement that I was struck by was this idea of expert generalists and the fact that expert generalists can work really well in partnership with generative AI. Would you just briefly explain to listeners what you mean by expert generalists? And then I have a follow-on question.
Julian Stodd: [00:37:34] Yes, in some ways, I think you should invite Sae on to give you the best view on that because she brought the expertise in that. But I would look at it from the point of view of the generalist. Essentially, what you can see is that we have two types of people. We train people with deep expertise, but we have other people who are generalists. I’m a generalist. The fancy name for it in my doctoral thesis is “transdisciplinary practitioner,” which means I’m a generalist. I know a little bit about quite a lot of stuff. The Victorians would describe a polymath as somebody who knows something useful. You have to know something useful. You can’t just be aware of it. You have to know something of vague use about a lot of stuff. Now, that’s not super useful if you’re trying to do the in-depth thing really well, but it’s quite useful if you’re trying to evolve what you’re doing.
Julian Stodd: [00:38:18] The expert generalist says you take people and give them some sort of core training, but the specific capability they hold is in learning how to do things. So they are people who have an advantage through their understanding of metacognitive processes, the ways that they learn, and the ways they develop performance. Of course, it turns out that very often they can outperform people with deep specialisms, so that’s quite striking. It indicates something very important for capability within organizations. You probably still need both, but you want to think about the balance between them. I used an illustration on the blog which shows an organization with people plugged into it. I think that’s the historic view of building organizations: find all the things you have to do, and plug people into the gaps to do them. And then, next to it, is a tree that grows out of people.
Julian Stodd: [00:39:12] How do you view your capability and your organization? Is it structural, and you plug people into it? I have a data analyst, they go and get a better job, so I just plug in another data analyst. Or, to some extent, does your capability grow out of people? And, if you take that view, you still have structure, but you also have this diversity—diverse capability—and you have the spaces where you can listen, explore, and experiment, which is a tricky thing in its own right. There’s a whole other piece of research I’ve been doing around experimentation and failure. I think that’s quite interesting. The notion of expert generalists is obviously supported by generative AI because you can be in dialog with yourself. You can access the literature, essentially, through those kinds of tools. Expert generalists will rely on dynamic, contextualized knowledge, and these kinds of technologies can bring that to them. Somewhere in that mix is probably a broader conversation about organizational capability and the function of learning within organizations.
Celisa Steele: [00:40:15] The question I had in mind was this—we work with learning businesses that are helping to develop adults who are fitting into those organizations, whether they’re being plugged in or whether they have that tree growing out of them. But I’m thinking some learning businesses are probably heavily invested in creating those specialists. They’re helping them go deep. An obvious example would be medical specialties. You’re really helping that doctor specialize in a particular area. But, given Engines of Engagement and this notion of expert generalists, it does almost make me wonder if, for learning businesses, part of the impact of generative AI will be a change in what they’re helping adult learners learn how to do. Maybe it is more on that metacognitive side and helping them learn and think critically and do those sorts of things, rather than going really deep in a particular topic area. Any thoughts from you on that?
Julian Stodd: [00:41:19] Yes, it could be both. I generally hedge my bets and say we need both. I’m quite sure, at the moment, we need both. We will need people with deep specialism, but maybe we connect them up differently. I think that’s a reasonable view because we’ve seen, generally, a shift away from the structural organization to the organization with more ability to change, to be more dynamic. Maybe we need to think about the balance of ingredients. Do we actually, more explicitly, need to talk about these generalist layers? Because, in some ways, historically, in our organizations, the things that have gone horizontally are things like HR, finance, logistics. They’re not necessarily the elements that are going to give us the cutting-edge innovation. Maybe we want to pivot something across that isn’t administrative. When we think about things like data scientists, maybe they should be horizontal rather than vertical. I talk about diagonal connection, the way that you can connect between things. In the book on the Socially Dynamic organization, we talk about interconnection being key—connected within your structure but also connected into other pieces. And the more interconnected the organization, the stronger it is. So this idea of having specific capability, generalized capability, and being broadly interconnected, I think those are important.
The Importance of Experimenting with Generative AI
Celisa Steele: [00:42:47] Engines of Engagement is fairly philosophical at many levels, but I think there’s an attempt to be somewhat practical. I’m wondering, for those who are interested in helping others learn, what are some practical ideas to try or some questions to be thinking about in terms of how generative AI might factor into the work they’re doing to support those adult learners?
Julian Stodd: [00:43:15] I would say, try stuff out. I’m not explicitly going to tell you what stuff to try out, but I’ll try to give you a tool to decide. Try out things that are small and local, things that you can use with small groups of learners. Stumble, graze your knees, figure it out. If you’re curious, direct your curiosity more towards your certainty than towards the edges of the system. You can do this by asking, “What do I read every week? What arguments do I hear where people have clear and polarized opinions? Where’s my opinion clear and polarized?” And direct your curiosity in that space. Don’t just think about how to make things more efficient within the system you already have. Think about different things that you can do.
Julian Stodd: [00:44:02] I said to the group this morning, “I’ve never done this before. I’m going to share a picture of the thing that you have created. I don’t really know how to introduce the picture. It’s not an assessment of you. It’s not research-led.” So I came up with that language: “Use it like a reflective surface. Use it like a poet has listened to you and written this poem, which is useless because you don’t need a poem, but also, if you listen to the poem, maybe it’s going to inspire you to think about something differently.” So do that for yourself. Just try stuff out, not just in the obvious ways. Look to break things apart slightly. Learning is a process of fracture and reformulation, so find something you can break locally and gently.
Using Generative AI to Cheat (and That’s a Good Thing!)
Celisa Steele: [00:44:52] We always like to ask guests who come on the Leading Learning Podcast about their own approach to their lifelong learning. I know when you were last on the show that Jeff asked about the role that writing and illustrating play in your learning. Given our conversation today, I’d like to ask how you’re using generative AI to help you continue to learn and develop personally and professionally.
Julian Stodd: [00:45:17] Plain and simple, to cheat, at every opportunity. This morning I was having a conversation with one of the people I work with, Sam, and we were talking about Instagram and Threads—what’s the difference between them? We were faffing about, and then we thought, “Let’s just ask Claude.” We asked Claude, and we got a really good answer. It’s like cheating. We don’t have to know it. We could find it out easily. So we did. I’ve been using it in the doctoral work when I need to read something. I’ll be like, “Oh my goodness, I’m supposed to know what this word means. Can you just explain this word to me like I’m 12?” And it does. And then I can go away and write something that sounds clever. So I definitely use it to cheat, and that’s about it. I don’t use it in my art. I don’t use it in my writing. Because, for me, the purgatory is the thing.
Julian Stodd: [00:46:14] I’m unpopular with my writer friends because I don’t really have writer’s block. I just tend to splurge stuff out. But the writing is the thing, so I don’t want to make it easier. Even when I write on LinkedIn, it says, “Rewrite this with AI.” I’m like, “No, I don’t want to rewrite it with AI.” Sometimes I deliberately let myself fall into this stream-of-consciousness writing. Sometimes I deliberately make my sentences disjointed, jarring, and awkward, and I don’t want them smoothed out. I want to feel it like that. I’m a really bad example of what you should do. I don’t listen to podcasts. I don’t watch TED Talks. I don’t read any of those airport books about being a better leader or learner. It just doesn’t do it for me, although clearly it does for millions of other people. I like to bump into interesting people, spend time in different communities, try stuff out. You’ve got to find your own path. So I do use it, but in fairly limited ways.
Jeff Cobb: [00:47:27] Julian Stodd is founder of Sea Salt Learning and co-author of Engines of Engagement: A Curious Book About Generative AI. That book is available as a free download or, for a fee, as a beautifully bound physical artifact. Follow Julian on LinkedIn, where he works out loud.
To make sure you don’t miss new episodes, we encourage you to subscribe via RSS, Apple Podcasts, Spotify, Stitcher Radio, iHeartRadio, PodBean, or any podcatcher service you may use (e.g., Overcast). Subscribing also gives us some data on the impact of the podcast.
We’d also be grateful if you would take a minute to rate us on Apple Podcasts or wherever you listen. We personally appreciate reviews and ratings, and they help us show up when people search for content on leading a learning business.
Finally, consider following us and sharing the good word about Leading Learning. You can find us on X (formerly Twitter), Facebook, and LinkedIn.
Related Resources
- Learning in the Social Age with Julian Stodd
- Modern Learning with Sae Schatz
- Working Out Loud with John Stepper
- Learning Out Loud with Michelle Ockers
- Marketing AI with Paul Roetzer