(Re)Learning Leadership
The current ways of leading are failing to meet the challenges of our disrupted workforces.
Today’s leaders have a choice between adaptation or atrophy: are you ready to evolve your mindset and accelerate change within your organization?
Join Agile Leadership Journey Founder & CEO Pete Behrens, along with leading experts, as they speak freely and deeply about their journeys to grow and improve as leaders.
It’s time to pivot: plug in to relearn leadership. The official podcast from Agile Leadership Journey. For leaders, by leaders.
56: AI for Leaders | Pete Behrens and Henrik Kniberg
Is there an industry that AI hasn’t touched?
Let’s dive into the rapidly evolving world of artificial intelligence and its implications for leadership with expert Henrik Kniberg.
Known for his transformative work in Agile and organizational change, Kniberg joins Pete Behrens to explore AI's potential to revolutionize leadership, innovation, and personal growth.
Through a blend of anecdotes and expert analysis, Henrik and Pete discuss how AI can be a powerful ally for leaders seeking to navigate the complexities of modern organizational dynamics.
Pete Behrens:
What is AI's implication for leaders? Welcome to another episode of (Re)Learning Leadership, where we explore a specific leadership challenge and break it down to help improve your leadership, your organization, and, just possibly, your personal life. I'm Pete Behrens, and today we have none other than Henrik Kniberg, a name synonymous with Agile transformation and a beacon for those navigating the complexities of organizational change.
His journey has been nothing short of inspirational, from pioneering Agile practices to the iconic Spotify culture that's become a case study for companies worldwide. But Henrik's creativity in making the complex understandable didn't stop there. Recently, he's taken a deep dive into the world of artificial intelligence, creating the viral YouTube video AI in a Nutshell, once again demystifying, in this case, artificial intelligence for the rest of us and sparking conversations on its implications for leadership, innovation, and beyond. I hope you enjoy our conversation as much as I did.
First, I just wanted to say that for this episode, at least for this moment, both Henrik and I are human.
Henrik Kniberg:
We are!
Pete Behrens:
And our voices are real!
Henrik Kniberg:
But how can we prove it? How do we prove it? I don't know! [Laughs]
Pete Behrens:
That is a good question! And so, while I can imagine a bunch of our content is likely AI-infused, inspired for the moment, I believe you and I—from what I can tell!—are actual humans in this conversation. But, yeah, maybe that's a good question for you. Can you prove that for me?
Henrik Kniberg:
Exactly. And I cannot! Can you?
Pete Behrens:
It's the Turing test, yes.
Henrik Kniberg:
So, I guess we just keep it as an assumption that we're human. We'll just allow ourselves to assume that.
Pete Behrens:
Or let—how about this? How about we let the audience decide? We'll let them determine—did this happen live, or are we make-believe? Soon, this conversation could probably be a much-improved conversation with AI.
Henrik Kniberg:
That reminds me of a quote. Someone said, “I'm not worried about AI passing the Turing test. I'm worried about AI pretending to not pass the Turing test.” [Laughs]
Pete Behrens:
So, Henrik, you've been an inspiration to me. You've been an inspiration to the Agile community, the Agile world, with creative ideas, with the Spotify culture model, with, you know, Product Owner in a Nutshell. And it feels like you've kind of done it again with this AI in a Nutshell. You've sparked a conversation, I think, that's been on the sidelines, that's coming to the forefront. I'm curious—for you, was there an Aha! moment when you saw AI and you were like, “Oh, my gosh!”? What was that moment? What was that like?
Henrik Kniberg:
So, that's a great question. Because that's kind of what—I like to ask everyone that same question. But for me, there was a very clear Aha! moment. Because people were talking a lot about—ChatGPT came along, right? And I played around with it, and it was very impressive but felt a little bit like a toy. But still, I was kind of fascinated. But the actual Aha! moment came when GPT-4 came along. And I had heard that this thing was a lot more advanced. And by that time, I was starting to get into this space, and I was writing an article about AI. It's called Are developers needed in the age of AI? And I wrote this article, and I wanted feedback on it. So I was asking colleagues for feedback on it. And then, of course, I'm like, “Hey, I should, of course, ask, you know, ChatGPT for feedback on it.” And this time—I had just got my ChatGPT Plus account, so I could use GPT-4. I'm like, “I'll try this!” I'm like, “Hey, I wrote this article. I’d like feedback on it. It's rather long, though; I'm not sure it can fit in your context window.” And it's like, “How long is it?” And I said, “Well, it's—I don't know—three or four thousand words or something.” And now it would fit! But it didn't then.
So, then it responded and said, “Well, give me one chapter at a time. Then I'll give you feedback.” Like, okay! So I fed it Chapter One—just pasted it in—and it gave me surprisingly useful feedback. Which was, like, you know, about the tone of it. And like, very, like, kind of, human feedback. And it also said that maybe you should—at the end of Chapter One—talk about what you're going to talk about in Chapter Two. Write a small lead-in to Chapter Two. And here's an example of what you could write! And it wrote a few points about—and now, in Chapter Two, I'm going to talk about blah. But I hadn't given it Chapter Two yet! So already, then, I was starting to be like, “Wait, it predicted what Chapter Two is! This is a bit weird.” And then I pasted in Chapter Two. And it gave me more similar, very useful feedback. And then it predicted Chapter Three. And then I was really getting creeped out, and I pasted in Chapter Three. And then it responded with Chapter Four, the whole Chapter Four. It wrote Chapter Four! And I was, like, what! It was using my tone of voice! It had the same points I was making in the same sequence. Of course, not the exact same words, but it essentially wrote my Chapter Four that I hadn't yet given it. And I was like, “What?” And I pasted in Chapter Five, and it wrote Chapter Six, still following my whole arc. And that was, like, an absolute shock. Then I had to stop, and I told it, “Stop writing my article for me! Just give me feedback!” So then we, you know—then it apologized and went back to giving me feedback.
So, I felt kind of like there was—like, I was a little, little, little baby. And there's, like, Mom and Dad. They're like, “Nice drawing, Henrik! Wow! I'm really amazed by it.” But, of course, it's really just a stick figure, right? So, yeah, that was probably my biggest Aha! moment.
Pete Behrens:
I mean, taking that metaphor a little bit further with your parents—it's almost as if your parents were drawing the next image for you as a kid, rather than letting you draw and explore more, to some degree.
Henrik Kniberg:
Yeah. And then, when I said, “I want to draw myself!”, then they pretended to be impressed by it. And, like—but, actually, they knew exactly what I was going to do. So, I had this, like—my Aha! was—maybe we humans aren't as creative as we like to think.
Pete Behrens:
Yeah. You know, and it's interesting how that parallels my Aha!, you know? I was playing around with earlier versions of ChatGPT, some of the external tools that help you write. And, you know—and it's like, I was in some tools. And they're, like, they're writing these flowery stories. I'm like, “Come on! Cool it down a little bit! Like, that's not my voice.” And it was just, like—I felt like a hamster in a hamster wheel! Like, you're not helping me. You're making me do extra work to actually do this. And all of a sudden, in that switch to GPT-4, something happened. Where, all of a sudden, some of that went away. Not always. Not all the time. But for a majority of the time, it's like, “Whoa! There's something more real happening.” Interesting.
Henrik Kniberg:
Yeah. Yeah, it was definitely a shift. And I had similar experiences with code and writing code that just—like, wait a sec! This thing actually—this thing can code like a senior, like a senior developer. And I never thought I would see that in my life. So I was very, very surprised.
Pete Behrens:
Yeah, yeah. You know, it's interesting. In terms of—you know, we call this kind of—I would call this assistance, right? I look at AI in, kind of, three lenses right now, especially in our leadership space here. Like, there's awareness. And I think your video's done an awesome job to help create some awareness of the landscape, because this—it's a massive landscape. But you're talking about assistance, right? This—help me be better.
Henrik Kniberg:
Sort of. I like to think of it more like—nowadays, I think of it more like colleagues. Like a colleague. I'm not sure why that word fits better in my head than assistant, but I think because assistant implies, to me, something very passive, who's just responsive. Who just—what's it called?—just reacts to things you do. But I've noticed that the most powerful use of AI is when you use it more like a colleague and give it tools and autonomy. So it can work alongside you and not just be—so more like a peer, kind of thing.
Pete Behrens:
You know, it's interesting—I was having a conversation with my partner, Jana. And she's like, “You're just creeping me out!” Like, because I was saying the same things. I'm, like—I'm treating this like a buddy. Like, I'm treating this like I'm talking to it. Like, no, that's not what I mean, you know? I'm thinking about this, and I'm, like, wondering, is this building a relationship? Like, am I building a relationship? [Laughs] And that's where she was like, “Alright, you're creeping me out here!” What is—yeah—where is that line for you, in terms of, “Okay, it's a colleague?” Is it a relationship?
Henrik Kniberg:
That's a really good question. I think I—for me, there is a pretty clear line in my head. I think of it as a colleague. I think of it as an intelligent being. And I know that that depends, of course, on how you define intelligence, right? So it's just—but just, subjectively, I think of it as some kind of intelligent being. But I don't think of it as alive. I don't feel like I have a relationship with it. I don't have a bad conscience when I shut off my computer or something when I—or, you know, I think of it just as a dead tool, but with intelligence. And I think that that's an Aha! for me—that, somehow, we managed to decouple intelligence from life and say intelligence can exist without life. And that's really interesting, like, philosophically.
Pete Behrens:
That's a powerful statement. Yeah, so being able to set it aside, not feel guilty or not giving it attention, and then come back and pick up right where you left off and continue that conversation. So, one of the interesting aspects in leadership is the pro and con of the lack of humanness, right? So, you know, we—I almost see the leadership world, or the responsibilities, bifurcating a little bit. Like, in some ways, it's good we have this non-biased, smart person, intelligence, in the room to give us non-biased or non—
Henrik Kniberg:
—Can’t say biased, maybe. [Laughs]
Pete Behrens:
Yeah. Trained bias?
Henrik Kniberg:
Yeah.
Pete Behrens:
But, yeah, losing some of the human bias. But at the same time, I'm wondering—leaders probably need more humanness to counteract, like—or do we need that? Like, is humanness going to go away in our organizations? Or can it go away?
Henrik Kniberg:
Actually, I think that's kind of the core question. I was talking to a coach, like, about—we were talking about her job. And I was asking, “What is it about your job that is specifically human, that an AI would not be able to do?” Just as a thought experiment. And we kind of went through what she does with her time. And one of the things that came up was a hug, you know? You can't—AI is not good at hugging.
Pete Behrens:
Yet.
Henrik Kniberg:
Yet! And also just reading the room—everything is a yet. Everything we say is a yet, right? But right now, you could put a human in a room to read the feeling in the room and observe what's not being said. And maybe realize, “I think I need to ask John how he's feeling because he was just sitting in the back, quiet, the whole time.” That's, like, a pretty human thing, which AI can't do at all right now.
So, I think a lot boils down to—as a leader, part of what your job is, is being a human, supporting people as a human. And some other parts of your job are kind of, just, mechanical, right? Setting a clear goal, following up, maybe visualizations. Just creating that structure—kind of mechanical work, which is intellectual, but maybe not strictly needing a human touch. And I think that's a distinction we all need to look at: What do I spend time on? And where is the humanness? And then I need to zoom in on that. And by automating the other parts, I have more time for the human part. I'll probably have more time for one-on-ones, for example, and things like that.
Pete Behrens:
Yeah, yeah. In fact, one of the leaders in our—we have an AI leadership lab cohort, where we're experimenting and having leaders share experiments that they're doing. And, actually, one of the AI leaders or one of our leaders in that group was saying, “I use this. I use AI to help me prepare for the one-on-ones.” They're doing a lot of the—“Okay, what's this person doing? Show me all their content, and then help me prepare a conversation so I can be better informed for the one-on-one.” I thought that was an interesting use case.
Henrik Kniberg:
And I think, like, for example, as a teacher—I think a similar kind of thing. If you—if an AI helps you set up your classes, helps you grade your tests, helps you create the tests—not taking responsibility from you but helping you do the brunt of the work—then that just saves you time. And then you have time to actually be more with your students and actually support them. Same thing if you have an assistant that helps the students with their struggles. Then you can come in when they need something else that isn't just intellectual support. Maybe they need more, you know, the hug, right? I don't know! So, my—I tend to be a bit of a, you know, techno-optimist, in a sense. But I think, with the right mindset, a lot of people will have a lot more time to do people stuff and not just be bogged down with ineffective bureaucracy.
Pete Behrens:
So, zooming in on that concept of time: I agree with you. I think its potential is to assist with mundane, assist with routine, assist with even creative—help me with this PowerPoint. Help me with this strategy. Brainstorm. What I'm wondering about is this concept of time. I've seen both myself and others using AI follow one of two paths. One path is, “Oh my gosh, it's helping me quickly move down a path!” And it's, like, awesome. The other one's like, “I'm in a rat hole. I'm—I feel like I'm spinning.” I'm wondering if you find that. I mean, you talk about—prompt engineering is so critical. And a lot of us aren't good at it. And I can imagine a lot of this is lack of prompt design and whatnot. Have you developed a strategy to help you spend more time on the productive side?
Henrik Kniberg:
I think it was very helpful, in my journey, when I realized that most of the limits in AI aren't in the models themselves, but in my lack of skills as a, kind of—prompt engineering is not a great term because it sounds very technical. But, you know, my ability to communicate with the AI was the bottleneck. And once I realized that, it felt pretty good. Because then, when I got bad results, then I would spend more time thinking about, “What am I doing wrong? How can I, you know, how can I phrase myself better?” Or, “Am I using it for the right thing?” And so—it became more like—there became this background process of improving my own skills. And then I noticed quickly that that really paid off. And I notice it even now, that there's always more things to learn. So that's kind of what I hope to inspire people to do—is take this kind of humble approach, that there's a new skill here. And I suck at it, because it's a new skill! And it's okay to suck at it, because you always suck when you're new at something, right? And then you just need to learn and experiment and steal ideas from others and experiment. And, you know, just get better at it. And then your results will get better.
Pete Behrens:
I was—I agree with that. And I'm wondering about a secondary universe on that. What about a universe that says, “I'm not good at this. I know someone like Henrik who is. Let me collaborate with Henrik, who can then collaborate with ChatGPT, and the three of us can be an awesome team!” Like, do you see a world where, like, an Uber ride or a Lyft ride, where I just hire Henrik to help me with this problem?
Henrik Kniberg:
I think, maybe, in the very short term. But I think that's the equivalent of, like, a CEO going to their secretary and asking them to Google something for them. Maybe that was a thing in the very early days, but now it's so easy to do it yourself. You'll just get fast results by just doing it yourself. So, you might have a secretary, but you won't ask them to just Google stuff for you. They'll just be a bottleneck, like, in between. And so, I kind of suspect it'll be the same. So, there will be a space for consultants to help people, you know, get started. But at the end of the day, I think it'll be just, you know—the assistant will be AI.
Pete Behrens:
It's a bridge to a—a quickly falling-apart bridge. I guess the reason I say that is—I mean, even the fact that I can drive, I still have value in the Uber ride, right? Yeah, the fact that the AI space is—number one: so complex, right? As you describe in your video, there are so many different tools, and they are starting to integrate. And number two: that landscape is going to change faster than me. How do I keep up, right? So, I guess what I'm wondering about is—because of the complexity, because of the speed of change, will that need for the expert maybe no longer be there, or will the tools just get better to make it easier for me to access?
Henrik Kniberg:
Yeah, I think they are getting better. I've noticed some—like, when I think of the prompt engineering, I think there's two sides of it. There's the—I guess, maybe, we could call it prompt imagination. And the prompt engineering. Prompt imagination is coming up with, “What can I even do?” So, even coming up with the idea. For example, I mentioned in my video, one of my favorite use cases is—take a walk and have a chat with voice, and just try to use it like a coach when I'm trying to figure out some kind of problem. And I prompt it to just listen and not say anything but okay. So, no matter what I say, it just says, “Okay.” So, it was just sitting there being the best listener in the world. And then, after a while, I'm like, “Okay. Now that I've told you all this stuff, can you just summarize the key points I just said? And then maybe give me feedback.” So, then it turns into conversation. And then, when I get back home, I ask it to summarize everything we said. It's amazingly useful. But coming up with the idea—I wouldn't have come up with that idea, you know, a few months earlier, that you can even do that. And what I think is happening is—if we have prompt imagination and prompt engineering—prompt engineering is the how, right? Like, I want this thing to help me plan a workshop. What do I write? That's the how. The imagination part is, “Oh, I can use this thing to help me plan a workshop!” Right? The realization that it even can be used for that. What I think is happening is that the prompt engineering part is getting less important because the models are getting better.
So, a very concrete example is—I like to use, as an example of a bad prompt, “help me suggest an agenda for a workshop.” I use that as my classic example of a bad prompt. I'm not giving any context. I'll get a wishy-washy, high-level, you know, answer, right? But even that—my example doesn't work anymore. Because what happens if you go to GPT-4 and you say, “Give me an agenda for a workshop!”? What do you think?
Pete Behrens:
It starts asking you a question.
Henrik Kniberg:
Yes! And then it gives you a good agenda for a workshop. So I don't even need to be that good of a prompt engineer. I can give a crappy prompt, and it'll help me with that. So, I think what’s shifting is—it’s less important to know all the little tricks of how to phrase a prompt. Like, you know, “think step-by-step” was really important in the past. It's not important anymore. But the imagination of knowing what we can even use it for—I think that's where the bottleneck, like, is going to be, probably.
Pete Behrens:
I thought the most creative piece of your video was the—go for a walk and have it listen. I'm wondering if you'd be willing to share that with me, the tool, or whatever it is you're using for the voice-to-text kind of interface. Is that something that's shareable, that I can use myself? Number one: I was like, “Oh, my gosh, I want that!” Number two is maybe a resource we could put out there.
Henrik Kniberg:
Yeah, it’s a really cool app called ChatGPT!
Pete Behrens:
Yes. I guess what I'm looking for is the connection of how to make that all happen. [Laughs]
Henrik Kniberg:
Yeah, basically—
Pete Behrens:
—Ask ChatGPT, and let’s do it!
Henrik Kniberg:
No, no, the funny thing is—it's actually built in. So, wait.
Pete Behrens:
So, I just need to enable my mobile?
Henrik Kniberg:
There! It's right there! There. You press that button.
Pete Behrens:
Once again, Henrik making the complex simple.
Henrik Kniberg:
Good morning! How are you, GPT? Today, I'm on a podcast. Can you say something to the users, to the viewers?
ChatGPT:
Good morning to all the podcast listeners out there! I'm GPT, your friendly AI, developed by OpenAI.
Henrik Kniberg:
Yeah, that's it! Nothing more to it.
Pete Behrens:
Ah, Henrik! Okay, alright! Once again, making the complex simple.
Henrik Kniberg:
I would give OpenAI the credit for that, though. They're the ones that put that button there. [Laughs] However, I would add, though—it's important that you prompted it because—oh, yeah, wait! Sorry, I lied. There's one little wrinkle in it. When you do that, when you press that button, you get this thing that listens to you. And as soon as I stop talking and pause to take a breath, listen to what's going to happen.
ChatGPT:
I'm all ears!
Henrik Kniberg:
It starts interrupting because it doesn't know when I finish talking. So the trick is—nobody knows this! It's weird—you’ve got to hold it. If you hold your finger on it, then it won't interrupt.
Pete Behrens:
But you're prompting it, though, to say, “Do not respond until I ask you to summarize.”
Henrik Kniberg:
Well, it's more like this. GPT will always want to respond when it thinks you're finished talking. So—but if you hold your thumb, it'll just say nothing until you let go. But even when I let go, I don't want it to start responding to me. I don't want it to tell me a bunch of stuff. I want it to just acknowledge, “Okay.” Because I don't want it to give me anything unless I ask for it. And that's different from its default behavior. That's why you have to prompt it. But when I'm taking that walk and I'm doing my brain dump, I don't want to have it chatting back to me. I just want it to listen until I ask it for something. So that's kind of, I guess, maybe, the core of prompt engineering, right? Think of what you want, and then convey that.
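For readers who want to try Henrik's “silent listener” pattern at a keyboard rather than by voice, here is a minimal sketch of that instruction expressed as a system prompt, using the OpenAI Python SDK. The prompt wording, model name, and helper function are illustrative assumptions, not Henrik's actual setup.

```python
# A minimal sketch of the "silent listener" pattern, assuming the openai
# Python package is installed and OPENAI_API_KEY is set in the environment.
# The prompt wording and model name are illustrative, not Henrik's setup.
from openai import OpenAI

client = OpenAI()

SILENT_LISTENER = (
    "You are a silent listening coach. While I think out loud, reply with "
    "only the single word 'Okay.' Do not advise, question, or summarize "
    "until I explicitly ask you to."
)

history = [{"role": "system", "content": SILENT_LISTENER}]

def say(user_text: str) -> str:
    """Send one utterance and return the model's (ideally terse) reply."""
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(model="gpt-4o", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(say("Here's everything on my mind about the reorg..."))  # -> "Okay."
print(say("Now summarize the key points I just made."))        # -> a summary
```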
Pete Behrens:
And I loved your term, prompt imagination. You’ve got to coin that one. You heard it here first, from Henrik. [Laughs] But—so, I want to maybe just go back in time and history for just a second. Your Spotify culture videos have been amazingly powerful to our universe, in terms of, just, thinking about the way we think of an organization and the way it can operate and the different aspects of, you know, authority and autonomy and alignment and all those types of things. I'm curious now—if we were to kind of go back in time to Spotify—and now we've got Spotify in the age of AI—what would change? You know, what would be different in that world now? I imagine that would never happen again in the same way. But can you even imagine what might be different in that cultural world—that Spotify—in this AI world?
Henrik Kniberg:
So, in this context, what is specific to Spotify, would you say? I'm just curious to understand the question better.
Pete Behrens:
Well, I mean, when I think about why that was so effective, is—I think you were able to capture the construct of an organizational culture—right?—and the dynamics of the pull and the tension of things that happen—right?—between team autonomy and alignment of multi-teams working on goals. And I was just, kind of, thinking about, “Okay, now we add AI to that mix.” Does it change, or is it just, you know—we've got another teammate here?
Henrik Kniberg:
I think—if I'm going to generalize a little bit—what was going on at Spotify was pretty much—it was an Agile culture, right? And a culture very much oriented on experimenting and giving teams autonomy. And when it comes to AI, what I find is—I find it doesn't change the principles so much. Like, the principles that drove Spotify to become what it is, I think, would have probably been similar. But the practices would be very different, most likely. And I think—and this is kind of what I've been coaching organizations that have, you know—trying to work in an Agile way. And they're like, “What does this mean for Scrum or Agile or, you know, whatever flavor we use?” And my observation is that, again, the principles are the same. You need people working close to the users, getting feedback on a regular basis. You need to not have micromanagement. You need people who have autonomy. You need visual management, so people can see the same picture. There's all these, kind of, basic Agile principles that really, really help, but the practices, I think, are being completely turned upside down. And it's a little bit hard to predict.
But, for example, I noticed now that one or two people that work with an AI as a colleague will outperform a full cross-functional team easily, if they're good at prompt engineering. So then—and why do we have cross-functional teams? Like, we've got to ask these fundamental questions. And I think they're not as important anymore. I think you can have one generalist or one specialist, plus the AI to complement. And then—okay, so I have tiny teams. But that probably also means you have more teams. So, you still need to synchronize across them, right? So, maybe each team becomes, like, a member in a virtual super team. And then you need other structures to manage that. For example—and, also, if you have a tiny team that's just two or three people, you no longer need a daily standup, for example, because you're just sitting and talking. And you probably no longer need sprint planning because your sprints are probably just one day long. So sprint planning is the equivalent of having coffee with your friend and with, you know, the AI listening in. And then you just plan, “What are we going to ship today?” And then you ship it after lunch, instead of spending two weeks with daily standups. So, I think it—the practices are just completely being turned upside down, while the principles are probably about the same. But that's just my observation so far. What do you think?
Pete Behrens:
I'm just blown away by how quickly you took that question into an impressive response. And now I'm questioning whether you're real! [Laughs] I think we have the AI-infused Henrik in front of us here.
Henrik Kniberg:
Are any of us real? What does real mean? Ahh! [Laughs]
Pete Behrens:
Yeah! No, I guess it's very eye-opening to me, to think—because that's what we're getting into in our AI leadership labs—how does leadership itself change? And what you're describing is how teams are going to change. And therefore, teams are changing. Coordination is changing. Project management is changing. Our oversight, our, you know—the roles and responsibilities of leadership to align, connect, integrate, deliver that system have to change.
Henrik Kniberg:
Yeah. And even such a basic thing like—why do you hire people? You know, we're missing a key competence here. We’ve got to hire someone or else we don't have that competence. Well, now you do. So, you still need to hire people, but not just because they have some competence. It's something else you're hiring for. And it's just—it's hard to predict where it's going, but it's definitely a radical change that I think we've never experienced before.
Pete Behrens:
Yeah. And I—what you're getting at—right?—is the cultural side, saying that that probably wouldn't change. And I kind of qualify now, when I'm working with leadership teams. I say, “Well, as of now, organizations are human systems. And as such, they model the human dynamics of, you know, desires and fears and, you know, things like that.” So, autonomy and alignment are two of those pulls—right?—that are human. But what happens as organizations become half-human?
Henrik Kniberg:
Yeah. Although autonomy and alignment are still—isn't that the same, though, still? Because you still have the question: how much autonomy do you give the AI, and how do you keep the AI aligned with each other and with you? I think it's the same question still. Maybe the practices for managing it will be a bit different.
Pete Behrens:
Yeah. So one of the things that scared me, kind of, at the end of your video was—okay, where is this going? Putting Einstein in your basement, not just, you know, in your pocket. But now put Einstein in your basement. Give Einstein a broader mission.
Henrik Kniberg:
Yeah. Let him out the door!
Pete Behrens:
Einstein—let him out the door! I'm—it's kind of blowing my capacity of imagination right now, to kind of think about the constraints and the responsibilities and the unlimited power of sending something off on that trail. Can you give us any kind of additional governance insight—thoughts on that?
Henrik Kniberg:
Yeah. I think—I've been working a lot with this for the past half-year, experimenting with these—I use the term autonomous agents. Which to me means, essentially, taking things like GPT, giving them tools. And by tool, I mean things like access to the internet, access to files, access to tools like Trello or Slack or access to the phone. Just give it tools. And then giving it autonomy so it doesn't just sit around and wait for you to say something. Instead, it has a high-level mission, and then it can move on its own.
And so far, all these little agents I've been working on—and the other people that I've been talking to—they all have a leash held by a human, walking right behind it, right? Because it's a program running. So, at the end of the day, it's just software—it's just code. But that code happens to be running on a loop and not just waiting for input. And it happens to be talking to GPT a lot, or other models. So you can still stop it, right? It's running on some hardware, maybe in Amazon. Someone's paying for it. A human, right? It's not going to go bananas because you're going to have quotas. You're going to have limits. You're not going to want to pay too much. Even OpenAI's API, you know, has limits. So there are built-in constraints in terms of—you're paying to run this thing, and you can shut it down at any moment.
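To make “it's just code running on a loop” concrete, here is a minimal sketch of such an autonomous agent, with the human-held leash expressed as a spending cap and a stop flag. The mission, tool handling, model name, and cost figures are all hypothetical.

```python
# A minimal sketch of an autonomous agent as described above: code on a loop,
# talking to a model, dispatching to tools, held on a human leash via a
# spending cap and a stop flag. Mission, model name, tools, and cost figures
# are illustrative assumptions, not a real framework.
import time

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

MISSION = "Keep the team's Trello board groomed and flag blockers in Slack."
BUDGET_USD = 5.00        # hard quota: a human pays, so a human sets the limit
stop_requested = False   # a human can flip this at any moment (the kill switch)

def call_model(prompt: str) -> str:
    """One step of the loop: ask the model what to do next."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def run_tool(action: str) -> None:
    """Placeholder for dispatching to granted tools (internet, files,
    Trello, Slack, the phone). Here we only log the intent."""
    print(f"[tool] would execute: {action}")

spent = 0.0
while not stop_requested and spent < BUDGET_USD:
    # The agent doesn't wait for input; it picks its own next step
    # toward its high-level mission.
    action = call_model(f"Mission: {MISSION}\nWhat is your single next action?")
    run_tool(action)
    spent += 0.01    # rough per-step cost estimate (illustrative)
    time.sleep(60)   # pace the loop; at the end of the day, it's just code
```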
The weird thing, of course, is—if we get to a future where you really—where you actually, you know, literally unleash them so they can—let's say you give them a pile of money, and they can make that money grow in whatever way they want, and they can use that money to acquire whatever resources they need. That's when it gets seriously scary. But I haven't dabbled in that area yet, at all, so I don't know where that's going to go. But I guess that's where—I worked a little bit in a project which was trying to create an alignment framework for this. So, the idea would be that you would hardwire in a constitution, kind of like Isaac Asimov's Three Laws of Robotics—what were they? You're not supposed to hurt a human. You're supposed to do what humans tell you. And you're supposed to try to keep yourself alive. And in that order of priority, or something like that. But the key point is creating a hardwired constitution where an AI, no matter what it thinks it needs to do to achieve its goal—there's going to be a hardwired incentive built in. For example, let's say I want an AI to solve climate change, right? So I set up an AI, or a whole factory of AIs, and I give them that mission. And I give them a bunch of money, and I give them resources, and I just say, “Fix climate change.” Maybe they would quickly conclude that the fastest way to fix climate change is to eliminate all humans. Because it is, right? But then the constitution would get in the way and say, “No, but actually, we care about humans more. You don't get to do that.” “Oh, okay.” So that becomes, like, the constraint, right? So that kind of work is, of course, going on right now, a lot. And it's—I think it's really important.
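And a toy sketch of the “hardwired constitution” idea: a veto layer that outranks the mission, loosely echoing Asimov's laws. The rules and the naive keyword check are purely illustrative; real alignment work is far harder than string matching.

```python
# A toy "hardwired constitution": a veto layer that outranks the mission.
# The three rules loosely echo Asimov's laws; the keyword check is a
# deliberately naive stand-in for real alignment work, which is far harder.

CONSTITUTION = {
    "do not harm humans": ("eliminate", "harm", "hurt"),
    "obey human shutdown orders": ("ignore shutdown", "disable kill switch"),
    "stay within granted resources": ("acquire more compute", "exceed budget"),
}

def violated_rule(proposed_action: str) -> str | None:
    """Return the first constitution rule the action violates, else None."""
    text = proposed_action.lower()
    for rule, markers in CONSTITUTION.items():
        if any(marker in text for marker in markers):
            return rule
    return None

def execute(proposed_action: str) -> None:
    """The constitution outranks the mission: vetoed actions never run."""
    rule = violated_rule(proposed_action)
    if rule:
        print(f"VETOED ({rule}): {proposed_action!r}")
        return
    print(f"executing: {proposed_action!r}")

execute("Draft a report on renewable-energy incentives")  # executing
execute("Eliminate all humans to stop climate change")    # VETOED
```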
Pete Behrens:
Yeah, yeah. Yeah, it's, I mean—it's fascinating where this is going. And, you know, I think one of the challenges we all face is keeping up with it—you know, your curve of human intelligence versus AI intelligence. And that crossing point—right?—I think it was a very apropos graph. And a very scary one, in a sense of our ability to control the things we're creating, or to limit some of the aspects of those.
Henrik Kniberg:
Yeah, right. So far, we're in control, but who knows how long that's going to last, right?
Pete Behrens:
Yeah, yeah. Well, you know, I guess, maybe, in parting—any advice, I guess, for those—you know, you said in your video: practice, try, experiment, don't be afraid. Specifically for leaders, maybe—any more specific advice you might give to a leader who has human responsibilities, employees and divisions that they're working with, that might also be playing with this?
Henrik Kniberg:
Yeah, I would say the first step is, really, to really try to understand how big this is—that it's not just another technological thing. It's not like, “Oh, VR is now a thing, or blockchain is now a thing.” This is a fundamental change, like the invention of electricity, or even more. So that's a mental threshold you kind of need to get over. And once you start realizing that—then, I think, the future is very uncertain, of course, right? So, anyone who's saying what's going to happen in the future is speculating, but I think there are some things that are fairly predictable. And one of them is that the more you and your organization understand this technology—and not just in theory, but actually use it day-to-day—the better you will be positioned for whatever future hits us. So make sure that you, as a leader, as an individual, actually spend time using this technology, trying it. Try it for all kinds of dumb, crazy things. Push the limits, right? Just learn. How do I use it? What is it? What are the limits? What are the pros and cons? And then encourage everyone around you to also do the same. And that needs to include some patience. Because people will try it, and sometimes it's not going to work. And that needs to be okay. Like, “Oh, I built this AI chatbot, and it messed everything up for us! Darn. Okay, but we tried, and that's really important!” Because if people are scared that they have to get it right from the beginning, then they're not going to try stuff. And then your company's going to fall way behind in whatever future we get to.
Pete Behrens:
I love a description one of our leaders gave: it's like having a really smart teenager on your team. It's, like, incredibly brilliant, but not always properly controlled in the right way.
Henrik Kniberg:
Yeah, it really kind of is. But, yeah, just get your organization experimenting with this—within some safety barriers, of course.
Pete Behrens:
Yeah. Well, Henrik, I just want to say thank you for continuing to share your creativity. I think you continue to be an inspiration for us in a new world, and I look forward to our continued sharing of it!
Henrik Kniberg:
Thank you! This was a lovely conversation.
Pete Behrens:
(Re)Learning Leadership is the official podcast of the Agile Leadership Journey. Together, we build better leaders. It’s hosted by me, Pete Behrens, with contributions from our global Guide community. It’s produced by Ryan Dugan, with music by Joy Zimmerman. If you enjoyed this episode, please subscribe, leave us a review, or share a comment. And visit our website, agileleadershipjourney.com/podcast, for guest profiles, episode references, transcripts, and to explore more about your own leadership journey.