OCTOBER 2024

In the latest episode of our “In Conversation…” podcast series for 2024, Lucy speaks with Dr Vivienne Ming, a theoretical neuroscientist, entrepreneur, and co-founder of Socos Labs, a company dedicated to leveraging AI-driven research to maximise human potential.

In this conversation, Lucy and Vivienne explore the evolving relationship between AI and the workforce, discussing how technology can enhance our capabilities rather than merely deliver productivity gains. Vivienne emphasises the concept of “productive friction” and how AI can challenge us to become better at our jobs. They also delve into the implications of job disaggregation and the importance of finding balance in the age of automation.

Join us for a thought-provoking discussion on the future of work, human intelligence, and the role of AI in shaping our professional landscape.

Key Takeaways:

  • AI as a tool for enhancement: AI should be viewed as a tool to augment human intelligence and creativity rather than replace it, emphasising the importance of human value in technology strategies.

  • Understanding vs. Knowledge: While AI excels at recognising patterns and processing known information, it lacks true understanding and creativity, which are inherently human traits.

  • Productive friction: Embracing “productive friction” involves using AI to challenge our ideas and improve our skills, rather than simply automating tasks, thereby enhancing the quality of our work.

  • The risks of disaggregation: The disaggregation of jobs may lead to productivity gains, but it also risks diminishing professional identity and satisfaction. Finding a balance between automation and meaningful work is essential.

  • Empowerment through AI: The focus should shift from AI making our tasks easier to how it can enable us to achieve things we couldn't before, fostering growth and innovation in our roles.

  • Identifying peer role models: It is crucial to identify “unofficial” leaders who can inspire and influence others. This offers untapped potential in companies by bridging the gap between leadership and everyday employee experiences.

Lucy Lewis: Hello and welcome to the Future of Work Hub's In Conversation podcast. I'm your host Lucy Lewis, a partner in Lewis Silkin's employment team. And whether you're a seasoned listener or this is your first time, I'm delighted you're here to join me for some fascinating conversations with innovators, business leaders and thought leaders, exploring some of the longer-term trends and immediate drivers shaping the world of work. Now, we're at a time where cost cutting and productivity gains seem to dominate the conversation around future developments in technology and in AI. But there are some experts who are working hard to ensure that humans remain at the forefront of technology strategy.

And our conversation today is with one of the leading voices on this topic. Dr Vivienne Ming is a theoretical neuroscientist, entrepreneur, and self-proclaimed mad scientist. She's also the co-founder of Socos Labs, a company which leverages AI-driven research and inventions to maximise human potential for companies, for global policy and for education reform. Vivienne can now add author to her list of accolades. She's written a book, likely to be called How to Robot-Proof Your Kids and due for release later this year, that challenges some of the conventional narratives around artificial intelligence, some of which I'm hoping, Vivienne, we can talk about today. So thank you for joining us and welcome.

Vivienne Ming: It's wonderful to be here.

Lucy Lewis: Now, one of the things that I'm finding is that the impact of technology, particularly generative AI, not just on jobs but also on skills and how people feel about work, is an increasing part of the conversation on this podcast. It's actually also quite a lot of what I'm being asked about as a lawyer, as HR professionals and IT departments look to roll out AI and technology at work. And I know the focus of what you do has been about how technology and AI can upgrade or prioritise human intelligence and human value. So, I thought a really good place to start would be just giving you some time to talk to us about what you mean when you talk about human intelligence versus artificial intelligence, and what you mean by prioritising human intelligence and human value.

Vivienne Ming: Absolutely. You know, a science fiction author of whom I am fond, John Scalzi, just wrote an op-ed asking why Hollywood lied to us about AI, with this vision that it would be something so much more than what we have right now. And another author, Ted Chiang, famously of Arrival and other amazing stories, has written that AI, at least as it exists today, simply isn't creative. And interestingly, I agree with both of them, but I also note, as a neuroscientist, that I actually think they have something wrong. Modern AI really is intelligence. It is a form of intelligence that we also possess.

That is, the ability to pick up detailed and extensive statistical patterns out in the world. Now, that's a little boring to talk about unless we want to get into machine learning, but it actually tells us something kind of amazing about ourselves. When you interact with an LLM like GPT or Gemini and you have this very rich-feeling interaction, it's doing one tiny piece of human intelligence. It's just doing that statistical learning piece, and yet how much of something that feels so profoundly human, like language, is accounted for by just that? So, as a nerd and a neuroscientist who studies AI and that sort of thing, I think we underappreciate how genuinely powerful AI is as a tool, because it also points out what's missing. We have all these other forms of intelligence that humans possess, and this modern AI does not. And those are the things that allow us to be creators, that allow us to be explorers of the unknown.

In fact, almost literally by definition, AI is only about the known. We take a series, essentially, of questions. Here's a picture, is there a giraffe in it? And an answer, yes or no. And we give it to it again and again, billions of times, billions of pictures. Or, here's a series of words, what's the word that comes next? And we give it to it again and again. It is far surpassing humanity in its knowledge of the known. But while it knows everything, it understands nothing. And that's what we bring. And that's why, when I hear this tension in companies in the HR tech world, the ed tech world and beyond, even just consumer-facing AI, it's all about making our lives easier and doing our work for us and writing our emails. That is not only a dead end, it robs us of our creativity. What makes me unique? If you didn't uniquely need to write that email, then it probably shouldn't have been written at all.
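To make that "statistical learning of the known" point concrete, here is a deliberately toy sketch in Python (bigram counting, nothing like a production LLM) of what next-word prediction from observed patterns looks like; every name in it is illustrative:

```python
from collections import Counter, defaultdict

# Count, for each word in a tiny corpus, which words tend to follow it.
corpus = "the cat sat on the mat and the cat ate".split()
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str):
    """Return the statistically most likely next word, if one was ever observed."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> "cat": pure pattern recall of the known, no understanding
```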

So when I think about building AI that interacts with people, what I'm thinking about exclusively is: how am I making better people? And I think that's crucial for us all to keep in mind, for a wide variety of reasons.

Lucy Lewis: And you're absolutely right, because the dialogue in this area, particularly in the workplace context, becomes about productivity and efficiency. The business case becomes: we should introduce AI because we'll be more productive, we'll be more efficient. I know you've talked a little bit about productive friction. How do you think you can use that to turn the dial on this narrative that it's all about generating more efficient workplaces?

Vivienne Ming: Exactly, and this is an alternative. It's not even the only one, but it's a very clear alternative. Instead of having GPT read and summarise your emails for you, and then write your emails at the other end (and boy, it seems like we could cut out the middleman and just stop writing terrible emails), how do we challenge ourselves?

I use Gemini, after a long history of collaborating with Google; that's their version of these tools, if you're not familiar, and I use it extensively for a variety of purposes. What I would never use it for, or any of the others, is to write for me. My voice is my unique superpower. So, yeah, even this voice. So, I do something very different. In fact, I just did it over the weekend with my upcoming book. Gemini, here's the new chapter I just finished. First draft, it's all together. Now, Gemini, you are my worst enemy. You are my nemesis. For decades and decades, you have found every flaw in my reasoning and every mistake I made in my research. Read through my new chapter, find every mistake, and explain to me why I'm wrong. And then, here's what's interesting, I can immediately follow that up with: Gemini, you're a bored audience. You're not my enemy, but you don't really care. Read through my new chapter, tell me all of the places where you lost your train of thought and didn't follow what I was saying, and explain how I could do it better.

You will notice, in both of these cases, it didn't make my life easier. It didn't speed up the writing process. It surely didn't write anything for me. It slowed me down so that I would do better. The one side, the efficiency gain, is this small little benefit on top; roughly estimated by existing research, it's somewhere between a 20 and 30% boost in productivity. So, it cuts a little bit of time out of my day that in theory I'm spending doing something else. But this ability to force me to be a better writer, to confront my misconceptions, that doesn't just make me 20 or 30% better. It makes me multiples better. And that's what we should all be aspiring to here: not how AI can do the things you can already do, but how it can allow you to accomplish things you've never been able to do before.
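A minimal sketch of the critic-style prompting pattern Vivienne describes, in Python. The ask_llm helper is hypothetical; wire it to whichever model client you actually use (Gemini, GPT or otherwise):

```python
# Hypothetical helper: wrap your LLM client of choice behind one call,
# so the prompting pattern, not the SDK, is the focus of this sketch.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("connect this to your own LLM client")

def productive_friction(draft: str) -> dict:
    """Ask the model to critique a draft from two stances; it never writes prose."""
    nemesis = ask_llm(
        "You are my worst enemy, my nemesis. For decades you have found every "
        "flaw in my reasoning. Read this draft, find every mistake, and explain "
        f"why I'm wrong:\n\n{draft}"
    )
    bored_reader = ask_llm(
        "You are a bored audience. Read this draft, list every place you lost "
        f"the train of thought, and explain how I could do it better:\n\n{draft}"
    )
    # Both outputs are criticism for the author to act on: productive friction,
    # not automation.
    return {"nemesis": nemesis, "bored_reader": bored_reader}
```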

Lucy Lewis: There's a couple of thoughts for me that flow from that. I'll park one and come back to it. But the first one, and I really love the idea that AI will make us better at our jobs. Perhaps I have a slight vested interest in this, but there's also this talk about the disaggregation of jobs. I know you've talked a bit about the de-professionalisation of jobs: essentially, some of the things we do, let's take lawyers in knowledge jobs as an example, come out of our jobs. They were integral before, and then they get moved down, if you like, down the value chain, down the skills chain, and you end up with this disaggregation of work. Is that something that we should be embracing? In one sense, as you say, there's a productivity gain there if I have more time to do other things. Or do we need to find a way that AI can work alongside that disaggregation of work?

Vivienne Ming: You know, there's a tension there, a pretty fundamental one. Some of it has to do with what we believe about humanity at large. But let's start with the basics. There are jobs human beings shouldn't be doing. Someone shouldn't have to bend over in a field picking lettuce just to feed their family. However, it would also be insane for us to build robots that can do that instead and then think of nothing to replace that work. So, there are these deep tensions at play. But let's talk about what you're referring to in this idea of de-professionalisation. As we move away from "C-3PO is stealing your job and you have nothing to do," what we instead see is that all of those parts of your job that are routine, the parts that are known, to bring it back to how I put it before, or "well posed," they have right and wrong answers, all of that, slowly but inevitably, is consumed by AI. Not the whole job, but all of those tasks, those questions, those actions. It doesn't mean a human isn't responsible for that part of the job, but as you say, it pushes it down the skills ladder, down the wage ladder; more people find it accessible. So that's another form of tension. It actually does open up more jobs to historically lower-skilled workers.

And you might see that as positive if you thought those people were not capable of doing more. But here is maybe me, the idealist, saying, well, I'm an idealist realist. I don't think that tomorrow 100% of the world's labour pool will suddenly be doing creative work as scientists and elite lawyers. But I still believe every one of us has so much more to offer, because once you go down that path where people are essentially just a pair of arms and legs walking an AI around the room, then you're in a trap. We do see productivity gains there, but what we also see, most clearly from the ed tech world but beginning to show in our talent-related data, is that those early-career individuals, those junior learning employees, stop learning.

The AI really does the job for them, and they never truly get better at it. So they never move up. And while we may not think much of that in, I don't know, a retail context, we've built our system on assuming junior lawyers are learning how to do their job by doing that busy work. It's the same in so many other careers: you hire an early-career coder, you pair them with a more experienced one, they work together, and they learn how to architect a whole program. That's the unknown, let's call it the ill-posed part of the task. It's not about writing just the right line of code. It's about putting it all together and seeing the bigger picture of what's possible, maybe even disagreeing with what your bosses and your designers told you to do because they didn't understand the problem. And here we're robbing ourselves of that. So for me, in the face of total job replacement in some contexts and large-scale de-professionalisation, we look at what's left: this idea of creative labour, this idea that every job is about the unique capacity of a human being to bring something unique to a job. And I think that's a big change. It probably requires a leadership change as much as anything else, because we need people to be able to make mistakes and explore, and let the AI pick up the slack, if you will.

Lucy Lewis: Yeah, and I'd like to come back to some of the thoughts and ideas you have about how people can go on that journey. A lot of our listeners are HR and people professionals who are at the starting point of wanting to bring people on that kind of cultural and leadership change. But before we do, and because you've touched on it, I'm interested in your thoughts on this perennial debate about what is going to happen to all our jobs. On one hand, there are the doom-and-gloom people saying our jobs are all going to be replaced and there won't be enough work for everybody. And other people saying, look, we should just see this as the fifth industrial revolution; each of those industrial revolutions has come with job creation, so we should assume, even if we can't identify now what those jobs are going to be, that there will be job creation. I'm interested in your perspective on that.

Vivienne Ming: So my feeling is it's entirely the wrong question. I'm not a dystopianist about AI; clearly, I've spent 25 years in academia and entrepreneurship and philanthropic work in this space. But I'm also not a utopianist. The world doesn't get better because you sprinkle microchips on it. And it's absurd to just say, don't worry, it'll just be like the Industrial Revolution. There was a memo from a British diplomat in 1850 saying the plains of India are bleached white with the bones of Indian weavers. The Industrial Revolution wasn't so great for them. Weaving was the second largest industry in the world at the time, and the British Midlands essentially ended it as a career within the course of 20 years. So that was devastating for the vast majority of the planet, whatever it did for this tiny sliver of the world. So, I wouldn't be so sanguine about "don't worry about it, it'll all work out."

So, I'm a believer in basic labour economic theory. AI is already creating jobs. The real question isn't, will AI destroy more jobs than it creates? It is: who will be qualified to fill the jobs it creates? And I don't have to guess at what that looks like. We can begin to speculate about how our choices can change that future, but our future is being written right now and has been for the last 30 years. And that is a future in which there is a massive increase in demand for the lowest-skilled labour; we just need a back that can be present in a room doing things we cannot be bothered to build a robot to do. And demand is also increasing for the most elite labour. We call it high-skill labour, but again, as you note, I think this is better described as creative labour. It isn't simply that a doctor knows some esoteric facts about a disease, because GPT and Gemini know those too. It is that they're able to take that knowledge and apply it uniquely to the context of this patient. That demand, whether for doctors or programmers, lawyers or leaders, is skyrocketing. Right at the edges.

Everything in between, over the last 30 years, has gone increasingly negative in its change in demand over time. That whole swath of jobs that were supposed to take people out of the working class and march them into success, if they just put in the effort, is being de-professionalised.

And in that sense, I think we need to be much more thoughtful, both as a society and as HR leaders talking to our organisations. The quick and easy thing to do is hire those lower-skilled workers, pair them with an AI, and get the same level of productivity for lower cost. But the better thing to do is to really empower amazing people to do things they were never able to do before. That's there on the table. In fact, that's been the promise of AI for, I don't know, 20 years: super doctors and super lawyers able to do things they were never able to do before. But unfortunately, it's the AI bait and switch. It is true that that is achievable, but it has simply turned into this de-professionalisation story.

Lucy Lewis: It's so fascinating, and it reminds me of a conversation I had on this podcast with Simon Roberts, who's an anthropologist. In slightly different terms, he reflected exactly what you're saying: when you look at the periphery of people's roles, they're quite happy if there's an AI tool that can help them with that; when you start to focus in on the core parts of their role, they start to be a bit more uncomfortable, to your point about de-professionalisation. Is there something you think people professionals can do, whether it's focusing more on skills, on training, or on the culture change you talked about, to enable people to see that although we might be targeting those core aspects of their role, the ultimate goal is exactly what you've described: that you can bring more of yourself and be better? How do you start that journey, do you think?

Vivienne Ming: So I think there's absolutely a training aspect that can be done here. And I'm not always so bullish on just running workshops and trainings, because it takes a while to change people, and we often don't have the time to really commit to that once we're out in the workforce. But this is something that can be worked into people's early work experience. There's a great paper out of Harvard looking at BCG consultants; I recommend looking it up. And there's a follow-up paper that's about to come out but hasn't been released quite yet. These are obviously elite performers, consultants, but the way they characterise them is going to characterise everyone interacting with generative AI right now. They find three basic classes. First, the self-automators, who simply use AI to do their job.

That is the ultimate dead end. It turns out that while they are faster at the routine parts of their job, they actually get worse at the creative parts of their job. So if you're looking to build a career, rather than just punch a clock, being a self-automator is 100% a dead end. And allowing a culture in which that becomes the shortcut is deeply problematic. The next group are the centaurs. They view everything as either an AI job or a human job. They think through each project, break it up, and send it out. When an AI returns something to them, they either accept it whole cloth or just reject it. Clearly that's better than the self-automators, but my research suggests it's also kind of a dead end, and here's why, both in terms of their research and my experience. The last group are called the cyborgs. Which, perhaps not entirely coincidentally, is what I told people I wanted to build when I interviewed for grad schools so long ago. I was the crazy kid who walked around telling people I wanted to build cyborgs. And this group doesn't make a distinction. For them, every task is a mix. The AI is a tool.

It's their paintbrush. It's in everything they're working on. They task it with doing pieces. Then they say, no, no, no, do it better; you misunderstood me, or I misunderstood. They really interact with these tools. They guide them.

And here's my personal experience, which echoes that so strongly. The reason I've been so, I would argue, productive with AI across my career, not just as a scientist building it but using it as a tool in my projects, is because I've been a professor. I know what it's like to work with grad students whose job, 24/7, is to be an expert on one tiny piece of the world; they know way more about that project than I do. Even though supposedly I'm the senior person in this partnership, they know so much more about it than I do. So what's my job? I'm bringing meaning and context and understanding. Here's why your idea probably didn't work. Here's what other people in this space are working on.

Here are some hard-to-define thoughts about why, in this space of the unknown, this part might be more productive and that one isn't. They can't know that. They know all the facts, I don't, but they can't do these complex creative tasks that are involved in doing great science. People don't think of engineering and science as art, but they are, and so is math.

All of these jobs are creative. So I bring that. And when I interact with Gemini, or with an image generation tool, I'm treating it like it's my grad student. It knows so much more than I do, but never in a million years would I assume my grad students truly understand what they're talking about. It's all still abstract facts to them. And so I guide them and I orchestrate; I say yes and no. Here are places where you might have a misunderstanding, or where might I have a misunderstanding? And I become that cyborg that's really orchestrating it. That is a learnable skill. We all learn it to become good professors. And I think a great way of metaphorically internalising the most productive version of AI tool use is to really think: you're the artist, it's the tool, and you're engaged in a continuous cycle of creation with an AI. That, I think, really changes it. And then you could genuinely build some basic trainings around this, highlight what works, and encourage your employees to lean into that distinction.

Lucy Lewis: It's fascinating, because I said I had a thought earlier that I was going to come back to, and in some ways that covers it. The thing that brought it to mind was you talking about your own book, actually, because one of the things I've reflected on a bit is what all this means for our sense of professional identity, our sense of self. So if, say, I'm a writer, for example, and I know that I could ask ChatGPT to produce copy, how does that impact my sense of professional worth, my sense of professional identity? When we look at the future of work, we look at investment in the idea of good work and well-being at work.

Actually, our sense of professional identity is quite core to that. So I'm interested in your thoughts about that, particularly in the context of the example you so wonderfully gave about your own book, and how that hasn't diminished your sense of professional identity as a writer.

Vivienne Ming: Well, there's an interesting discovery I think many professors go through. You work so hard as an undergrad and in grad school, it's all your work, it's so precious. And then you have your own lab and you're never directly doing the work anymore. You're advising a bunch of students and post-docs; an hour a day is as much attention as I can give any one project. And yet I found that if I gave a piece of advice that unlocked something for a student, even just one idea, I felt 99% of the ownership of that success that I felt when it was all me. And once you get comfortable with that, then it's big and it's transformational. So if I build a bunch of AI-based web crawlers and they go out and collect reams of data for an analysis, I don't feel like I didn't do that, that I was not engaged in this research, because what I did was experiment with what sort of data should be collected and what sort of sites it should be aimed at. And I'm a cyborg. So I had it go out and collect initial data, then I changed the pattern and had it collect more, then I changed the pattern again, and I would learn and it would grow. So that sense of identity is big. But here's another one, and forgive me, I'm going to try not to be too nerdy about this, but it's a reality of modern foundation models, these massive things like GPT or the big diffusion image models: they aren't the same thing as human creation.

If we looked at all of the different language used online, what you'd see is this very long tail, the sort of classic shape of a distribution where, yeah, a lot of the language is very stereotypical and just the same everywhere, but there's this long tail of people being themselves and using language in a way that is unique to them as individuals or as a cultural identity. When you look at the language produced by GPT, particularly in experiments where they have it learn from itself, it doesn't have that long tail. It is just that big mass of ideas right at the beginning. And so, if you want to ask a question that is a generic piece of knowledge we might all want to know, it's the place to go. But if you want that long tail, that unique voice, whether we're talking about artistic creation or innovation or problem solving, all these domains where a unique insight is the entire value proposition of your career, you have to be on the long tail. And we can already see that these models are not capturing that; only we are. So, we need to be there and cyborg it out.
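To picture the "long tail" point, here is a toy Python sketch with made-up Zipf-style numbers (not measurements of any real model): a heavy-tailed word-frequency distribution keeps meaningful probability mass in its tail, while a collapsed one concentrates everything at the head:

```python
import numpy as np

# Toy Zipf-like word-frequency distribution over a 50,000-word vocabulary.
vocab_size = 50_000
ranks = np.arange(1, vocab_size + 1)
human_like = 1.0 / ranks          # heavy tail: rare, distinctive usage persists
human_like /= human_like.sum()

# Crude stand-in for a model trained on its own output: keep only the most
# common 1% of words and renormalise, so the long tail vanishes.
collapsed = np.where(ranks <= vocab_size // 100, human_like, 0.0)
collapsed /= collapsed.sum()

tail = ranks > 1_000  # call everything beyond the top 1,000 words "the tail"
print(f"tail mass, human-like: {human_like[tail].sum():.1%}")  # substantial
print(f"tail mass, collapsed:  {collapsed[tail].sum():.1%}")   # zero
```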

Lucy Lewis: Thank you, Vivienne. We're coming to the end of our discussion, and it's been really fascinating hearing you talk about this. I think there are some really good tips for people in businesses looking at how you go about explaining the value of AI, how you explain that it isn't intended to take away from that human creativity, the history, the experience, the long tail as you've described it. I'm also sure your book will have lots of other practical tips, so we'll definitely tell people to get that when it's out. But I always finish the podcast by asking the same question, and that is: thinking about everything we've talked about today, if you were leading a business, what would be your two priority actions to build buy-in and resilience for the year ahead, for the future of AI and technology?

Vivienne Ming: So, I'm going both to reiterate something I've said and to bring an entirely new idea into this. The first one is: I would never embrace the idea that AI is meant to replace people. I don't want AI memos. I don't want AI images in slide decks. Anywhere you might do that, it didn't need to be done in the first place. That's my absolute rule. In fact, it is my rule at my three companies. So, this is me just speaking truth as someone that loves AI and what it's capable of. I would never allow that for myself or my employees. But what's also here, and we haven't talked about it today, is how AI addresses communities.

So, my upcoming book, How to Robot-Proof Your Kids, actually has a whole section in which there's a chapter with that same title, then a chapter on how to robot-proof yourself, and finally a chapter on how to robot-proof your company. How does AI play into organisations, whole communities of people? How can it make not only individuals better, but groups better? And this is where we come to research I've been doing for years on maximising collective intelligence: how do you build the smartest teams? And I will say there's amazing potential for AI, particularly LLMs, to finally allow us to understand individual job candidates and whole teams as kind of unique entities. How well will this person fit into this specific team? Because I'm a firm believer that you should never hire for individuals. You should hire for teams. And you want the person that's going to make that team better, not some person that hypothetically can reproduce your company all by themselves on a desert island. Although if you find such people, feel free to hire them anyway. But really you want that complementary diversity: a person that both clicks with the team and uniquely brings something it doesn't have. So my wife and I began experimenting in the education world with how to build large language model based assessments of students that describe them as whole people.

Here are their strengths, here are their weaknesses, here's why that pattern will make them successful, and here's what they still need to work on. And that would be a tremendous transformation: really being able to bring, I'm going to call it executive recruiting, to every single job candidate. That has kind of been the promise of this space since I was the chief scientist at Gild. We were trying to build AI models to do this back then, but we didn't have the tools. Now we have something that truly can begin to address the complex reality of a whole human life, and even a team. And one of the coolest things, I'll just end with this, is a new tool we're working on in my philanthropic work called the Matchmaker. The whole goal of the Matchmaker is that it takes the communication patterns of an organisation, a company, a university, and it learns the actual social network from those communication patterns. Not your org chart, the real social network. And then it finds the connections that should be created and deleted to make that community as smart as possible. Stuff like that is the bleeding edge of where AI can really make a difference in talent.
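To make the Matchmaker idea concrete, here is a purely hypothetical Python sketch (not Socos Labs' actual tool): infer a social graph from communication logs, then rank unconnected pairs by shared contacts as one crude proxy for "connections that should exist":

```python
import itertools
import networkx as nx

# Hypothetical communication log: (sender, recipient) pairs from email or chat.
messages = [
    ("ana", "ben"), ("ben", "ana"), ("ana", "carol"),
    ("carol", "dev"), ("dev", "carol"), ("ben", "carol"),
    ("eve", "dev"), ("dev", "eve"),
]

# The real social network: an edge wherever two people actually talk.
graph = nx.Graph()
graph.add_edges_from(messages)

# Score unconnected pairs by common neighbours. A real collective-intelligence
# objective would be far richer than this, but the shape is the same:
# learn the graph, then propose edits to it.
candidates = [
    (u, v, len(list(nx.common_neighbors(graph, u, v))))
    for u, v in itertools.combinations(graph.nodes, 2)
    if not graph.has_edge(u, v)
]
for u, v, score in sorted(candidates, key=lambda c: -c[2]):
    print(f"suggest introducing {u} and {v} (shared contacts: {score})")
```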

Lucy Lewis: That's so fascinating. Actually, it goes to a conversation we've had previously on the podcast about the importance of leaders at every level of a business. You know, when you map a business, you find that some of your real leaders don't necessarily sit at the top of the traditional hierarchy; they sit in the middle, with other influencers. So the idea that you could map that in some way is really fascinating.

Vivienne Ming: Peer role models are one of the great untapped powers in every single company. And we have a tool that can help reveal that.

Lucy Lewis: Fascinating, Vivienne, thank you so much for your thoughts today. I know people listening will want to find out more, and I'm hoping you're happy for me to direct them to Socos Labs, www.socos.org. Please pick up a copy of Vivienne's book when it's out later this year; I know I'll be reading it. That's all for our conversation today. I've been Lucy Lewis and you've been listening to Lewis Silkin's In Conversation podcast. To listen to more conversations like this, subscribe on your usual channels, and I look forward to your company again when we have another conversation on the future world of work. If you'd like to be part of our Future of Work Hub community, you can go to our website, www.futureofworkhub.info, and get in touch with us. We'd love to hear how you're navigating these issues. Until then, goodbye.
