June 2024

In this fifth episode of our “In Conversation…” podcast series for 2024, Lucy is joined by fellow Lewis Silkin partners and co-heads of the Data, Privacy and Cyber Group, Alexander Milner-Smith and Bryony Long.

In this fascinating conversation, Alex and Bryony discuss a variety of emerging technologies, their use cases in the workplace and the potential legal and people-related risks that might arise. Some of these technologies are ones that many businesses are already exploring, such as the metaverse and ChatGPT. But Alex and Bryony also look ahead to evolving technologies such as neurotech, biometrics, empathic and semantic AI, and explore their potential impact on the workplace and what that might mean for workforce trust.

They also suggest some practical steps employers can take to make the most of these transformational technologies while also minimising the people and data risks. Lastly, Lucy talks to Alex and Bryony about how businesses can navigate an increasingly complex and evolving regulatory landscape, both in the UK and around the globe.

“It’s very, very important that you are cognisant of how you deploy AI in the workplace and that you really get those employees on board, and you get them involved and get them trained up on how to use the systems, but you also give them the reassurances that these systems aren’t going to be replacing them.”

Key takeaways

  • Bring employees on the journey. Businesses are increasingly seen as more trusted than government to lead on innovation and implement AI technologies ethically and responsibly. To mitigate employee concerns and build trust when introducing new technologies: communicate clearly with the workforce, particularly about organisational and individual benefits and efficiencies; be transparent about the reasons behind adopting the technology; train employees on how to use new technologies responsibly; give assurances on the impact of new technology on the workforce and people’s jobs.

  • Be clear on your use case. Using AI or other emerging tech just because everyone else is using it, or for fear of being left behind, is not a good reason to adopt a technology. Identify a business need, engage with the workforce to help surface the problems or challenges that need solving and, when weighing up solutions, consider the role of AI alongside other alternatives.

  • Don’t act in a silo. Adopting emerging tech, particularly AI, is a multi-jurisdictional and multi-departmental exercise, so do not act in isolation. Ensure cross-collaboration across the organisation: it enables far better deployment of technology, from both a risk and a trust perspective.

  • Take steps to mitigate risk. To mitigate the employment risks of discrimination and bias, it is essential to carry out initial due diligence on any new technology and then to test it on an ongoing basis throughout the life cycle of the project. To mitigate data risks when deploying technology, the key factors include explainability and transparency, identifying a lawful basis, fairness, security and accountability.

Lucy Lewis: Hello and welcome to the Future of Work Hub’s “In Conversation” podcast. I am Lucy Lewis, a partner in Lewis Silkin’s employment team. Whether you’re a seasoned listener or this is your first time, I am delighted you are here to join me for some fascinating conversations with innovators, business leaders and thought leaders exploring the longer-term trends and the immediate drivers shaping the world of work.

Today, we are going to look at some of the technological developments transforming our daily lives and our workplaces. We will start by looking at the legal and people-related challenges you need to think about, including the impact of new technologies on trust and on “good work”. And then we will discuss some of the practical steps you can take to make the most of these exciting technologies, whilst also ensuring that you are minimising risk. And finally, we will look at how you can navigate an increasingly complex and evolving regulatory landscape, not just in the UK but across the globe.

Our guests on today's podcast are brilliantly placed to speak to us about this. Alexander Milner-Smith and Bryony Long are two of my fellow partners at Lewis Silkin, and together they head up our Data, Privacy and Cyber Group.

So, welcome to the podcast Alex and Bryony!

Alexander Milner-Smith: Yeah, thanks very much, Lucy. I am very pleased to be back speaking with you, and even more pleased that Bryony is joining me. I am Alex Milner-Smith, Co-Head of the data team with Bryony. As many people will know, we have been a large data team for six or seven years now, and we cover lots and lots of workplace data, which is what we are talking about today. But we equally have deep specialisms in ad tech, data litigation and cyber. We are obviously not going to be able to talk about the future of emerging technologies without mentioning AI; it is something we have advised on a lot over the last 12 months and, frankly, will probably continue to advise on, Bryony, for the rest of our careers. With that, I will hand over to Bryony to introduce herself.

Bryony Long: Perfect. Thanks, Alex, and delighted to be joining everybody today. So, as Alex says, I am the other Co-Head of the data team. Unlike Lucy and Alex, I started out life as a technology lawyer; they come more from the workplace side. As a result, I have advised a lot not just on the data issues but also on some of the contractual issues when it comes to implementing AI. At the moment we are doing a huge amount around putting policies and procedures in place, and around strategy as well. So, we have lots of experience of seeing how businesses implement AI, both good and bad, and we are hoping to share a few nuggets with you today.

Emerging technologies – the Metaverse and ChatGPT

Lucy Lewis: Fantastic. I am really looking forward to you sharing that experience. We are living in a tech-enabled world, and we all know that we rely on technology in pretty much everything we do. For some of us, and I include myself in that, the really rapid pace of change can get to a point where it feels quite overwhelming. Alex, you talked about AI, and ChatGPT is quite a good example of the impact of technology, but for a lot of people it can feel like it slightly crept up out of the blue. So, I thought a good place to start would be zeroing in on some of those emerging technologies, the things you both think are likely to have a significant impact on the workplace. What are you seeing businesses introduce? Let’s start gently: talk us through some of the technologies you think we will be most familiar with.

Bryony Long: No, absolutely. I think the launch of ChatGPT back in November 2022 was a real game changer, and as a result we are seeing businesses, I would say nervously, adopt ChatGPT, or use similar technology to develop their own solutions which are effectively the same, to create knowledge and to help them draft precedents, articles and policies. They are also looking at technologies that help them summarise lengthy texts; we have certainly been using some of that. There is still very much a nervousness around it, though. People are aware that it can be helpful, but I don’t think there is 100% trust yet. Still, we are beginning to see people really come around to using these technologies, fearing, you know, that they might get left behind if they don’t.

On the metaverse, it has died a slight death, I think, now that AI has come to the foreground. But for a while we did see the likes of Meta introducing various workplace products which enabled people to meet and work in virtual worlds. That was very exciting and, I think, will come back; AI has rather taken over at the moment, but we will see more and more people using creative virtual solutions to work within as well. So, it is a really exciting time from a technology perspective, there are some really exciting things happening, and the clients we are working with very much want to try to deploy them. They have to do it in a way that is intelligent and thoughtful, and we are going to talk about some of those things shortly. But there is certainly a real shift in the tide towards working in a technologically savvy way.

Future workplace technologies – neurotech, biometrics, empathic and semantic AI

Lucy Lewis: Yeah, I definitely agree with that. That reflects what we are seeing: clients want to understand, they want to see what other people are doing and, as you say, they are slightly fearful of being left behind. And Alex, to avoid that sort of creeping-up issue, tell us about some of the things that we might be less familiar with but that are there on the horizon as part of the future.

Alexander Milner-Smith: That is a very interesting question, Lucy. I will start with things that are perhaps unusual and that we are not seeing too much of, and then end with a few examples of things that some clients and some of our colleagues are already talking about.

So, I will start with neurotech. Neurotech, essentially, is technology that allows people to see what is happening internally within our brains: what synaptic response is happening, what stress levels are being applied to the brain, which areas of the brain we are using. We already see a lot of this in the medical sector. But today we are talking about the workplace context, and the ICO, in its paper on neurotech, acknowledges there are potentially some use cases in the workplace, Lucy. A very obvious one: if you are an employer or engager whose employees drive heavy goods vehicles, you may wish to employ neurotech to check that people are not stressed, falling asleep or sleep deprived when they are operating those vehicles. That would have a health and safety element to it. Equally, if we push that further, we might want to assess whether our employees are operating at 70% capacity, 100% capacity or 20% capacity. And if I were to push it even further, there might be ways, as I understand it, for neurotech to be used to actually trigger even more efficient synaptic responses so that people are more efficient. I must say I am not suggesting any of this is being used right now, unless Bryony always wants me to work harder and has put some tech into my drink, but that hasn’t happened as yet. It is, though, something that will happen in the future.

Taking it one step closer to reality is the use of artificial intelligence to make empathic judgements about people. There is already an example, and indeed a European regulatory decision, in Hungary, where the Hungarian DPA criticised a bank for using empathic AI algorithms to assess how customers had felt during and after calls, for a very, very vanilla reason: namely, to decide whether they needed a call back to ask, ‘how are you feeling?’ and make sure customers are not lost. This is already happening now, and we will certainly see deployment of empathic artificial intelligence in the workplace, in recruitment and maybe during disciplinary processes. Certainly, outside the workplace, we will see it used in chatbots and so on. Then there are myriad different medical technologies. Bryony will touch later on some of the regulatory restrictions, certainly in the European Union, on some of this technology, but it is available now and it is being used.

Coming to some slightly more realistic things that are being used and will be used more in the future: biometric systems. I think we are all used to biometrics in the public square, namely real-time facial recognition to police crowds and demonstrations, and potentially at airports for immigration purposes. But we are seeing it touch more on the workplace. The basic question we are always discussing with our clients is whether they can use biometrics, such as fingerprints or facial recognition, for attendance monitoring. But we are also seeing it deployed occasionally in financial services contexts, to make sure that certain traders are not visiting trading floors they shouldn’t, for reasons of price-sensitive information. Equally, in the pharmaceutical industry, if there are clean rooms where someone has already gone through the UV or similar cleansing process, they are trapped to make sure they do not exit the specific clean area. There are many problems with that and, again, many regulatory constraints, but it is something we will see more and more.

And finally, and I could go on for hours on potential technology, Lucy, there is lots and lots of new tech within the monitoring sphere, and the one I want to pick out is AI algorithms being used for semantic exploration: looking at how people talk in emails or on Teams to work out whether someone is an attrition risk, whether someone is a competition risk or, to be slightly less cynical, whether someone is an absolutely loyal employee who will remain with you for 50 years being productive. All of this exists, and it will be used more and more. We are going to talk later about the regulatory constraints around it, the trust issues and the transparency, but it is all there, potentially ready for deployment in the workplace.

People-related and legal risks

Lucy Lewis: Thanks, Alex, and we’ll come on to some of those practical issues about what people are doing, because I can see it is a spectrum. But even on a spectrum, some of what you are talking about is really quite terrifying, and it is hard to see how people won’t perceive it as intrusive. We only have to look at the newspaper headlines to see that there is a perception that some of this is very intrusive. Perhaps we are being unreasonably led by the media, but nonetheless I think there is a perception within workplaces that there is a degree of over-intrusion in all of this technology. So I am really interested to hear from the two of you what you are saying to the businesses you advise about the people-related and legal risks, because they are tied up together to a certain extent, and how you can go about addressing them.

Alexander Milner-Smith: Yeah, thanks, Lucy. I’ll focus on some of the employment legal risks and Bryony will focus on the data risks. As usual, there are myriad employment risks we could talk about, but the big one raised most often in the press is the discrimination risk. Since it was one of the last things I mentioned, let’s use biometrics as an example. There is a very significant risk of discriminatory decisions or discriminatory processes coming into the world of biometrics. In fact, we have already seen various cases where companies have not done the due diligence they need to do on the biometric tech they are using and, as a result, have brought in biometric tech that does seemingly discriminate against certain ethnic minorities in the number of false positives it produces in recognition. I won’t go into detail on the particular providers, but some have thought about this very carefully and, for instance, have collaborated with the ICO to show that their technology has mitigated the risk as far as possible, to a point where it is just a statistical anomaly that, frankly, is probably far better than a human making the decision in, say, facial recognition. That is a key point: you can never absolutely remove the risk of discrimination against a protected characteristic, but there does come a point, statistically, where the technology is actually better than a human. If I were comparing a photo of you, Lucy, with a photo of allegedly you from your passport, I would probably make errors. The key thing is to understand that algorithms are not perfect out of the box, and you need to test, test, test: at the proof-of-concept phase, at the initial phase and, frankly, all the way through the life cycle of a project, to make sure it does not start developing biases six, 12, 18 or 24 months down the line. I think that is the absolute key thing.

Bryony Long: And building on that from a data perspective, the key issue is explainability and transparency. One of the most important things is that if you are using this sort of technology, employees need to be aware of it, they need to understand how it is used and you, importantly, need to understand how it is working. If you are using it to make decisions, you are going to have to be able to explain how those decisions were made, and it is incredibly complicated to explain any algorithm. So one of the key pieces of work a lot of our clients are doing when deploying these solutions is trying to get under the hood and work out exactly how the algorithms are operating, and it can be quite difficult to get that information from providers because, of course, they do not want to give away their crown jewels, so to speak. But it is incredibly important, particularly for decision-making AI tools in a workplace, that you are able to explain how a decision was arrived at. If, for whatever reason, that decision ends up being challenged by an employee, it is your burden as the employer to prove, for example, that you were not being discriminatory, if that is the allegation. So it is very important that you work to understand how those models process the data and make those decisions.

The other key thing we find ourselves advising on a lot is the lawful basis. As people may or may not be aware, when it comes to a lawful basis in the workplace, it is very difficult for employers to rely on consent, because consent under the GDPR has to be freely given, specific, informed and an unambiguous indication of someone’s wishes. A few of those are fine, but the issue for employers is ‘freely given’, because an employee will rarely feel free to refuse; they may well feel compelled to consent. So, when we are advising on data issues in the workplace, we always try to find another lawful basis to rely on, unless we absolutely have to rely on consent, in which case lots of things need to be put in place.

And then, as Alex has touched on, fairness is a principle under the GDPR. It is a principle we have obviously always had, but no one has ever really homed in on it. Any system you deploy that is not fair, that has potential bias or that could be inaccurate is going to face challenges. I won’t repeat what Alex has just said, but for all of those reasons it is also a challenge from a data perspective, even though it is very much an employment law issue as well.

The other thing is accuracy: ensuring that the machines are accurate, and working out how you do that, is always very challenging. There is a great case in the US, one where we always chuckle and wonder how anyone could have done it, in which a lawyer actually used ChatGPT to prepare a case and it turned out that all of the citations it produced were made up. That again is a challenge: none of the models we are looking at are there yet in terms of full accuracy. So, how can we be sure that the decisions they are making are the right ones?

And then there are issues around security, which I do not think are specific to AI; it is just that you are using so much more data than ever before, so you need to ensure that all of that data is kept secure. Finally, we always talk with our clients about accountability, and this is probably the key thing: being able to demonstrate how you have worked through all of these issues. So we spend a lot of time with clients preparing DPIAs, data protection impact assessments, where we sit down, go through all of the challenges and work out ways to mitigate them. Some of them, if I am honest with you, are not perfect sciences, and there is an element of risk in some of these systems because of the challenges around the data issues in particular. But documenting your thinking and taking all the steps you can to mitigate these risks is going to be crucial to successfully deploying these solutions.

Practical considerations for deploying workplace technologies

Lucy Lewis: Thank you both. That is a really helpful run-through of the considerations you need to have in mind when you are thinking of implementing this kind of technology, and I think it helps people to see that context: understanding all this emerging technology alongside the considerations that need to be applied. So, for those listening who are looking ahead and thinking, yes, of course we want to deploy more of this kind of tech within our workplace, what are the key practical steps they should be thinking through now?

Bryony Long: So, when you are thinking of using AI solutions in the workplace, the absolutely critical thing is that you take your employees on that journey with you; you need to help them see some of the efficiencies. I think it is fair to say a lot of people are still a little bit scared by the idea of using AI in the workplace; there can be a fear that their job might be replaced. One of the areas I have been advising on quite a lot recently is sports teams and clubs using AI to generate insight from player data. Players might not necessarily want that intrusive insight into how they perform, because it could be used against them when it comes to team selection, for example. So, there is certainly quite a bit of nervousness when you start talking about the use of AI in the workplace.

There is also a bit of nervousness around employers trusting their employees to use it in an appropriate way. We have heard stories, not from Lewis Silkin, I hasten to add, of masses of client data, for example, being inserted into ChatGPT, and then you do not know where it goes. It really can get quite scary if you do not have appropriate processes, procedures and dos and don’ts in place. So, for all of these reasons, it is very, very important that you are cognisant of how you deploy AI within the workplace, that you really get those employees on board, get them involved and get them trained up on how to use the systems, but that you also give them the assurance that these systems are not going to be replacing them. You might do that through questionnaires and, this is actually another good point, through understanding where the use cases are. Quite often, clients will come to me and say, we need to be using AI, or, this tool sounds really good, I want to use it. But then I often have to ask: what is the use case? What problem are you trying to solve with this AI? Simply saying you use AI might not generate that many more efficiencies for you if you are not actually trying to solve a problem or come up with a new idea. Getting employees to help you work out what problems need to be solved is also a really good way of making sure you are using AI in the most efficient manner. So, for all of those reasons, getting your employees on your AI journey as an organisation is, I think, absolutely critical.

Alexander Milner-Smith: Just picking up on something that Bryony mentioned there: the trust that employers and employees need to have in each other. In fact, Lucy, in the first podcast I did many moons ago, we talked a lot about respect for data privacy from employers, engagers and employees. To carry on that theme from Bryony, Edelman’s annual Trust Barometer is actually very interesting on this, and people can look at it for themselves. Points three and five are probably the most interesting. In point three, Edelman’s survey is very clear that business must lead on innovation because it is the most trusted institution, seen as 52 points more competent and 32 points more ethical than government. So business is in an extremely good place if it follows some of Bryony’s recommendations and brings its employees with it. And of course, Lucy, most of us are employees or workers of some stripe, so if our businesses bring us along into a responsible and transparent usage of AI then, not to get too ontological about it, that will flow down into society.

The fifth point is also very interesting. It says that when innovation is well managed, people are 12 points more likely to embrace AI and 17 points less likely to reject it. Now, that is a general point, but it applies in the workplace too. Bryony has already mentioned things like DPIAs and transparency, and there are lots and lots of technical things you have to do that we will probably talk about a little later, in terms of the EU AI Act and so on. But that is only one side of governance.

The other side is holistically bringing your employees with you. It is a trite thing to say, but all the work that Bryony and I help clients with, the voluminous governance work, can be wholly undermined, Lucy, by a poor rollout, a rushed rollout or a rollout that does not explain the purposes or the limits. And we are seeing more regulation as well; we will touch on that. But if you can get the general trust piece right, then you really go a long way. Good communication and having the trust of your employees can occasionally cover some errors that are made. This is new tech, so it is a very important piece.

UK, EU and global regulatory landscape

Lucy Lewis: Now, we have promised everybody a couple of times that we’ll get to regulation, and you talked a bit there, Alex, about the Edelman Trust Barometer. We talk about that quite a lot on the Future of Work Hub, and it is interesting that business is more trusted than government when it comes to innovation. But nonetheless, there is obviously a really important role for government in this, and there is a really important role for regulation. So, tell us where we are with regulation: what is the approach of the UK regulator, and how does that differ from the approach more globally?

Alexander Milner-Smith: I will take the UK position, Bryony will take the EU position, and I might also reference some rest-of-the-world elements. In summary, and people will have different views on this, the UK is taking a light-touch approach; some might say pro-innovation and business-friendly. Some might also say, as the ICO themselves do, that the underlying legislation to regulate this area already exists, namely, from an ICO perspective, the UK GDPR. From that perspective, they believe they have enough to cover proportionate use of personal data, transparency and having to do DPIAs where there is high-risk processing; in other words, they feel they already have the legislation needed to protect consumers and employees. Instead, what they have done is release various guidance notes, be it on generative AI, on artificial intelligence generally or on the use of artificial intelligence in the workplace. Equally, other bodies have published guidance: DSIT, the Department for Science, Innovation and Technology, has released a very useful paper on responsible use of AI in recruitment. No new legislation at all is required there, and yet most of the things that Bryony is going to cover from an EU perspective are, I would argue, already addressed by that approach from the ICO and the UK government. Equally, the FCA has released its own paper, the CMA, the Competition and Markets Authority, has released its own papers, and they are all working together to ensure there are no gaps.

Obviously, at the time of recording, 16 May 2024, I have to mention that a new government might come in later in the year, and I do not know what its position will be. It may be that we need legislation, and indeed the Trades Union Congress is calling for specific legislation around the use of AI in the workplace. So we will have to watch that space, Lucy, because those calls mirror calls in Europe, even with the EU AI Act. Just before I hand over to Bryony: last week I was in Singapore talking about the European, with a capital E, position, i.e. the EU and the UK, but there were speakers at the seminar from Thailand, Indonesia, Malaysia and Singapore, with Singapore obviously being an enormous player. None of them are actually doing any specific AI legislation; they are releasing a lot of guidance, and they are also taking the approach that their current legislation is enough to protect consumers, employees and so on. So the UK may be taking the wrong approach, and may come to regret it, but there is some logic to its view.

Bryony Long: As Alex says, the EU is taking a slightly different approach to the UK at the moment, doing effectively, I think, what it did when implementing the GDPR: putting in place a blueprint that I suspect, at some point, the rest of the world will have to follow in some way, shape or form. This is because the territorial scope of the EU AI Act is not limited to EU organisations; it will apply to anyone deploying AI in the EU. If you have particularly stringent requirements in one huge market, the likelihood is that most multinational companies will apply the set of rules they use for adoption of AI in the EU to all of their adoption of AI; otherwise, it becomes very difficult and time-consuming to manage. This is a bit like what has happened with the GDPR: yes, there are other laws around that are not quite as heavy-handed, but a lot of the multinational organisations we work with tend simply to apply GDPR principles to all of their data processing activities.

So, what does the EU AI Act say? I actually think it is quite a helpful piece of legislation, in that it is not super draconian; it is a set of dos and don’ts. It looks at different AI systems and their purposes and then categorises them by risk. If an AI system is deemed to pose an unacceptable risk, you cannot use it. If it is high risk, you can use it, but there are various obligations you need to comply with. Then there are the limited-risk and minimal-risk categories, where the obligations are much lighter: largely voluntary, or mainly around transparency. The key thing for the workplace is that most HR systems are going to fall within the definition of high risk, because they do pose a significant risk to the health, safety and fundamental rights of employees. For that reason, for the majority of solutions you will be deploying in the workplace, if the system is being used in the EU, you will need to be aware of the obligations for high-risk systems.

And as I say, I think this will be the blueprint going forwards for a lot of other jurisdictions, even if they do not want it to be, because most large organisations have that multi-jurisdictional nature and will therefore put in place principles that meet the highest level of regulation. The EU will probably be the furthest forward when it comes to regulating. There are other bits and pieces of legislation dotted around the world regulating certain elements. For example, there is a law in New York which prohibits the use of algorithmic decision-making in the workplace unless various obligations are complied with, obligations which are actually quite similar to some of those in the EU AI Act. There are also some executive orders in the US, but at the moment those are predominantly aimed at the public sector; for a private company, the obligations are mainly around providing information, and they are not very specific to you. So, at the moment, other than the EU and the few pieces of legislation dotted here and there, there is not heavy regulation, but I think it is on the way.

Priority actions for employers

Lucy Lewis: Thank you both. It has been really great to have you on and to run through both the emerging technology and the considerations for businesses, practical and legal. I am actually going to bring you back to the nearer term to end our conversation, because everyone listening will know that I always finish these conversations with the same question to my guests. And here it is for you: given everything we have been talking about today, what would you say are the two priority actions for employers and their HR teams, the things they should be doing now to prepare and build organisational resilience for the year ahead?

Alexander Milner-Smith: Yeah, that is a really good question. Mine is: don’t act in a silo. Anything to do with new tech, but especially AI, is multi-jurisdictional and multi-departmental. Let’s take a workplace example: a recruitment tool. HR and the recruitment teams might want to use a recruitment-sifting tool. Perfect; they will be the ones who know why the business needs to use it. But the people who will be instrumental in deploying it, and making sure it works with whatever other recruitment systems are in place, will probably be the product team or the infosec team. At the same time, sign-off will involve employment legal, plus the DPO, the Data Protection Officer, plus privacy legal. General compliance people will also have to be involved, and any company, even an SME, will probably have some kind of data governance team or board. So you do not just do things in isolation. That idea of cross-collaboration is very, very important, and I think it allows far better deployment of technology, both from a risk perspective and, to go back to what we talked about, from a trust perspective, because your employees, workers, freelancers and consultants will see an organisation taking compliance seriously, they will trust you more and they will go with the flow, as it were, of that technology.

Bryony Long: What Alex has just said is incredibly important and is probably the headline point. But the other thing I always say to clients is: be really, really clear on your use case. Why do you want to deploy this solution? Is there an alternative way of doing it that would net the same end result? Time and time again, I see clients wanting to use AI just because everyone else is using it, as opposed to actually having a real need to use it, and until you have a real need, I probably would not use it for the sake of it, because of all of the complications we have talked about. So that, for me, is number one. Number two is encouraging responsible use. It is so important that employees know what they are doing when it comes to AI, because at the moment AI is not going to replace the workforce; where it can go wrong, and where it will go wrong now, is if people do not use it appropriately. So you have got to have rules in place around how you use AI, and if you can put those in place and encourage people to use it responsibly, then you can use it in a way that really drives efficiencies, and it can actually be a very, very useful tool.

Lucy Lewis: Thank you both. It’s been a brilliant and thought-provoking conversation, and I have definitely taken away that you need to focus on the use case: fear of missing out is not a good enough reason and not a good enough use case.

That’s it for today’s conversation. I’ve been Lucy Lewis, and you have been listening to our “In Conversation” podcast. To listen to more conversations like this one, you can subscribe on our usual channels, and I look forward to your company next month, when we will explore the impact of the shifting expectations of employees, consumers and wider stakeholders when it comes to sustainability and responsible business.

If you would like to be part of our Future of Work Hub community, please go to our website, www.futureofworkhub.info and get in touch with us. We would love to hear how you are navigating these issues. So, until next time, goodbye!
