Interview: Luke Geoghegan at BASW on AI in social work

Artificial intelligence is already being used in the provision of social work and social care. Luke Geoghegan, Head of Policy and Research at the British Association of Social Workers (BASW), tells us what that means for the sector — and shares his concerns.

This interview was conducted for our new, free special report: People First With AI and Tech-Enabled Care. This is the full version:

Hello Luke. How much is AI already used in social work and what is it used to do?

I think it helps to make a distinction between formal and informal use of AI. Formal use is where an employer says to their staff, ‘Here’s an AI package, please use it.’ The transcription program Magic Notes is a good example of that. From what I understand, a hundred local authorities are now paying for Magic Notes for their social workers to use.

The informal side is where a social worker uses AI but it’s not formally sanctioned by their employer or the local authority. That’s not to say they’ve been told not to use it, they’re just doing it themselves. That can range from the very light touch — you Google something, get the AI synthesis at the top and use that — through to something more involved like getting ChatGPT to write you an essay or report. Obviously, nobody knows how many people are already making use of AI in that informal kind of way.

Luke Geoghegan, Head of Policy and Research at the British Association of Social Workers (BASW)

Is that an issue for social work in particular?

Potentially. One of the things we keep saying to people in our sector is that AI makes mistakes. It hallucinates. It can learn from material that is biased or simply wrong. If you’re going to use this technology, you have to critically assess what it produces.

What does that involve?

Let me give you an example. A lot of our work is done by email. When I’m emailing something important, I always read over what I’ve typed before I send it. In the past few months, I’ve noticed that autocorrect has become a lot more assertive, changing what I’ve typed so quickly that I don’t notice it happening. I think they must have changed the AI technology behind autocorrect to make it quicker and more intuitive. But I’m now finding, several times a week, that it puts in words that I didn’t actually type.

Sometimes the result is that the email isn’t as clear as it should be, or it doesn’t make sense. Sometimes it changes the whole meaning of what I said: ‘We must not do X’ becomes ‘We must…’ That could be very serious if you’re working on case notes. There’s a risk that the AI could change or overturn a whole recommendation.

The result is that I have to read over my emails even more carefully than I used to, before I send them.

And the same is true of case notes or reports where AI is involved.

There’s definitely value in transcription, because our work is bureaucratically heavy. I’m just a bit suspicious of the auto-generated summary. I can see that, across a range of professions, as AI gets better it will provide more sophisticated analysis. From what I’ve seen, we’re not there yet. Then there’s what gets produced as a result. AI can hallucinate and get things wrong, but it’s also not very discriminating.

When you write up this conversation with me, you’ll edit out my ‘ums’ and ‘ahs’, and any repetition or digressions. You’ll keep what you judge to be pertinent and you’ll cut the rest. And that’s what we all do, all the time, as thinking human beings: we edit the information we receive to make it manageable and provide focus.

But an automated system transcribes everything. You can be on a Teams chat and the AI helpfully produces a series of notes for you all at the end. You get pages and pages! Who has the time to wade through all of that? My worry is that it’s too easy to think, ‘Well, I’m so busy today, I’ll just accept the recommendations at the top…’

This is all part of a bigger issue. As social work professionals — like journalists, doctors, lawyers or whoever else — our job is to assemble the evidence, make sure it’s from reliable sources, critically assess it and then reach a conclusion. That’s a skill, which takes time and experience to develop. Often, you learn it by taking notes yourself and then writing them up — that’s where the thought process occurs.

Are you saying that technology doing the job for us means we miss out on the learning?

I think the danger is that you can lose the skills of critical assessment, that you just become a consumer. AI should enhance, not replace, critical thinking in our work.

I saw a good example of exactly that kind of ‘enhancing’ effect not long ago. I’m hard of hearing and use sophisticated hearing aids that rely on AI. When I went in to have them serviced, the audiologist — a very skilled professional — asked me, ‘Have you spent a lot of time in the car recently?’ I was amazed. How on earth could she know?

So I asked, and she explained that the AI data showed that I’d been having conversations with people located directly behind me. ‘In my experience,’ she said, ‘that only happens for long periods when you’re in a car.’ The AI provided the pattern of decontextualised data but the audiologist used her experience to critically assess it and interpret the situation — and she was absolutely right. As I say: amazing.

There are some amazing examples of what this technology can do — such as the sensor technology we cite in our report, about the mother and daughter whose relationship was improved by installing discreet sensors in the home.

There’s certainly a lot of promise in that kind of thing. I’m still a little cautious about it, having spoken to people who’ve not had quite the same experience and had lots of false positives. There was the classic example of a carer making the bed and throwing the duvet and sheets on the floor. The motion sensor picked that up and reported it as a fall — and someone then had to go out and check it, and switch off the alert.

Your example shows how people use technological solutions to alleviate anxiety — it worked for both the mother needing support and for her daughter. We see that quite a lot: people install sensors or cameras, or those video doorbells, so they can keep an eye on what’s going on. There are obviously issues of privacy involved and there’s also the issue that some of these devices weren’t designed with social care in mind. In fact, people often don’t use technology in the way that its designers intended — which is not always bad.

How can misusing technology be good?

It’s more that it gets used in an unexpected way. When online shopping first became a thing about 20 years ago, it had a big impact on social work. You could do the shop for your elderly mother and have it delivered to her door, instead of paying for a home help to do that. Back then, it was mind-bending to learn that somebody in Australia did the weekly shop for their mum in the UK. Now, we take that sort of thing for granted.

That’s an example of technology as self-empowering: people use it to help themselves.

Yes, and I think the big issue is with people who can’t do that. The jargon term is ‘digital capability.’ The majority of people that social workers deal with are the most economically poor, and that tends to go in tandem with lower levels of digital capability and confidence with technology. Some people simply don’t have technology like smartphones or a computer. Some people don’t have the skills to use them. And some people don’t have the technology or the skills.

I’ve seen the effect of that first hand. My mother, who has never had a laptop or tablet, now has dementia. I spoke to someone about what we could do to help her. They suggested this sort of sophisticated alarm clock that can give verbal prompts: ‘It’s nine o’clock, take your medication.’ Great, I thought, that’s just what we could do with. But this device only works if there’s Wi-Fi, which my mother doesn’t have. It’s a very clever bit of kit, apparently, but completely useless for her.

Then there’s the issue I had with my local GP surgery, which helpfully sent me a text with details of my next appointment in a few months’ time. The details were in a letter included as an attachment — but it wouldn’t open on my phone. I regularly go to the surgery through my work, so when I was next there I asked them to print out the letter. They said they’d need to see photo ID, which I didn’t have — although I’m on the practice list there. I had my phone, I could show them the text they had sent, but that wasn’t enough. They said, ‘Have you got the NHS app on your phone? We could do it through that.’ I did, but not the latest version so I couldn’t get into it.

It was very frustrating and took a lot of time — and I’m pretty good with technology. What if that was some more vulnerable person, and about something more important than the date of their next appointment?

So you’re sceptical about technology.

A bit of scepticism is healthy! That’s not to say the technology doesn’t do what its makers claim — you’re right that it can do amazing things. I can see it had real benefit in your example. I also have a lot of sympathy with local authorities or social care managers who want to make things easier for staff.

But I’ve seen a lot of technology come and go in this line of work, all of it pushed very enthusiastically by the companies who developed it. I think that, just like in dealing with a social work case, we have to critically assess what’s in front of us. It might be the right product in principle, but is it right for us? What are the real costs, down the line? That’s not just the capital cost of buying the software and devices. We also have to train staff to use it and provide ongoing support. There are revenue costs and the issue of lifespan.

A lot of this stuff is free to begin with, to lure you in. Then, when you’re dependent on using it, you’re bombarded with adverts, or need to pay a subscription. Or you find your data has been harvested — that’s a big issue when we’re dealing with highly sensitive personal information in social work.

Software needs updates and upgrading — and that might involve cost. Then there’s obsolescence. What happens if the product you depend on is discontinued? You’ve probably seen the adverts on TV recently: ‘My telecare service is going to be stopped, but I called this number and they were really helpful…’ That assumes that the people receiving telecare are capable of making that call. Someone has to scramble round to sort it out.

There’s a risk that technology designed to save time and money actually creates work for someone.

But it’s more than that. In social work, we’re making decisions and interventions that directly affect people’s lives. We have to do that with careful thought and good evidence.

The question, then, is whether AI aids and enhances that process — or circumvents it…

That’s what I worry about. But AI is here and it’s being used more and more. I think we need to be up front about how we use it. If you’re going to let AI make a decision, don’t pretend there’s a human professional behind it. Be explicit: ‘The algorithm says…’

That’s what’s so important: we need to be honest. We need to think critically and ask questions. And we need to keep having this conversation.

Further reading:

The special report People First With AI and Tech-Enabled Care has been produced by Social Care Today and Healthcare Management (HM), in association with the British Association of Social Workers (BASW), Digital Care Hub and Vantage.

In related news:

Interview: Trialling TEC-enabled care in homes across Devon

Interview: Reducing falls in care homes with Earzz acoustic monitoring

Interview: AR app ‘Dorothy’ helps people living with dementia

Simon Guerrier
Writer and journalist for Infotec, Social Care Today and Air Quality News