06 Nov 2019

Chatbot helping victims of cybercrime

An interview with Tim Deeson, CEO of GreenShoot Labs, which has partnered with the Cyber Helpline to create a new chatbot aimed at helping everyday users deal with cybercrime.

By Tom Dent-Spargo

Image: Chesky / Shutterstock.com

 

RLJ: Can you start with an overview of the chatbot and how it works?

Tim Deeson: We were approached by the Cyber Helpline, a non-profit that wants to help victims of cybercrime. If you're a private individual at home who falls victim to cybercrime, there aren't really many resources available to you: the police don't really have the expertise or capacity to help, central government doesn't provide many resources, and professional cybersecurity experts are expensive. What the team at the Cyber Helpline have done is leverage their expertise and knowledge, but in a way that is scalable. So the issue is how to help victims at a relatively low cost per engagement, in a scalable way, so that you have the benefit of a platform that can have an impact on lots of people. We worked with them from the product ideation and innovation stage, asking: what's the challenge, what's the opportunity, what assets and resources do we have? From there we developed the chatbot as a product that allows victims to get help in that way.

Obviously most people don't have a good understanding of the ins and outs of cybersecurity, so it's designed to let victims describe the attack and the symptoms they've experienced in their own words, using normal everyday terminology. We use natural language processing in our platform, an open source framework called OpenDialog, to process the user's intent, and we use knowledge mapping and different kinds of AI techniques to provide an expert system that can diagnose what the attack might be and then provide support and recommendations for mitigation or a solution.
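To make the idea concrete, here is a minimal Python sketch of intent classification over an everyday-language description. It is not the OpenDialog implementation; the training examples, category names and model choice are all assumptions made for illustration.

```python
# Minimal sketch of mapping an everyday description to a likely attack
# type. This is NOT how the Cyber Helpline chatbot works internally; it
# only illustrates the general idea of intent classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: plain-English symptoms -> attack category.
examples = [
    ("all my files are locked and a message demands payment", "ransomware"),
    ("an email from my bank asked me to confirm my password", "phishing"),
    ("someone says they filmed me on my webcam and wants money", "webcam blackmail"),
    ("my ex keeps getting into my accounts and knows where I am", "stalking"),
]
texts, labels = zip(*examples)

# TF-IDF features plus logistic regression: a very simple intent classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The victim describes the problem in their own words.
print(model.predict(["a popup says my photos are encrypted until I pay up"]))
```

A production system would layer conversation flow, knowledge mapping and far more training data on top of this kind of classification step.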

RLJ: Making sure it’s accessible, using everyday words instead of jargon?

TD: Exactly. If they were experts in cybersecurity and knew how to describe the details of an attack, that would be half the problem solved and they probably wouldn't need this help. It was important that it was available to people who aren't computer experts, and that it provides them with appropriate support.

RLJ: What's the process for a user?

TD: You would have found out about the Cyber Helpline through one of a number of channels – I know they're speaking to the police and the Home Office at the moment about how different forces and services could take advantage of it – or you would have been referred, or found it through Google. You'd land on the Cyber Helpline site and be able to start a discussion with the chatbot, where you simply describe the issues and the problems. Depending on how specific you can be, and whether it can diagnose the issue from what you've said, it will either make some recommendations, recommend some resources or, in some cases, bring in one of their volunteers who are available. It's a combination of approaches for how they can best use their resources to help people.
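That triage flow (diagnose when confident, suggest resources when partly confident, otherwise hand over to a volunteer) could be outlined as below. The thresholds, function names and messages are hypothetical, not taken from the Cyber Helpline system.

```python
# Sketch of the triage flow described above; all names, thresholds and
# messages are hypothetical.

def diagnose(description: str) -> tuple[str, float]:
    """Stub standing in for the NLP diagnosis step (see earlier sketch)."""
    if "password" in description:
        return "phishing", 0.9
    return "unknown", 0.2

def triage(description: str) -> str:
    attack, confidence = diagnose(description)
    if confidence >= 0.8:   # confident diagnosis: give direct recommendations
        return f"This looks like {attack}. Recommended next steps: ..."
    if confidence >= 0.5:   # partial match: point to relevant resources
        return f"This may be {attack}. These resources might help: ..."
    # Low confidence: fall back to the human volunteers.
    return "Let's connect you with a volunteer who can take a closer look."

print(triage("an email asked me to confirm my password"))
```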

RLJ: What sort of cybercrime does it cover?

TD: You have the automated attacks, but what they also see is a crossover with victims of stalking or people who have been targeted specifically. So it looks across the whole spectrum: things like ransomware and some of the mass-scale webcam blackmail and related attacks, and then the more typical phishing attacks and those aimed at fraud. It's really designed to be as holistic as possible, because often part of the challenge is that people just don't know what's going on or what's happened, let alone how to categorise and understand it. Even understanding some of the possibilities of what could have happened to you, and what's happening underneath, is that first point of contact to help people understand the context of the situation they're in.

RLJ: How are you working with the police and Home Office to signpost it?

TD: From their point of view, they recognise that they have a real challenge: they don't have the resources, expertise or capacity to support people in this situation, so they're interested in understanding what kinds of solutions are available to help them provide better support. I know there's some ongoing testing in parts of central government, and the national cybersecurity teams have been looking at different ways they can utilise it or incorporate it into their work.

RLJ: What's the current stage of the chatbot, and what has the reaction been so far?

TD: It's been in early beta testing since the beginning of the year, and there was a formal launch in early June; it can slowly evolve and improve over time. We've had lots of people using it, and the team can train the system to improve its diagnoses: the more people use it, the more the team can train it on attacks it hasn't been able to diagnose, or attacks it could have diagnosed more quickly, so there's a training model that improves it over time.
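In outline, the improve-over-time loop he describes (log each case, let the team correct wrong diagnoses, then retrain) might look like the following sketch; the data structure and function names are assumptions, not the actual training pipeline.

```python
# Outline of a feedback-driven training loop: store each conversation's
# description and predicted diagnosis, let the team add the correct label
# on review, then refit the classifier. All names are hypothetical.
attack_database = []

def log_case(description: str, predicted: str) -> None:
    attack_database.append(
        {"text": description, "predicted": predicted, "label": None}
    )

def retrain(model) -> None:
    """Refit the classifier on cases the team has reviewed and labelled."""
    reviewed = [c for c in attack_database if c["label"] is not None]
    if reviewed:
        model.fit([c["text"] for c in reviewed],
                  [c["label"] for c in reviewed])
```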

The feedback has been that it's effective; I think it's been correct in about 80% of uses, which is pretty high for this kind of system at an early stage. I haven't heard any negative feedback so far. At the moment it's relatively simple, so the roadmap is to improve the diagnostic accuracy, but also the amount of in-depth support it can provide: it's great in a way to say what's happened, but the victim needs to know more about what to do next, so that's the part where it can become increasingly sophisticated and provide more resources.

RLJ: How do people feel about talking to a robot?

TD: We're really clear that it is a robot. At GreenShoot Labs, the company that built it and the company I work for, our strategy is that it's really important not to pretend to be a human: people are fine dealing with humans and people are fine dealing with bots, but what they don't like is when they think they're dealing with a human and it turns out to be a bot. The platform is really clear in letting you know you're dealing with a bot, and that it has limitations on what it can help you with and what it can do. People are happy to have free, 24/7, instant advice; they're willing to make the tradeoff of dealing with a bot to have a problem solved quickly that they wouldn't otherwise be able to solve. Consumers are kind of happy to take that as a bargain.

 

If you look at it from a user-centred point of view, people often have a problem or a need that they want resolved, and what they really want is for it to be done in a timely fashion, to a high quality, in a way that's not frustrating. I think people are fairly relaxed about the actual methods used to do that. The feedback we have on the solutions we implement is that as long as you're clear with people that they're dealing with a bot, and as long as the bot is effectively and intelligently implemented, people can resolve their problem more quickly and at a time that's convenient to them, without having to wait for customer support or sales. There aren't any inherent downsides to using a chatbot.

RLJ: What data does it take from the caller?

TD: We don't take any personally identifying information. We take long-form attack descriptions, which are effectively a long description of what the user has told us. We use that text to run the analysis, and it's then held in the attack database so that we can go back, see what conclusions the system came to, and improve it over time if the conclusion wasn't right, or reinforce its accuracy if it was.
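A record in that attack database might look roughly like the sketch below. The schema is an assumption for illustration; the point from the interview is that nothing stored identifies the person.

```python
# Sketch of the kind of record described above: the long-form description
# plus the system's conclusion, with no personally identifying fields.
import uuid
from datetime import datetime, timezone

def store_case(description: str, diagnosis: str, confidence: float) -> dict:
    return {
        "case_id": str(uuid.uuid4()),  # random ID, not tied to a person
        "received_at": datetime.now(timezone.utc).isoformat(),
        "description": description,    # the user's own words
        "diagnosis": diagnosis,        # the conclusion the system reached
        "confidence": confidence,      # reviewed later to improve accuracy
    }
```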

RLJ: How are you protecting the data collected by the chatbot?

TD: We're taking a fairly standard belt-and-braces approach to security. We use a hardened cloud stack running on industry-standard infrastructure with the normal security measures you would expect. From the perspective of a potential hostile agent, because we don't hold any personally identifying information in the long-form attack descriptions, it's probably not a huge target; it doesn't specifically benefit any kind of negative actor. But of course, when dealing with cybercrime, the fact that the system exists potentially makes it a route to attack and provides another vector for attacking victims, so it's something we're conscious of from a security perspective. We're careful about the kind of data we're storing and who has access to it. As I said, it's not a particularly viable target for an attack, but as soon as something is out there it can potentially be targeted.

RLJ: What's next?

TD: The Cyber Helpline are a young, exciting organisation. They're looking at different routes to scale the product and the organisation itself, and also looking for partners in the space, whether that's commercial partners or public sector bodies already providing support to victims. So it's about increasing the sophistication and finding ways that other organisations can incorporate it into the tools they use; those are their big priorities.

 

