
Software developed to block explicit content while it is filmed or streamed

SafeToNet says its software, which uses artificial intelligence, could help prevent grooming, sextortion and bullying of children.

A child using an Apple iPhone smartphone (Peter Byrne/PA)

Technology is being developed that can block sexual or violent content as it is being filmed, shared or livestreamed, which could help safeguard hundreds of thousands of children.

A British start-up is using live-threat detection software, powered by artificial intelligence, to identify potentially harmful material as it is filmed or shared in real time.

It could be used on children’s phones to prevent them from creating, sending or receiving video or pictures involving nudity, sexual content and violence “before any damage is done”.

This is seen as key to safeguarding, given that 29% of the child sexual abuse content acted on last year by the Internet Watch Foundation (IWF) was self-generated, a proportion that is rising steeply.

Social media companies could use the technology to help prevent graphic content being uploaded and to interrupt livestreams, protecting users and minimising the exposure of moderators to potentially traumatising material, SafeToNet believes.

The company has already produced a device using similar AI, which detects typing patterns on a phone’s keyboard to prevent sexting, bullying and other abuse.

This technology flagged up girls as young as nine who were being sent explicit texts during the coronavirus lockdown.

Chief executive Richard Pursey told the PA news agency the new technology, SafeToWatch, could help prevent grooming, sextortion and bullying, citing the example of a fight between two schoolchildren being filmed and shared widely at the expense of the perceived “loser”.

The father-of-four from Kensington, west London, said: “A phone is the most dangerous weapon known to man as far as I’m concerned, because you can do anything you like – talk to anybody you like, look at anything you like, hear anything you like, share anything you like. And it’s in an ungoverned, unregulated world.”

The AI technology runs twice as fast as the average smartphone camera and analyses video content frame by frame to assess its risk, hashing or greying high-risk imagery.

It can also detect content including anime and cartoons, and is being trained to identify gore, weaponry and extreme violence.
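As an illustration only, a frame-by-frame filter of the kind described might look like the Python sketch below. SafeToNet has not published its implementation; the OpenCV-based loop, the classify_frame function and the RISK_THRESHOLD value are all assumptions made here for clarity.

```python
# Minimal sketch of frame-by-frame risk filtering, assuming an OpenCV pipeline.
# SafeToNet's actual model and thresholds are not public; classify_frame and
# RISK_THRESHOLD are hypothetical placeholders.
import cv2

RISK_THRESHOLD = 0.8  # assumed cut-off above which a frame is obscured


def classify_frame(frame) -> float:
    """Return a risk score in [0, 1] for a single frame.

    Placeholder: a real system would run model inference here.
    """
    return 0.0


def filter_stream(source=0):
    capture = cv2.VideoCapture(source)  # default camera
    while capture.isOpened():
        ok, frame = capture.read()
        if not ok:
            break
        if classify_frame(frame) >= RISK_THRESHOLD:
            # Grey out the frame before it can be displayed, saved or shared
            frame = cv2.GaussianBlur(frame, (51, 51), 0)
        cv2.imshow("filtered", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to stop
            break
    capture.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    filter_stream()
```

Running the classifier faster than the camera’s frame rate, as the article describes, is what would allow a filter of this kind to act before a risky frame is ever written to disk or transmitted.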

Television broadcasters could employ the AI to block out potentially violent or unforeseen content on camera while correspondents are reporting live.

What happens once threats are detected is yet to be finalised, but options could include remotely locking the device for a period of time, blocking the app being used for filming or sharing, or blocking the phone’s camera.
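Purely as a sketch of how those yet-to-be-finalised options might be wired together, the Python below maps a risk score to an escalating response. None of these names or thresholds come from SafeToNet; they are invented here for illustration.

```python
# Hypothetical mapping from a detected risk score to the response options the
# article lists; SafeToNet has not finalised this behaviour, and the enum
# names and thresholds here are assumptions.
from enum import Enum, auto
from typing import Optional


class Response(Enum):
    LOCK_DEVICE = auto()   # remotely lock the device for a period of time
    BLOCK_APP = auto()     # block the app being used for filming or sharing
    BLOCK_CAMERA = auto()  # block the phone's camera


def choose_response(risk: float) -> Optional[Response]:
    """Escalate the response as the assessed risk rises (thresholds assumed)."""
    if risk >= 0.95:
        return Response.LOCK_DEVICE
    if risk >= 0.90:
        return Response.BLOCK_APP
    if risk >= 0.80:
        return Response.BLOCK_CAMERA
    return None  # below threshold: no intervention
```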

The technology could also be used by video-conferencing companies to filter livestreams.

SafeToNet founder Richard Pursey (SafeToNet/PA)

“There’s no point you being told that yesterday your 12-year-old son sent a naked picture of himself, because it’s too late you know, you’ve squeezed that tube of toothpaste, the paste has come out and you can’t put it back in again,” Mr Pursey said.

“The ability to detect, to analyse live-video capture in the moment, and then do something if a risk is detected, for us that’s pioneering – we don’t know of another company in the world that’s done that.”

The AI successfully detected 92% of content involving nudity and 84% of violent examples during initial analysis of millions of images and videos, and accuracy rates are likely to improve as training continues.

In November, 2,000 families will start testing the software, which is expected to be ready for release by mid-2021.

Fred Langford, IWF deputy chief executive, said there is a huge and rising amount of self-generated content by children online and that such software should “absolutely” be pre-installed on all devices for children at point of sale.

He told PA: “Everything is moving towards end-devices, and this piece of software has positioned itself in the perfect place.

“From what I saw in the demonstrations, it would absolutely stop anyone from being able to view potentially illegal content on their phone and also to take those pictures and upload them anywhere else.

“And the flip side is people could run it on the other side to measure what people are doing as far as uploading content.”

Mr Langford said it could also help minimise exposure of moderators to content likely to have a traumatic impact on their mental health.
