
Artificial Intelligence in Digital Forensics: a force for good or bad?

March 20 2024

Over the past year, Artificial Intelligence (AI) has dominated headlines as the technology rapidly evolves. It's undeniable that AI will eventually impact the legal system and, in the context of digital forensics casework, the question arises: is AI a force for good or bad?

Technology for Good

Firstly, the scale of the challenge facing law enforcement is monumental. The number of devices being seized, and the storage capacity of those devices, means the volume of data requiring review is overwhelming. AI features integrated into digital forensics software aim to assist by automatically classifying image content and text, facilitating the identification of potential illegal activity. For example, these tools can discern whether an image contains a child, a car or another object of interest, or whether a text conversation indicates grooming.
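
For readers curious what this classification looks like under the hood, here is a minimal sketch using a generic pretrained image classifier from the open-source torchvision library. The model choice, example file name and category mapping are illustrative assumptions only; commercial forensic suites rely on their own proprietary models trained on case-relevant categories.

```python
# Minimal sketch of automated image triage (illustration only).
# Uses a generic pretrained ImageNet classifier; real forensic
# tools use proprietary models and category sets.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()
preprocess = weights.transforms()       # resizing/normalisation pipeline
labels = weights.meta["categories"]     # human-readable class names

def classify(path: str) -> tuple[str, float]:
    """Return the top predicted label and its confidence for one image."""
    img = Image.open(path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)
    conf, idx = probs.max(dim=1)
    return labels[idx.item()], conf.item()

# A triage tool would map predicted labels to categories of interest
# (e.g. "person", "car") and queue flagged images for human review.
label, confidence = classify("exhibit_001.jpg")  # hypothetical file
print(f"{label}: {confidence:.2f}")
```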

The risk, of course, is that tools designed to assist become the investigative standard and their output is trusted, whilst no-one considers what may have been missed or misidentified.

It is important to understand the purpose behind these tools. In the example of identifying images containing children, is the tool built to avoid false negatives (i.e. never miss a potential child) or false positives (i.e. never flag an image as containing a child unless it is certain)?

The law on indecent images of children requires the subject to be definitely under the age of 18 (potentially discounting images of a child who may look older), whereas for the purposes of child protection we would never want to miss a potential victim. This creates a conflict as to where the tool should be tuned. There must always be a human reviewer in the chain to check the output of such tools (and ultimately the jury, if agreement cannot be reached); Keith Borer Consultants specialises in providing this review.
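
To make that tuning conflict concrete, the short sketch below uses invented confidence scores and ground truth to show how moving a single decision threshold trades missed children (false negatives) against wrongly flagged images (false positives):

```python
# Illustration of the tuning conflict; scores and labels are invented.
scores = [0.10, 0.35, 0.55, 0.62, 0.80, 0.95]       # model's "child" confidence
is_child = [False, False, True, False, True, True]  # ground truth

def count_errors(threshold: float) -> tuple[int, int]:
    """Return (false_negatives, false_positives) at a given threshold."""
    fn = sum(1 for s, c in zip(scores, is_child) if c and s < threshold)
    fp = sum(1 for s, c in zip(scores, is_child) if not c and s >= threshold)
    return fn, fp

for t in (0.3, 0.5, 0.7, 0.9):
    fn, fp = count_errors(t)
    print(f"threshold {t:.1f}: {fn} missed children, {fp} wrongly flagged")

# A low threshold rarely misses a child but buries reviewers in false
# positives; a high threshold does the reverse. No setting removes the
# need for a human reviewer in the chain.
```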

Technology for Bad

Whilst AI can assist law enforcement in identifying relevant material, it also provides an opportunity to use those same tools for bad.

For example, AI's ability to generate realistic images raises concerns, particularly in relation to the production of child abuse material. Fortunately, the UK is already well placed to deal with such material from a legal perspective. Faked child abuse imagery may fall under the Protection of Children Act 1978, which defines a pseudo-photograph as an image “made by computer-graphics or otherwise howsoever, which appears to be a photograph”, or under the Coroners and Justice Act 2009, whose offence of possessing prohibited images of children is intended to capture non-photographic images of children.

There is also the issue of faked ‘revenge pornography’ depicting a non-consenting adult. This appeared to fall outside the definition in the Criminal Justice and Courts Act 2015; a gap which has only recently been plugged by the Online Safety Act 2023, which widens the definition of an image to one ‘whether made or altered by computer graphics or in any other way’, effectively future-proofing the legislation.

There are, however, aspects of AI that the law may not be ready for. For example, there are fears that ‘deep fakes’ could influence elections and undermine our democracy, examples of which have been seen in the Ukrainian conflict (such as a video of President Zelenskyy ‘surrendering’).

The risks of AI extend beyond imagery. Realistic ‘chat bots’ can be used to hold conversations with potentially thousands of children at once. If it is the chat bot that incites a child to perform certain acts, what legal responsibility does the owner of the chat bot have?

This raises the important legal question: who is responsible if AI does something malicious that you didn’t ask it to do? Some theoretical examples include:

  • Using AI to generate lawful pornography, but it then produces an image of a child
  • Using a chat bot, which inadvertently has a sexual conversation with a child
  • Playing a game with AI which starts a thermonuclear war (showing my age, but the movie WarGames predicted the issues of AI back in the 80s!)

In summary, the current generation of “Artificial Intelligence” is sure to bring both opportunities and challenges to digital forensics, and Keith Borer Consultants is here to help you deal with the fallout. As AI continues to evolve, striking the balance between harnessing its benefits and managing its risks becomes paramount.

Author

Ross Donnelly
BSc (Hons), CFCE, CAWFE, ICMDE
