AI for Detecting Extremist Content

Our Explainable AI (XAI) Platform provides an adaptable approach to detecting and classifying violent or extremist content.

Our Explainable AI Platform provides high-throughput, scalable, explainable predictive analytics, making it an ideal tool for homeland security services and content distributors to detect and classify extremist, terrorist, or violent content.

This specialised AI deployment was developed for the UK Home Office to detect Daesh propaganda and to classify Daesh imagery against a set of dimensions with high precision.

Key Features

Scalable

Our Explainable AI Platform is built for scalability. Every model and algorithm deployed into the platform is containerised and individually scalable. Queuing and constraint theory allow us to dynamically scale models at every step of a prediction pipeline, based on the performance and throughput required of each model.
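
As a rough illustration of the queuing-theory idea (a hypothetical sketch, not the platform's actual autoscaler), the replica count for a single pipeline step can be derived from its arrival rate and mean service time using the offered-load approximation:

```python
import math

def required_replicas(arrival_rate: float,
                      mean_service_time: float,
                      target_utilisation: float = 0.7) -> int:
    """Estimate how many containers one pipeline step needs.

    Offered load (in erlangs) = arrival rate x mean service time;
    dividing by the target utilisation keeps headroom per container.
    """
    offered_load = arrival_rate * mean_service_time
    return max(1, math.ceil(offered_load / target_utilisation))

# Example: 40 predictions/s arriving at a model that takes 0.3 s each.
print(required_replicas(40, 0.3))  # -> 18 containers
```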

Explainable

AI should not be a black box. To engender trust, it is imperative that consumers of machine learning predictive analytics understand how the AI reached its decision. Our Explainable AI Platform is designed for transparency and ethical decision-making.
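
The platform's internal explanation methods are not detailed here, but occlusion saliency is a minimal, model-agnostic sketch of the general idea: mask each region of an input and record how much the classifier's confidence drops. The `predict` callable below is a hypothetical stand-in that returns a single class probability.

```python
import numpy as np

def occlusion_saliency(predict, image: np.ndarray, patch: int = 16) -> np.ndarray:
    """Score each patch of `image` by how much occluding it
    reduces the model's confidence in its prediction."""
    baseline = predict(image)
    h, w = image.shape[:2]
    saliency = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0  # black out one patch
            saliency[i // patch, j // patch] = baseline - predict(masked)
    return saliency  # high values mark regions that drove the decision
```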

Evolvable

Evolvability is baked into the architecture as a first-class concern, providing a high degree of customisation and extensibility. This allows the predictive model architecture to evolve as the problem space changes over time.
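
One concrete reading of evolvability is a prediction pipeline built from interchangeable stages, so an individual model can be replaced or re-ordered without rebuilding the rest of the pipeline. The sketch below is hypothetical, not the platform's actual API:

```python
from typing import Any, Callable, List

Stage = Callable[[Any], Any]  # one model or transform in the pipeline

class Pipeline:
    """A prediction pipeline assembled from swappable stages."""

    def __init__(self, stages: List[Stage]):
        self.stages = list(stages)

    def replace(self, index: int, stage: Stage) -> None:
        self.stages[index] = stage  # swap in an updated model in place

    def run(self, item: Any) -> Any:
        for stage in self.stages:
            item = stage(item)
        return item
```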

Deployable

Our Explainable AI Platform can be deployed in the cloud or on-premises, either as a fully managed SaaS or as a self-managed enterprise deployment in your own infrastructure.

Analytics Dashboard

Our Explainable AI Platform includes a rich analytics dashboard that lets users and moderators analyse throughput and visualise the classification fingerprints of the media it has processed.
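
As an illustration (with hypothetical dimension names), a classification fingerprint can be thought of as the vector of per-dimension scores a piece of media receives, which the dashboard then visualises:

```python
# Hypothetical per-dimension scores for one processed image.
fingerprint = {
    "violent_imagery": 0.91,
    "extremist_insignia": 0.84,
    "propaganda_text": 0.12,
}

# A moderator view might flag any dimension above a review threshold.
flagged = [dim for dim, score in fingerprint.items() if score >= 0.8]
print(flagged)  # -> ['violent_imagery', 'extremist_insignia']
```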

You can find out more about our Explainable AI Platform here.

Get In Touch