Our Explainable AI Platform provides high-throughput, scalable, explainable predictive analytics. It is the ideal tool for homeland security services and content distributors to detect and classify extremist, terrorist, or violent content.
This specialised approach to AI deployment was developed for the UK Home Office to detect Daesh propaganda and to classify Daesh images against a set of dimensions with high precision.
Our Explainable AI Platform is built for scalability. Every model and algorithm deployed into the platform is containerised and individually scalable. Applying queueing theory and the theory of constraints, we dynamically scale models at every step of a prediction pipeline based on the performance and throughput required of each model.
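The mechanics of queue-driven scaling can be illustrated with a minimal sketch. Everything here (the `ModelStage` structure, the `desired_replicas` sizing rule, the utilisation target) is a hypothetical illustration under simple queueing assumptions, not the platform's actual API:

```python
# Minimal sketch of queue-driven scaling for one containerised model stage.
# All names and numbers here are illustrative, not the platform's actual API.
from dataclasses import dataclass
import math


@dataclass
class ModelStage:
    name: str
    service_rate: float   # predictions/sec one container replica can sustain
    arrival_rate: float   # predictions/sec currently entering this stage's queue
    min_replicas: int = 1
    max_replicas: int = 50


def desired_replicas(stage: ModelStage, target_utilisation: float = 0.7) -> int:
    """Basic queueing-theory sizing: keep per-replica utilisation below a
    target so queue lengths (and therefore latency) stay bounded."""
    # Offered load in "replica equivalents": arrival rate / per-replica capacity.
    offered_load = stage.arrival_rate / stage.service_rate
    # Headroom keeps utilisation (rho = load / replicas) under the target.
    replicas = math.ceil(offered_load / target_utilisation)
    return max(stage.min_replicas, min(stage.max_replicas, replicas))


# Example: a classifier stage receiving 120 predictions/sec, where each
# replica handles 10/sec, needs ceil(12 / 0.7) = 18 replicas at 70% utilisation.
classifier = ModelStage("image-classifier", service_rate=10.0, arrival_rate=120.0)
print(desired_replicas(classifier))  # -> 18
```

Re-running this calculation per stage lets a bottleneck stage scale out independently of the stages around it.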
AI should not be a black box. To engender trust, it is imperative that consumers of machine learning predictive analytics understand how the AI reached its decision. Our Explainable AI Platform is designed for transparency and ethical decision-making.
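What an explainable prediction can look like in practice is sketched below: each score is returned with per-feature attributions a reviewer can inspect. This is an illustrative example using a simple linear model, where a feature's contribution is its weight times its value; the feature names and weights are hypothetical, and the platform's actual explanation technique is not specified here.

```python
# Illustrative only: returning attributions alongside a score so a moderator
# can see *why* the model flagged an item. Weights and feature names are
# hypothetical, not the platform's real model.
import math

WEIGHTS = {"flag_symbol_score": 2.1, "weapon_score": 1.4, "text_overlay_score": 0.6}
BIAS = -3.0


def predict_with_explanation(features: dict[str, float]) -> dict:
    # Per-feature contribution to the logit: weight * feature value.
    contributions = {k: WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-logit))
    return {
        "probability": round(probability, 3),
        # Attributions sorted by magnitude show which signals drove the score.
        "explanation": sorted(contributions.items(), key=lambda kv: -abs(kv[1])),
    }


print(predict_with_explanation(
    {"flag_symbol_score": 0.9, "weapon_score": 0.8, "text_overlay_score": 0.2}
))
# -> probability ~0.53, with flag_symbol_score as the dominant contributor
```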
Evolvability has been baked into the architecture as a first-class concern, providing a high degree of customisation and extensibility. This allows the predictive model architecture to evolve as the problem space changes over time.
Our Explainable AI Platform is deployable in the cloud or on-premises, and can be used either on a fully managed SaaS basis or as an enterprise self-managed deployment in your own infrastructure.
Our Explainable AI Platform has a rich analytics dashboard for users and moderators to analyse throughput and visualise the classification fingerprints of the media it has processed.
You can find out more about our Explainable AI Platform here.