20 May 2024

How CESIUM is driving AI adoption, safety, innovation and inclusivity (guest blog by Trilateral Research)

Learn more about how CESIUM, Trilateral Research’s groundbreaking child safeguarding AI solution, is driving AI adoption

What is CESIUM?

CESIUM is Trilateral Research’s groundbreaking child safeguarding AI solution. It was conceived in 2018 in response to the data sharing problems the UK government had identified in inter-agency working to safeguard children from exploitation, and to subjective, reactive risk identification practices. Trilateral’s background in data protection, cybersecurity and emerging technology, together with its subject matter expertise, put the company in a unique position to drive the development of an ethical AI solution that transforms efforts to safeguard children from exploitation.

 

Benefits of using CESIUM include:

  • Optimised software
    • Secure access to multi-agency data
    • A single source of truth for each child
    • Comprehensive local area insights
  • Enhanced processes
    • Increased objectivity
    • Streamlined referrals
    • Targeted interventions
  • Improved outcomes
    • 80% reduction in pre-screening time
    • 400% capacity uplift
    • 34% earlier safeguarding referral

 

Awareness of CESIUM continues to gather pace. Over the past year, experts at Trilateral have been invited to present on CESIUM and the innovative work of the Trilateral team. Audiences have included the UK’s National Police Chiefs’ Council (NPCC), Interpol’s Specialist Group on Crimes Against Children (SGCAC) and, on 21-23 May 2024, Abu Dhabi’s International Exhibition for National Security and Resilience (ISNR). The Trilateral team also continues to receive recognition for CESIUM, with a 2023 DataIQ award win and a shortlisting for two National Technology Awards (winners to be announced on 23 May 2024).

 

CESIUM: Driving adoption of AI

 

Since its inception, experts at Trilateral have recognised that, for CESIUM to be fully utilised by the child safeguarding community, it must address the challenges that prevent the adoption of responsible AI. To navigate these challenges, Trilateral developed sociotech methods that take both technical and social factors into consideration throughout the development, implementation and ongoing use of AI.

 

The challenge | How sociotech addresses this challenge

Resourcing issues (limited operational capacity, limited digital skills, resistance to change) | Expert, interdisciplinary and domain knowledge supplements internal capacity and skillsets, and a co-design approach supports real, cultural change.

Legacy issues (poor software, incomplete data sets, siloed data) | Software engineers and data scientists work with current systems and data to unpick the issues and provide holistic, user-friendly solutions.

AI concerns (algorithmic and data bias, generic algorithms, ethical concerns) | Ethicists are an integral part of the AI development journey, ensuring data and associated algorithms are as transparent, explainable, fair and trustworthy as possible.

Reporting issues (lack of context for insights, reporting complexity) | Subject matter experts leverage a research-driven approach to support throughout development, implementation and ‘Business As Usual’, making sure insights fully address the problem at hand.

Governance issues (dynamic regulatory environments, data protection and data sharing concerns) | Legal advisors, data protection, responsible AI and cybersecurity specialists provide the expert knowledge to ensure data and insights are ethical, secure and compliant.

 

Ensuring safety in AI

 

The safety of CESIUM, its data management and its outputs is critical to its success. The team at Trilateral implements an ethics-by-design framework that operationalises the UK government’s five principles for responsible AI and ensures that safety is a key consideration throughout development and deployment.

 

  • Safety, security and robustness
    • Secure cloud hosting
    • Comprehensive data security
    • Ethical AI risk assessments

 

  • Appropriate transparency and explainability
    • UK Algorithmic Transparency Standard adoption
    • Robust processes and documentation
    • Co-design process

 

  • Fairness
    • Bias mitigation strategies
    • Transparency insights dashboard
    • Bias mitigation user training

 

  • Accountability and governance
    • Ethics-by-design framework
    • Continuous improvement processes
    • Contestability and redress mechanisms

 

  • Shared responsibility
    • Clear identification of roles and responsibilities
    • Client / supplier compliance
    • Annual validation and training

 

The data ethics methods used to develop CESIUM have been published in the Centre for Data Ethics and Innovation’s Portfolio of AI Assurance Techniques and the OECD’s Catalogue of Tools and Metrics for Trustworthy AI.

 

Fostering innovation with AI

 

When responsible principles such as transparency, non-maleficence and explainability are instilled in AI from the start, innovation in responsible AI development becomes a standard outcome.

 

Scaling with confidence

  • A robust approach to deployment ensures efficiency and bias mitigation.
  • Principle-driven ML models present fewer challenges and are less time-consuming.
  • Alignment to principles and values ensures accurate, streamlined testing.

 

Innovating continuously

  • AI evolution requires a continual drive for investment and adaptation.
  • Ongoing, ethical AI utilisation and readiness for future advancements.
  • AI becomes a co-pilot for innovation and sustainable growth.

 

Future-proofing innovation

  • Ongoing assurance that challenges the notion of a one-time bias removal.
  • Readiness for emerging regulations, such as the AI Act, beyond mere compliance.
  • Prioritising responsibility over reactive compliance enables competitive advantage.

 

 

Embedding inclusivity in AI

 

CESIUM’s ethics-by-design framework ensures inclusivity throughout development, implementation and ongoing use.

 

Diverse dataset utilisation | CESIUM leverages a meticulously curated, diverse range of features from end-user datasets. By incorporating and accounting for the impact of varied demographics such as race, gender, age, socio-economic background and geographical location, our AI models deliver inclusive outcomes.

 

Bias-mitigated decision making | Through implementation of a unique sociotech bias-mitigation process and continuous monitoring and refinement of its advanced algorithms, CESIUM ensures fairness and equity in every insight it provides.
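
To make the idea of continuous fairness monitoring more concrete, the sketch below shows one generic check a monitoring process of this kind might run: comparing the rate at which different demographic groups are flagged and raising an alert when the gap grows too large. It is a minimal, illustrative sketch only; the group labels, sample data and threshold are hypothetical and do not describe CESIUM’s actual bias-mitigation process.

```python
# Minimal, illustrative demographic-parity check of the kind a continuous
# fairness-monitoring process might run. Group labels, sample data and the
# alert threshold are hypothetical; this is not CESIUM's implementation.
from collections import defaultdict

def selection_rates(flags, groups):
    """Proportion of positive (flagged) outcomes per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for flagged, group in zip(flags, groups):
        totals[group] += 1
        positives[group] += int(flagged)
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(flags, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(flags, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical model outputs (1 = flagged for review) and group labels.
flags = [1, 0, 1, 0, 0, 1, 0, 0]
groups = ["group_a", "group_a", "group_a",
          "group_b", "group_b", "group_b", "group_b", "group_b"]

ALERT_THRESHOLD = 0.2  # illustrative tolerance, not a published figure
gap = parity_gap(flags, groups)
if gap > ALERT_THRESHOLD:
    print(f"Selection-rate gap {gap:.2f} exceeds threshold; escalate for human review.")
```

In practice, automated checks like this would sit alongside qualitative review by domain experts and ethicists, in line with the sociotech approach described above.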

 

Community-driven development | CESIUM is a result of extensive collaboration with diverse stakeholders, including domain experts, ethicists and end users.

 

Transparent decision processes | With CESIUM, transparency is paramount. Clear explanations of how decisions are reached are provided, empowering users to understand and interrogate the factors influencing AI-driven outcomes. Trilateral is an early, voluntary adopter of the UK Algorithmic Transparency Recording Standard.
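
As an illustration of what surfacing the factors behind an AI-driven insight can look like, the sketch below breaks a simple linear risk score into per-factor contributions so a practitioner can see what drove it. The feature names, weights and values are hypothetical; this is neither CESIUM’s model nor the Algorithmic Transparency Recording Standard itself, only the general pattern of making a decision traceable.

```python
# Illustrative sketch: breaking a simple linear score into per-factor
# contributions so the drivers of an insight can be inspected. Feature
# names, weights and values are hypothetical, not CESIUM's model.

def explain_score(features, weights):
    """Return the total score and each factor's contribution, largest first."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    total = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda item: abs(item[1]), reverse=True)
    return total, ranked

# Hypothetical factors for a single referral.
weights = {"missing_episodes": 0.6, "school_absence": 0.3, "prior_referrals": 0.4}
features = {"missing_episodes": 2, "school_absence": 1, "prior_referrals": 0}

total, ranked = explain_score(features, weights)
print(f"Risk score: {total:.1f}")
for name, contribution in ranked:
    print(f"  {name}: {contribution:+.1f}")
```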

 

Accessible and user-centric design | CESIUM prioritises accessibility and usability, ensuring that it caters to the diverse needs and preferences of all users. Its UI/UX features incorporate the UK government’s accessibility requirements for public sector bodies.



