A prestigious UK financial services end customer delivering a large security and AI programme requires a Threat Modelling and AI Consultant. The AI Security Threat Modelling Lead is responsible for designing, implementing, and maintaining structured threat modelling practices within the customer's environment. The role focuses on securing AI and machine learning systems, particularly large language models (LLMs) and agentic AI systems, by identifying, analysing, and operationalising mitigations for adversarial risks in line with regulatory, operational resilience, and industry best practice.
Key Responsibilities:
- Leads structured threat modelling activities using methodologies such as STRIDE for AI, OWASP LLM and Agentic AI threat categories, and attack tree analysis.
- Develops and maintains a prioritised catalogue of AI-specific threat scenarios relevant to financial services use cases, including Prompt Injection, Sleeper Agent behaviour, and Denial-of-Wallet attacks.
- Translates identified threat scenarios into adversarial test cases in collaboration with AI/ML evaluation and engineering teams supporting the financial services end customer.
- Facilitates scenario-based workshops with engineering, security, risk, and business stakeholders within the end customer organisation to validate the effectiveness of AI security controls in realistic operating conditions.
- Expands and maintains a safeguards catalogue aligned to financial services regulatory and compliance frameworks, including FCA Operational Resilience, DORA, and the EU AI Act.
- Maintains an adversarial AI knowledge base covering emerging attack techniques, exploitation patterns, tooling, and defensive strategies relevant to regulated environments.
- Supports continuous improvement of secure AI development and deployment practices across the financial services end customer estate.
Key Skills and Experience:
- Strong experience working within or for UK financial services end customers, with a deep understanding of regulated environments and operational resilience requirements.
- Familiarity with FCA Operational Resilience, DORA (Digital Operational Resilience Act), and the EU AI Act.
- Hands-on experience with Amazon Bedrock, including Agents, Knowledge Bases, Guardrails, and model lifecycle management.
- Strong foundational understanding of AI/ML concepts, including foundation models (FMs), retrieval-augmented generation (RAG), non-deterministic agent systems, and tool-using architectures.
- Deep knowledge of secure AI principles, including the OWASP Top 10 for LLM Applications and agentic AI threat landscapes; exposure to the NIST AI Risk Management Framework (AI RMF) preferred.
- Proven experience in adversarial thinking, structured threat modelling, and security analysis of complex AI-enabled systems.
- Ability to translate technical AI security risks into regulatory, risk, and business impact language within a financial services end customer setting.
- Experience working across cross-functional teams including security engineering, data science, risk, compliance, and architecture.
- Strong communication and stakeholder management skills, with the ability to influence security decisions in complex enterprise environments.