Independent security research & advisory
CYBERSECURITY
Independent, practical risk advice for teams who want clear decisions — not hype.
Clear threat thinking for teams operating in messy reality: automation, incentives, time pressure, and systems that fail in non-obvious ways.
Quick note: you can email directly — no forms, no sales funnel, no pressure.
Short, focused engagements for teams adopting AI or modernising security.
Primary
AI & Cybersecurity Risk Review
1–2 weeks mapping misuse scenarios, trust boundaries, and human failure modes into prioritised actions.
Ask about this →

Workshop
Small-group sessions: threat modelling, trust decisions, and what to do when automation is wrong.
Ask about this →

Ongoing
Limited monthly support for design reviews, incidents, and governance — grounded in clarity, not dashboards.
Ask about this →

Short essays on AI security, zero trust, and human factors in complex systems.
EN · Public
Placing a human “in the loop” is often presented as a safeguard against automation failure. In practice, it can relocate risk without reducing it — especially u…
Read (5–10 min) →

EN · Public
Most AI security failures happen before deployment — at the level of assumptions about intelligence, trust, and responsibility.
Read (5–10 min) →

EN · Public
Zero trust has become widely used — and widely misunderstood. It is marketed as an architecture or a suite of tools. In practice, it is a discipline of reasonin…
Read (5–10 min) →

Clarity first: what the system is, what could go wrong, and which controls reduce risk under your constraints.
Typical first step: a short call or email to scope your context, risks, and timelines.
Tell me what you’re building, what you’re worried about, and what “good” looks like.
You’ll get a reply by email. No forms, no tracking pixels, no newsletter unless you ask.