Independent security research & advisory

CYBERSECURITY

Risk reasoning for systems that matter

Independent, practical risk advice for teams who want clear decisions — not hype.

Clear threat thinking for teams operating in messy reality: automation, incentives, time pressure, and systems that fail in non-obvious ways.

No jargon tests. If you’re technical, great — if you’re leading or coordinating, you’re welcome too.

Quick note: you can email directly — no forms, no sales funnel, no pressure.

Zero-trust reasoning · Human factors · Practical governance

Services

Short, focused engagements for teams adopting AI or modernising security.

Not sure what you need yet? That’s normal — we can start with a quick map of the problem and constraints.

Primary

AI & Cybersecurity Risk Review

A 1–2 week engagement mapping misuse scenarios, trust boundaries, and human failure modes into prioritised actions.

Ask about this →

Workshop

Security Reasoning Workshops

Small-group sessions: threat modelling, trust decisions, and what to do when automation is wrong.

Ask about this →

Ongoing

Advisory Retainer

Limited monthly support for design reviews, incidents, and governance — grounded in clarity, not dashboards.

Ask about this →

Writing

Short essays on AI security, zero trust, and human factors in complex systems.

Written for practitioners, leaders, and curious non-specialists — no maths required.

EN · Public

Human-in-the-Loop Is Not a Safety Guarantee

Placing a human “in the loop” is often presented as a safeguard against automation failure. In practice, it can relocate risk without reducing it — especially u…

Read (5–10 min) →

EN · Public

Why AI Security Fails Before It Starts

Most AI security failures happen before deployment — at the level of assumptions about intelligence, trust, and responsibility.

Read (5–10 min) →

EN · Public

Zero Trust Is a Discipline, Not a Product

Zero trust has become widely used — and widely misunderstood. It is marketed as an architecture or a suite of tools. In practice, it is a discipline of reasonin…

Read (5–10 min) →

About

What we optimise for

  • Zero-trust thinking beyond marketing
  • AI risk in real operational contexts
  • Human cognition as part of the system

If your environment is messy, political, or time-pressured — you’re not alone. That’s the default.

How we work

Clarity first: what the system is, what could go wrong, and which controls reduce risk under your constraints.

Typical first step: a short call or email to scope context, risks, and timelines.

Contact

Tell us what you’re building, what you’re worried about, and what “good” looks like.

You’ll get a reply by email. No forms, no tracking pixels, no newsletter unless you ask.