An unlikely trajectory — from Interpol investigations to building GRC programs from zero in fintech. The common thread? Protecting what matters, with evidence and precision.
What looks like a series of career pivots on paper is actually one continuous arc — from investigating human threats to defending digital ones. Each phase built on the last.
Started in Drug Enforcement (2010–2013), performing undercover and operational work against organized crime rings before advancing to the International Cooperation Unit. There, I managed joint cross-border projects and collaborated directly with Interpol, Europol, and SIRENE. Every audit I run today relies on the skills forged there: flawless evidence integrity, precise documentation under strict legal standards, and high-stakes risk assessment.
Led major incident response for global enterprise clients — coordinating engineers across time zones under SLA pressure. This is where I discovered that investigation skills translate directly to triage: structured evidence gathering, hypothesis testing, and communicating under pressure. I also participated in client compliance audits, gaining hands-on exposure to audit processes and control assessments that would inform everything I built next.
This is where it all came together. Co-built the Information Security program from zero alongside the CISO at a PCI-regulated payment processor, co-authoring the complete policy suite and stepping in as its primary owner to maintain and enforce it. Designed a Common Controls Framework harmonizing SOC 2, PCI DSS, and NIST, and led both the SOC 2 and PCI DSS compliance programs—with a strategic roadmap to expand into additional frameworks. Championed the Third-Party Risk Management program (45+ vendors), implemented GRC automation, and ran 24/7 incident response. Every incident became a direct feedback loop into stronger controls.
I believe GRC should accelerate the business, not create drag. These tools automate compliance, assess risk, and reduce friction between security and engineering.
Python CLI tool that connects to Okta, GitHub, and AWS APIs, gathers security evidence, evaluates it against SOC 2 controls defined in YAML, and generates automated pass/fail compliance reports. Turns weeks of manual evidence gathering into minutes.
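The core loop can be sketched as follows — a minimal, hypothetical example of YAML-style control definitions evaluated against gathered evidence. Control IDs, evidence keys, and thresholds are illustrative assumptions; in the real tool the definitions would be loaded from YAML (e.g. with `yaml.safe_load`) and the evidence pulled live from the Okta, GitHub, and AWS APIs.

```python
# Hypothetical sketch: evaluate gathered evidence against control definitions.
# In practice these definitions come from YAML files and evidence from API calls.
CONTROLS = {
    "CC6.1-mfa": {
        "description": "MFA enforced for all workforce users",
        "evidence_key": "okta_mfa_enrollment_pct",
        "check": lambda value: value >= 100.0,
    },
    "CC8.1-branch-protection": {
        "description": "Branch protection on default branches",
        "evidence_key": "github_protected_repo_pct",
        "check": lambda value: value >= 100.0,
    },
}

def evaluate(evidence: dict) -> list[dict]:
    """Return a pass/fail finding for each defined control."""
    findings = []
    for control_id, control in CONTROLS.items():
        value = evidence.get(control["evidence_key"])
        passed = value is not None and control["check"](value)
        findings.append({
            "control": control_id,
            "description": control["description"],
            "observed": value,
            "status": "PASS" if passed else "FAIL",
        })
    return findings

if __name__ == "__main__":
    evidence = {"okta_mfa_enrollment_pct": 100.0,
                "github_protected_repo_pct": 92.5}
    for finding in evaluate(evidence):
        print(f"{finding['status']}  {finding['control']}: observed {finding['observed']}")
```

Keeping controls as data rather than code is what makes the report generation automatic: adding a control means adding a YAML entry, not writing new logic.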
Interactive web application applying the NIST AI Risk Management Framework across Govern, Map, Measure, and Manage functions. Evaluates AI use cases through structured risk assessments and produces risk-scored governance reports.
Self-service readiness assessment evaluating organizational controls against SOC 2 Trust Service Criteria. Identifies gaps, scores maturity levels, and generates prioritized remediation roadmaps for audit preparation.
Automated gap analysis mapping existing controls to ISO 27001 Annex A requirements. Scores maturity across all 93 controls and generates implementation roadmaps with effort estimates and prioritization.
Operational intelligence dashboard tracking MTTD, MTTR, severity distribution, SLA compliance, and root cause analysis. Demonstrates how incident data drives continuous improvement.
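The headline metrics reduce to simple arithmetic over incident timestamps. A minimal sketch, assuming illustrative field names and SLA targets (the real dashboard's schema and thresholds may differ):

```python
# Sketch of MTTD / MTTR / SLA-compliance calculations over incident records.
# Field names ("occurred", "detected", "resolved") and SLA targets are assumptions.
from datetime import datetime

SLA_MINUTES = {"sev1": 60, "sev2": 240}  # assumed resolution targets per severity

def minutes_between(start: str, end: str) -> float:
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 60

def metrics(incidents: list[dict]) -> dict:
    # MTTD: occurrence -> detection; MTTR: detection -> resolution.
    detect = [minutes_between(i["occurred"], i["detected"]) for i in incidents]
    resolve = [minutes_between(i["detected"], i["resolved"]) for i in incidents]
    within_sla = [r <= SLA_MINUTES[i["severity"]]
                  for i, r in zip(incidents, resolve)]
    return {
        "mttd_min": sum(detect) / len(detect),
        "mttr_min": sum(resolve) / len(resolve),
        "sla_compliance_pct": 100 * sum(within_sla) / len(within_sla),
    }
```

For example, two incidents detected in 10 and 30 minutes yield an MTTD of 20 minutes, and any resolution time over the severity's target counts against SLA compliance.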
Interactive 25-question self-assessment across the NIST CSF functions — Identify, Protect, Detect, Respond, Recover. Generates maturity scores and prioritized recommendations.
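The scoring logic behind such an assessment is straightforward to sketch — answers on an assumed 0–4 maturity scale, averaged per CSF function, with the lowest-scoring functions surfacing first as priorities (question-to-function mapping and scale are illustrative assumptions):

```python
# Sketch of per-function maturity scoring for a NIST CSF self-assessment.
# Assumes answers are grouped by function on a 0-4 maturity scale.
FUNCTIONS = ["Identify", "Protect", "Detect", "Respond", "Recover"]

def score(answers: dict[str, list[int]]) -> dict[str, float]:
    """Average the 0-4 answers for each CSF function."""
    return {fn: round(sum(vals) / len(vals), 2) for fn, vals in answers.items()}

def prioritize(scores: dict[str, float]) -> list[str]:
    """Lowest-scoring functions first, i.e. highest remediation priority."""
    return sorted(scores, key=scores.get)
```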
Interactive unified control framework harmonizing SOC 2, PCI DSS, NIST CSF, and ISO 27001 into a single control catalog. Demonstrates how a Common Controls Framework reduces audit fatigue and maps controls across multiple compliance requirements simultaneously.
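The core idea — one internal control satisfying requirements in several frameworks at once — can be sketched as a small mapping structure. The internal control ID is hypothetical and the cited requirement IDs are illustrative:

```python
# Sketch of a Common Controls Framework entry: one internal control
# mapped to requirements in multiple frameworks. IDs are illustrative.
CCF = {
    "CCF-AC-01": {
        "name": "Access reviews performed quarterly",
        "maps_to": {
            "SOC 2": ["CC6.2", "CC6.3"],
            "PCI DSS": ["7.2.4"],
            "ISO 27001": ["A.5.18"],
            "NIST CSF": ["PR.AA-05"],
        },
    },
}

def coverage(framework: str) -> list[tuple[str, list[str]]]:
    """List internal controls that satisfy requirements in the given framework."""
    return [(cid, c["maps_to"][framework])
            for cid, c in CCF.items() if framework in c["maps_to"]]
```

One piece of evidence for `CCF-AC-01` then serves four audits — which is exactly how a Common Controls Framework reduces audit fatigue.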
Interactive strategy builder that generates ISO 42001-aligned AI governance programs. Profiles your organization, identifies gaps across 9 Annex A domains, and produces a phased 12-month implementation roadmap with exportable artifacts.
Interactive explorer for every clause and Annex A control of the AI Management System standard. Navigate requirements across 10 domains, track applicability status, and understand what auditors look for — with guidance on common pitfalls and what good looks like.
Every organization is racing to adopt AI, most without guardrails. The conversation is split between technologists who understand models but not risk frameworks, and regulators who understand compliance but not the technology. GRC professionals sit at the intersection.
We already know how to assess risk, build control frameworks, and create audit trails that satisfy regulators. The NIST AI RMF isn't a departure from traditional GRC — it's an extension of it. The same skills that build a SOC 2 program can build an AI governance program.
The organizations that will win aren't the fastest to adopt AI — they're the ones that move fastest with confidence. That's why I'm building tools like the AI Governance Risk Assessor and pursuing the AIGP. The future of GRC isn't just protecting what exists — it's enabling what's next.
Most organizations treat AI governance as a single-framework exercise. Some pursue ISO 42001 for the management system and certifiability — the policy structure, defined roles, internal audits, and management reviews. Others adopt the NIST AI RMF for its operational depth — structured risk mapping across Govern, Map, Measure, and Manage functions, with practical guidance on bias testing, explainability, and continuous monitoring. The problem is that each framework alone leaves critical gaps. ISO 42001 gives you the skeleton (accountability, audit trail, certification path) but not the technical muscle. NIST AI RMF gives you risk intelligence and operational methodology but no formal accountability structure or certification path.
The strategy outlined here runs both frameworks as a single integrated program — not two parallel workstreams. ISO 42001 provides the management system backbone: scoping (Clause 4), leadership commitment and AI policy (Clause 5), risk treatment and Annex A controls (Clauses 6–8), and the audit/improvement cycle (Clauses 9–10). The NIST AI RMF functions slot directly into that skeleton: GOVERN and MAP feed the context-setting and risk identification that ISO Clauses 4–5 require; MAP, MEASURE, and MANAGE provide the technical methodology for the risk assessments and control implementation that ISO Clauses 6–8 demand; and MEASURE and MANAGE continuous monitoring generates exactly the performance data that ISO Clause 9.1 needs for management review.
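The clause-to-function pairings above can be expressed as a simple crosswalk data structure — a sketch at clause-group granularity (a real crosswalk would map down to individual NIST subcategories and Annex A controls):

```python
# Sketch of the ISO 42001 / NIST AI RMF crosswalk described above,
# at clause-group granularity. Pairings follow the mapping in the text.
CROSSWALK = {
    "Clauses 4-5 (context, leadership, AI policy)": ["GOVERN", "MAP"],
    "Clauses 6-8 (risk treatment, Annex A controls)": ["MAP", "MEASURE", "MANAGE"],
    "Clauses 9-10 (performance evaluation, improvement)": ["MEASURE", "MANAGE"],
}

def nist_inputs(iso_clause_group: str) -> list[str]:
    """NIST AI RMF functions that feed a given ISO 42001 clause group."""
    return CROSSWALK.get(iso_clause_group, [])
```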
The implementation follows a 24-week phased roadmap across three stages. Phase 1 (Foundation, Weeks 1–6) establishes the AI governance context — organizational scope, AI policy, leadership buy-in, risk appetite, and a complete AI system inventory — drawing from ISO 42001 Clauses 4–5 and NIST GOVERN + MAP functions simultaneously. Phase 2 (Build, Weeks 7–18) is where the operational program takes shape — risk assessments using NIST MEASURE methodology, Annex A control implementation using NIST as the technical method, bias testing frameworks, impact assessments, and third-party AI due diligence. Phase 3 (Audit-Ready, Weeks 19–24) closes the loop with internal audits, KPI dashboards, management reviews, corrective action processes, and continuous monitoring pipelines — where NIST's monitoring data directly feeds ISO's Clause 9 requirements.
The organizations that succeed with AI governance won't be the ones that pick one framework and hope for the best. They'll be the ones that build a system that is both certifiable and operationally resilient. This dual-framework approach means you're not choosing between a certificate and real risk reduction — you get both. And because both frameworks share overlapping structures (risk assessment, control implementation, continuous improvement), running them together actually reduces total effort compared to implementing them separately.
Phase 1 (Foundation)
ISO 42001: Clauses 4–5 — Scope, AI policy, leadership commitment
NIST AI RMF: GOVERN + MAP — Risk tolerance, system inventory, stakeholder analysis
Integration: Context-setting and risk tolerance framing happen simultaneously

Phase 2 (Build)
ISO 42001: Clauses 6–8 + Annex A — Risk assessment, operational controls, impact assessments
NIST AI RMF: MAP + MEASURE + MANAGE — Bias testing, risk analysis, treatment prioritization
Integration: Risk assessment uses NIST MEASURE methodology; Annex A controls use NIST as technical method

Phase 3 (Audit-Ready)
ISO 42001: Clauses 9–10 — Monitoring KPIs, internal audit, management review, corrective actions
NIST AI RMF: MEASURE 4.0 + MANAGE 4.0 — Continuous monitoring, incident response, feedback loops
Integration: NIST continuous monitoring provides data ISO Clause 9.1 requires
"ISO 42001 without NIST AI RMF is a certificate without operational depth. NIST AI RMF without ISO 42001 is risk intelligence without accountability structure. Together, they build something neither can deliver alone: a system that is certifiable and resilient."
Whether it's building a GRC program from scratch, improving your incident response, or automating compliance — I'd love to hear from you.