Automated SOC 2 evidence collection from Identity, GitHub, and AWS sources. Evaluates findings against YAML-defined controls and generates audit-ready reports.
| Control | Name | Status | Severity | Details |
|---|---|---|---|---|
Controls are defined in YAML; add new checks without touching code.
The collector authenticates to each source API using secure credential management, retrieves configuration and policy data, then evaluates findings against control thresholds defined in a YAML-based control catalog.
Each evidence source has a dedicated collector module that normalizes API responses into a standard evidence format. This modular design allows new sources to be added without modifying the evaluation engine: when a new compliance source is required, only a new collector module needs to be written.
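The collector/normalizer split described above can be sketched as a small base class. This is an illustrative outline, not the tool's actual code; the `Evidence` fields, control IDs, and the canned GitHub payload are all hypothetical.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Evidence:
    """Normalized evidence record shared by every collector (hypothetical shape)."""
    source: str     # e.g. "okta", "github", "aws"
    control_id: str  # control this evidence maps to
    value: Any       # the measured value: count, bool, percentage, ...
    raw: dict = field(default_factory=dict)  # original API payload, kept for the audit trail

class Collector:
    """Base class: subclasses fetch from one API and emit Evidence records."""
    source = "base"

    def fetch(self) -> list[dict]:
        raise NotImplementedError

    def normalize(self, payload: dict) -> Evidence:
        raise NotImplementedError

    def collect(self) -> list[Evidence]:
        # The evaluation engine only ever sees Evidence, never raw API shapes.
        return [self.normalize(p) for p in self.fetch()]

class GitHubCollector(Collector):
    """Example subclass using canned data in place of a live API call."""
    source = "github"

    def fetch(self) -> list[dict]:
        return [{"repo": "payments", "branch_protection": True}]

    def normalize(self, payload: dict) -> Evidence:
        return Evidence(
            source=self.source,
            control_id="CC-GH-01",  # illustrative control ID
            value=payload["branch_protection"],
            raw=payload,
        )
```

Because every collector emits the same `Evidence` shape, adding a new source is a matter of subclassing `Collector` and mapping its API responses into that record.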
The evaluation engine processes each collected data point against its corresponding control definition, applying the defined threshold logic to produce an instant PASS, WARNING, or FAIL determination. Results are aggregated into a structured report exportable as JSON for pipeline automation or CSV for audit review.
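The threshold logic can be reduced to a single comparison function. A minimal sketch, assuming numeric metrics with an optional warning band; the parameter names are illustrative, not the engine's real signature.

```python
def evaluate(value, threshold, warn_threshold=None, higher_is_better=True):
    """Map a collected metric to PASS / WARNING / FAIL against control thresholds."""
    ok = value >= threshold if higher_is_better else value <= threshold
    if ok:
        return "PASS"
    if warn_threshold is not None:
        # Within the warning band: not passing, but not an outright failure.
        near = value >= warn_threshold if higher_is_better else value <= warn_threshold
        if near:
            return "WARNING"
    return "FAIL"
```

For example, with a 100% pass threshold and a 95% warning band, a measured MFA coverage of 97% yields `WARNING`; the aggregated results can then be serialized with `json.dumps` for pipelines or `csv.writer` for auditors.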
Queries user and policy APIs to evaluate: MFA enforcement coverage across all user types, password policy compliance against defined minimum standards, inactive account identification, and admin privilege distribution.
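The MFA coverage check above reduces to a percentage over active users. A minimal sketch under an assumed user schema (`status`, `mfa_enrolled` are illustrative field names, not the identity API's real ones):

```python
def mfa_coverage(users):
    """Percent of active users with MFA enrolled (hypothetical user schema)."""
    active = [u for u in users if u.get("status") == "ACTIVE"]
    if not active:
        return 100.0  # vacuously compliant: no active users to enforce on
    enrolled = sum(1 for u in active if u.get("mfa_enrolled"))
    return round(100 * enrolled / len(active), 1)
```

Deprovisioned accounts are excluded from the denominator so that stale users do not mask a coverage gap among active ones.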
Evaluates repository security posture including: branch protection enforcement on production branches, code review requirements (minimum reviewer thresholds), secrets scanning activation, and access control configuration.
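A repository check like the one described can be expressed as a list of findings against an assumed repo payload. The field names (`branch_protection`, `required_reviews`, `secret_scanning`) are illustrative, not GitHub's exact API shape:

```python
def check_branch_protection(repo, min_reviewers=2):
    """Evaluate one repo's security posture (hypothetical payload shape)."""
    bp = repo.get("branch_protection") or {}
    findings = []
    if not bp.get("enabled"):
        findings.append("branch protection disabled")
    if bp.get("required_reviews", 0) < min_reviewers:
        findings.append(f"requires fewer than {min_reviewers} reviewers")
    if not repo.get("secret_scanning"):
        findings.append("secret scanning disabled")
    return ("PASS" if not findings else "FAIL", findings)
```

Returning the findings list alongside the status gives the report layer human-readable detail for the Details column, not just a verdict.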
Assesses cloud security configuration across: encryption-at-rest status for storage services, access logging enablement via CloudTrail, and IAM policy compliance including root account protection.
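The AWS assessment can run over a configuration snapshot that the collector has already retrieved, keeping the evaluation pure and testable. A sketch under an assumed snapshot shape; the keys below are illustrative, not boto3 response fields:

```python
def assess_aws(snapshot):
    """Evaluate a collected AWS config snapshot (hypothetical shape)."""
    results = {}

    # Encryption at rest: every storage bucket must report encryption enabled.
    buckets = snapshot.get("s3_buckets", [])
    unencrypted = [b["name"] for b in buckets if not b.get("encrypted")]
    results["encryption_at_rest"] = "PASS" if not unencrypted else "FAIL"

    # Access logging: at least one CloudTrail trail actively logging.
    trails = snapshot.get("cloudtrail_trails", [])
    results["access_logging"] = "PASS" if any(t.get("is_logging") for t in trails) else "FAIL"

    # Root account protection: MFA on, no long-lived access keys.
    root = snapshot.get("root_account", {})
    results["root_protection"] = (
        "PASS" if root.get("mfa_enabled") and not root.get("access_keys") else "FAIL"
    )
    return results
```

Separating collection from assessment means this function can be exercised against fixtures in CI without AWS credentials.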
Key architectural choices and the reasoning behind them.
Control definitions are externalized in YAML files rather than hardcoded. This means GRC analysts can update control thresholds, add new controls, or adjust pass/fail criteria without touching code, which is critical for maintaining audit readiness as frameworks evolve.
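A control entry in such a catalog might look like the fragment below. This is a hypothetical schema for illustration; the field names and the control ID are not the tool's actual format.

```yaml
# Illustrative control definition, not the tool's real schema
- id: CC-ID-01
  name: MFA enforcement coverage
  source: okta
  metric: mfa_coverage_percent
  severity: HIGH
  pass_if: ">= 100"   # all active users enrolled
  warn_if: ">= 95"    # near-miss band surfaces as WARNING
```

An analyst tightening `warn_if` or adding a new entry changes evaluation behavior on the next run with no code deployment.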
Each API source has an independent collector module. When a new compliance source needs to be added (e.g., a new cloud provider or identity platform), only a new collector needs to be written; the evaluation engine and reporting layer remain unchanged.
Traditional evidence collection requires manual review of each control. The automated evaluation engine applies defined thresholds to collected evidence and produces instant pass/fail determinations, reducing audit prep from weeks to minutes.
Technologies chosen for this tool and the rationale behind each selection.
Chosen for rich API client libraries, rapid prototyping, and broad adoption in GRC automation tooling.
Direct integration with source platforms (Okta, GitHub, AWS) for real-time evidence retrieval without intermediary services.
Human-readable control definitions that GRC analysts can maintain without engineering involvement, keeping compliance logic accessible.
Scriptable command-line interface enables scheduled and automated evidence collection runs within existing CI/CD or audit workflows.