AI-specific security assessment, adversarial research, and the open-source framework redefining how AI systems are tested. Authors of AAISAF.
We assess AI systems for vulnerabilities no one else is testing, and build systems that survive those attacks.
Independent AI security assessment using AAISAF, covering attack surfaces that traditional penetration testing misses: prompt injection, RAG poisoning, voice AI manipulation, and MCP server security.
Voice agents, workflow automation, workplace AI, and fractional AI leadership for teams moving from prototype to production.
AI Security Assessment Framework — the first comprehensive attack taxonomy for AI systems. Open-source. Battle-tested.
Novel coverage of Voice AI attack surfaces (9 techniques) and MCP Server Security (12 techniques), each the first taxonomy of its kind. Maps to ISO 42001, NIST AI RMF, the EU AI Act, OWASP, MITRE ATLAS, and Australian regulatory standards. Includes Passive Posture Assessment plus Quick, Standard, and Deep assessment methodologies.
View on GitHub →
Click any tactic to explore its techniques. Every entry includes detection guidance, remediation, AISS scoring, and compliance mapping.
Open-source tools for building, evaluating, monitoring, and securing AI systems.