Cybersecurity & Cybercrime
In today’s interconnected world, where our personal and professional lives are increasingly digital, cybersecurity has never been more crucial.
What is GenAI?
Generative AI (GenAI or GAI) is a type of artificial intelligence that uses machine learning algorithms to produce, copy or rework content. This content output can be text, images, audio, code, other formats
Concern of GenAI
However, the following cybersecurity / safety concerns are holding back the adoption of GenAI services:
- Sensitive Data Exposure (personal and corporate data)
- Incorrect Outputs (AI hallucination)
- Malicious Outputs (exposing our clients’ vulnerabilities)
- Unsafe Outputs (generating output not safe for work)
- Shadow GenAI Usage (by users)
How We Can Support You
We are ready to support you in the cybersecurity concerns of your GenAI services
Pre-Adoption
Threat Modelling and Risk Assessment
Readiness Review
NIST AI RMF / ISO/IEC 42001 / SG Model AI Governance Framework for Generative AI
Safety Review of GenAI
Using Adversarial Testing and/or AI Verify Moonshot / MLCommons AI Safety benchmark
Development of AI Incident Response Playbooks
Employee Awareness Training
Post-Adoption
Configuration Audit & Compliance Review
NIST AI RMF / ISO/IEC 42001 / SG Model AI Governance Framework for Generative AI
Adversarial Testing
Based on MITRE ATLAS & OWASP Top 10 for LLM
Regular Safety Review of inhouse GenAI
AI Verify Moonshot / MLCommons AI Safety benchmark
Digital Forensics and Incident Response
Incident Response Simulation Exercises with AI Scenarios
Readiness Review
NIST AI Risk Management Framework
Assess the readiness in accordance to NIST AI RMF which covers the 6 Govern areas, 5 Map areas, 4 Measure areas and 4 Manage areas.
ISO/IEC 42001
Assess the readiness in accordance to ISO/IEC 42001 AI Management System Management Clauses (1 to 10) and Annex A (A1 to A10).
SG Model AI Governance Framework for Generative AI
Assess the readiness in accordance to Singapore aI Verify Foundation’s Model AI Governance Framework for Generative AI, consisting of Accountability, Data, Trusted Dev & Deployment, Incident Reporting, Testing and Assurance, Security, Content Provenance, Safety & Alignment R&D and AI for Public Good.
Safety Review
Adversarial Testing
Using the MITRE ATLAS framework and OWASP Top 10 Risks for LLM, conduct red teaming to bypass AI service safety guardrails or cybersecurity controls. Additionally, RTC can explore if popular GenAI services identify client system vulnerabilities or disclose sensitive client information.
AI Verify Moonshot
Using AI Verify Moonshot, perform safety review of AI model to provide an overview of how safe the AI model is.
MLCommons AI Safety Benchmark
Using MLCommons AI Safety Benchmark1 , perform safety review of AI model to provide an overview of how safe the AI model is.
Digital Forensics and Incident Response
Pre-Incident Logging Configuration Review
Review if adequate logging of GenAI services is enabled and retained for sufficient amount of time.
Deepfake Detection
Help our clients to identify if videos, images or audio files, submitted by our clients, are deepfakes.
Post-Incident Forensics Investigation
Analyse GenAI logs and endpoint logs to identify cause(s) of cyber incidents, recontruct the chronology of events and propose recommendations.