
The Domains and Organizational Features of AI Security


When your CISO mentions “AI security” at the next board meeting, what exactly do they mean? Are they talking about protecting your AI systems from attacks? Using AI to catch hackers? Preventing employees from leaking data to an unapproved AI service? Ensuring your AI doesn’t produce harmful outputs?

The answer might be “all of the above,” and that’s precisely the problem.

AI has become deeply embedded in enterprise operations. As a result, the intersection of “AI” and “security” has become increasingly complex and confusing. The same terms are used to describe fundamentally different domains with distinct goals, leading to miscommunication that can derail security strategies, misallocate resources, and leave critical gaps in protection. We need a shared understanding and a shared language.

Jason Lish (Cisco’s Chief Information Security Officer) and Larry Lidz (Cisco’s VP of Software Security) co-authored this paper with me to help address this challenge head-on. Together, we introduce a five-domain taxonomy designed to bring clarity to AI security conversations across enterprise operations.

The Communication Problem

Consider this scenario: your executive team asks you to present the company’s “AI security strategy” at the next board meeting. Without a common framework, each stakeholder may walk into that conversation with a very different interpretation of what is being asked. Is the board asking about:

  • Protecting your AI models from adversarial attacks?
  • Using AI to enhance your threat detection?
  • Preventing data leakage to external AI services?
  • Providing guardrails for AI output safety?
  • Ensuring regulatory compliance for AI systems?
  • Defending against AI-enabled or AI-generated cyber threats?

This ambiguity leads to very real organizational problems, including:

  • Miscommunication in executive and board discussions
  • Misaligned vendor evaluations that compare apples to oranges
  • Fragmented security strategies with dangerous gaps
  • Resource misallocation that focuses on the wrong objectives

Without a shared framework, organizations struggle to accurately assess risks, assign accountability, and implement comprehensive, coherent AI security strategies.

The Five Domains of AI Security

We propose a framework that organizes the AI-security landscape into five clear, intentionally distinct domains. Each addresses different concerns, involves different threat actors, requires different controls, and often falls under different organizational ownership. The domains are:

  • Securing AI
  • AI for Security
  • AI Governance
  • AI Safety
  • Responsible AI

Each domain addresses a distinct category of risk and is designed to be used in conjunction with the others to create a comprehensive AI strategy.

These five domains don’t exist in isolation; they reinforce and depend on one another and must be intentionally aligned. Learn more about each domain in the paper, which is intended as a starting point for industry discussion, not a prescriptive checklist. Organizations are encouraged to adapt and extend the taxonomy to their specific contexts while preserving the core distinctions between domains.
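To make the distinctions concrete, here is a minimal sketch, in Python and purely illustrative, of how a team might tag board-level questions by domain. The five domain names come from the taxonomy above; the sample questions and their domain assignments are my assumptions, echoing the examples earlier in this post rather than definitions from the paper.

```python
# Illustrative sketch only: one way to tag AI-related security questions by domain.
# The five domain names come from the taxonomy; the sample questions and their
# assignments are assumptions based on the board-level examples above, not
# definitions from the paper.

from enum import Enum


class AISecurityDomain(Enum):
    SECURING_AI = "Securing AI"          # protecting AI systems and models from attack
    AI_FOR_SECURITY = "AI for Security"  # using AI to enhance threat detection and response
    AI_GOVERNANCE = "AI Governance"      # policy, accountability, and regulatory compliance
    AI_SAFETY = "AI Safety"              # guardrails against harmful or unintended outputs
    RESPONSIBLE_AI = "Responsible AI"    # appropriate, accountable use of AI


# Hypothetical mapping of board-level questions to the domain that would own them.
board_questions = {
    "Are our models protected from adversarial attacks?": AISecurityDomain.SECURING_AI,
    "Are we using AI to improve threat detection?": AISecurityDomain.AI_FOR_SECURITY,
    "Are we preventing data leakage to unapproved AI services?": AISecurityDomain.AI_GOVERNANCE,
    "Do our AI systems have guardrails on their outputs?": AISecurityDomain.AI_SAFETY,
}

if __name__ == "__main__":
    for question, domain in board_questions.items():
        print(f"[{domain.value}] {question}")
```

Even a lightweight mapping like this forces a team to name which domain a given concern belongs to, which is the point of having the shared vocabulary in the first place.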

Framework Alignment

Just as the NIST Cybersecurity Framework provides a common language for discussing the domains of cybersecurity without removing the need for detailed cybersecurity frameworks such as NIST SP 800-53 and ISO 27001, this taxonomy is not meant to work in isolation from more detailed frameworks, but rather to provide a common vocabulary across the industry.

As such, the paper builds on Cisco’s Integrated AI Security and Safety Framework recently released by my colleague Amy Chang. It also aligns with established industry frameworks, such as the Coalition for Secure AI (CoSAI) Risk Map, MITRE ATLAS, and others.

The intersection of AI and security is not a single problem to solve, but a constellation of distinct risk domains, each requiring different expertise, controls, and organizational ownership. By aligning these domains with organizational context, organizations can:

  • Communicate precisely about AI security concerns without ambiguity
  • Assess risk comprehensively across all relevant domains
  • Assign accountability clearly to the right teams
  • Invest strategically rather than reactively
