AI Content Moderation
AI → Human · AI auto-reviews user content; flagged items go to human moderators.
5 nodes · 4 edges · social
Content Submitted (event)
User posts text, image, or video.
  ↓ sequential → AI Safety Scan
AI Safety Scan (agent)
Check for hate speech, NSFW, spam, misinformation.
  ↓ conditional → Auto-Approve
  ↓ conditional → Human Moderator Review
Auto-Approve (system)
Content passes all checks.
Human Moderator Review (human)
Moderator reviews flagged content.
  ↓ sequential → Enforce Decision
Enforce Decision (system)
Publish, remove, or restrict content.
uc-content-moderation.osop.yaml
osop_version: "1.0"
id: "content-moderation"
name: "AI Content Moderation"
description: "AI auto-reviews user content; flagged items go to human moderators."
nodes:
  - id: "content_submitted"
    type: "event"
    name: "Content Submitted"
    description: "User posts text, image, or video."
  - id: "ai_scan"
    type: "agent"
    subtype: "llm"
    name: "AI Safety Scan"
    description: "Check for hate speech, NSFW, spam, misinformation."
    security:
      risk_level: "low"
  - id: "auto_approve"
    type: "system"
    name: "Auto-Approve"
    description: "Content passes all checks."
  - id: "flag_review"
    type: "human"
    subtype: "review"
    name: "Human Moderator Review"
    description: "Moderator reviews flagged content."
    security:
      approval_gate: true
  - id: "action"
    type: "system"
    name: "Enforce Decision"
    description: "Publish, remove, or restrict content."
edges:
  - from: "content_submitted"
    to: "ai_scan"
    mode: "sequential"
  - from: "ai_scan"
    to: "auto_approve"
    mode: "conditional"
    when: "scan.safe == true"
  - from: "ai_scan"
    to: "flag_review"
    mode: "conditional"
    when: "scan.safe == false"
  - from: "flag_review"
    to: "action"
    mode: "sequential"
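To make the edge semantics concrete, here is a minimal sketch of walking this graph in Python. OSOP does not define execution semantics in this file, so the interpretation is an assumption: `sequential` edges always fire, and a `when` expression is reduced to testing whether the scan marked the content safe. The `next_nodes`/`run` helpers are hypothetical names, not part of any OSOP runtime.

```python
# Edges from uc-content-moderation.osop.yaml, inlined as dicts.
EDGES = [
    {"from": "content_submitted", "to": "ai_scan", "mode": "sequential"},
    {"from": "ai_scan", "to": "auto_approve", "mode": "conditional",
     "when": "scan.safe == true"},
    {"from": "ai_scan", "to": "flag_review", "mode": "conditional",
     "when": "scan.safe == false"},
    {"from": "flag_review", "to": "action", "mode": "sequential"},
]

def next_nodes(current: str, scan_safe: bool) -> list[str]:
    """Return ids of nodes reachable from `current` given the scan outcome."""
    out = []
    for edge in EDGES:
        if edge["from"] != current:
            continue
        if edge["mode"] == "sequential":
            out.append(edge["to"])
        elif edge["mode"] == "conditional":
            # Assumption: the `when` expression only tests scan.safe,
            # so its truth value is read off the expression's suffix.
            expected = edge["when"].endswith("true")
            if scan_safe == expected:
                out.append(edge["to"])
    return out

def run(scan_safe: bool) -> list[str]:
    """Trace one path through the workflow for a given scan outcome."""
    path = ["content_submitted"]
    while True:
        successors = next_nodes(path[-1], scan_safe)
        if not successors:
            return path
        path.append(successors[0])
```

Tracing both branches shows the two paths in the visual: a clean scan routes `content_submitted → ai_scan → auto_approve`, while a flagged item routes through `flag_review` to `action`.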