# From Zero to 18 Platforms in One Session
On April 1, 2026, OSOP existed as an internal Claude Code plugin and a spec nobody outside our team had used. By the end of the day, it supported 18 AI coding platforms, had 5 skills published on ClawHub, and 5 pull requests open to major community repositories.
This is the story of how it happened — and why it worked.
## The Insight
Every AI coding agent reads markdown instruction files. Cursor reads .cursor/rules/*.mdc. Windsurf reads .windsurf/rules/*.md. Cline reads .clinerules/*.md. Codex reads AGENTS.md. Aider reads CONVENTIONS.md. The pattern is universal.
The only differences are the filename, the directory, and occasionally a line or two of frontmatter. The actual instructions — what the AI should do — are identical across all platforms.
This meant we could support every major AI coding tool with a single-source architecture: one CORE-INSTRUCTIONS.md file, and thin platform-specific wrappers.
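As a sketch of what that build step can look like, a few lines of shell are enough to stamp out wrappers from the single source. The header file, output paths, and `wrap` helper here are hypothetical, not the repo's actual tooling:

```shell
#!/usr/bin/env sh
# Scratch fixture so the sketch runs standalone; real content lives in the repo.
tmp=$(mktemp -d); cd "$tmp"
printf -- '---\ndescription: OSOP workflow rules\n---\n' > cursor-header.yml
printf 'Follow the OSOP workflow.\n' > CORE-INSTRUCTIONS.md

# Prepend an optional platform-specific header to the single source of truth.
wrap() {  # $1 = header file ("" for none), $2 = output path
  mkdir -p "$(dirname "$2")"
  if [ -n "$1" ]; then
    cat "$1" CORE-INSTRUCTIONS.md > "$2"
  else
    cat CORE-INSTRUCTIONS.md > "$2"
  fi
}

wrap cursor-header.yml cursor/osop.mdc      # Cursor wants YAML frontmatter
wrap ""                codex/AGENTS.md      # Codex reads plain markdown
wrap ""                aider/CONVENTIONS.md # so does Aider
```

Adding a platform is one more `wrap` line plus, at most, a small header file.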
## Phase 1: OpenClaw
We started with OpenClaw because it has the richest integration model — a full skill registry (ClawHub) with 13,000+ skills. Our existing osop-openclaw-skill repo had the content but the wrong frontmatter format.
Two parallel research agents explored the existing skill structure and the OpenClaw platform simultaneously. Within minutes we knew exactly what ClawHub expected: YAML frontmatter with emoji, metadata.openclaw.requires, user-invocable, and disable-model-invocation fields.
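Put together, a SKILL.md header looked roughly like this. Only the field names above come from that research; the `name` and `description` keys, the values, and the exact schema are illustrative:

```yaml
---
name: osop
description: Core OSOP workflow skill
emoji: "🛠"
user-invocable: true
disable-model-invocation: false
metadata:
  openclaw:
    requires: []  # runtime dependencies; empty here is an assumption
---
```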
We updated all 5 SKILL.md files, built the marketplace package, installed the clawhub CLI, and published:
- osop — core workflow skill
- osop-log — session logging
- osop-report — HTML report generation
- osop-review — security & risk analysis
- osop-optimize — workflow optimization
Then we opened a PR to openclaw/openclaw to add OSOP to their community plugins documentation. The CI failed on a formatting check — oxfmt was strict about blank lines. Fixed, pushed, all green.
## Phase 2: The Architecture
With the OpenClaw pattern proven, we designed a universal architecture:
```
osop-agent-rules/
├── CORE-INSTRUCTIONS.md   # Single source of truth
├── install.sh             # Auto-detect + install
├── cursor/                # .mdc with YAML frontmatter
├── codex/                 # AGENTS.md
├── windsurf/              # .md with trigger comment
├── continue-dev/          # YAML rules
├── aider/                 # CONVENTIONS.md
├── cline/                 # Plain markdown
├── roo-code/              # Plain markdown
├── devin/                 # Playbook markdown
├── obsidian/              # Copilot custom prompt
├── zed/                   # .rules file
├── amp/                   # AGENT.md
├── trae/                  # project_rules.md
├── pearai/                # Custom command
├── sweep/                 # SKILL.md
├── swe-agent/             # YAML config
└── copilot/               # MCP integration guide
```
The installer script detects which AI tools are present by checking for their config directories, then copies the right files. ./install.sh --all covers everything.
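A minimal sketch of that detection logic, run here against a scratch directory so it is self-contained. The `install_rule` helper and exact paths are illustrative, not the real install.sh:

```shell
#!/usr/bin/env sh
# Scratch project: only .cursor exists, so only the Cursor rule installs.
tmp=$(mktemp -d); cd "$tmp"
mkdir -p .cursor cursor windsurf
printf 'Follow the OSOP workflow.\n' > cursor/osop.mdc
printf 'Follow the OSOP workflow.\n' > windsurf/osop.md

# Install a rule file only if the tool's config directory is present.
install_rule() {  # $1 = marker dir, $2 = source file, $3 = destination
  if [ -d "$1" ]; then
    mkdir -p "$(dirname "$3")"
    cp "$2" "$3"
    echo "installed: $3"
  else
    echo "skipped: $3 (no $1)"
  fi
}

install_rule .cursor   cursor/osop.mdc  .cursor/rules/osop.mdc
install_rule .windsurf windsurf/osop.md .windsurf/rules/osop.md
```

A `--all` flag would simply bypass the directory check and install every wrapper.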
## Phase 3: The PR Blitz
We spawned 4 agents in parallel to submit pull requests to community repositories:
- PatrickJS/awesome-cursorrules — the canonical Cursor rules collection
- SchneiderSam/awesome-windsurfrules — Windsurf community rules
- bradAGI/awesome-cli-coding-agents — comprehensive CLI agent directory
- continuedev/awesome-rules — Continue.dev official rules repository
All 4 PRs passed CI. Combined with the OpenClaw PR, that is 5 community touchpoints from a single session.
## Phase 4: Obsidian and Beyond
We then researched 7 more platforms — Obsidian, Zed, Sourcegraph Amp, Trae (ByteDance), PearAI, Sweep AI, and SWE-agent — each with its own config format. We added all of them to the repo, updated the installer, and pushed.
## What We Learned
- The markdown instruction pattern is universal. Every AI coding tool reads markdown. The "ecosystem fragmentation" is a naming convention problem, not a content problem.
- Parallel agents are force multipliers. Submitting 4 PRs simultaneously instead of sequentially saved hours and kept momentum.
- CI is your friend. The OpenClaw CI failure caught a real formatting issue. Fixing it immediately prevented a review round-trip.
- Single-source architecture scales. One file of core instructions, many thin wrappers. Adding a new platform takes 5 minutes.
## The Numbers
| Metric | Count |
|---|---|
| Platforms supported | 18 |
| ClawHub skills published | 5 |
| Community PRs opened | 5 |
| CI checks passed | 5/5 |
| GitHub repos created | 1 |
| Files created/modified | 50+ |
| Time | ~2 hours |
## What's Next
The foundation is laid. Next: complete the MCP server implementation, build framework-specific integrations (LangChain, CrewAI, AutoGen), and submit to more awesome-lists. The goal is to make OSOP as boring and ubiquitous as OpenAPI.
The entire session is recorded as an OSOP workflow — open the .osop file in the editor to see every step, tool call, and decision.