---
name: attack-surface
description: >
  Strategic research framework that compresses months of market/competitive research into hours through structured power questions. Extracts unspoken industry insights, fragile market assumptions, and strategic attack surfaces from competitor data, reviews, and industry sources using parallel intelligence gathering.

  Use when user says "attack surface", "research the market", "competitive analysis", "analyze competitors", "find market opportunity", "stress-test this idea", "market research", "evaluate opportunity", "find blind spots", "market entry", or when they want to deeply understand a market, evaluate a new direction, find industry blind spots, assess a partnership, or analyze opportunities.

  Do NOT use for code review, testing, deployment, bug fixing, or implementation tasks.
---

# Attack Surface — Strategic Research Framework

Compress months of market research into hours. The difference between 3 hours and 3 months isn't the amount of information — it's knowing which questions actually matter.

Instead of "summarize these" or "analyze the competition", this framework extracts:

- **UNSPOKEN INSIGHTS** — what successful players understand that customers never say out loud
- **FRAGILE ASSUMPTIONS** — beliefs the entire market is built on, and how they break
- **ATTACK SURFACES** — the blind spots, the fragile consensus, the opening nobody is talking about

## Search Tool Selection

**Primary: Exa MCP** — Use `mcp__exa__web_search_exa`, `mcp__exa__crawling_exa`, and `mcp__exa__deep_researcher_start` when available. Exa is the best fit for neural search, crawling full pages, and deep research.

**Fallback: Built-in web browsing tools** — If Exa MCP is unavailable, use the Codex environment's web search and page-open tools to find sources, open pages, and extract evidence. Record the exact URLs you relied on.

**Detection:** At the start of Phase 2, check whether Exa MCP is available in the current environment. If it is not, use the built-in web tools for the entire session and note that in the Source Dossier.
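
The detection step amounts to a single check at the start of Phase 2. A minimal illustrative sketch in Python — the `available_tools` list and `select_tool_family` helper are assumptions for illustration, while the Exa tool names come from this document:

```python
# Hypothetical sketch: pick the search tool family once, at the start of Phase 2.
# The Exa tool names are from this skill; `available_tools` is an assumed input
# (however the current environment enumerates its tools).

EXA_TOOLS = {
    "mcp__exa__web_search_exa",
    "mcp__exa__crawling_exa",
    "mcp__exa__deep_researcher_start",
}

def select_tool_family(available_tools):
    """Return 'exa' if any Exa MCP tool is exposed, else 'builtin'."""
    if EXA_TOOLS & set(available_tools):
        return "exa"
    return "builtin"
```

Whichever family is selected here is used for the entire session and recorded in the Source Dossier.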

## When to Use

- Entering a new market or vertical
- Evaluating a new feature direction for an existing project
- Assessing a partnership or platform opportunity
- Stress-testing a business idea before committing
- Finding competitive blind spots and underserved niches
- Any strategic question that benefits from deep evidence-based analysis

## Workflow Overview

7 phases, alternating between automated intelligence gathering and user-guided analysis:

| Phase | Name | Mode | Output |
|-------|------|------|--------|
| 1 | Briefing | Interactive | Research brief |
| 2 | Source Collection | Automated (parallel) | Source dossier |
| 3 | Unspoken Insights | Automated + checkpoint | Insight report |
| 4 | Fragile Assumptions | Automated + checkpoint | Assumption map |
| 5 | Investor Stress-Test | Automated + checkpoint | Stress-test results |
| 6 | Opportunity Mapping | Automated + checkpoint | Opportunity matrix |
| 7 | Action Plan & Save | Automated | Final research document |

---

## Phase 1: Briefing

Start by understanding what the user wants to research. This is an interactive conversation — ask questions until you have a clear research brief.

**Gather:**

1. **Target** — What market, industry, or opportunity? (e.g., "yacht brokerage SaaS", "AI flashcards for language teachers", "mobile reading apps")
2. **Angle** — What's the user's position? Entering as a newcomer, expanding an existing product, evaluating a partnership?
3. **Known competitors** — Any specific companies or products the user already knows about?
4. **User-provided sources** — URLs, files, or documents the user wants included? Accept any format.
5. **Specific questions** — Anything particular the user wants answered beyond the standard framework?

**Project context:** If the research relates to an existing project the user is working on, ask about the current product, tech stack, and strategic position. This grounds the analysis in real context rather than hypotheticals.

**Output a research brief** before proceeding:

```
Research Brief:
- Target: [market/opportunity]
- Angle: [newcomer / existing player / evaluator]
- Known competitors: [list]
- User sources: [list of URLs/files]
- Key questions: [specific questions beyond standard framework]
- Project context: [if applicable, key facts about the user's product]
```

Ask the user to confirm before proceeding to Phase 2.

---

## Phase 2: Source Collection

This is the intelligence-gathering phase. The quality of the analysis depends on the quality and diversity of the sources.

Use parallel gatherers only when the current Codex environment supports subagents and the user explicitly asked for delegation or parallel agent work. Otherwise, run the same research tracks yourself in the main thread using batched searches.

### Tool availability check

Before starting collection, check Exa MCP availability:

- If Exa is available -> use Exa tools for search and crawling
- If Exa is unavailable -> use the built-in web search and page-open tools instead

### What to gather

Cover 4-5 research tracks, each focused on a different source type. If subagents are available and explicitly requested, run up to 4 gatherers in parallel. Otherwise, execute the tracks yourself in sequence.

**Track 1: Competitor Intelligence**
Search for and crawl 5-8 competitor landing pages, product pages, and pricing pages. Extract: value propositions, positioning, pricing models, feature lists, target-audience language.

**Track 2: Customer Voice**
Search Reddit, forums, and review sites (G2, Trustpilot, Product Hunt, App Store reviews) for customer complaints, praise, and unmet needs in this market. Extract: recurring pain points, feature requests, emotional language, switching triggers.

**Track 3: Industry Analysis**
Search for industry reports, expert analysis, trend pieces, and earnings-call transcripts. Extract: market size, growth trends, key players, regulatory landscape, technology shifts.

**Track 4: Adjacent & Emerging**
Search for startups entering this space, adjacent markets that could expand into it, and emerging technologies that could disrupt it. Extract: new entrants, pivot signals, technology trends, funding patterns.

**Track 5: User-Provided Sources** (if any)
Crawl all URLs the user provided. Extract the full content.

### Gatherer prompt template

Read `references/gatherer-prompt.md` for the detailed prompt template to use for each gatherer or direct pass. Each pass receives:

- The research brief from Phase 1
- Its specific focus area
- Instructions for which search tool family to use (Exa or built-in web tools)

### After collection

Compile the results from every track into a **Source Dossier** — a structured document with all collected evidence organized by source type. Present a summary to the user:

```
Source Dossier Summary:
- Search tools used: [Exa MCP / built-in web tools]
- X competitor pages analyzed
- X customer reviews/complaints collected
- X industry reports found
- X emerging players identified
- X user-provided sources crawled

Key themes so far: [2-3 sentences]
```

Ask: "Sources collected. Anything you want me to search for specifically before we start analysis? Or should I proceed?"

---

## Phase 3: Unspoken Insights

The first analytical question — the one that separates this from generic "market analysis":

> "Based on all collected evidence: What does every successful player in this market understand that their customers never say out loud?"

This question works because it forces the analysis past surface-level features and pricing into the deeper truths that drive the market.

Run this as a dedicated analysis pass using the prompt from `references/analyst-prompt.md` (Section: Unspoken Insights). If subagents are available and the user explicitly requested delegation, use a subagent. Otherwise, perform the pass directly in the main thread.

**Present findings** to the user as 3-5 numbered insights, each with:

- The insight itself (one clear sentence)
- Evidence from sources (specific quotes, data points)
- Why this matters strategically

**Checkpoint:** "Here are the unspoken insights I found. Do any of these surprise you? Want me to dig deeper on any of them, or should we move to fragile assumptions?"

---

## Phase 4: Fragile Assumptions

The second power question:

> "What are the 3-5 assumptions this entire market is built on, and what would have to be true for each one to be wrong?"

This question maps the market's attack surface — the beliefs everyone takes for granted that could be upended.

Run this as a dedicated analysis pass with the Source Dossier plus the Phase 3 insights. Use the prompt from `references/analyst-prompt.md` (Section: Fragile Assumptions).

**Present findings** as a structured assumption map. For each assumption:

- **The assumption** (what everyone believes)
- **Evidence it's true** (why people believe this)
- **What breaks it** (specific conditions that would make it wrong)
- **Fragility score** (1-5: how likely is it to break in the next 2-3 years?)
- **If it breaks** (what happens to the market)

**Checkpoint:** "These are the fragile assumptions I found. Any you disagree with? Want to explore any further?"

---

## Phase 5: Investor Stress-Test

The third power question:

> "Write 5 questions a world-class investor would ask to destroy this business idea, then answer each one using only the evidence in our source dossier."

This is adversarial by design. The goal is to find every weak point before committing resources.

Run this as a dedicated analysis pass with the Source Dossier plus all prior analysis. Use the prompt from `references/analyst-prompt.md` (Section: Investor Stress-Test).

**Present findings** as 5 numbered challenges. For each:

- **The killer question** (phrased as an investor would ask it)
- **The evidence-based answer** (citing only our sources)
- **Confidence level** (strong / moderate / weak)
- **Remaining risk** (what the answer doesn't fully address)

### Iterative Deepening

For any answer rated "weak" confidence, automatically follow up:

> "What's the strongest version of this argument and where does it still break?"

Continue until all weak points are either resolved or clearly flagged as genuine risks.

**Checkpoint:** "Here's the stress-test. X questions have strong answers, Y have remaining risks. Want to dig deeper on any of these?"

---

## Phase 6: Opportunity Mapping

Now synthesize everything into actionable opportunities:

> "Given all the unspoken insights, fragile assumptions, and blind spots we've found — what are the 3 highest-leverage entry points or strategic moves? For each, what's the evidence, what's the risk, and what would you need to validate first?"

Run this as a dedicated analysis pass with all prior analysis. Use the prompt from `references/analyst-prompt.md` (Section: Opportunity Mapping).

**Present** as an opportunity matrix:

| Opportunity | Evidence | Risk | Validation Needed | Leverage (1-5) |
|-------------|----------|------|-------------------|----------------|
| ... | ... | ... | ... | ... |

**Checkpoint:** "These are the highest-leverage opportunities I see. Which ones resonate? Should I develop any of them into a concrete action plan?"

---

## Phase 7: Action Plan & Save

Based on the user's selections from Phase 6, create a concrete action plan:

1. **Immediate next steps** (this week)
2. **Validation experiments** (this month)
3. **Strategic moves** (this quarter)

### Save the Document

Compile ALL phases into a single research document and save it.

Use this format:

```markdown
---
id: RESEARCH-YYYY-MM-DD-attack-surface-{slug}
created: YYYY-MM-DD
topic: Attack Surface Analysis — {Topic}
sources: [list of source types used]
search_tools: [Exa MCP / built-in web tools]
tags: [attack-surface, market-research, {topic-tags}]
---

# Attack Surface: {Topic}

## Executive Summary
[3-5 bullet points with the most important findings]

## Research Brief
[From Phase 1]

## Source Dossier Summary
[From Phase 2 — source counts and key themes]

## Unspoken Insights
[From Phase 3]

## Fragile Assumptions
[From Phase 4 — the assumption map]

## Investor Stress-Test
[From Phase 5 — questions, answers, confidence levels]

## Opportunity Matrix
[From Phase 6]

## Action Plan
[From Phase 7]

## Raw Sources
[Links to all sources consulted]
```

Save to the project root as `RESEARCH-YYYY-MM-DD-attack-surface-{slug}.md`. Tell the user the file path and offer to discuss any findings further.

---

## Delegation Guidance

This skill works without subagents. Use the main thread by default, and only delegate when the user explicitly asks for subagents or parallel agent work and the environment supports it.

Read the reference files for detailed prompt templates:

- `references/gatherer-prompt.md` — Prompt template for Phase 2 source collection gatherers
- `references/analyst-prompt.md` — Prompt templates for Phases 3-6 analysis passes

When delegating:

- Phase 2: Launch up to 4 gatherers in parallel, one per search focus
- Phases 3-6: Run sequentially, because each pass depends on prior findings
- Use a normal Codex subagent type that fits the environment; do not depend on Claude-specific agent naming
- Give gatherers the research brief, search tool instructions, and their focus area
- Give analysis passes a condensed Source Dossier plus the raw-source appendix or links when possible; do not bloat context with unnecessary full-page dumps

### Token Budget

This skill may require 6-10 major research and analysis passes. Estimated cost:

- Phase 2: 4-6 gatherer passes x ~5-15K tokens each
- Phases 3-6: 4 analysis passes x ~10-20K tokens each
- Total: ~60-170K tokens per full research session
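
The total range follows from summing the lower and upper bounds of the two line items; as a quick arithmetic check (illustrative Python, not part of the skill):

```python
# Combine the per-phase lower and upper bounds from the estimate above.
gather_min, gather_max = 4 * 5_000, 6 * 15_000        # Phase 2: 4-6 passes x 5-15K
analysis_min, analysis_max = 4 * 10_000, 4 * 20_000   # Phases 3-6: 4 passes x 10-20K

total_min = gather_min + analysis_min  # 60_000
total_max = gather_max + analysis_max  # 170_000
```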

---

## Common Mistakes

| Mistake | Fix |
|---------|-----|
| Skipping the Phase 1 briefing | The research brief focuses everything — never skip it |
| Generic searches | Use specific, targeted queries from the research brief |
| Presenting analysis without evidence | Every insight must cite specific sources |
| Moving past weak stress-test answers | Always run iterative deepening on weak answers |
| Forgetting to save | Always save the final document at the end |
| Ignoring user-provided sources | Crawl them FIRST — the user chose them for a reason |
| Not checking available search tools first | Decide on Exa vs. built-in web tools before collecting sources |