Key Insights
CI and test-related PRs have perfect merge rates (100%): Prompts that target CI infrastructure or test coverage appear to have clear, measurable success criteria that reviewers can easily verify.
Prompts referencing specific technical artifacts succeed more: The most successful merged PRs reference exact file names, function names, CI job names, or error messages — providing concrete context.
Engine-specific PRs (codex, claude) have lower success rates: PRs mentioning specific AI engines in their titles (codex: 14 merged vs 17 closed; claude: 9 merged vs 11 closed) perform slightly below average, suggesting engine-configuration changes face more scrutiny.
Prompt length matters less than specificity: Merged and closed PRs have similar median body lengths (~333 words), indicating that quality of context outweighs quantity.
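The body-length comparison behind the last insight can be reproduced with a short sketch. The sample records below are hypothetical stand-ins; real PR bodies would come from the GitHub API, and the field name "body" is an assumption about the record shape:

```python
from statistics import median

def median_body_words(prs):
    """Median word count of PR body text across a list of PR records."""
    return median(len((pr.get("body") or "").split()) for pr in prs)

# Hypothetical sample records; real data would come from the GitHub API.
merged_prs = [{"body": "fix: remove unused field to fix lint-go CI failure"}]
closed_prs = [{"body": "enable requireCleanGit"}, {"body": None}]

print(median_body_words(merged_prs))
print(median_body_words(closed_prs))
```

Comparing the two medians (rather than means) keeps a few very long PR descriptions from skewing the result, which is why similar medians for merged and closed PRs suggest length alone is not the differentiator.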
Recommendations
Based on today's analysis:
DO: Use conventional commit prefixes (fix:, feat:, ci:, test:) — PRs with these prefixes average 77–100% success vs 75.5% for untyped titles
DO: Include specific error messages, CI job names, or file/function references in the prompt — these are hallmarks of successful PRs
DO: Focus on test and CI improvements — 100% merge rate suggests these are well-received change types
AVOID: Vague "improvement" or "enhancement" descriptions without concrete before/after context
AVOID: Broad scope changes that touch many unrelated areas — engine-configuration PRs and multi-area refactors show lower success rates
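The per-prefix success rates cited above can be derived with a sketch like the following. The regex and the sample titles are illustrative assumptions, not the report's actual pipeline; real titles and states could be pulled with something like `gh pr list --state all --json title,state`:

```python
import re
from collections import defaultdict

# Conventional-commit prefix, e.g. "fix:", "feat(scope):", "ci:".
PREFIX_RE = re.compile(r"^([a-z]+)(\([^)]*\))?!?:")

def merge_rates(prs):
    """Merge rate per commit-type prefix; unprefixed titles fall under 'untyped'."""
    counts = defaultdict(lambda: [0, 0])  # prefix -> [merged, total]
    for pr in prs:
        m = PREFIX_RE.match(pr["title"])
        key = m.group(1) if m else "untyped"
        counts[key][1] += 1
        if pr["state"] == "merged":
            counts[key][0] += 1
    return {k: merged / total for k, (merged, total) in counts.items()}

# Hypothetical sample; real data would span all 1,000 PRs in the analysis window.
sample = [
    {"title": "fix: remove unused field", "state": "merged"},
    {"title": "ci: speed up lint job", "state": "merged"},
    {"title": "Update stuff", "state": "closed"},
]
print(merge_rates(sample))
```

Note that `fix(security): ...` is bucketed under `fix` here; whether scoped prefixes should be tracked separately is a design choice the report leaves open.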
Historical Trends
First run — baseline established. Historical trend data will appear in future reports.
Summary
Analysis Period: Last 30 days
Total PRs: 1,000 | Merged: 775 (77.5%) | Closed (not merged): 221 (22.1%) | Open: 4 (0.4%)
This is the first run of the Copilot PR Prompt Pattern Analysis, establishing a baseline for future trend tracking.
Prompt Categories and Success Rates
Commit Type Success Rates
[Chart: merge rates by commit-type prefix (ci:, test:, docs:, fix:, feat:, perf:, chore:, refactor:)]
Prompt Analysis
✅ Successful Prompt Patterns
Common characteristics in merged PRs:
Conventional commit type prefixes (fix:, feat:, docs:, etc.)
Example successful prompts:
fix: remove unused name field in listFieldCase to fix lint-go CI (#29239, fix:) — described exact CI failure, specific field name → Merged
feat: parameterize safe-output boolean controls for reusable workflows (feat:) — described limitation and need → Merged
feat: parameterize safe-output PR policy fields in workflow_call workflows (feat:) — concrete use case with before/after context → Merged
❌ Unsuccessful Prompt Patterns
Common characteristics in closed PRs:
Engine names (codex, claude) appear frequently in closed PRs
Example closed prompts:
Enable requireCleanGit in skill optimizer config — configuration change without sufficient context → Closed
fix(security): Replace curl-pipe-bash pattern — security fix with good intent but potentially too broad in scope → Closed
feat: add shared/gh-skill.md workflow — new workflow addition → Closed