mirror of https://github.com/affaan-m/everything-claude-code.git (synced 2026-04-13 05:03:28 +08:00)
| name | description |
|---|---|
| research-ops | Evidence-first research workflow for Hermes. Use when answering current questions, evaluating a market or tool, enriching leads, comparing strategic options, or deciding whether a request should become ongoing monitored data collection. |
# Research Ops
Use this when the user asks Hermes to research something current, compare options, enrich people or companies, or turn repeated lookups into an ongoing monitoring workflow.
## Skill Stack
Pull these imported skills into the workflow when relevant:
- `deep-research` for multi-source cited synthesis
- `market-research` for decision-oriented framing
- `exa-search` for first-pass discovery and current-web retrieval
- `continuous-agent-loop` when the task spans user-provided evidence, fresh verification, and a final recommendation across multiple turns
- `data-scraper-agent` when the user really needs recurring collection or monitoring
- `search-first` before building new scraping or enrichment logic
- `eval-harness` mindset for claim quality, freshness, and explicit uncertainty
## When To Use
- user says `research`, `look up`, `find`, `who should i talk to`, `what's the latest`, or similar
- the answer depends on current public information, external sources, or a ranked set of candidates
- the user pastes a compaction summary, copied research, manual calculations, or says `factor this in`
- the user asks `should i do X or Y`, `compare these options`, or wants an explicit recommendation under uncertainty
- the task sounds recurring enough that a scraper or scheduled monitor may be better than a one-off search
## Workflow
- Start from the evidence already in the prompt:
  - treat compaction summaries, pasted research, copied calculations, and quoted assumptions as loaded inputs
  - normalize them into `user-provided evidence`, `needs verification`, and `open questions`
  - do not restart the analysis from zero if the user already gave you a partial model
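The triage in this first step can be sketched as a small pass that sorts loaded inputs into the three buckets. This is an illustrative sketch only — the type and function names here (`EvidenceTriage`, `triage`, the `kind` tags) are hypothetical and not part of the skill:

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceTriage:
    """Hypothetical buckets for inputs already present in the prompt."""
    user_provided: list[str] = field(default_factory=list)      # pasted research, calculations
    needs_verification: list[str] = field(default_factory=list) # claims to re-check against fresh sources
    open_questions: list[str] = field(default_factory=list)     # gaps the search still has to fill

def triage(items: list[tuple[str, str]]) -> EvidenceTriage:
    """Sort (kind, text) pairs into the three buckets; the caller supplies the kind tag."""
    t = EvidenceTriage()
    for kind, text in items:
        if kind == "user":
            t.user_provided.append(text)
        elif kind == "unverified":
            t.needs_verification.append(text)
        else:
            t.open_questions.append(text)
    return t
```

The point of the structure is the third rule above: if `user_provided` is non-empty, the analysis starts from it rather than from zero.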
- Classify the ask before searching:
  - quick factual answer
  - decision memo or comparison
  - lead list or enrichment
  - recurring monitoring request
- Build the decision surface before broad searching when the ask is comparative:
  - list the options, decision criteria, constraints, and assumptions explicitly
  - keep concrete numbers and dates attached to the option they belong to
  - mark which variables are already evidenced and which still need outside verification
- Start with the fastest evidence path:
  - use `exa-search` first for broad current-web discovery
  - if the question is about a local wrapper, config, or checked-in code path, inspect the live local source before making any web claim
- Deepen only where the evidence justifies it:
  - use `deep-research` when the user needs synthesis, citations, or multiple angles
  - use `market-research` when the result should end in a recommendation, ranking, or go/no-go call
  - keep `continuous-agent-loop` discipline when the task spans user evidence, fresh searches, and recommendation updates across interruptions
- Separate fact from inference:
  - label sourced facts clearly
  - label user-provided evidence clearly
  - label inferred fit, ranking, or recommendation as inference
  - include dates when freshness matters
- Decide whether this should stay manual:
  - if the user will likely ask for the same scan repeatedly, use `data-scraper-agent` patterns or propose a monitored collection path instead of repeating the same manual research forever
- Report with evidence:
  - group the answer into sourced facts, user-provided evidence, inference, and recommendation when the ask is a comparison or decision
  - cite the source or local file behind each important claim
  - if evidence is thin or conflicting, say so directly
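The reporting step can be sketched as a grouping pass over labeled claims. Everything here is a hypothetical shape chosen for illustration — the `layer` tags and `build_report` name are assumptions, not part of the skill:

```python
def build_report(claims: list[dict]) -> dict[str, list[str]]:
    """Group claims into the four reporting layers. Each claim dict carries
    'layer' ('fact' | 'user' | 'inference' | 'recommendation'), 'text', and
    an optional 'source' (URL or local file path) cited next to the claim."""
    layers: dict[str, list[str]] = {
        "sourced facts": [],
        "user-provided evidence": [],
        "inference": [],
        "recommendation": [],
    }
    names = {"fact": "sourced facts", "user": "user-provided evidence",
             "inference": "inference", "recommendation": "recommendation"}
    for c in claims:
        line = c["text"]
        if c.get("source"):
            line += f" [{c['source']}]"  # attach the citation to the claim it backs
        layers[names[c["layer"]]].append(line)
    return layers
```

Keeping the layers as separate keys makes the labeling rules mechanical: a claim with no `source` can never silently land in `sourced facts`.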
## Pitfalls
- do not answer current questions from stale memory when a fresh search is cheap
- do not conflate local code-backed behavior with market or web evidence
- do not ignore pasted research or compaction context and redo the whole investigation from scratch
- do not mix user-provided assumptions into sourced facts without labeling them
- do not present unsourced numbers or rankings as facts
- do not spin up a heavy deep-research pass for a quick capability check that local code can answer
- do not leave the comparison criteria implicit when the user asked for a recommendation
- do not keep one-off researching a repeated monitoring ask when automation is the better fit
## Verification
- important claims have a source, file path, or explicit inference label
- user-provided evidence is surfaced as a distinct layer when it materially affects the answer
- freshness-sensitive answers include concrete dates when relevant
- recurring-monitoring recommendations state whether the task should remain manual or graduate to a scraper/workflow