Mirror of https://github.com/affaan-m/everything-claude-code.git (synced 2026-03-30 13:43:26 +08:00)
3.2 KiB
| name | description |
|---|---|
| research-ops | Evidence-first research workflow for Hermes. Use when answering current questions, evaluating a market or tool, enriching leads, or deciding whether a request should become ongoing monitored data collection. |
# Research Ops
Use this when the user asks Hermes to research something current, compare options, enrich people or companies, or turn repeated lookups into an ongoing monitoring workflow.
## Skill Stack

Pull these imported skills into the workflow when relevant:

- `deep-research` for multi-source cited synthesis
- `market-research` for decision-oriented framing
- `exa-search` for first-pass discovery and current-web retrieval
- `data-scraper-agent` when the user really needs recurring collection or monitoring
- `search-first` before building new scraping or enrichment logic
- `eval-harness` mindset for claim quality, freshness, and explicit uncertainty
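The stack above can be pictured as a routing table from research need to skill. A minimal Python sketch, assuming a plain dictionary lookup; only the skill names on the right come from this document, while the mapping keys and the `pick_skill` function are illustrative:

```python
# Illustrative routing table: research need -> imported skill.
# Only the skill names on the right are from the stack above.
SKILL_STACK = {
    "multi-source cited synthesis": "deep-research",
    "decision-oriented framing": "market-research",
    "first-pass current-web discovery": "exa-search",
    "recurring collection or monitoring": "data-scraper-agent",
    "check before building new scrapers": "search-first",
    "claim quality and freshness": "eval-harness",
}

def pick_skill(need: str) -> str:
    """Look up the skill for a need, defaulting to first-pass discovery."""
    return SKILL_STACK.get(need, "exa-search")
```

Defaulting to `exa-search` matches the workflow's rule that broad current-web discovery is the fastest first evidence path.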
## When To Use

- user says `research`, `look up`, `find`, `who should i talk to`, `what's the latest`, or similar
- the answer depends on current public information, external sources, or a ranked set of candidates
- the task sounds recurring enough that a scraper or scheduled monitor may be better than a one-off search
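The trigger phrases in the first bullet lend themselves to a cheap first-pass check. A sketch, assuming naive substring matching; the function name is hypothetical:

```python
# Trigger phrases from the bullet above; matching is a naive substring check.
TRIGGERS = ("research", "look up", "find", "who should i talk to", "what's the latest")

def looks_like_research_ask(message: str) -> bool:
    """Return True if the message contains any research trigger phrase."""
    text = message.lower()
    return any(trigger in text for trigger in TRIGGERS)
```

In practice this would be one signal among several: the other two bullets (dependence on current information, recurring feel) call for judgment rather than string matching.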
## Workflow

- Classify the ask before searching:
  - quick factual answer
  - decision memo or comparison
  - lead list or enrichment
  - recurring monitoring request
- Start with the fastest evidence path:
  - use `exa-search` first for broad current-web discovery
  - if the question is about a local wrapper, config, or checked-in code path, inspect the live local source before making any web claim
- Deepen only where the evidence justifies it:
  - use `deep-research` when the user needs synthesis, citations, or multiple angles
  - use `market-research` when the result should end in a recommendation, ranking, or go/no-go call
- Separate fact from inference:
  - label sourced facts clearly
  - label inferred fit, ranking, or recommendation as inference
  - include dates when freshness matters
- Decide whether this should stay manual:
  - if the user will likely ask for the same scan repeatedly, use `data-scraper-agent` patterns or propose a monitored collection path instead of repeating the same manual research forever
- Report with evidence:
  - cite the source or local file behind each important claim
  - if evidence is thin or conflicting, say so directly
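The classify-then-route steps above can be sketched as code. A minimal sketch, assuming keyword heuristics; the keywords and function names are assumptions, while the four ask types and the skill names come from this document:

```python
# Hypothetical classifier for the four ask types named in the first step.
# The keyword lists are illustrative heuristics, not from the document.
def classify_ask(ask: str) -> str:
    text = ask.lower()
    if any(w in text for w in ("monitor", "every week", "keep watching")):
        return "recurring monitoring request"
    if any(w in text for w in ("leads", "enrich", "contact list")):
        return "lead list or enrichment"
    if any(w in text for w in ("compare", "versus", "go/no-go", "recommend")):
        return "decision memo or comparison"
    return "quick factual answer"

# Each ask type maps to an evidence path; exa-search stays the first pass
# everywhere except a recurring ask, which graduates to data-scraper-agent.
EVIDENCE_PATH = {
    "quick factual answer": ["exa-search"],
    "decision memo or comparison": ["exa-search", "market-research"],
    "lead list or enrichment": ["exa-search", "deep-research"],
    "recurring monitoring request": ["data-scraper-agent"],
}
```

A real agent would classify with judgment rather than keywords, but the shape is the same: decide the ask type first, then commit to the cheapest evidence path that answers it.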
## Pitfalls
- do not answer current questions from stale memory when a fresh search is cheap
- do not conflate local code-backed behavior with market or web evidence
- do not present unsourced numbers or rankings as facts
- do not spin up a heavy deep-research pass for a quick capability check that local code can answer
- do not keep one-off researching a repeated monitoring ask when automation is the better fit
## Verification
- important claims have a source, file path, or explicit inference label
- freshness-sensitive answers include concrete dates when relevant
- recurring-monitoring recommendations state whether the task should remain manual or graduate to a scraper/workflow
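These checks can be expressed as assertions over a per-claim record. A sketch, assuming a hypothetical `Claim` shape whose fields mirror the three bullets above:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Claim:
    """Hypothetical record for one claim in a research report."""
    text: str
    source: Optional[str] = None       # URL or local file path
    is_inference: bool = False         # explicitly labeled as inference
    freshness_sensitive: bool = False
    as_of: Optional[str] = None        # concrete date, e.g. "2026-03-30"

def verify(claim: Claim) -> list[str]:
    """Return the verification failures for a single claim."""
    problems = []
    if claim.source is None and not claim.is_inference:
        problems.append("no source, file path, or inference label")
    if claim.freshness_sensitive and claim.as_of is None:
        problems.append("freshness-sensitive but missing a date")
    return problems
```

Running this over every important claim before reporting enforces the first two checks; the third (manual vs. graduate-to-scraper) stays a judgment call in the final recommendation.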