| name | description | origin |
|---|---|---|
| research-ops | Evidence-first current-state research workflow for ECC. Use when the user wants fresh facts, comparisons, enrichment, or a recommendation built from current public evidence and any supplied local context. | ECC |
# Research Ops
Use this when the user asks to research something current, compare options, enrich people or companies, or turn repeated lookups into a monitored workflow.
This is the operator wrapper around the repo's research stack. It is not a replacement for deep-research, exa-search, or market-research; it tells you when and how to use them together.
## Skill Stack
Pull these ECC-native skills into the workflow when relevant:
- exa-search for fast current-web discovery
- deep-research for multi-source synthesis with citations
- market-research when the end result should be a recommendation or ranked decision
- lead-intelligence when the task is people/company targeting instead of generic research
- knowledge-ops when the result should be stored in durable context afterward
## When to Use
- user says "research", "look up", "compare", "who should I talk to", or "what's the latest"
- the answer depends on current public information
- the user already supplied evidence and wants it factored into a fresh recommendation
- the task may be recurring enough that it should become a monitor instead of a one-off lookup
## Guardrails
- do not answer current questions from stale memory when fresh search is cheap
- separate:
  - sourced fact
  - user-provided evidence
  - inference
  - recommendation
- do not spin up a heavyweight research pass if the answer is already in local code or docs
## Workflow
### 1. Start from what the user already gave you
Normalize any supplied material into:
- already-evidenced facts
- needs verification
- open questions
Do not restart the analysis from zero if the user already built part of the model.
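The triage in step 1 can be sketched as a small function. The data shapes here are illustrative assumptions, not part of any ECC skill API: each supplied item is modeled as a (claim, source) pair, and a trailing question mark stands in for "this is an open question".

```python
# Hypothetical sketch of step 1: triage user-supplied material into the
# three buckets before any searching begins.
def triage(items):
    """items: list of (claim, source) pairs; source may be None."""
    buckets = {
        "already_evidenced": [],
        "needs_verification": [],
        "open_questions": [],
    }
    for claim, source in items:
        if source:                   # user already attached evidence
            buckets["already_evidenced"].append((claim, source))
        elif claim.endswith("?"):    # crude stand-in for "open question"
            buckets["open_questions"].append(claim)
        else:                        # asserted but unsourced
            buckets["needs_verification"].append(claim)
    return buckets
```

The point of the sketch is the order of operations: sort what the user gave you first, so the search phase only targets the second and third buckets.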
### 2. Classify the ask
Choose the right lane before searching:
- quick factual answer
- comparison or decision memo
- lead/enrichment pass
- recurring monitoring candidate
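The four lanes above can be encoded as data. The keyword heuristic below is purely illustrative; real classification is a judgment call, and the trigger phrases are assumptions, not a fixed vocabulary:

```python
# Illustrative keyword heuristic for step 2's four lanes. The real
# decision is judgment-based; this only names the lanes explicitly.
def classify_ask(ask: str) -> str:
    text = ask.lower()
    if any(w in text for w in ("every week", "keep me updated", "monitor")):
        return "recurring monitoring candidate"
    if any(w in text for w in ("compare", " vs ", "which should")):
        return "comparison or decision memo"
    if any(w in text for w in ("who should i talk to", "enrich", "lead list")):
        return "lead/enrichment pass"
    return "quick factual answer"
```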
### 3. Take the lightest useful evidence path first
- use exa-search for fast discovery
- escalate to deep-research when synthesis or multiple sources matter
- use market-research when the outcome should end in a recommendation
- hand off to lead-intelligence when the real ask is target ranking or warm-path discovery
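The escalation order can be made explicit as a ladder, lightest lane first. The boolean flags are hypothetical judgments the operator makes, not real inputs to these skills:

```python
# Escalation ladder for step 3: most specific ask wins, otherwise
# fall through to the lightest lane (exa-search).
def pick_lane(needs_synthesis=False, wants_recommendation=False, is_targeting=False):
    if is_targeting:
        return "lead-intelligence"
    if wants_recommendation:
        return "market-research"
    if needs_synthesis:
        return "deep-research"
    return "exa-search"
```

Note the check order: targeting and recommendation asks override the generic lanes, so a comparison that should end in a ranked decision lands in market-research even though deep-research could also synthesize the sources.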
### 4. Report with explicit evidence boundaries
For important claims, say whether they are:
- sourced facts
- user-supplied context
- inference
- recommendation
Freshness-sensitive answers should include concrete dates.
### 5. Decide whether the task should stay manual
If the user is likely to ask the same research question repeatedly, say so explicitly and recommend a monitoring or workflow layer rather than repeating the same manual search indefinitely.
## Output Format
QUESTION TYPE
- factual / comparison / enrichment / monitoring
EVIDENCE
- sourced facts
- user-provided context
INFERENCE
- what follows from the evidence
RECOMMENDATION
- answer or next move
- whether this should become a monitor
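A minimal renderer for the format above, as a sketch: the parameter names mirror the section labels but are assumptions, not a required schema, and the date shown in the usage note is only an example of the concrete-dates rule from step 4.

```python
# Sketch: emit the report sections in the order defined by the
# Output Format. Field names are assumptions mirroring the labels.
def render_report(question_type, sourced, user_context, inference,
                  recommendation, monitor):
    lines = ["QUESTION TYPE", f"- {question_type}", "EVIDENCE"]
    lines += [f"- {s}" for s in sourced]          # sourced facts
    lines += [f"- {c}" for c in user_context]     # user-provided context
    lines += ["INFERENCE"] + [f"- {i}" for i in inference]
    lines += ["RECOMMENDATION", f"- {recommendation}",
              f"- monitor: {'yes' if monitor else 'no'}"]
    return "\n".join(lines)
```

Usage would look like `render_report("factual", ["vendor shipped v2 (as of 2024-05-01)"], ["user's internal benchmark"], ["v2 likely closes the gap"], "adopt v2", False)`, keeping each claim in its labeled section.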
## Pitfalls
- do not mix inference into sourced facts without labeling it
- do not ignore user-provided evidence
- do not use a heavy research lane for a question local repo context can answer
- do not give freshness-sensitive answers without dates
## Verification
- important claims are labeled by evidence type
- freshness-sensitive outputs include dates
- the final recommendation matches the actual research mode used