mirror of
https://github.com/affaan-m/everything-claude-code.git
synced 2026-04-07 09:43:30 +08:00
3.8 KiB
| name | description | origin |
|---|---|---|
| knowledge-ops | Evidence-first memory and context retrieval workflow for Hermes. Use when the user asks what Hermes remembers, points to OpenClaw or Hermes memory, or wants context recovered from a compacted session without re-reading already loaded files. | Hermes |
# Knowledge Ops
Use this when the user asks Hermes to remember something, recover an older conversation, pull context from a compacted session, or find information that "should be in memory somewhere."
## Skill Stack

Pull these companion skills into the workflow when relevant:

- `continuous-learning-v2` for evidence-backed pattern capture and cross-session learning
- `continuous-agent-loop` when the lookup spans multiple stores, compaction summaries, and follow-up recovery steps
- `search-first` before inventing a new lookup path or assuming a store is empty
- `eval-harness` mindset for exact source attribution and negative-search reporting
## When To Use

- the user says `do you remember`, `it was in memory`, `it was in openclaw`, `find the old session`, or similar
- the prompt contains a compaction summary or `[Files already read ... do NOT re-read these]`
- the prompt says `use the context summary above`, `proceed`, or otherwise hands off loaded context plus a concrete writing, editing, or response task
- the answer depends on Hermes workspace memory, Supermemory, session logs, or the historical knowledge base
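The trigger phrases above can be sketched as a simple matcher. This is only an illustration: the function name and the exact phrase list are assumptions, not part of the skill definition.

```python
import re

# Illustrative trigger phrases, mirroring the "When To Use" bullets above.
TRIGGERS = [
    r"do you remember",
    r"it was in (memory|openclaw)",
    r"find the old session",
    r"\[Files already read .* do NOT re-read these\]",
    r"use the context summary above",
]

def should_use_knowledge_ops(prompt: str) -> bool:
    """Return True if any recall trigger phrase appears in the prompt."""
    return any(re.search(pat, prompt, re.IGNORECASE) for pat in TRIGGERS)
```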
## Workflow

- Start from the evidence already in the prompt:
  - treat compaction summaries and `do NOT re-read` markers as usable context
  - if the prompt already says `use the context summary above` or asks you to proceed with writing, editing, or responding, continue from that loaded context first and search only for the missing variables
  - do not waste turns re-reading the same files unless the summary is clearly insufficient
- Search in a fixed order before saying `not found`, unless the user already named the store:
  - `mcp_supermemory_recall` with a targeted query
  - grep `$HERMES_WORKSPACE/memory/`
  - grep `$HERMES_WORKSPACE/` more broadly
  - `session_search` for recent Hermes conversations
  - grep `$KNOWLEDGE_BASE_ROOT/` and `$OPENCLAW_MEMORY_ROOT/` for historical context
- If the user says the answer is in a specific memory store, pivot there immediately after the initial targeted recall:
  - `openclaw memory` means favor `$KNOWLEDGE_BASE_ROOT/` and `$OPENCLAW_MEMORY_ROOT/`
  - `not in this session` means stop digging through the current thread and move to persistent stores instead of re-reading current-session files
- Keep the search narrow and evidence-led:
  - reuse names, dates, channels, account names, or quoted phrases from the user
  - search the most likely store first instead of spraying generic queries everywhere
- Report findings with source evidence:
  - give the file path, session id, date, or memory store
  - distinguish between a direct hit, a likely match, and an inference
- If nothing turns up, say which sources were checked and what to try next. Do not say `not found` after a single failed search.
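The fixed-order fallthrough over the file-backed stores might be sketched as follows. This is a rough illustration under assumptions: the helper name is hypothetical, `mcp_supermemory_recall` and `session_search` are tool calls outside this sketch, and only the grep steps are shown.

```python
import subprocess
from pathlib import Path

def fixed_order_search(query: str, workspace: Path) -> list[str]:
    """Try each file store in order; stop at the first store with hits."""
    stores = [
        workspace / "memory",  # narrow store first
        workspace,             # then the whole workspace, more broadly
    ]
    for store in stores:
        if not store.is_dir():
            continue
        # -r recursive, -i case-insensitive, -l list matching files only
        proc = subprocess.run(
            ["grep", "-ril", query, str(store)],
            capture_output=True, text=True,
        )
        hits = [line for line in proc.stdout.splitlines() if line]
        if hits:
            return hits  # file paths double as source evidence
    # empty result: the caller must report which stores were checked,
    # never a bare "not found" after a single failed search
    return []
```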
## Pitfalls
- do not ignore a compaction summary and start over from zero
- do not keep re-reading files the prompt says are already loaded
- do not turn a loaded-context handoff into a pure retrieval loop when the user already asked for an actual draft, edit, or response
- do not keep searching the current session after the user already named OpenClaw or another persistent store as the likely source
- do not answer from vague memory without a source path, date, or session reference
- do not stop after one failed memory source when others remain
## Verification

- the response names the source store or file
- the response separates direct evidence from inference
- failed lookups list the sources checked, not just a bare `not found`
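A minimal report structure satisfying this checklist might look like the following. The class and field names are illustrative assumptions, not part of the skill.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryFinding:
    claim: str
    source: str  # file path, session id, date, or memory store name
    tier: str    # "direct hit", "likely match", or "inference"

@dataclass
class LookupReport:
    findings: list[MemoryFinding] = field(default_factory=list)
    sources_checked: list[str] = field(default_factory=list)

    def render(self) -> str:
        if not self.findings:
            # never a bare "not found": list the sources that were checked
            return "No match. Checked: " + ", ".join(self.sources_checked)
        # each finding carries its source and an explicit confidence tier
        return "\n".join(
            f"[{f.tier}] {f.claim} (source: {f.source})"
            for f in self.findings
        )
```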