mirror of
https://github.com/affaan-m/everything-claude-code.git
synced 2026-04-14 22:13:41 +08:00
| name | description |
|---|---|
| automation-audit-ops | Evidence-first automation audit workflow for Hermes. Use when auditing cron jobs, tooling, connectors, auth wiring, MCP surfaces, local-system parity, or automation overlap before fixing or pruning anything. |
Automation Audit Ops
Use this when the user asks what automations are live, which jobs are broken, where overlap exists, or what tooling and connectors Hermes is actually benefiting from right now.
Skill Stack
Pull these skills into the workflow when relevant:
- `continuous-agent-loop` for multi-step audits that cross cron, tooling, and config surfaces
- `agentic-engineering` for decomposing the audit into verifiable units with a clear done condition
- `terminal-ops` when the audit turns into a concrete local fix or command run
- `research-ops` when local inventory has to be compared against current docs, platform support, or missing-capability claims
- `search-first` before inventing a new helper, wrapper, or inventory path
- `eval-harness` mindset for exact failure signatures, proof paths, and post-fix verification
When To Use
- user asks `what automations do i have set up?`
- user asks to audit cron overlap, redundancy, broken jobs, or job coverage
- user asks which tools, connectors, auth surfaces, MCP servers, or wrappers are actually live
- user asks what is ported from another local agentic system, what is still missing, or what should be wired next
- the task spans both local inventory and a small number of high-impact fixes
Workflow
- Start from prepared evidence when it exists:
- read the prepared failure summary, request patterns, docs scan, and skill-sync notes before opening raw logs
- if the prepared failure summary says there is no dominant cluster, do not spray-read raw log bundles anyway
- Inventory the live local surface before theorizing:
  - for cron work, inspect `$HERMES_HOME/cron/jobs.json`, the latest relevant `$HERMES_HOME/cron/output/<job>/...` files, and matching `$HERMES_HOME/cron/delivery-state/*.json`
  - for tooling or connector parity, inspect live config, plugins, helper scripts, wrappers, auth files, and verification logs before claiming something is missing
  - when the user names another local system or repo, inspect that live source too before saying Hermes lacks or already has the capability
- Classify each finding by failure class:
- active broken logic
- auth or provider outage
- stale error that has not rerun yet
- MCP or schema-level infrastructure break
- overlap or redundancy
- missing or unported capability
- Classify each surfaced integration by live state:
- configured
- authenticated
- recently verified
- stale or broken
- missing
- Answer inventory questions with proof, not memory:
- group automations by surface such as cron, messaging, content, GitHub, research, billing, or health
- include schedules, current status, and the proof path behind each claim
  - for parity audits, separate `already ported`, `available but not wired`, and `missing entirely`
  - for overlap audits, end with `keep`, `merge`, `cut`, or `fix next`, plus the evidence path behind each call
- Freeze audit scope before changing anything:
- if the user asked for an inventory, audit table, comparison, or otherwise kept the task read-only, stay read-only until the user explicitly asks for fixes
- collect evidence with config reads, logs, status files, and non-writing proving steps first
- do not rewrite cron, config, wrappers, or auth surfaces just because a likely fix is visible
- Fix only the highest-impact proved issues:
- keep the pass bounded to one to three changes
- prefer durable config, instruction, or helper-script fixes over one-off replies
- if the prepared evidence shows a failure happened before a later config change, record that as stale-runtime or stale-status risk instead of rewriting the config blindly
- Verify after any change:
- rerun the smallest proving step
- regenerate any affected context artifacts
- record exact remaining risks when runtime state is outside the scope of the current pass
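The inventory and classification steps above can be sketched in code. This is a minimal illustration, not a documented Hermes API: the file layout follows the `$HERMES_HOME/cron/jobs.json` and `$HERMES_HOME/cron/delivery-state/` paths named in the workflow, but the JSON field names (`name`, `schedule`, `last_status`) and the evidence flags are assumptions.

```python
import json
from pathlib import Path


def inventory_cron(hermes_home: Path) -> list[dict]:
    """Pair each configured cron job with its last recorded delivery state.

    Field names inside jobs.json and the delivery-state files are
    illustrative assumptions, not a documented Hermes schema.
    """
    jobs_file = hermes_home / "cron" / "jobs.json"
    jobs = json.loads(jobs_file.read_text())
    rows = []
    for job in jobs:
        state_path = hermes_home / "cron" / "delivery-state" / f"{job['name']}.json"
        state = json.loads(state_path.read_text()) if state_path.exists() else {}
        rows.append({
            "job": job["name"],
            "schedule": job.get("schedule", "unknown"),
            "last_status": state.get("last_status", "never-ran"),
            # every claim carries the file path that proves it
            "proof": str(state_path if state_path.exists() else jobs_file),
        })
    return rows


def live_state(configured: bool, authenticated: bool,
               recently_verified: bool, broken: bool) -> str:
    """Map evidence flags onto the live-state labels used above."""
    if not configured:
        return "missing"
    if broken:
        return "stale or broken"
    if recently_verified:
        return "recently verified"
    if authenticated:
        return "authenticated"
    return "configured"
```

Each inventory row carries a `proof` path, so the resulting audit table answers with files rather than memory, and `live_state` only upgrades a label when the stronger evidence flag is actually present.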
Pitfalls
- do not treat `last_status=error` as proof of a current bug if the job has not rerun since recovery
- do not conflate a provider outage or MCP schema error with job-specific prompt logic
- do not claim a tool or connector is live just because a skill references it
- do not treat `present in config` as the same thing as `authenticated` or `recently verified`
- do not say Hermes is missing a capability before comparing the named local system or repo the user pointed at
- do not answer `what automations do i have set up?` from memory without reading the current local inventory
- do not open broad raw log bundles when the prepared summary already says there is no dominant cluster
- do not turn an audit-only inventory pass into config or cron edits unless the user expands scope
- do not fix low-value redundancy before resolving the concrete broken path the user actually asked about
Verification
- each important claim points to a live file, log, config entry, or exact failure signature
- each surfaced integration is labeled `configured`, `authenticated`, `recently verified`, `stale/broken`, or `missing`
- each fix names the proving command, regenerated artifact, or re-read output that confirmed it
- remaining risks clearly distinguish stale status, runtime drift, and unresolved infrastructure blockers
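One minimal shape for the "proving command" requirement above is a runner that captures a citable exit code and output snippet. This is a hedged sketch, assuming any shell-runnable check; the example command is a portable placeholder, not a real Hermes proving step:

```python
import subprocess
import sys


def run_proving_step(cmd: list[str]) -> dict:
    """Run the smallest proving step and keep the evidence needed to cite it."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return {
        "command": " ".join(cmd),
        "exit_code": result.returncode,
        "proved": result.returncode == 0,
        # keep a short citable snippet, not the full log
        "output_tail": result.stdout[-500:],
    }


# Placeholder check: the current Python interpreter stands in for a real
# proving command such as a config re-read or a job dry run.
evidence = run_proving_step([sys.executable, "-c", "print('ok')"])
```

Recording the exact command and exit status alongside each fix is what lets a later pass distinguish a verified change from an assumed one.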