Deductive is good at asking clarifying questions when it needs to, so you don’t need to write the perfect prompt. But a few patterns reliably produce tighter, faster, more useful investigations. This page is the cheat sheet.
The four patterns that always work
1. Anchor in time
Vague: “Why is the API slow?” Anchored: “Why did the API’s p99 latency jump at 14:30 today and stay elevated?”
Time anchors give the agent a window to focus its evidence-gathering. “Yesterday between 11pm and 2am” beats “last night”. “From the last successful deploy onward” beats “recently”.
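A time anchor is what lets the agent turn a vague phrase into an explicit query window before it gathers any evidence. A minimal sketch of that translation, assuming UTC timestamps; the `service` and `metric` names are placeholders, not Deductive’s query API:

```python
from datetime import datetime, timedelta, timezone

# "Yesterday between 11pm and 2am" as an explicit UTC window.
now = datetime.now(timezone.utc)
midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
start = midnight - timedelta(hours=1)   # yesterday, 23:00
end = midnight + timedelta(hours=2)     # today, 02:00

# Hypothetical query shape -- the field names are illustrative only.
query = {
    "service": "api",
    "metric": "p99_latency_ms",
    "start": start.isoformat(),
    "end": end.isoformat(),
}
print(query)
```

Phrase the window yourself and the agent skips a clarifying round-trip.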
2. Anchor in change
If you suspect something changed (a deploy, a config flip, a feature flag), say so. The agent will pull the change set and correlate against it as a first-class hypothesis.
“Did anything in PR #1234 cause the payment errors after we merged it?”
“We flipped the cache_v2 flag to true at 12:00 PT. Did anything regress in the next hour?”
“Has any deploy in the last week touched code paths that the auth-service calls?”
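A change anchor works because it reduces the investigation to a before/after comparison around one timestamp. A minimal sketch of that check, with invented per-minute error counts around the 12:00 flag flip from the example above:

```python
from statistics import mean

# Invented data: 60 minutes of error counts before the cache_v2 flip,
# 60 minutes after. The flip happens at index 60.
flip_minute = 60
errors_per_min = [2, 1, 3, 2] * 15 + [9, 11, 8, 12] * 15

before = errors_per_min[:flip_minute]
after = errors_per_min[flip_minute:]
print(f"before: {mean(before):.1f}/min, after: {mean(after):.1f}/min")
if mean(after) > 2 * mean(before):
    print("regression candidate: correlate against the cache_v2 flip")
```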
3. Compare
Comparison prompts force the agent to fetch baselines, which is almost always the correct framing for “is this normal?”.
“Compare today’s checkout error rate to the same time last week.”
“Which services are noisier in their logs this week than last week?”
“How does the latency profile of payments look against the rolling 30-day baseline?”
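What a comparison prompt buys you is a baseline to diff against instead of an absolute judgment. A minimal sketch of the week-over-week framing, with invented hourly checkout error rates:

```python
# Invented data: error rate (%) for the same hours today and a week ago.
today = [0.4, 0.5, 2.1, 2.3, 2.0]
last_week = [0.4, 0.5, 0.6, 0.5, 0.4]

# Flag hours that run well above their own baseline, not some fixed bar.
for hour, (now, base) in enumerate(zip(today, last_week)):
    flag = "  <-- anomalous vs baseline" if now > 3 * base else ""
    print(f"hour {hour}: {now:.1f}% vs {base:.1f}%{flag}")
```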
4. Ask for the timeline, not the answer
If you don’t yet know what you’re looking at, ask the agent to assemble the timeline first. The right answer often falls out of the timeline.
“Walk me through what happened to the payment service between 14:00 and 15:00, in order.”
“Build me a timeline of every event (deploys, alerts, infra changes) touching the auth path yesterday.”
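Timeline assembly is, at its core, a merge-and-sort over heterogeneous event sources. A minimal sketch with invented events, showing why the ordering alone often surfaces the cause:

```python
from datetime import datetime

# Invented events from three different sources, out of order.
events = [
    ("14:32", "alert",  "payments p99 > 2s"),
    ("14:05", "deploy", "payments v2.4.1 rolled out"),
    ("14:41", "infra",  "payments pods autoscaled 4 -> 12"),
    ("14:30", "alert",  "checkout error rate > 1%"),
]

# One time-ordered view: the deploy lands 25 minutes before the first alert.
for ts, kind, detail in sorted(events, key=lambda e: datetime.strptime(e[0], "%H:%M")):
    print(f"{ts}  [{kind:6}]  {detail}")
```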
A prompt library by role
Steal these. They work because they hit one of the four patterns above. The library is grouped by role:
- On-call
- Dev who shipped a PR
- Platform / SRE
- Customer support / TPM
On-call
“What was the worst-looking thing in production last night between 11pm and 2am? Build the timeline first, then propose root cause.”
“Has anything weird happened with the <service> in the past 24 hours? Anchor on the last successful deploy.”
“Summarize this week’s incidents, group them by likely cause, and flag any that look like they share a root cause.”
“Walk me through what changed between when <alert> fired and when it cleared.”
“Three pages in two days for the same service. Are they actually the same incident or three different ones?”