Foundational Prompt Structure

Introduction

Below is a structured set of AI prompt patterns and concrete examples, designed explicitly for DFIR and SOC investigations. These are written from the perspective of an experienced Cyber/SOC Analyst, aligned with modern SOC tooling (Defender XDR, Sentinel, Splunk, Velociraptor, KQL, PowerShell) and framed to support repeatable, high-fidelity investigations.

The intent is to help operationalise AI as a junior analyst, investigation assistant, and threat-hunting co-pilot, not as a decision-maker.

Use this structure consistently to maximise signal and minimise hallucination:

Prompt Template

You are a senior DFIR and SOC analyst.

Context:
- Environment: [Cloud / Hybrid / On-prem]
- Platform(s): [Defender XDR, Sentinel, Splunk, Velociraptor, etc.]
- Time window: [UTC]
- Scope: [Users, hosts, IPs, tenants]

Objective:
- [Detection | Triage | Investigation | Threat Hunt | Root Cause Analysis]

Data Provided:
- [Logs, alerts, KQL output, timelines, artefacts]

Constraints:
- Assume enterprise Windows environment
- Map findings to MITRE ATT&CK
- Prioritise evidence-based conclusions
- Highlight uncertainties and next steps

Output Required:
- Findings summary
- Indicators of compromise
- Likely attacker objectives
- Recommended containment actions
- Follow-up queries or artefacts to collect

2. SOC Alert Triage Prompts

Example 1 – Defender XDR Alert Triage


Example 2 – Sentinel Incident Analysis


3. DFIR Investigation Prompts

Example 3 – Host-Based Forensics (Windows)


Example 4 – Memory and Process Analysis


4. Threat Hunting Prompts

Example 5 – Hypothesis-Driven Threat Hunt


Example 6 – Living-off-the-Land (LOLBins)


5. Log Analysis and Query Generation

Example 7 – KQL Query Refinement


Example 8 – Velociraptor DFIR Queries


6. Incident Response & Decision Support Prompts

Example 9 – Containment Strategy


7. Executive and Reporting Prompts

Example 10 – Post-Incident Report Drafting


8. Best Practices for Using AI in SOC & DFIR

Always:

  • Provide logs, not conclusions

  • Ask for evidence-based reasoning

  • Request MITRE ATT&CK mapping

  • Ask for uncertainty and confidence levels

  • Use AI for analysis acceleration, not authority

Never:

  • Blindly trust AI verdicts

  • Use AI outputs without validation

  • Treat AI as legal or forensic final authority

Last updated