Webhook & Slack Notifications
Slim.io can send real-time notifications when findings are detected, policies are triggered, or drift events occur. Notifications are configured as actions within governance policies and can target Slack channels, email addresses, or custom webhook endpoints.
Notification Channels
Slack
Send alerts to Slack channels via incoming webhooks or the Slim.io Slack app.
Setup via incoming webhook:
- In your Slack workspace, create an incoming webhook URL at api.slack.com/messaging/webhooks.
- In Slim.io, navigate to Settings > Integrations > Slack.
- Paste the webhook URL and assign it a channel alias (e.g., security-alerts).
- Test the connection to verify delivery.
Using the channel alias in policies:
```yaml
actions:
  - type: alert
    config:
      channels:
        - slack://security-alerts
      severity: high
      template: finding-summary
```

Slack message format:
Notifications include:
- Finding category and confidence score
- File path and cloud provider
- Connector name and scan ID
- Direct link to the finding in the Slim.io dashboard
- Policy name that triggered the alert
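Slim.io builds this message itself; as a sketch only, a custom integration could assemble an equivalent payload for a Slack incoming webhook from the fields listed above. The `finding` keys and the `dashboard_url` parameter below are illustrative assumptions, not part of the Slim.io API.

```python
def build_slack_message(finding: dict, dashboard_url: str) -> dict:
    """Assemble a Slack incoming-webhook payload mirroring the fields
    Slim.io includes in its notifications. Field names are assumptions."""
    text = (
        f"*{finding['category']}* finding (confidence {finding['confidence']:.2f})\n"
        f"File: `{finding['file_path']}` on {finding['provider']}\n"
        f"Connector: `{finding['connector']}` | Scan: {finding['scan_id']}\n"
        f"Triggered by policy: {finding['policy']}\n"
        f"<{dashboard_url}/findings/{finding['id']}|Open in Slim.io dashboard>"
    )
    # Slack incoming webhooks accept a JSON body of the form {"text": "..."}.
    return {"text": text}
```

Posting the returned dict as JSON to the webhook URL delivers the message to the aliased channel.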
Email
Send notifications to individual email addresses or distribution lists.
```yaml
actions:
  - type: alert
    config:
      channels:
        - email://security-team@company.com
        - email://compliance@company.com
      severity: high
```

Email notifications are delivered via Slim.io's notification service. Emails include the same information as Slack notifications, plus an HTML-formatted summary of findings.
PagerDuty
Route critical alerts to PagerDuty for on-call incident management and escalation.
Setup:
- In PagerDuty, create a new service or use an existing one and generate an Events API v2 integration key.
- In Slim.io, navigate to Settings > Integrations > PagerDuty.
- Paste the integration key and assign it a service alias (e.g., data-security-oncall).
- Test the connection to verify delivery.
Using PagerDuty in policies:
```yaml
actions:
  - type: alert
    config:
      channels:
        - pagerduty://data-security-oncall
      severity: critical
```

PagerDuty alerts include finding details, affected resource, connector, and a direct link to the finding in the Slim.io dashboard. Alerts are sent as PagerDuty events with severity mapped to PagerDuty's urgency levels.
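Slim.io performs this mapping internally; the sketch below shows what an equivalent PagerDuty Events API v2 payload might look like. The severity mapping and finding field names are assumptions for illustration.

```python
# Assumed mapping from Slim.io severities to the four severity values
# accepted by PagerDuty's Events API v2.
PD_SEVERITY = {"critical": "critical", "high": "error",
               "medium": "warning", "low": "info"}

def build_pagerduty_event(finding: dict, severity: str, routing_key: str) -> dict:
    """Build an Events API v2 trigger payload (POSTed to the enqueue
    endpoint). Finding field names are illustrative assumptions."""
    return {
        "routing_key": routing_key,
        "event_action": "trigger",
        "dedup_key": finding["id"],  # repeat triggers fold into one incident
        "payload": {
            "summary": f"{finding['category']} finding in {finding['file_path']}",
            "source": finding["connector"],
            "severity": PD_SEVERITY[severity],
            "custom_details": finding,
        },
    }
```

Using the finding `id` as the `dedup_key` keeps repeated notifications for the same finding grouped into a single PagerDuty incident.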
Custom Webhooks
Send structured JSON payloads to any HTTP endpoint.
```yaml
actions:
  - type: webhook
    config:
      url: https://your-siem.example.com/api/events
      method: POST
      headers:
        Authorization: "Bearer your-api-key"
        Content-Type: "application/json"
      retry:
        max_attempts: 3
        backoff_ms: 1000
```

Webhook payload schema:

```json
{
  "event_type": "finding.detected",
  "timestamp": "2026-03-15T10:30:00Z",
  "tenant_id": "tenant-abc",
  "scan_id": "scan-xyz",
  "findings": [
    {
      "id": "finding-123",
      "category": "Credit Card",
      "confidence": 0.92,
      "file_path": "s3://prod-data/exports/customers.csv",
      "connector": "aws-prod-s3",
      "location": { "line": 42, "column": 15 },
      "policy": "tokenize-financial-pii"
    }
  ],
  "summary": {
    "total_findings": 1,
    "severity": "high",
    "categories": ["Credit Card"]
  }
}
```

Notification Triggers
Notifications can be triggered by different event types:
| Event Type | Description | Payload |
|---|---|---|
| `finding.detected` | New PII finding above confidence threshold | Finding details, file path, category |
| `policy.triggered` | Policy conditions matched and actions executed | Policy name, actions taken, findings |
| `drift.detected` | Previously compliant resource became non-compliant | Drift event, baseline vs. current state |
| `scan.completed` | Scan job finished (success or failure) | Scan summary, finding counts, duration |
| `scan.failed` | Scan job encountered an unrecoverable error | Error details, connector, affected scope |
| `quota.warning` | Tenant approaching a scan quota limit (80%) | Current usage, limit, projected date |
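A webhook consumer can dispatch on `event_type` to handle each payload appropriately. The sketch below parses the documented finding payload; the handler names, return strings, and the assumption that scan events also carry a `scan_id` are illustrative.

```python
import json

def handle_event(raw: bytes) -> str:
    """Minimal dispatcher for Slim.io webhook payloads.
    Return values are illustrative summaries, not part of any API."""
    event = json.loads(raw)
    etype = event["event_type"]
    if etype == "finding.detected":
        # The summary block is part of the documented payload schema.
        cats = ", ".join(event["summary"]["categories"])
        return f"{event['summary']['total_findings']} finding(s): {cats}"
    if etype == "scan.completed":
        return f"scan {event['scan_id']} completed"   # scan_id assumed present
    if etype == "scan.failed":
        return f"scan {event['scan_id']} failed"      # scan_id assumed present
    return f"unhandled event type: {etype}"
```

Unrecognized event types should be acknowledged rather than rejected, so new event types do not trigger the sender's retry logic.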
Severity Filtering
Control which findings trigger notifications based on severity:
```yaml
actions:
  - type: alert
    config:
      channels:
        - slack://security-alerts
      severity: high    # Only send for high-severity findings
  - type: alert
    config:
      channels:
        - email://compliance@company.com
      severity: medium  # Send for medium and above
```

| Severity | Confidence Tier | Typical Action |
|---|---|---|
| Critical | Top of the High tier | Immediate notification to all channels |
| High | High tier | Notify security team |
| Medium | Medium tier | Daily digest or on-demand review |
| Low | Low tier | Log only (no notification) |
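Per the policy comments above, a channel's `severity` value acts as a minimum threshold (`medium` receives medium and above). That filter can be sketched as follows; the ordering list is an assumption consistent with the table above.

```python
# Assumed severity ordering, lowest to highest, matching the table above.
SEVERITY_ORDER = ["low", "medium", "high", "critical"]

def should_notify(finding_severity: str, channel_min: str) -> bool:
    """True if a finding meets or exceeds the channel's severity threshold."""
    return SEVERITY_ORDER.index(finding_severity) >= SEVERITY_ORDER.index(channel_min)
```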
Notification Templates
Customize the content and format of notifications:
```yaml
actions:
  - type: alert
    config:
      channels:
        - slack://security-alerts
      template: custom-summary
      template_config:
        include_file_path: true
        include_confidence: true
        include_location: false
        include_dashboard_link: true
        group_by: category
```

Built-in templates:
| Template | Description |
|---|---|
| `finding-summary` | One notification per finding with full details |
| `scan-digest` | Aggregated summary after scan completion |
| `drift-alert` | Drift event with baseline comparison |
| `custom-summary` | Configurable fields via `template_config` |
Rate Limiting
To prevent notification floods during large scans:
- Batching — Findings from the same scan are batched into a single notification (configurable batch window: 30 seconds default)
- Deduplication — Identical findings on the same file are not re-notified within 24 hours
- Rate cap — Maximum 100 notifications per hour per channel (configurable)
For high-volume scans, use the scan-digest template to receive a single summary notification after the scan completes rather than individual alerts per finding.
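The deduplication and rate-cap behavior described above can be sketched as a simple gate; the 24-hour window and 100/hour cap are the documented defaults, while the class and method names are illustrative.

```python
import time

class NotificationGate:
    """Sketch of per-finding dedup (24 h) and per-channel rate cap (100/h)."""

    def __init__(self, dedup_window=24 * 3600, rate_cap=100):
        self.dedup_window = dedup_window  # seconds
        self.rate_cap = rate_cap          # max sends per hour per channel
        self.seen = {}                    # (finding_id, file_path) -> last sent
        self.sent = {}                    # channel -> send timestamps

    def allow(self, channel, finding_id, file_path, now=None):
        now = time.time() if now is None else now
        key = (finding_id, file_path)
        # Identical finding on the same file within the window: suppress.
        if now - self.seen.get(key, float("-inf")) < self.dedup_window:
            return False
        # Enforce the hourly per-channel cap.
        recent = [t for t in self.sent.get(channel, []) if now - t < 3600]
        if len(recent) >= self.rate_cap:
            return False
        self.seen[key] = now
        self.sent[channel] = recent + [now]
        return True
```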
Alert Routing by Severity
Different severity levels often require different response channels. Configure your policies to route alerts based on urgency:
```yaml
# Critical findings → PagerDuty for immediate on-call response
actions:
  - type: alert
    config:
      channels:
        - pagerduty://data-security-oncall
      severity: critical

  # High findings → Slack for team awareness
  - type: alert
    config:
      channels:
        - slack://security-alerts
      severity: high

  # Medium findings → Email digest for review
  - type: alert
    config:
      channels:
        - email://compliance@company.com
      severity: medium
```

A common pattern is to route critical and high-severity alerts to real-time channels (PagerDuty, Slack) while sending medium-severity alerts to email for daily review. Low-severity findings are typically logged without notification.
You can assign multiple channels to the same severity level. For example, critical alerts can be sent to both PagerDuty and Slack simultaneously to ensure visibility across incident management and team communication tools.
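For example, listing both channel aliases in a single action (the aliases here come from the earlier setup examples) would fan a critical alert out to both tools:

```yaml
actions:
  - type: alert
    config:
      channels:
        - pagerduty://data-security-oncall
        - slack://security-alerts
      severity: critical
```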
Monitoring Notification Health
Check notification delivery status under Settings > Integrations > Notification Log:
- Delivery status (sent, failed, retrying)
- Response codes from webhook endpoints
- Retry history and failure reasons
- Channel-level delivery metrics