Rule Quality

Precision and recall for each outcome→label pair, computed from labeled events
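The report's raw input is the set of labeled events, each tying a rule's predicted outcome to a reviewer-assigned label. A hypothetical minimal shape, inferred only from the field names bound on this page (`rid`, `outcome`, `label`); the real model may carry more fields:

```ts
// Hypothetical input shape, inferred from the bindings on this page;
// the actual event model is an assumption here.
interface LabeledEvent {
  rid: string;     // rule that fired
  outcome: string; // outcome the rule predicted
  label: string;   // ground-truth label assigned in review
}
```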

{{ error }}
No snapshot exists for the selected filters. Click Refresh Report to generate one.

Filtering

Hide low-support pairs: those where both the predicted and actual positive counts fall below the support threshold.
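A minimal sketch of that predicate, assuming the counts shown in the Pair Metrics table; `minSupport` is a hypothetical name for the configured threshold:

```ts
// Sketch of the low-support filter. `minSupport` is a hypothetical name
// for the configured threshold; the counts mirror the Pair Metrics columns.
function isLowSupport(
  pair: { predictedPositives: number; actualPositives: number },
  minSupport: number,
): boolean {
  // Hidden only when BOTH counts fall below the threshold; a pair with
  // adequate support on either side stays visible.
  return pair.predictedPositives < minSupport && pair.actualPositives < minSupport;
}
```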

Snapshot as of {{ freezeAt | date: 'medium' }}

{{ polling ? 'Generating snapshot report...' : 'Loading rule quality report...' }}

Labeled Events

{{ totalLabeledEvents }}

Rules Analyzed

{{ uniqueRulesCount() }}

Outcome/Label Pairs

{{ pairMetrics.length }}

Best Rules

Highest average F1 score (see the ranking sketch after the Needs Attention table)

Rule | Avg F1 | Best Pair
{{ rule.rid }} | {{ formatPercent(rule.averageF1) }} | {{ rule.bestPair || 'N/A' }}
No ranked rules for this support threshold.

Needs Attention

Lowest average F1 score

Rule | Avg F1 | Worst Pair
{{ rule.rid }} | {{ formatPercent(rule.averageF1) }} | {{ rule.worstPair || 'N/A' }}
No ranked rules for this support threshold.
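Both rankings can be derived from the pair metrics below: average each rule's F1 across its pairs, record the pair with the highest and lowest F1, then sort descending for Best Rules and ascending for Needs Attention. A sketch under those assumptions; the `PairMetric` and `RuleRank` shapes mirror the table columns but are illustrative, not the component's actual code:

```ts
// Illustrative shapes mirroring the table columns.
interface PairMetric { rid: string; outcome: string; label: string; f1: number; }
interface RuleRank { rid: string; averageF1: number; bestPair: string; worstPair: string; }

// Group pair metrics by rule, average F1 per rule, and record the
// best/worst pair. One plausible derivation, not the component's code.
function rankRules(pairs: PairMetric[]): RuleRank[] {
  const byRule = new Map<string, PairMetric[]>();
  for (const p of pairs) {
    const list = byRule.get(p.rid);
    if (list) list.push(p);
    else byRule.set(p.rid, [p]);
  }

  const ranks: RuleRank[] = [];
  for (const [rid, rulePairs] of byRule) {
    const averageF1 =
      rulePairs.reduce((sum, p) => sum + p.f1, 0) / rulePairs.length;
    const sorted = [...rulePairs].sort((a, b) => b.f1 - a.f1);
    const best = sorted[0];
    const worst = sorted[sorted.length - 1];
    ranks.push({
      rid,
      averageF1,
      bestPair: `${best.outcome} → ${best.label}`,
      worstPair: `${worst.outcome} → ${worst.label}`,
    });
  }
  // Best Rules reads this list descending; Needs Attention reverses it.
  return ranks.sort((a, b) => b.averageF1 - a.averageF1);
}
```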

Pair Metrics

Each row evaluates one mapping: a rule's predicted outcome against its ground-truth label.

No rule-quality pairs available. Configure curated outcome→label pairs in Settings, add labeled events, or lower the minimum support threshold.
Rule | Outcome | Label | Precision | Recall | F1 | TP | FP | FN | Predicted Positives | Actual Positives
{{ metric.rid }} | {{ metric.outcome }} | {{ metric.label }} | {{ formatPercent(metric.precision) }} | {{ formatPercent(metric.recall) }} | {{ formatPercent(metric.f1) }} | {{ metric.truePositive }} | {{ metric.falsePositive }} | {{ metric.falseNegative }} | {{ metric.predictedPositives }} | {{ metric.actualPositives }}
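The per-pair scores follow the standard confusion-count definitions: precision = TP / (TP + FP), i.e. TP over predicted positives; recall = TP / (TP + FN), i.e. TP over actual positives; F1 is their harmonic mean. A sketch of the conventional computation; whether the backend guards zero counts the same way is an assumption:

```ts
// Standard precision/recall/F1 from the confusion counts in the table.
// Zero-count pairs score 0 here rather than dividing by zero (an assumption).
function pairScores(tp: number, fp: number, fn: number) {
  const precision = tp + fp > 0 ? tp / (tp + fp) : 0; // TP / predicted positives
  const recall    = tp + fn > 0 ? tp / (tp + fn) : 0; // TP / actual positives
  const f1 = precision + recall > 0
    ? (2 * precision * recall) / (precision + recall)
    : 0;
  return { precision, recall, f1 };
}
```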