feat(monitoring): add priority screenshot pipeline with screenshot_type + docs cleanup
Implement end-to-end support for typed screenshots and priority rendering in monitoring.

Added
- Accept and forward screenshot_type from MQTT screenshot/dashboard payloads (periodic, event_start, event_stop)
- Extend screenshot upload handling to persist typed screenshots and metadata
- Add dedicated priority screenshot serving endpoint with fallback behavior
- Extend monitoring overview with priority screenshot fields and summary count
- Add configurable PRIORITY_SCREENSHOT_TTL_SECONDS window for active priority state

Fixed
- Ensure screenshot cache-busting updates reliably via screenshot hash updates
- Preserve normal periodic screenshot flow while introducing event_start/event_stop priority path

Improved
- Monitoring dashboard now displays screenshot type badges
- Adaptive polling: faster refresh while priority screenshots are active
- Priority screenshot presentation is surfaced immediately to operators

Docs
- Update README and copilot-instructions to match new screenshot_type behavior, priority endpoint, TTL config, monitoring fields, and retention model
- Remove redundant/duplicate documentation blocks and improve troubleshooting section clarity
19
.github/copilot-instructions.md
vendored
@@ -51,7 +51,10 @@ Keep docs synced with code. When you change services/MQTT/API/UTC/env or dev/pro
### Screenshot retention
- Screenshots sent via dashboard MQTT are stored in `server/screenshots/`.
- For each client, only the latest and last 20 timestamped screenshots are kept; older files are deleted automatically on each upload.
- Screenshot payloads support `screenshot_type` with values `periodic`, `event_start`, `event_stop`.
- `periodic` is the normal heartbeat/dashboard screenshot path; `event_start` and `event_stop` are high-priority screenshots for monitoring.
- For each client, the API keeps `{uuid}.jpg` as latest and the last 20 timestamped screenshots (`{uuid}_..._{type}.jpg`), deleting older timestamped files automatically.
- For high-priority screenshots, the API additionally maintains `{uuid}_priority.jpg` and metadata in `{uuid}_meta.json` (`latest_screenshot_type`, `last_priority_*`).
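The retention rule above can be sketched as a small cleanup step run after each upload. This is a minimal illustration only; the function name is hypothetical and the real `upload_screenshot` handler may differ, but the 20-file limit and the documented filename patterns are mirrored here:

```python
import glob
import os

MAX_TIMESTAMPED = 20  # documented retention limit per client


def cleanup_screenshots(screenshots_dir: str, uuid: str) -> None:
    """Keep {uuid}.jpg plus the newest MAX_TIMESTAMPED timestamped files."""
    # Timestamped files look like {uuid}_<timestamp>_<type>.jpg
    timestamped = sorted(
        glob.glob(os.path.join(screenshots_dir, f"{uuid}_*.jpg")),
        key=os.path.getmtime,
        reverse=True,
    )
    # {uuid}_priority.jpg is managed separately; exclude it from rotation
    timestamped = [p for p in timestamped if not p.endswith("_priority.jpg")]
    for stale in timestamped[MAX_TIMESTAMPED:]:
        os.remove(stale)
```

Called once per upload, this keeps disk usage bounded per client without touching the latest or priority images.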

## Recent changes since last commit
@@ -61,6 +64,11 @@ Keep docs synced with code. When you change services/MQTT/API/UTC/env or dev/pro
- End-to-end monitoring pipeline completed: MQTT logs/health → listener persistence → monitoring APIs → superadmin dashboard
- API now serves aggregated monitoring via `GET /api/client-logs/monitoring-overview` and system-wide recent errors via `GET /api/client-logs/recent-errors`
- Monitoring dashboard (`dashboard/src/monitoring.tsx`) is active and displays client health states, screenshots, process metadata, and recent log activity
- **Screenshot Priority Pipeline (no version bump)**:
  - Listener forwards `screenshot_type` from MQTT screenshot/dashboard payloads to `POST /api/clients/<uuid>/screenshot`.
  - API stores typed screenshots, tracks latest/priority metadata, and serves priority images via `GET /screenshots/<uuid>/priority`.
  - Monitoring overview exposes screenshot priority state (`latestScreenshotType`, `priorityScreenshotType`, `priorityScreenshotReceivedAt`, `hasActivePriorityScreenshot`) and `summary.activePriorityScreenshots`.
  - Monitoring UI shows screenshot type badges and switches to faster refresh while priority screenshots are active.
- **Presentation Flags Persistence Fix**:
  - Fixed persistence for presentation `page_progress` and `auto_progress` to ensure values are reliably stored and returned across create/update paths and detached occurrences
@@ -129,7 +137,6 @@ Keep docs synced with code. When you change services/MQTT/API/UTC/env or dev/pro
## Service boundaries & data flow
- Database connection string is passed as `DB_CONN` (mysql+pymysql) to Python services.
- API builds its engine in `server/database.py` (loads `.env` only in development).
- Scheduler loads `DB_CONN` in `scheduler/db_utils.py`. Recurring events are expanded for the next 7 days, and event exceptions (skipped dates, detached occurrences) are respected. Only recurring events with recurrence_end in the future remain active. The scheduler publishes only events that are active at the current time and clears retained topics (publishes `[]`) for groups without active events. Time comparisons are UTC and naive timestamps are normalized.
- Listener also creates its own engine for writes to `clients`.
- Scheduler queries a future window (default: 7 days) to expand recurring events using RFC 5545 rules, applies event exceptions (skipped dates, detached occurrences), and publishes only events that are active at the current time (UTC). When a group has no active events, the scheduler clears its retained topic by publishing an empty list. Time comparisons are UTC; naive timestamps are normalized. Logging is concise; conversion lookups are cached and logged only once per media.
- MQTT topics (paho-mqtt v2, use Callback API v2):
@@ -139,7 +146,7 @@ Keep docs synced with code. When you change services/MQTT/API/UTC/env or dev/pro
  - Per-client group assignment (retained): `infoscreen/{uuid}/group_id` via `server/mqtt_helper.py`.
  - Client logs: `infoscreen/{uuid}/logs/{error|warn|info}` with JSON payload (timestamp, message, context); QoS 1 for ERROR/WARN, QoS 0 for INFO.
  - Client health: `infoscreen/{uuid}/health` with metrics (expected_state, actual_state, health_metrics); QoS 0, published every 5 seconds.
- Screenshots: server-side folders `server/received_screenshots/` and `server/screenshots/`; Nginx exposes `/screenshots/{uuid}.jpg` via `server/wsgi.py` route.
- Screenshots: server-side folder `server/screenshots/`; API serves `/screenshots/{uuid}.jpg` (latest) and `/screenshots/{uuid}/priority` (active high-priority fallback to latest).

- Dev Container guidance: If extensions reappear inside the container, remove UI-only extensions from `devcontainer.json` `extensions` and map them in `remote.extensionKind` as `"ui"`.
@@ -210,6 +217,7 @@ Keep docs synced with code. When you change services/MQTT/API/UTC/env or dev/pro
- `GET /api/client-logs/<uuid>/logs` – Query client logs with filters (level, limit, since); admin_or_higher
- `GET /api/client-logs/summary` – Log counts by level per client (last 24h); admin_or_higher
- `GET /api/client-logs/recent-errors` – System-wide error monitoring; admin_or_higher
- `GET /api/client-logs/monitoring-overview` – Includes screenshot priority fields per client plus `summary.activePriorityScreenshots`; superadmin_only
- `GET /api/client-logs/test` – Infrastructure validation (no auth); returns recent logs with counts

Documentation maintenance: keep this file aligned with real patterns; update when routes/session/UTC rules change. Avoid long prose; link exact paths.
@@ -272,7 +280,8 @@ Keep docs synced with code. When you change services/MQTT/API/UTC/env or dev/pro
- Superadmin-only dashboard for client monitoring and diagnostics; menu item is hidden for lower roles and the route redirects non-superadmins.
- Uses `GET /api/client-logs/monitoring-overview` for aggregated live status, `GET /api/client-logs/recent-errors` for system-wide errors, and `GET /api/client-logs/<uuid>/logs` for per-client details.
- Shows per-client status (`healthy`, `warning`, `critical`, `offline`) based on heartbeat freshness, process state, screen state, and recent log counts.
- Displays latest screenshot preview from `/screenshots/{uuid}.jpg`, current process metadata, and recent ERROR/WARN activity.
- Displays latest screenshot preview and active priority screenshot (`/screenshots/{uuid}/priority` when active), screenshot type badges, current process metadata, and recent ERROR/WARN activity.
- Uses adaptive refresh: normal interval in steady state, faster polling while `activePriorityScreenshots > 0`.

- Settings page (`dashboard/src/settings.tsx`):
  - Structure: Syncfusion TabComponent with role-gated tabs
@@ -351,6 +360,7 @@ Note: Syncfusion usage in the dashboard is already documented above; if a UI for
- VITE_API_URL — Dashboard build-time base URL (prod); in dev the Vite proxy serves `/api` to `server:8000`.
- HEARTBEAT_GRACE_PERIOD_DEV / HEARTBEAT_GRACE_PERIOD_PROD — Groups "alive" window (defaults 180s dev / 170s prod). Clients send heartbeats every ~65s; grace periods allow 2 missed heartbeats plus safety margin.
- REFRESH_SECONDS — Optional scheduler republish interval; `0` disables periodic refresh.
- PRIORITY_SCREENSHOT_TTL_SECONDS — Optional monitoring priority window in seconds (default `120`); controls when event screenshots are considered active priority.

## Conventions & gotchas
- **Datetime Handling**:
@@ -360,7 +370,6 @@ Note: Syncfusion usage in the dashboard is already documented above; if a UI for
  - Frontend **must** append 'Z' before parsing: `const utcStr = dateStr.endsWith('Z') ? dateStr : dateStr + 'Z'; new Date(utcStr);`
  - Display in local timezone using `toLocaleTimeString('de-DE', { hour: '2-digit', minute: '2-digit' })`
  - When sending to API, use `date.toISOString()` which includes 'Z' and is UTC
  - Frontend must append `Z` to API strings before parsing; backend compares in UTC and returns ISO without `Z`.
- **JSON Naming Convention**:
  - Backend uses snake_case internally (Python convention)
  - API returns camelCase JSON (web standard): `startTime`, `endTime`, `groupId`, etc.
24
README.md
@@ -39,6 +39,7 @@ A comprehensive multi-service digital signage solution for educational instituti

Data flow summary:
- Listener: consumes discovery and heartbeat messages from the MQTT Broker and updates the API Server (client registration/heartbeats).
- Listener screenshot flow: consumes `infoscreen/{uuid}/screenshot` and `infoscreen/{uuid}/dashboard`, extracts `image`/`timestamp`/`screenshot_type` (`periodic`, `event_start`, `event_stop`) and forwards to `POST /api/clients/{uuid}/screenshot`.
- Scheduler: reads events from the API Server and publishes only currently active content to the MQTT Broker (retained topics per group). When a group has no active events, the scheduler clears its retained topic by publishing an empty list. All time comparisons are done in UTC; any naive timestamps are normalized.
- Clients: send discovery/heartbeat via the MQTT Broker (handled by the Listener) and receive content from the Scheduler via MQTT.
- Worker: receives conversion commands directly from the API Server and reports results/status back to the API (no MQTT involved).
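For reference, a typed screenshot payload on `infoscreen/{uuid}/screenshot` could look like the following. This is an illustrative field layout only; the exact envelope a client publishes may nest these fields under `screenshot` or `metadata`, which the listener's extraction helper tolerates:

```python
import base64
import json

# Hypothetical example of a typed screenshot payload; the field names
# (image, timestamp, screenshot_type) follow the documented contract.
payload = {
    "image": base64.b64encode(b"<jpeg bytes>").decode("ascii"),
    "timestamp": "2025-01-15T10:30:00Z",   # UTC, ISO 8601 with trailing Z
    "screenshot_type": "event_start",      # periodic | event_start | event_stop
}
message = json.dumps(payload).encode("utf-8")
# e.g. client.publish(f"infoscreen/{uuid}/screenshot", message, qos=0)
```

The listener forwards exactly these three fields to `POST /api/clients/{uuid}/screenshot`; unknown or missing `screenshot_type` values fall back to `periodic` on the server.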
@@ -226,13 +227,9 @@ For detailed deployment instructions, see:
## Recent changes since last commit

- Monitoring system: End-to-end monitoring is now implemented. The listener ingests `logs/*` and `health` MQTT topics, the API exposes monitoring endpoints (`/api/client-logs/monitoring-overview`, `/api/client-logs/recent-errors`, `/api/client-logs/<uuid>/logs`), and the superadmin dashboard page shows live client status, screenshots, and recent errors.
- Screenshot priority flow: Screenshot payloads now support `screenshot_type` (`periodic`, `event_start`, `event_stop`). `event_start` and `event_stop` are treated as high-priority screenshots; the API stores typed screenshots, maintains priority metadata, and serves active priority screenshots through `/screenshots/{uuid}/priority`.
- Presentation persistence fix: Fixed persistence of presentation flags so `page_progress` and `auto_progress` are reliably stored and returned for create/update flows and detached occurrences.
- Video / Streaming support: Added end-to-end support for video events. The API and dashboard now allow creating `video` events referencing uploaded media. The server exposes a range-capable streaming endpoint at `/api/eventmedia/stream/<media_id>/<filename>` so clients can seek during playback.
- Scheduler metadata: Scheduler now performs a best-effort HEAD probe for video stream URLs and includes basic metadata in the retained MQTT payload: `mime_type`, `size` (bytes) and `accept_ranges` (bool). Placeholders for richer metadata (`duration`, `resolution`, `bitrate`, `qualities`, `thumbnails`, `checksum`) are emitted as null/empty until a background worker fills them.
- Dashboard & uploads: The dashboard's FileManager upload limits were increased (to support Full-HD uploads) and client-side validation enforces a maximum video length (10 minutes). The event modal exposes playback flags (`autoplay`, `loop`, `volume`, `muted`) and initializes them from system defaults for new events.
- DB model & API: `Event` includes `muted` in addition to `autoplay`, `loop`, and `volume`; endpoints accept, persist, and return these fields for video events. Events reference uploaded media via `event_media_id`.
- Settings UI: Settings page refactored to nested tabs; added Events → Videos defaults (autoplay, loop, volume, mute) backed by system settings keys (`video_autoplay`, `video_loop`, `video_volume`, `video_muted`).
- Academic Calendar UI: Merged "School Holidays Import" and "List" into a single "📥 Import & Liste" tab; nested tab selection is persisted with controlled `selectedItem` state to avoid jumps.
- Additional improvements: Video/streaming, scheduler metadata, settings defaults, and UI refinements remain documented in the detailed sections below.

These changes are designed to be safe if metadata extraction or probes fail — clients should still attempt playback using the provided `url` and fall back to requesting/resolving richer metadata when available.
@@ -346,8 +343,9 @@ mosquitto_sub -h localhost -t "infoscreen/+/heartbeat" -v
- `POST /api/conversions/{media_id}/pdf` - Request conversion
- `GET /api/conversions/{media_id}/status` - Check conversion status
- `GET /api/eventmedia/stream/<media_id>/<filename>` - Stream media with byte-range support (206) for seeking
- `POST /api/clients/{uuid}/screenshot` - Upload screenshot for client (base64 JPEG)
- **Screenshot retention:** Only the latest and last 20 timestamped screenshots per client are stored on the server. Older screenshots are automatically deleted.
- `POST /api/clients/{uuid}/screenshot` - Upload screenshot for client (base64 JPEG, optional `timestamp`, optional `screenshot_type` = `periodic|event_start|event_stop`)
- **Screenshot retention:** The API stores `{uuid}.jpg` as latest plus the last 20 timestamped screenshots per client; older timestamped files are deleted automatically.
- **Priority screenshots:** For `event_start`/`event_stop`, the API also keeps `{uuid}_priority.jpg` and metadata (`{uuid}_meta.json`) used by monitoring priority selection.

### System Settings
- `GET /api/system-settings` - List all system settings (admin+)
@@ -381,7 +379,8 @@ mosquitto_sub -h localhost -t "infoscreen/+/heartbeat" -v

### Health & Monitoring
- `GET /health` - Service health check
- `GET /api/screenshots/{uuid}.jpg` - Client screenshots
- `GET /screenshots/{uuid}.jpg` - Latest client screenshot
- `GET /screenshots/{uuid}/priority` - Active high-priority screenshot (falls back to latest)
- `GET /api/client-logs/monitoring-overview` - Aggregated monitoring overview for dashboard (superadmin)
- `GET /api/client-logs/recent-errors` - Recent error feed across clients (admin+)
- `GET /api/client-logs/{uuid}/logs` - Filtered per-client logs (admin+)
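The priority endpoint's fallback behavior reduces to a small path-resolution rule. The sketch below is illustrative (the function name is hypothetical, not the server's actual route code); the real route additionally consults `{uuid}_meta.json` and the `PRIORITY_SCREENSHOT_TTL_SECONDS` window to decide whether a priority image is still active:

```python
import os


def resolve_priority_screenshot(screenshots_dir: str, uuid: str, priority_active: bool):
    """Return the file path the /screenshots/<uuid>/priority route would serve."""
    priority_path = os.path.join(screenshots_dir, f"{uuid}_priority.jpg")
    latest_path = os.path.join(screenshots_dir, f"{uuid}.jpg")
    # Serve the priority image only while the TTL window says it is active
    if priority_active and os.path.exists(priority_path):
        return priority_path
    # Otherwise fall back to the latest periodic screenshot
    if os.path.exists(latest_path):
        return latest_path
    return None  # route would answer 404
```

This keeps the dashboard's `<img>` source stable: it can always point at the priority URL and still receive the latest screenshot once the priority window expires.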
@@ -450,7 +449,8 @@ mosquitto_sub -h localhost -t "infoscreen/+/heartbeat" -v
- Resource-based Syncfusion timeline scheduler with resize and drag-drop support
- **Monitoring**: Superadmin-only monitoring dashboard
  - Live client health states (`healthy`, `warning`, `critical`, `offline`) from heartbeat/process/log data
  - Latest screenshot preview and process metadata per client
  - Latest screenshot preview with screenshot-type badges (`periodic`, `event_start`, `event_stop`) and process metadata per client
  - Active priority screenshots are surfaced immediately and polled faster while priority items are active
  - System-wide recent error stream and per-client log drill-down
- **Program info**: Version, build info, tech stack and paginated changelog (reads `dashboard/public/program-info.json`)
@@ -483,6 +483,7 @@ mosquitto_sub -h localhost -t "infoscreen/+/heartbeat" -v
- Dashboard: Nginx availability
- **Scheduler**: Logging is concise; conversion lookups are cached and logged only once per media.
- Monitoring API: `/api/client-logs/monitoring-overview` and `/api/client-logs/recent-errors` for live diagnostics
- Monitoring overview includes screenshot priority state (`latestScreenshotType`, `priorityScreenshotType`, `priorityScreenshotReceivedAt`, `hasActivePriorityScreenshot`) and `summary.activePriorityScreenshots`

### Logging Strategy
- **Development**: Docker Compose logs with service prefixes
@@ -557,7 +558,6 @@ docker exec -it infoscreen-db mysqladmin ping
# Restart dependent services
```

**MQTT communication issues**
**Vite import-analysis errors (Syncfusion splitbuttons)**
```bash
# Symptom
@@ -573,6 +573,8 @@ docker compose rm -sf dashboard
docker volume rm <project>_dashboard-node-modules <project>_dashboard-vite-cache || true
docker compose up -d --build dashboard
```

**MQTT communication issues**
```bash
# Test MQTT broker
mosquitto_pub -h localhost -t test -m "hello"
@@ -26,6 +26,10 @@ export interface MonitoringClient {
  screenHealthStatus?: string | null;
  lastScreenshotAnalyzed?: string | null;
  lastScreenshotHash?: string | null;
  latestScreenshotType?: 'periodic' | 'event_start' | 'event_stop' | null;
  priorityScreenshotType?: 'event_start' | 'event_stop' | null;
  priorityScreenshotReceivedAt?: string | null;
  hasActivePriorityScreenshot?: boolean;
  screenshotUrl: string;
  logCounts24h: {
    error: number;
@@ -47,6 +51,7 @@ export interface MonitoringOverview {
    criticalClients: number;
    errorLogs: number;
    warnLogs: number;
    activePriorityScreenshots: number;
  };
  periodHours: number;
  gracePeriodSeconds: number;
@@ -194,6 +194,32 @@
  margin-top: 0.55rem;
  font-size: 0.88rem;
  color: #64748b;
  display: flex;
  flex-direction: column;
  gap: 0.35rem;
}

.monitoring-shot-type {
  display: inline-flex;
  align-items: center;
  border-radius: 999px;
  padding: 0.15rem 0.55rem;
  font-size: 0.78rem;
  font-weight: 700;
}

.monitoring-shot-type-periodic {
  background: #e2e8f0;
  color: #334155;
}

.monitoring-shot-type-event {
  background: #ffedd5;
  color: #9a3412;
}

.monitoring-shot-type-active {
  box-shadow: 0 0 0 2px #fdba74;
}

.monitoring-error-box {
@@ -25,6 +25,7 @@ import { DialogComponent } from '@syncfusion/ej2-react-popups';
import './monitoring.css';

const REFRESH_INTERVAL_MS = 15000;
const PRIORITY_REFRESH_INTERVAL_MS = 3000;

const hourOptions = [
  { text: 'Letzte 6 Stunden', value: 6 },
@@ -95,6 +96,19 @@ function statusBadge(status: string) {
  );
}

function screenshotTypeBadge(type?: string | null, hasPriority = false) {
  const normalized = (type || 'periodic').toLowerCase();
  const map: Record<string, { label: string; className: string }> = {
    periodic: { label: 'Periodisch', className: 'monitoring-shot-type-periodic' },
    event_start: { label: 'Event-Start', className: 'monitoring-shot-type-event' },
    event_stop: { label: 'Event-Stopp', className: 'monitoring-shot-type-event' },
  };

  const info = map[normalized] || map.periodic;
  const classes = `monitoring-shot-type ${info.className}${hasPriority ? ' monitoring-shot-type-active' : ''}`;
  return <span className={classes}>{info.label}</span>;
}

function renderMetricCard(title: string, value: number, subtitle: string, accent: string) {
  return (
    <div className="e-card monitoring-metric-card" style={{ borderTop: `4px solid ${accent}` }}>
@@ -188,12 +202,14 @@ const MonitoringDashboard: React.FC = () => {
  }, [hours, loadOverview]);

  React.useEffect(() => {
    const hasActivePriorityScreenshots = (overview?.summary.activePriorityScreenshots || 0) > 0;
    const intervalMs = hasActivePriorityScreenshots ? PRIORITY_REFRESH_INTERVAL_MS : REFRESH_INTERVAL_MS;
    const intervalId = window.setInterval(() => {
      loadOverview(hours);
    }, REFRESH_INTERVAL_MS);
    }, intervalMs);

    return () => window.clearInterval(intervalId);
  }, [hours, loadOverview]);
  }, [hours, loadOverview, overview?.summary.activePriorityScreenshots]);

  React.useEffect(() => {
    if (!selectedClientUuid) {
@@ -288,6 +304,7 @@ const MonitoringDashboard: React.FC = () => {
      {renderMetricCard('Warnungen', overview?.summary.warningClients || 0, 'Warn-Logs oder Übergangszustände', '#d97706')}
      {renderMetricCard('Kritisch', overview?.summary.criticalClients || 0, 'Crashs oder Fehler-Logs', '#dc2626')}
      {renderMetricCard('Offline', overview?.summary.offlineClients || 0, 'Keine frischen Signale', '#475569')}
      {renderMetricCard('Prioritäts-Screens', overview?.summary.activePriorityScreenshots || 0, 'Event-Start/Stop aktiv', '#ea580c')}
      {renderMetricCard('Fehler-Logs', overview?.summary.errorLogs || 0, 'Im gewählten Zeitraum', '#b91c1c')}
    </div>
@@ -380,6 +397,21 @@ const MonitoringDashboard: React.FC = () => {
            <span>Letzte Analyse</span>
            <strong>{formatTimestamp(selectedClient.lastScreenshotAnalyzed)}</strong>
          </div>
          <div className="monitoring-detail-row">
            <span>Screenshot-Typ</span>
            <strong>
              {screenshotTypeBadge(
                selectedClient.latestScreenshotType,
                !!selectedClient.hasActivePriorityScreenshot
              )}
            </strong>
          </div>
          {selectedClient.priorityScreenshotReceivedAt && (
            <div className="monitoring-detail-row">
              <span>Priorität empfangen</span>
              <strong>{formatTimestamp(selectedClient.priorityScreenshotReceivedAt)}</strong>
            </div>
          )}
        </div>
      ) : (
        <MessageComponent severity="Info" content="Wählen Sie links einen Client aus." />
@@ -407,7 +439,14 @@ const MonitoringDashboard: React.FC = () => {
        />
      )}
      <div className="monitoring-screenshot-meta">
        Empfangen: {formatTimestamp(selectedClient.lastScreenshotAnalyzed)}
        <span>Empfangen: {formatTimestamp(selectedClient.lastScreenshotAnalyzed)}</span>
        <span>
          Typ:{' '}
          {screenshotTypeBadge(
            selectedClient.latestScreenshotType,
            !!selectedClient.hasActivePriorityScreenshot
          )}
        </span>
      </div>
    </>
  ) : (
@@ -158,14 +158,25 @@ def apply_monitoring_update(client_obj, *, event_id=None, process_name=None, pro
def _extract_image_and_timestamp(data):
    image_value = None
    timestamp_value = None
    screenshot_type = None

    if not isinstance(data, dict):
        return None, None
        return None, None, None

    screenshot_obj = data.get("screenshot") if isinstance(data.get("screenshot"), dict) else None
    metadata_obj = data.get("metadata") if isinstance(data.get("metadata"), dict) else None
    screenshot_meta_obj = screenshot_obj.get("metadata") if screenshot_obj and isinstance(screenshot_obj.get("metadata"), dict) else None

    for container in (data, screenshot_obj, metadata_obj, screenshot_meta_obj):
        if not isinstance(container, dict):
            continue
        raw_type = container.get("screenshot_type") or container.get("screenshotType")
        if raw_type is not None:
            normalized_type = str(raw_type).strip().lower()
            if normalized_type in ("periodic", "event_start", "event_stop"):
                screenshot_type = normalized_type
                break

    for key in ("image", "data"):
        if isinstance(data.get(key), str) and data.get(key):
            image_value = data.get(key)
@@ -183,9 +194,9 @@ def _extract_image_and_timestamp(data):
            value = container.get(key)
            if value is not None:
                timestamp_value = value
                return image_value, timestamp_value
                return image_value, timestamp_value, screenshot_type

    return image_value, timestamp_value
    return image_value, timestamp_value, screenshot_type


def handle_screenshot(uuid, payload):
@@ -197,12 +208,14 @@ def handle_screenshot(uuid, payload):
    # Try to parse as JSON first
    try:
        data = json.loads(payload.decode())
        image_b64, timestamp_value = _extract_image_and_timestamp(data)
        image_b64, timestamp_value, screenshot_type = _extract_image_and_timestamp(data)
        if image_b64:
            # Payload is JSON with base64 image
            api_payload = {"image": image_b64}
            if timestamp_value is not None:
                api_payload["timestamp"] = timestamp_value
            if screenshot_type:
                api_payload["screenshot_type"] = screenshot_type
            headers = {"Content-Type": "application/json"}
            logging.debug(f"Forwarding base64 screenshot from {uuid} to API")
        else:
@@ -261,12 +274,14 @@ def on_message(client, userdata, msg):
    try:
        payload_text = msg.payload.decode()
        data = json.loads(payload_text)
        image_b64, ts_value = _extract_image_and_timestamp(data)
        image_b64, ts_value, screenshot_type = _extract_image_and_timestamp(data)
        if image_b64:
            logging.debug(f"Dashboard enthält Screenshot für {uuid}; Weiterleitung an API")
            dashboard_payload = {"image": image_b64}
            if ts_value is not None:
                dashboard_payload["timestamp"] = ts_value
            if screenshot_type:
                dashboard_payload["screenshot_type"] = screenshot_type
            api_payload = json.dumps(dashboard_payload).encode("utf-8")
            handle_screenshot(uuid, api_payload)
        # Update last_alive if status present
@@ -306,6 +306,7 @@ def format_event_with_media(event):
        "autoplay": getattr(event, "autoplay", True),
        "loop": getattr(event, "loop", False),
        "volume": getattr(event, "volume", 0.8),
        "muted": getattr(event, "muted", False),
        # Best-effort metadata to help clients decide how to stream
        "mime_type": mime_type,
        "size": size,
@@ -11,6 +11,7 @@ import glob
from server.serializers import dict_to_camel_case

client_logs_bp = Blueprint("client_logs", __name__, url_prefix="/api/client-logs")
PRIORITY_SCREENSHOT_TTL_SECONDS = int(os.environ.get("PRIORITY_SCREENSHOT_TTL_SECONDS", "120"))


def _grace_period_seconds():
@@ -90,6 +91,34 @@ def _infer_last_screenshot_ts(client_uuid):
    return None


def _load_screenshot_metadata(client_uuid):
    screenshots_dir = os.path.join(os.path.dirname(__file__), "..", "screenshots")
    metadata_path = os.path.join(screenshots_dir, f"{client_uuid}_meta.json")
    if not os.path.exists(metadata_path):
        return {}

    try:
        with open(metadata_path, "r", encoding="utf-8") as metadata_file:
            data = json.load(metadata_file)
            return data if isinstance(data, dict) else {}
    except Exception:
        return {}


def _is_priority_screenshot_active(priority_received_at):
    if not priority_received_at:
        return False

    try:
        normalized = str(priority_received_at).replace("Z", "+00:00")
        parsed = datetime.fromisoformat(normalized)
        parsed_utc = _to_utc(parsed)
    except Exception:
        return False

    return (datetime.now(timezone.utc) - parsed_utc) <= timedelta(seconds=PRIORITY_SCREENSHOT_TTL_SECONDS)


@client_logs_bp.route("/test", methods=["GET"])
def test_client_logs():
    """Test endpoint to verify logging infrastructure (no auth required)"""
@@ -326,6 +355,7 @@ def get_monitoring_overview():
        "critical_clients": 0,
        "error_logs": 0,
        "warn_logs": 0,
        "active_priority_screenshots": 0,
    }

    for client, group_name in clients:
@@ -352,6 +382,12 @@ def get_monitoring_overview():
        )

        screenshot_ts = client.last_screenshot_analyzed or _infer_last_screenshot_ts(client.uuid)
        screenshot_meta = _load_screenshot_metadata(client.uuid)
        latest_screenshot_type = screenshot_meta.get("latest_screenshot_type") or "periodic"
        priority_screenshot_type = screenshot_meta.get("last_priority_screenshot_type")
        priority_screenshot_received_at = screenshot_meta.get("last_priority_received_at")
        has_active_priority = _is_priority_screenshot_active(priority_screenshot_received_at)
        screenshot_url = f"/screenshots/{client.uuid}/priority" if has_active_priority else f"/screenshots/{client.uuid}"

        clients_payload.append({
            "uuid": client.uuid,
@@ -372,7 +408,11 @@ def get_monitoring_overview():
            "screen_health_status": screen_health_status,
            "last_screenshot_analyzed": screenshot_ts.isoformat() if screenshot_ts else None,
            "last_screenshot_hash": client.last_screenshot_hash,
            "screenshot_url": f"/screenshots/{client.uuid}",
            "latest_screenshot_type": latest_screenshot_type,
            "priority_screenshot_type": priority_screenshot_type,
            "priority_screenshot_received_at": priority_screenshot_received_at,
            "has_active_priority_screenshot": has_active_priority,
            "screenshot_url": screenshot_url,
            "log_counts_24h": {
                "error": log_counts["ERROR"],
                "warn": log_counts["WARN"],
@@ -386,6 +426,8 @@ def get_monitoring_overview():
        summary_counts["total_clients"] += 1
        summary_counts["error_logs"] += log_counts["ERROR"]
        summary_counts["warn_logs"] += log_counts["WARN"]
        if has_active_priority:
            summary_counts["active_priority_screenshots"] += 1
        if is_alive:
            summary_counts["online_clients"] += 1
        else:
@@ -4,11 +4,58 @@ from flask import Blueprint, request, jsonify
 from server.permissions import admin_or_higher
 from server.mqtt_helper import publish_client_group, delete_client_group_message, publish_multiple_client_groups
 import sys
+import os
+import glob
+import base64
+import hashlib
+import json
+from datetime import datetime, timezone
 sys.path.append('/workspace')

 clients_bp = Blueprint("clients", __name__, url_prefix="/api/clients")

+VALID_SCREENSHOT_TYPES = {"periodic", "event_start", "event_stop"}
+
+
+def _normalize_screenshot_type(raw_type):
+    if raw_type is None:
+        return "periodic"
+    normalized = str(raw_type).strip().lower()
+    if normalized in VALID_SCREENSHOT_TYPES:
+        return normalized
+    return "periodic"
+
+
+def _parse_screenshot_timestamp(raw_timestamp):
+    if raw_timestamp is None:
+        return None
+    try:
+        if isinstance(raw_timestamp, (int, float)):
+            ts_value = float(raw_timestamp)
+            if ts_value > 1e12:
+                ts_value = ts_value / 1000.0
+            return datetime.fromtimestamp(ts_value, timezone.utc)
+
+        if isinstance(raw_timestamp, str):
+            ts = raw_timestamp.strip()
+            if not ts:
+                return None
+            if ts.isdigit():
+                ts_value = float(ts)
+                if ts_value > 1e12:
+                    ts_value = ts_value / 1000.0
+                return datetime.fromtimestamp(ts_value, timezone.utc)
+
+            ts_normalized = ts.replace("Z", "+00:00") if ts.endswith("Z") else ts
+            parsed = datetime.fromisoformat(ts_normalized)
+            if parsed.tzinfo is None:
+                return parsed.replace(tzinfo=timezone.utc)
+            return parsed.astimezone(timezone.utc)
+    except Exception:
+        return None
+
+    return None
+
+
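The magnitude heuristic in `_parse_screenshot_timestamp` treats numeric values above `1e12` as milliseconds since the epoch and everything else as seconds. That branch can be checked in isolation with a standalone copy:

```python
from datetime import datetime, timezone


def parse_epoch(raw):
    # Mirrors the numeric branch of _parse_screenshot_timestamp:
    # values above 1e12 are assumed to be milliseconds since the epoch.
    ts_value = float(raw)
    if ts_value > 1e12:
        ts_value = ts_value / 1000.0
    return datetime.fromtimestamp(ts_value, timezone.utc)


# 1700000000 s and 1700000000000 ms name the same instant.
assert parse_epoch(1700000000) == parse_epoch(1700000000000)
```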
 @clients_bp.route("/sync-all-groups", methods=["POST"])
 @admin_or_higher
@@ -282,9 +329,6 @@ def upload_screenshot(uuid):
     Screenshots are stored as {uuid}.jpg in the screenshots folder.
     Keeps last 20 screenshots per client (auto-cleanup).
     """
-    import os
-    import base64
-    import glob
     session = Session()
     client = session.query(Client).filter_by(uuid=uuid).first()
     if not client:
@@ -293,6 +337,7 @@ def upload_screenshot(uuid):

     try:
         screenshot_timestamp = None
+        screenshot_type = "periodic"

         # Handle JSON payload with base64-encoded image
         if request.is_json:
@@ -300,31 +345,8 @@ def upload_screenshot(uuid):
             if "image" not in data:
                 return jsonify({"error": "Missing 'image' field in JSON payload"}), 400

-            raw_timestamp = data.get("timestamp")
-            if raw_timestamp is not None:
-                try:
-                    if isinstance(raw_timestamp, (int, float)):
-                        ts_value = float(raw_timestamp)
-                        if ts_value > 1e12:
-                            ts_value = ts_value / 1000.0
-                        screenshot_timestamp = datetime.fromtimestamp(ts_value, timezone.utc)
-                    elif isinstance(raw_timestamp, str):
-                        ts = raw_timestamp.strip()
-                        if ts:
-                            if ts.isdigit():
-                                ts_value = float(ts)
-                                if ts_value > 1e12:
-                                    ts_value = ts_value / 1000.0
-                                screenshot_timestamp = datetime.fromtimestamp(ts_value, timezone.utc)
-                            else:
-                                ts_normalized = ts.replace("Z", "+00:00") if ts.endswith("Z") else ts
-                                screenshot_timestamp = datetime.fromisoformat(ts_normalized)
-                                if screenshot_timestamp.tzinfo is None:
-                                    screenshot_timestamp = screenshot_timestamp.replace(tzinfo=timezone.utc)
-                                else:
-                                    screenshot_timestamp = screenshot_timestamp.astimezone(timezone.utc)
-                except Exception:
-                    screenshot_timestamp = None
+            screenshot_timestamp = _parse_screenshot_timestamp(data.get("timestamp"))
+            screenshot_type = _normalize_screenshot_type(data.get("screenshot_type") or data.get("screenshotType"))

             # Decode base64 image
             image_data = base64.b64decode(data["image"])
@@ -341,8 +363,8 @@ def upload_screenshot(uuid):

         # Store screenshot with timestamp to track latest
         now_utc = screenshot_timestamp or datetime.now(timezone.utc)
-        timestamp = now_utc.strftime("%Y%m%d_%H%M%S")
-        filename = f"{uuid}_{timestamp}.jpg"
+        timestamp = now_utc.strftime("%Y%m%d_%H%M%S_%f")
+        filename = f"{uuid}_{timestamp}_{screenshot_type}.jpg"
         filepath = os.path.join(screenshots_dir, filename)

         with open(filepath, "wb") as f:
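Adding `%f` (microseconds) to the timestamp format matters because two uploads in the same second would otherwise collide on the same filename and overwrite each other; embedding the type in the name also makes the badge data recoverable from disk. A quick comparison of the two formats, using a made-up UUID:

```python
from datetime import datetime, timezone

uuid = "abc-123"  # made-up client UUID for illustration
screenshot_type = "event_start"
now_utc = datetime(2025, 1, 2, 3, 4, 5, 678901, tzinfo=timezone.utc)

old_name = f"{uuid}_{now_utc.strftime('%Y%m%d_%H%M%S')}.jpg"
new_name = f"{uuid}_{now_utc.strftime('%Y%m%d_%H%M%S_%f')}_{screenshot_type}.jpg"

assert old_name == "abc-123_20250102_030405.jpg"
assert new_name == "abc-123_20250102_030405_678901_event_start.jpg"
```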
@@ -353,13 +375,42 @@ def upload_screenshot(uuid):
         with open(latest_filepath, "wb") as f:
             f.write(image_data)

+        # Keep a dedicated copy for high-priority event screenshots.
+        if screenshot_type in ("event_start", "event_stop"):
+            priority_filepath = os.path.join(screenshots_dir, f"{uuid}_priority.jpg")
+            with open(priority_filepath, "wb") as f:
+                f.write(image_data)
+
+        metadata_path = os.path.join(screenshots_dir, f"{uuid}_meta.json")
+        metadata = {}
+        if os.path.exists(metadata_path):
+            try:
+                with open(metadata_path, "r", encoding="utf-8") as meta_file:
+                    metadata = json.load(meta_file)
+            except Exception:
+                metadata = {}
+
+        metadata.update({
+            "latest_screenshot_type": screenshot_type,
+            "latest_received_at": now_utc.isoformat(),
+        })
+        if screenshot_type in ("event_start", "event_stop"):
+            metadata["last_priority_screenshot_type"] = screenshot_type
+            metadata["last_priority_received_at"] = now_utc.isoformat()
+
+        with open(metadata_path, "w", encoding="utf-8") as meta_file:
+            json.dump(metadata, meta_file)
+
         # Update screenshot receive timestamp for monitoring dashboard
         client.last_screenshot_analyzed = now_utc
+        client.last_screenshot_hash = hashlib.md5(image_data).hexdigest()
         session.commit()

         # Cleanup: keep only last 20 timestamped screenshots per client
         pattern = os.path.join(screenshots_dir, f"{uuid}_*.jpg")
-        existing_screenshots = sorted(glob.glob(pattern))
+        existing_screenshots = sorted(
+            [path for path in glob.glob(pattern) if not path.endswith("_priority.jpg")]
+        )

         # Keep last 20, delete older ones
         max_screenshots = 20
@@ -376,7 +427,8 @@ def upload_screenshot(uuid):
             "success": True,
             "message": f"Screenshot received for client {uuid}",
             "filename": filename,
-            "size": len(image_data)
+            "size": len(image_data),
+            "screenshot_type": screenshot_type,
         }), 200

     except Exception as e:
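The `_priority.jpg` exclusion in the cleanup is what keeps the dedicated priority copy alive across uploads: without it, the keep-last-20 pass would eventually delete the file the priority endpoint serves. The selection logic can be sketched purely on filenames, no filesystem needed (function name is illustrative, not from the diff):

```python
def screenshots_to_delete(paths, max_screenshots=20):
    # Mirrors the cleanup in upload_screenshot: the priority copy is never
    # a deletion candidate, and only the oldest beyond the limit are removed.
    candidates = sorted(p for p in paths if not p.endswith("_priority.jpg"))
    if len(candidates) <= max_screenshots:
        return []
    return candidates[:len(candidates) - max_screenshots]


files = [f"abc_2025010203{i:04d}_periodic.jpg" for i in range(25)]
files.append("abc_priority.jpg")
doomed = screenshots_to_delete(files)
assert len(doomed) == 5
assert "abc_priority.jpg" not in doomed
```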
@@ -68,6 +68,16 @@ def index():
     return "Hello from Infoscreen-API!"


+@app.route("/screenshots/<uuid>/priority")
+def get_priority_screenshot(uuid):
+    normalized_uuid = uuid[:-4] if uuid.lower().endswith('.jpg') else uuid
+    priority_filename = f"{normalized_uuid}_priority.jpg"
+    priority_path = os.path.join("screenshots", priority_filename)
+    if os.path.exists(priority_path):
+        return send_from_directory("screenshots", priority_filename)
+    return get_screenshot(uuid)
+
+
 @app.route("/screenshots/<uuid>")
 @app.route("/screenshots/<uuid>.jpg")
 def get_screenshot(uuid):
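The endpoint's fallback behavior (strip an optional `.jpg` suffix, prefer the dedicated priority copy, otherwise delegate to the normal screenshot route) can be exercised outside Flask with a path-resolution sketch (function name is illustrative):

```python
import os
import tempfile


def resolve_screenshot(screenshots_dir, uuid):
    # Mirrors get_priority_screenshot: strip an optional .jpg suffix,
    # prefer the dedicated priority copy, fall back to the normal one.
    normalized = uuid[:-4] if uuid.lower().endswith(".jpg") else uuid
    priority = os.path.join(screenshots_dir, f"{normalized}_priority.jpg")
    if os.path.exists(priority):
        return priority
    return os.path.join(screenshots_dir, f"{normalized}.jpg")


with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "abc.jpg"), "wb").close()
    # No priority copy yet: falls back to the normal screenshot.
    assert resolve_screenshot(d, "abc.jpg").endswith("abc.jpg")
    open(os.path.join(d, "abc_priority.jpg"), "wb").close()
    assert resolve_screenshot(d, "abc").endswith("abc_priority.jpg")
```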