# Compare commits

21 commits: `recurring_` … `mqtt-trans`

- a58e9d3fca
- 90ccbdf920
- 24cdf07279
- 9c330f984f
- 3107d0f671
- 7746e26385
- 10f446dfb5
- 5a0c1bc686
- c193209326
- df9f29bc6a
- 6dcf93f0dd
- 452ba3033b
- 38800cec68
- e6c19c189f
- c9cc535fc6
- 3487d33a2f
- 150937f2e2
- 7b38b49598
- a7df3c2708
- 8676370fe2
- 5f0972c79c
**`.env.example`** (19 changed lines)

```
@@ -4,6 +4,11 @@
# General
ENV=development

# Flask
# IMPORTANT: Generate a secure random key for production
# e.g., python -c 'import secrets; print(secrets.token_hex(32))'
FLASK_SECRET_KEY=dev-secret-key-change-in-production

# Database (used if DB_CONN not provided)
DB_USER=your_user
DB_PASSWORD=your_password

@@ -24,13 +29,17 @@ MQTT_KEEPALIVE=60
# VITE_API_URL=https://your.api.example.com/api

# Groups alive windows (seconds)
HEARTBEAT_GRACE_PERIOD_DEV=15
HEARTBEAT_GRACE_PERIOD_PROD=180
# Clients send heartbeats every ~65s. Allow 2 missed heartbeats + safety margin
# Dev: 65s * 2 + 50s margin = 180s
# Prod: 65s * 2 + 40s margin = 170s
HEARTBEAT_GRACE_PERIOD_DEV=180
HEARTBEAT_GRACE_PERIOD_PROD=170

# Scheduler
# Optional: force periodic republish even without changes
# REFRESH_SECONDS=0

# Default admin bootstrap (server/init_defaults.py)
DEFAULT_ADMIN_USERNAME=infoscreen_admin
DEFAULT_ADMIN_PASSWORD=Info_screen_admin25!
# Default superadmin bootstrap (server/init_defaults.py)
# REQUIRED: Must be set for superadmin creation
DEFAULT_SUPERADMIN_USERNAME=superadmin
DEFAULT_SUPERADMIN_PASSWORD=your_secure_password_here
```
**`.github/copilot-instructions.md`** (vendored, 347 changed lines)

Prefer explanations and refactors that align with these structures.

Use this as your shared context when proposing changes. Keep edits minimal and match existing patterns referenced below.

## TL;DR
Small multi-service digital signage app (Flask API, React dashboard, MQTT scheduler). Edit `server/` for API logic, `scheduler/` for event publishing, and `dashboard/` for UI. If you're asking Copilot for changes, prefer focused prompts that include the target file(s) and the desired behavior.

### How to ask Copilot
- "Add a new route `GET /api/events/summary` that returns counts per event_type — implement in `server/routes/events.py`."
- "Create an Alembic migration to add `duration` and `resolution` to `event_media` and update the upload handler to populate them."
- "Refactor `scheduler/db_utils.py` to prefer precomputed EventMedia metadata and fall back to a HEAD probe."
- "Add an ffprobe-based worker that extracts duration/resolution/bitrate and stores them on `EventMedia`."

Keep docs synced with code. When you change services/MQTT/API/UTC/env or dev/prod run steps, update this file in the same commit (see `AI-INSTRUCTIONS-MAINTENANCE.md`).

### When not to change
- Avoid editing generated assets under `dashboard/dist/` and compiled bundles. Don't modify files produced by CI or Docker builds (unless intentionally updating build outputs).

### Contact / owner
- Primary maintainer: RobbStarkAustria (owner). For architecture questions, ping the repo owner or open an issue and tag `@RobbStarkAustria`.

### Important files (quick jump targets)
- `scheduler/db_utils.py` — event formatting and scheduler-facing logic
- `scheduler/scheduler.py` — scheduler main loop and MQTT publisher
- `server/routes/eventmedia.py` — file uploads, streaming endpoint
- `server/routes/events.py` — event CRUD and recurrence handling
- `server/routes/groups.py` — group management, alive status, display order persistence
- `dashboard/src/components/CustomEventModal.tsx` — event creation UI
- `dashboard/src/media.tsx` — FileManager / upload settings
- `dashboard/src/settings.tsx` — settings UI (nested tabs; system defaults for presentations and videos)
- `dashboard/src/ressourcen.tsx` — timeline view showing all groups' active events in parallel
- `dashboard/src/ressourcen.css` — timeline and resource view styling
- `dashboard/src/monitoring.tsx` — superadmin-only monitoring dashboard for client health, screenshots, and logs
## Big picture
- Multi-service app orchestrated by Docker Compose.
- API: Flask + SQLAlchemy (MariaDB), in `server/`, exposed on :8000 (health: `/health`).
- Dashboard: React + Vite in `dashboard/`, dev on :5173, served via Nginx in prod.
- MQTT broker: Eclipse Mosquitto, config in `mosquitto/config/mosquitto.conf`.
- Listener: MQTT consumer handling discovery, heartbeats, and dashboard screenshot uploads in `listener/listener.py`.
- Scheduler: Publishes only currently active events (per group, at "now") to MQTT retained topics in `scheduler/scheduler.py`. It queries a future window (default: 7 days) to expand recurring events using RFC 5545 rules and applies event exceptions, but only publishes events that are active at the current time. When a group has no active events, the scheduler clears its retained topic by publishing an empty list. All time comparisons are UTC; any naive timestamps are normalized. Logging is concise; conversion lookups are cached and logged only once per media.
- Nginx: Reverse proxy routes `/api/*` and `/screenshots/*` to API; everything else to dashboard (`nginx.conf`).

- Dev Container (hygiene): UI-only `Dev Containers` extension runs on host UI via `remote.extensionKind`; do not install it in-container. Dashboard installs use `npm ci`; shell aliases in `postStartCommand` are appended idempotently.

### Screenshot retention
- Screenshots sent via dashboard MQTT are stored in `server/screenshots/`.
- Screenshot payloads support `screenshot_type` with values `periodic`, `event_start`, `event_stop`.
- `periodic` is the normal heartbeat/dashboard screenshot path; `event_start` and `event_stop` are high-priority screenshots for monitoring.
- For each client, the API keeps `{uuid}.jpg` as latest and the last 20 timestamped screenshots (`{uuid}_..._{type}.jpg`), deleting older timestamped files automatically.
- For high-priority screenshots, the API additionally maintains `{uuid}_priority.jpg` and metadata in `{uuid}_meta.json` (`latest_screenshot_type`, `last_priority_*`).
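The retention rule above (keep the latest image plus the 20 newest timestamped files per client) can be sketched like this; the filename pattern follows the description, but the helper itself is illustrative, not the API's actual implementation:

```python
from pathlib import Path

MAX_TIMESTAMPED = 20  # per the retention rule above

def prune_screenshots(folder: Path, uuid: str) -> list[Path]:
    """Delete all but the newest MAX_TIMESTAMPED timestamped screenshots.

    `{uuid}.jpg` (latest) and `{uuid}_priority.jpg` are never touched.
    Returns the files that were removed.
    """
    keep_always = {f"{uuid}.jpg", f"{uuid}_priority.jpg"}
    timestamped = sorted(
        (p for p in folder.glob(f"{uuid}_*.jpg") if p.name not in keep_always),
        key=lambda p: p.stat().st_mtime,
        reverse=True,  # newest first
    )
    removed = timestamped[MAX_TIMESTAMPED:]
    for p in removed:
        p.unlink()
    return removed
```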
## Recent changes since last commit

### Latest (March 2026)

- **Monitoring System Completion (no version bump)**:
  - End-to-end monitoring pipeline completed: MQTT logs/health → listener persistence → monitoring APIs → superadmin dashboard
  - API now serves aggregated monitoring via `GET /api/client-logs/monitoring-overview` and system-wide recent errors via `GET /api/client-logs/recent-errors`
  - Monitoring dashboard (`dashboard/src/monitoring.tsx`) is active and displays client health states, screenshots, process metadata, and recent log activity
- **Screenshot Priority Pipeline (no version bump)**:
  - Listener forwards `screenshot_type` from MQTT screenshot/dashboard payloads to `POST /api/clients/<uuid>/screenshot`.
  - API stores typed screenshots, tracks latest/priority metadata, and serves priority images via `GET /screenshots/<uuid>/priority`.
  - Monitoring overview exposes screenshot priority state (`latestScreenshotType`, `priorityScreenshotType`, `priorityScreenshotReceivedAt`, `hasActivePriorityScreenshot`) and `summary.activePriorityScreenshots`.
  - Monitoring UI shows screenshot type badges and switches to faster refresh while priority screenshots are active.
- **MQTT Dashboard Payload v2 Cutover (no version bump)**:
  - Dashboard payload parsing in `listener/listener.py` is now v2-only (`message`, `content`, `runtime`, `metadata`).
  - Legacy top-level dashboard fallback was removed after migration soak (`legacy_fallback=0`).
  - Listener observability summarizes parser health using `v2_success` and `parse_failures` counters.
- **Presentation Flags Persistence Fix**:
  - Fixed persistence for presentation `page_progress` and `auto_progress` to ensure values are reliably stored and returned across create/update paths and detached occurrences
### Earlier (January 2026)

- **Ressourcen Page (Timeline View)**:
  - New 'Ressourcen' page with parallel timeline view showing active events for all room groups
  - Compact timeline display with adjustable row height (65px per group)
  - Real-time view of currently running events with type, title, and time window
  - Customizable group ordering with visual reordering panel (drag up/down buttons)
  - Group order persisted via `GET/POST /api/groups/order` endpoints
  - Color-coded event bars matching group theme
  - Timeline modes: Day and Week views (day view by default)
  - Dynamic height calculation based on number of groups
  - Syncfusion ScheduleComponent with TimelineViews, Resize, and DragAndDrop support
  - Files: `dashboard/src/ressourcen.tsx` (page), `dashboard/src/ressourcen.css` (styles)
### Earlier (November 2025)

- **API Naming Convention Standardization (camelCase)**:
  - Backend: Created `server/serializers.py` with `dict_to_camel_case()` utility for consistent JSON serialization
  - Events API: `GET /api/events` and `GET /api/events/<id>` now return camelCase fields (`id`, `subject`, `startTime`, `endTime`, `type`, `groupId`, etc.) instead of PascalCase
  - Frontend: Dashboard and appointments page updated to consume camelCase API responses
  - Appointments page maintains internal PascalCase for Syncfusion scheduler compatibility, with automatic mapping from API responses
  - **Breaking**: External API consumers must update field names from PascalCase to camelCase
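The `dict_to_camel_case()` utility mentioned above can be sketched roughly like this; the actual `server/serializers.py` may handle more cases (enum values, datetimes):

```python
def to_camel_case(name: str) -> str:
    """snake_case -> camelCase: 'start_time' -> 'startTime'."""
    head, *rest = name.split("_")
    return head + "".join(part.capitalize() for part in rest)

def dict_to_camel_case(obj):
    """Recursively convert dict keys from snake_case to camelCase."""
    if isinstance(obj, dict):
        return {to_camel_case(k): dict_to_camel_case(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [dict_to_camel_case(v) for v in obj]
    return obj

print(dict_to_camel_case({"start_time": "2025-11-27T20:03:00", "group_id": 3}))
# {'startTime': '2025-11-27T20:03:00', 'groupId': 3}
```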
- **UTC Time Handling**:
  - Database stores all timestamps in UTC (naive timestamps normalized by backend)
  - API returns ISO strings without 'Z' suffix: `"2025-11-27T20:03:00"`
  - Frontend: Dashboard and appointments automatically append 'Z' to parse as UTC and display in the user's local timezone
  - Time formatting functions use `toLocaleTimeString('de-DE')` for German locale display
  - All time comparisons use UTC; `new Date().toISOString()` sends UTC back to the API
  - API returns ISO strings without `Z`; frontend must append `Z` before parsing to ensure UTC
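On the backend side, the normalization described above amounts to treating naive timestamps as UTC and emitting ISO strings without a `Z` suffix. A sketch under those conventions (helper names are illustrative, not the repo's actual functions):

```python
from datetime import datetime, timezone

def normalize_utc(dt: datetime) -> datetime:
    """Treat naive timestamps as UTC; convert aware ones to UTC."""
    if dt.tzinfo is None:
        return dt.replace(tzinfo=timezone.utc)
    return dt.astimezone(timezone.utc)

def to_api_iso(dt: datetime) -> str:
    """API convention: ISO string without 'Z' suffix."""
    return normalize_utc(dt).replace(tzinfo=None).isoformat()

print(to_api_iso(datetime(2025, 11, 27, 20, 3)))  # 2025-11-27T20:03:00
```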
- **Dashboard Enhancements**:
  - New card-based design for Raumgruppen (room groups) with Syncfusion components
  - Global statistics summary: total infoscreens, online/offline counts, warning groups
  - Filter buttons: All, Online, Offline, Warnings with dynamic counts
  - Active event display per group: shows currently playing content with type icon, title, date, and time
  - Health visualization with color-coded progress bars per group
  - Expandable client details with last alive timestamps
  - Bulk restart functionality for offline clients per group
  - Manual refresh button with toast notifications
  - 15-second auto-refresh interval
### Earlier changes

- Scheduler: when formatting video events, the scheduler now performs a best-effort HEAD probe of the streaming URL and includes basic metadata in the emitted payload (mime_type, size, accept_ranges). Placeholders for richer metadata (duration, resolution, bitrate, qualities, thumbnails, checksum) are included for later population by a background worker.
- Streaming endpoint: a range-capable streaming endpoint was added at `/api/eventmedia/stream/<media_id>/<filename>` that supports byte-range requests (206 Partial Content) to enable seeking from clients.
- Event model & API: `Event` gained video-related fields (`event_media_id`, `autoplay`, `loop`, `volume`) and the API accepts and persists these when creating/updating video events.
- Dashboard: UI updated to allow selecting uploaded videos for events and to specify autoplay/loop/volume. File upload settings were increased (maxFileSize raised) and the client now validates video duration (max 10 minutes) before upload.
- FileManager: uploads compute basic metadata and enqueue conversions for office formats as before; video uploads now surface size and are streamable via the new endpoint.

- Event model & API (new): Added `muted` (Boolean) for video events; create/update and GET endpoints accept, persist, and return `muted` alongside `autoplay`, `loop`, and `volume`.
- Dashboard — Settings: Settings page refactored to nested tabs; added Events → Videos defaults (autoplay, loop, volume, mute) backed by system settings keys (`video_autoplay`, `video_loop`, `video_volume`, `video_muted`).
- Dashboard — Events UI: CustomEventModal now exposes per-event video `muted` and initializes all video fields from system defaults when creating a new event.
- Dashboard — Academic Calendar: Merged "School Holidays Import" and "List" into a single "📥 Import & Liste" tab; nested tab selection is persisted with controlled `selectedItem` state to avoid jumps.

Note: these edits are intentionally backwards-compatible — if the probe fails, the scheduler still emits the stream URL and the client should fall back to a direct play attempt or request richer metadata when available.

Backend rework notes (no version bump):
- Dev container hygiene: UI-only Remote Containers; reproducible dashboard installs (`npm ci`); idempotent shell aliases.
- Serialization consistency: snake_case internal → camelCase external via `server/serializers.py` for all JSON.
- UTC normalization across routes/scheduler; enums and datetimes serialize consistently.
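The best-effort HEAD probe described above can be sketched with the standard library. The payload keys (mime_type, size, accept_ranges, plus worker placeholders) come from this document; the helper itself is an illustrative sketch, not the scheduler's actual code:

```python
from urllib.request import Request, urlopen
from urllib.error import URLError

def probe_stream(url: str, timeout: float = 3.0) -> dict:
    """Best-effort HEAD probe; on failure, return placeholders so the
    scheduler can still emit the stream URL."""
    meta = {"mime_type": None, "size": None, "accept_ranges": None,
            # placeholders for a background worker to fill later:
            "duration": None, "resolution": None, "bitrate": None}
    try:
        req = Request(url, method="HEAD")
        with urlopen(req, timeout=timeout) as resp:
            meta["mime_type"] = resp.headers.get("Content-Type")
            size = resp.headers.get("Content-Length")
            meta["size"] = int(size) if size else None
            meta["accept_ranges"] = resp.headers.get("Accept-Ranges")
    except (URLError, OSError, ValueError):
        pass  # probe is best-effort; keep the placeholders
    return meta
```

A server advertising `Accept-Ranges: bytes` is what lets clients seek via 206 Partial Content against the streaming endpoint.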
## Service boundaries & data flow
- Database connection string is passed as `DB_CONN` (mysql+pymysql) to Python services.
- API builds its engine in `server/database.py` (loads `.env` only in development).
- Scheduler loads `DB_CONN` in `scheduler/db_utils.py`.
- Listener also creates its own engine for writes to `clients`.
- Scheduler queries a future window (default: 7 days) to expand recurring events using RFC 5545 rules, applies event exceptions (skipped dates, detached occurrences), and publishes only events that are active at the current time (UTC). When a group has no active events, the scheduler clears its retained topic by publishing an empty list. Time comparisons are UTC; naive timestamps are normalized. Logging is concise; conversion lookups are cached and logged only once per media.
- MQTT topics (paho-mqtt v2, use Callback API v2):
  - Discovery: `infoscreen/discovery` (JSON includes `uuid`, hw/ip data). ACK to `infoscreen/{uuid}/discovery_ack`. See `listener/listener.py`.
  - Heartbeat: `infoscreen/{uuid}/heartbeat` updates `Client.last_alive` (UTC); enhanced payload includes `current_process`, `process_pid`, `process_status`, `current_event_id`.
  - Event lists (retained): `infoscreen/events/{group_id}` from `scheduler/scheduler.py`.
  - Per-client group assignment (retained): `infoscreen/{uuid}/group_id` via `server/mqtt_helper.py`.
  - Client logs: `infoscreen/{uuid}/logs/{error|warn|info}` with JSON payload (timestamp, message, context); QoS 1 for ERROR/WARN, QoS 0 for INFO.
  - Client health: `infoscreen/{uuid}/health` with metrics (expected_state, actual_state, health_metrics); QoS 0, published every 5 seconds.
  - Dashboard screenshots: `infoscreen/{uuid}/dashboard` uses grouped v2 payload blocks (`message`, `content`, `runtime`, `metadata`); listener reads screenshot data from `content.screenshot` and capture type from `metadata.capture.type`.
  - Screenshots: server-side folder `server/screenshots/`; API serves `/screenshots/{uuid}.jpg` (latest) and `/screenshots/{uuid}/priority` (active high-priority, falling back to latest).

- Dev Container guidance: If extensions reappear inside the container, remove UI-only extensions from `devcontainer.json` `extensions` and map them in `remote.extensionKind` as `"ui"`.

- Presentation conversion (PPT/PPTX/ODP → PDF):
  - Trigger: on upload in `server/routes/eventmedia.py` for media types `ppt|pptx|odp` (compute sha256, upsert `Conversion`, enqueue job).
  - Storage: originals under `server/media/…`, outputs under `server/media/converted/` (prod compose mounts a shared volume for this path).
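A stdlib-only sketch of the retained event-list message described above. The topic shape is from this section; with paho-mqtt v2 you would create the client via `mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)` and call `client.publish(topic, payload, retain=True)` — the builder below is illustrative, not `scheduler/scheduler.py` itself:

```python
import json

def build_event_list_message(group_id: int,
                             active_events: list[dict]) -> tuple[str, str]:
    """Return (topic, payload) for the retained per-group event list.

    Publishing an empty list clears the group's retained topic,
    matching the scheduler's behavior when nothing is active "now".
    """
    topic = f"infoscreen/events/{group_id}"
    payload = json.dumps(active_events)  # "[]" when no events are active
    return topic, payload

topic, payload = build_event_list_message(7, [])
print(topic, payload)  # infoscreen/events/7 []
```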
## Data model highlights (see `models/models.py`)
- Enums: `EventType` (presentation, website, video, message, webuntis), `MediaType` (file/website types), and `AcademicPeriodType` (schuljahr, semester, trimester).
- Tables: `clients`, `client_groups`, `events`, `event_media`, `users`, `academic_periods`, `school_holidays`.
- Academic periods: `academic_periods` table supports educational institution cycles (school years, semesters). Events and media can be optionally linked via `academic_period_id` (nullable for backward compatibility).
- Times are stored as timezone-aware; treat comparisons in UTC (see scheduler and routes/events).
- User model: Includes 7 new audit/security fields (migration: `4f0b8a3e5c20_add_user_audit_fields.py`):
  - `last_login_at`, `last_password_change_at`: TIMESTAMP (UTC) tracking for auth events
  - `failed_login_attempts`, `last_failed_login_at`: Security monitoring for brute-force detection
  - `locked_until`: TIMESTAMP placeholder for account lockout (infrastructure in place, not yet enforced)
  - `deactivated_at`, `deactivated_by`: Soft-delete audit trail (FK self-reference); soft deactivation is the default, hard delete is superadmin-only
  - Role hierarchy (privilege escalation enforced): `user` < `editor` < `admin` < `superadmin`
- Client monitoring (migration: `c1d2e3f4g5h6_add_client_monitoring.py`):
  - `ClientLog` model: Centralized log storage with fields (id, client_uuid, timestamp, level, message, context, created_at); FK to clients.uuid (CASCADE)
  - `Client` model extended: 7 health monitoring fields (`current_event_id`, `current_process`, `process_status`, `process_pid`, `last_screenshot_analyzed`, `screen_health_status`, `last_screenshot_hash`)
  - Enums: `LogLevel` (ERROR, WARN, INFO, DEBUG), `ProcessStatus` (running, crashed, starting, stopped), `ScreenHealthStatus` (OK, BLACK, FROZEN, UNKNOWN)
  - Indexes: (client_uuid, timestamp DESC), (level, timestamp DESC), (created_at DESC) for performance
- System settings: `system_settings` key–value store via `SystemSetting` for global configuration (e.g., WebUntis/Vertretungsplan supplement-table). Managed through routes in `server/routes/system_settings.py`.
  - Presentation defaults (system-wide):
    - `presentation_interval` (seconds, default "10")
    - `presentation_page_progress` ("true"/"false", default "true")
    - `presentation_auto_progress` ("true"/"false", default "true")

    Seeded in `server/init_defaults.py` if missing.
  - Video defaults (system-wide):
    - `video_autoplay` ("true"/"false", default "true")
    - `video_loop` ("true"/"false", default "true")
    - `video_volume` (0.0–1.0, default "0.8")
    - `video_muted` ("true"/"false", default "false")

    Used as initial values when creating new video events; editable per event.
- Events: Added `page_progress` (Boolean) and `auto_progress` (Boolean) for presentation behavior per event.
- Event (video fields): `event_media_id`, `autoplay`, `loop`, `volume`, `muted`.
- WebUntis URL: WebUntis uses the existing Vertretungsplan/Supplement-Table URL (`supplement_table_url`). There is no separate `webuntis_url` setting; use `GET/POST /api/system-settings/supplement-table`.

- Conversions:
  - Enum `ConversionStatus`: `pending`, `processing`, `ready`, `failed`.
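The default keys above are seeded by `server/init_defaults.py` if missing; that insert-if-absent pattern can be sketched against a plain dict store (the real code writes `SystemSetting` rows, so this is purely illustrative):

```python
SYSTEM_DEFAULTS = {
    # presentation defaults (values are strings, per the settings store)
    "presentation_interval": "10",
    "presentation_page_progress": "true",
    "presentation_auto_progress": "true",
    # video defaults
    "video_autoplay": "true",
    "video_loop": "true",
    "video_volume": "0.8",
    "video_muted": "false",
}

def seed_defaults(store: dict) -> list[str]:
    """Insert any missing keys; never overwrite existing values."""
    added = []
    for key, value in SYSTEM_DEFAULTS.items():
        if key not in store:
            store[key] = value
            added.append(key)
    return added

store = {"video_volume": "0.5"}  # an existing value is preserved
seed_defaults(store)
print(store["video_volume"])  # 0.5
```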
- Session usage: instantiate `Session()` per request, commit when mutating, and always `session.close()` before returning.
- Examples:
  - Clients: `server/routes/clients.py` includes bulk group updates and MQTT sync (`publish_multiple_client_groups`).
  - Groups: `server/routes/groups.py` computes "alive" using a grace period that varies by `ENV`.
    - `GET /api/groups/order` — retrieve saved group display order
    - `POST /api/groups/order` — persist group display order (array of group IDs)
  - Events: `server/routes/events.py` serializes enum values to strings and normalizes times to UTC. Recurring events are only deactivated after their recurrence_end (UNTIL); non-recurring events deactivate after their end time. Event exceptions are respected and rendered in scheduler output.
  - Media: `server/routes/eventmedia.py` implements a simple file manager API rooted at `server/media/`.
  - System settings: `server/routes/system_settings.py` exposes key–value CRUD (`/api/system-settings`) and a convenience endpoint for the WebUntis/Vertretungsplan supplement-table: `GET/POST /api/system-settings/supplement-table` (admin+).
  - Academic periods: `server/routes/academic_periods.py` exposes:
    - `GET /api/academic_periods` — list all periods
    - `GET /api/academic_periods/active` — currently active period
    - `POST /api/academic_periods/active` — set active period (deactivates others)
    - `GET /api/academic_periods/for_date?date=YYYY-MM-DD` — period covering a given date
  - User management: `server/routes/users.py` exposes comprehensive CRUD for users (admin+):
    - `GET /api/users` — list all users (role-filtered: admin sees user/editor/admin, superadmin sees all); includes audit fields in camelCase (lastLoginAt, lastPasswordChangeAt, failedLoginAttempts, deactivatedAt, deactivatedBy)
    - `POST /api/users` — create user with username, password (min 6 chars), role, and status; admin cannot create superadmin; initializes audit fields
    - `GET /api/users/<id>` — get detailed user record with all audit fields
    - `PUT /api/users/<id>` — update user (cannot change own role/status; admin cannot modify superadmin accounts)
    - `PUT /api/users/<id>/password` — admin password reset (requires backend check to reject self-reset for consistency)
    - `DELETE /api/users/<id>` — hard delete (superadmin only, with self-deletion check)
  - Auth routes (`server/routes/auth.py`): Enhanced to track login events (sets `last_login_at` and resets `failed_login_attempts` on success; increments `failed_login_attempts` and sets `last_failed_login_at` on failure). Self-service password change via `PUT /api/auth/change-password` requires current-password verification.
  - Client logs (`server/routes/client_logs.py`): Centralized log retrieval for monitoring:
    - `GET /api/client-logs/<uuid>/logs` — query client logs with filters (level, limit, since); admin_or_higher
    - `GET /api/client-logs/summary` — log counts by level per client (last 24h); admin_or_higher
    - `GET /api/client-logs/recent-errors` — system-wide error monitoring; admin_or_higher
    - `GET /api/client-logs/monitoring-overview` — includes screenshot priority fields per client plus `summary.activePriorityScreenshots`; superadmin_only
    - `GET /api/client-logs/test` — infrastructure validation (no auth); returns recent logs with counts

Documentation maintenance: keep this file aligned with real patterns; update when routes/session/UTC rules change. Avoid long prose; link exact paths.
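The per-request session rule above follows the usual SQLAlchemy lifecycle. Sketched here with a stub session class so the shape is testable without a database — in the app, `Session` is a sessionmaker bound to the engine from `server/database.py`:

```python
class StubSession:
    """Stand-in for a SQLAlchemy Session, to illustrate the lifecycle only."""
    def __init__(self):
        self.committed = False
        self.closed = False
    def commit(self):
        self.committed = True
    def close(self):
        self.closed = True

Session = StubSession  # app code: Session = sessionmaker(bind=engine)

def handle_request(mutate: bool) -> StubSession:
    session = Session()          # one session per request
    try:
        if mutate:
            session.commit()     # commit only when mutating
        return session
    finally:
        session.close()          # always close before returning
```

The `try/finally` guarantees the close happens even when the handler raises.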
## Frontend patterns (dashboard)
- Vite React app; proxies `/api` and `/screenshots` to API in dev (`vite.config.ts`).
- Uses Syncfusion components; Vite config pre-bundles specific packages to avoid alias issues.
- Environment: `VITE_API_URL` provided at build/run; in dev compose, the proxy handles `/api` so local fetches can use relative `/api/...` paths.
- Theming: Syncfusion Material 3 theme is used. All component CSS is imported centrally in `dashboard/src/main.tsx` (base, navigations, buttons, inputs, dropdowns, popups, kanban, grids, schedule, filemanager, notifications, layouts, lists, calendars, splitbuttons, icons). Tailwind CSS has been removed.
- **API Response Format**: All API endpoints return camelCase JSON (e.g., `startTime`, `endTime`, `groupId`). Frontend consumes camelCase directly.
- **UTC Time Parsing**: API returns ISO strings without 'Z' suffix. Frontend appends 'Z' before parsing to ensure UTC interpretation: `const utcString = dateStr.endsWith('Z') ? dateStr : dateStr + 'Z'; new Date(utcString);`. Display uses `toLocaleTimeString('de-DE')` for German format.

- Dev Container: When adding frontend deps, prefer `npm ci` and, if using named volumes, recreate the dashboard `node_modules` volume so installs occur inside the container.
- Scheduler (appointments page): the top bar includes Group and Academic Period selectors (Syncfusion DropDownList). Selecting a period calls `POST /api/academic_periods/active`, moves the calendar to today's month/day within the period year, and refreshes a right-aligned indicator row showing:
  - Holidays present in the current view (count)
  - Period label (display_name or name) with a badge indicating whether any holidays exist in that period (overlap check)

- Recurrence & holidays (latest):
  - Backend stores holiday skips in `EventException` and emits `RecurrenceException` (EXDATE) for master events in `GET /api/events`. EXDATE tokens are formatted in RFC 5545 compact form (`yyyyMMddTHHmmssZ`) and correspond to each occurrence start time (UTC). Syncfusion uses these to exclude holiday instances reliably.
  - Frontend lets Syncfusion handle all recurrence patterns natively (no client-side expansion). Scheduler field mappings include `recurrenceID`, `recurrenceRule`, and `recurrenceException` so series and edited occurrences are recognized correctly.
  - Event deletion: All event types (single, single-in-series, entire series) are handled with custom dialogs. The frontend intercepts Syncfusion's built-in RecurrenceAlert and DeleteAlert popups to provide a unified, user-friendly deletion flow:
    - Single (non-recurring) event: deleted directly after confirmation.
    - Single occurrence of a recurring series: user can delete just that instance.
    - Entire recurring series: user can delete all occurrences after a final custom confirmation dialog.
    - Detached occurrences (edited/broken out): treated as single events.
  - Single occurrence editing: Users can detach individual occurrences from recurring series. The frontend hooks `actionComplete`/`onActionCompleted` with `requestType='eventChanged'` to persist changes: it calls `POST /api/events/<id>/occurrences/<date>/detach` for single-occurrence edits and `PUT /api/events/<id>` for series or single events as appropriate. The backend creates `EventException` and a standalone `Event` without modifying the master beyond EXDATEs.
  - UI: Events with `SkipHolidays` render a TentTree icon next to the main event icon. The custom recurrence icon in the header was removed; rely on Syncfusion's native lower-right recurrence badge.
  - Website & WebUntis: Both event types display a website. WebUntis reads its URL from the system `supplement_table_url` and does not provide a per-event URL field.

- Program info page (`dashboard/src/programminfo.tsx`):
  - Loads data from `dashboard/public/program-info.json` (app name, version, build info, tech stack, changelog).
  - Migrated to Syncfusion inputs and popups: Buttons, TextBox, DropDownList, Dialog; Kanban remains for drag/drop.
  - Unified toast/dialog wording; replaced legacy alerts with toasts; spacing handled via inline styles to avoid Tailwind dependency.
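EXDATE formatting per the RFC 5545 compact form above fits in one helper (illustrative; the backend emits one such token per skipped occurrence start, in UTC):

```python
from datetime import datetime, timezone

def to_exdate(dt: datetime) -> str:
    """Format an occurrence start (UTC) as an RFC 5545 compact token."""
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)  # naive timestamps are UTC here
    return dt.astimezone(timezone.utc).strftime("%Y%m%dT%H%M%SZ")

print(to_exdate(datetime(2025, 12, 24, 8, 0)))  # 20251224T080000Z
```

Matching the token exactly to the occurrence start time is what lets Syncfusion exclude the right instance.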
|
||||
- Header user menu (top-right):
|
||||
- Shows current username and role; click opens a menu with "Passwort ändern" (lock icon), "Profil", and "Abmelden".
|
||||
- Implemented with Syncfusion DropDownButton (`@syncfusion/ej2-react-splitbuttons`).
|
||||
- "Passwort ändern": Opens self-service password change dialog (available to all authenticated users); requires current password verification, new password min 6 chars, must match confirm field; calls `PUT /api/auth/change-password`
|
||||
- "Abmelden" navigates to `/logout`; the page invokes backend logout and redirects to `/login`.
|
||||
|
||||
- User management page (`dashboard/src/users.tsx`):
  - Full CRUD interface for managing users (admin+ only in the menu); accessible via the "Benutzer" sidebar entry.
  - Syncfusion GridComponent: 20 rows per page (configurable), sortable columns (ID, username, role), custom action-button template with role-based visibility.
  - Statistics cards: total users, active (non-deactivated), and inactive (deactivated) counts.
  - Dialogs: Create (username/password/role/status), Edit (with self-edit protections), Password Reset (admin only, no current password required), Delete (superadmin only, self-check), Details (read-only audit info with formatted timestamps).
  - Role badges: color-coded display (user: gray, editor: blue, admin: green, superadmin: red).
  - Audit information displayed: last login, password change, last failed login, deactivation timestamps, and the deactivating user.
  - Role-based permissions (enforced backend + frontend):
    - Admin: can manage user/editor/admin roles (not superadmin); soft-deactivate only; cannot see/edit superadmin accounts.
    - Superadmin: can manage all roles including other superadmins; can permanently hard-delete users.
  - Security rules enforced: cannot change own role, cannot deactivate own account, cannot delete self, cannot reset own password via the admin route (must use self-service).
  - API client in `dashboard/src/apiUsers.ts` for all user operations (listUsers, getUser, createUser, updateUser, resetUserPassword, deleteUser).
  - Menu visibility: the "Benutzer" menu item is only visible to admin+ (role-gated in App.tsx).

- Monitoring page (`dashboard/src/monitoring.tsx`):
  - Superadmin-only dashboard for client monitoring and diagnostics; the menu item is hidden for lower roles and the route redirects non-superadmins.
  - Uses `GET /api/client-logs/monitoring-overview` for aggregated live status, `GET /api/client-logs/recent-errors` for system-wide errors, and `GET /api/client-logs/<uuid>/logs` for per-client details.
  - Shows per-client status (`healthy`, `warning`, `critical`, `offline`) based on heartbeat freshness, process state, screen state, and recent log counts.
  - Displays the latest screenshot preview and the active priority screenshot (`/screenshots/{uuid}/priority` when active), screenshot type badges, current process metadata, and recent ERROR/WARN activity.
  - Uses adaptive refresh: normal interval in steady state, faster polling while `activePriorityScreenshots > 0`.

- Settings page (`dashboard/src/settings.tsx`):
  - Structure: Syncfusion TabComponent with role-gated tabs
    - 📅 Academic Calendar (all users)
      - 📥 Import & Liste: CSV/TXT import and list combined
      - 🗂️ Perioden: select and set the active period (uses `/api/academic_periods` routes)
    - 🖥️ Display & Clients (admin+)
      - Default Settings: placeholders for heartbeat, screenshots, defaults
      - Client Configuration: quick links to the Clients and Groups pages
    - 🎬 Media & Files (admin+)
      - Upload Settings: placeholders for limits and types
      - Conversion Status: placeholder for a conversions overview
    - 🗓️ Events (admin+)
      - WebUntis / Vertretungsplan: system-wide supplement table URL with enable/disable, save, and preview; persists via `/api/system-settings/supplement-table`
      - Presentations: general defaults for slideshow interval, page-progress, and auto-progress; persisted via `/api/system-settings` keys (`presentation_interval`, `presentation_page_progress`, `presentation_auto_progress`). These defaults are applied when creating new presentation events (the custom event modal reads them and falls back to per-event values when editing).
      - Videos: system-wide defaults for `autoplay`, `loop`, `volume`, and `muted`; persisted via `/api/system-settings` keys (`video_autoplay`, `video_loop`, `video_volume`, `video_muted`). These defaults are applied when creating new video events (the custom event modal reads them and falls back to per-event values when editing).
      - Other event types (website, message, other): placeholders for defaults
    - ⚙️ System (superadmin)
      - Organization Info and Advanced Configuration placeholders
  - Role gating: Admin/Superadmin tabs are hidden if the user lacks permission; System is superadmin-only
  - API clients use relative `/api/...` URLs so the Vite dev proxy handles requests without CORS issues. The settings UI calls are centralized in `dashboard/src/apiSystemSettings.ts`.
  - Nested tabs: implemented as controlled components using `selectedItem` with stateful handlers to prevent sub-tab resets during updates.

- Dashboard page (`dashboard/src/dashboard.tsx`):
  - Card-based overview of all Raumgruppen (room groups) with real-time status monitoring
  - Global statistics: total infoscreens, online/offline counts, warning groups
  - Filter buttons: All / Online / Offline / Warnings, with dynamic counts
  - Per-group cards show:
    - Currently active event (title, type, date/time in the local timezone)
    - Health bar with online/offline ratio and color-coded status
    - Expandable client list with last-alive timestamps
    - Bulk restart button for offline clients
  - Uses Syncfusion ButtonComponent, ToastComponent, and card CSS classes
  - Auto-refresh every 15 seconds; a manual refresh button is also available
  - The "Nicht zugeordnet" group always appears last in the sorted list

- Ressourcen page (`dashboard/src/ressourcen.tsx`):
  - Timeline view showing all groups and their active events in parallel
  - Uses Syncfusion ScheduleComponent with TimelineViews (day/week modes)
  - Compact row display: 65px height per group, dynamically calculated total height
  - Group ordering panel with drag up/down controls; order persisted to the backend via `/api/groups/order`
  - Filters the "Nicht zugeordnet" group out of the timeline display
  - Fetches events per group for the current date range; displays the first active event per group
  - Color-coded event bars using `getGroupColor()` from `groupColors.ts`
  - Resource-based timeline: each group is a resource row, events are mapped to `ResourceId`
  - Real-time updates: loads events on mount and when the view/date changes
  - Custom CSS in `dashboard/src/ressourcen.css` for timeline styling and controls

- User dropdown technical notes:
  - Dependencies: `@syncfusion/ej2-react-splitbuttons` and `@syncfusion/ej2-splitbuttons` must be installed.
  - Vite: add both to `optimizeDeps.include` in `vite.config.ts` to avoid import-analysis errors.
  - Dev containers: when `node_modules` is a named volume, recreate the dashboard node_modules volume after adding dependencies so `npm ci` runs inside the container.

Note: Syncfusion usage in the dashboard is already documented above; if a UI for conversion status/downloads is added later, link its routes and components here.

## Local development

@@ -94,6 +350,7 @@ Note: Syncfusion usage in the dashboard is already documented above; if a UI for

- Common env vars: `DB_CONN`, `DB_USER`, `DB_PASSWORD`, `DB_HOST=db`, `DB_NAME`, `ENV`, `MQTT_USER`, `MQTT_PASSWORD`.
- Alembic: the prod compose runs `alembic ... upgrade head` and `server/init_defaults.py` before gunicorn.
- Local dev: prefer `python server/initialize_database.py` for one-shot setup (migrations + defaults + academic periods).
- Defaults: `server/init_defaults.py` seeds initial system settings like `supplement_table_url` and `supplement_table_enabled` if missing.
- `server/init_academic_periods.py` remains available to (re)seed school years.

## Production

@@ -106,33 +363,60 @@ Note: Syncfusion usage in the dashboard is already documented above; if a UI for

- ENV — `development` or `production`; in development, `server/database.py` loads `.env`.
- MQTT_BROKER_HOST, MQTT_BROKER_PORT — defaults `mqtt` and `1883`; MQTT_USER/MQTT_PASSWORD are optional (dev is often anonymous per the Mosquitto config).
- VITE_API_URL — dashboard build-time base URL (prod); in dev the Vite proxy serves `/api` to `server:8000`.
- HEARTBEAT_GRACE_PERIOD_DEV / HEARTBEAT_GRACE_PERIOD_PROD — groups "alive" window (defaults 180s dev / 170s prod). Clients send heartbeats every ~65s; the grace periods allow 2 missed heartbeats plus a safety margin.
- REFRESH_SECONDS — optional scheduler republish interval; `0` disables periodic refresh.
- PRIORITY_SCREENSHOT_TTL_SECONDS — optional monitoring priority window in seconds (default `120`); controls when event screenshots are considered active priority.

## Conventions & gotchas

- **Datetime handling**:
  - Always compare datetimes in UTC; some DB values may be naive—normalize before comparing (see `routes/events.py`).
  - The database stores timestamps in UTC (naive datetimes are normalized to UTC by the backend).
  - The API returns ISO strings **without** a 'Z' suffix: `"2025-11-27T20:03:00"`.
  - The frontend **must** append 'Z' before parsing: `const utcStr = dateStr.endsWith('Z') ? dateStr : dateStr + 'Z'; new Date(utcStr);`
  - Display in the local timezone using `toLocaleTimeString('de-DE', { hour: '2-digit', minute: '2-digit' })`.
  - When sending to the API, use `date.toISOString()`, which includes 'Z' and is UTC.
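
The backend-side normalization above can be sketched with the stdlib only (the helper name is illustrative; the real logic lives in `routes/events.py`):

```python
from datetime import datetime, timezone

def ensure_utc(dt: datetime) -> datetime:
    """Normalize a possibly-naive DB timestamp to aware UTC before comparing."""
    if dt.tzinfo is None:
        # Naive values are stored as UTC by convention; just attach the zone.
        return dt.replace(tzinfo=timezone.utc)
    return dt.astimezone(timezone.utc)

# A naive DB value can only be compared with "now" once both are aware UTC:
db_value = datetime(2025, 11, 27, 20, 3, 0)  # naive, stored as UTC
is_past = ensure_utc(db_value) < datetime.now(timezone.utc)
```

Comparing a naive and an aware datetime directly raises `TypeError`, which is why the normalization step matters.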
- **JSON naming convention**:
  - The backend uses snake_case internally (Python convention).
  - The API returns camelCase JSON (web standard): `startTime`, `endTime`, `groupId`, etc.
  - Use `dict_to_camel_case()` from `server/serializers.py` before `jsonify()`.
  - The frontend consumes camelCase directly; the Syncfusion scheduler maintains internal PascalCase with field mappings.
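
An illustrative stand-in for that conversion (the real `dict_to_camel_case()` lives in `server/serializers.py`; this sketch only shows the recursive key-renaming idea):

```python
import re

def dict_to_camel_case(obj):
    """Recursively rename snake_case dict keys to camelCase (illustrative sketch)."""
    if isinstance(obj, dict):
        return {
            re.sub(r"_([a-z0-9])", lambda m: m.group(1).upper(), key): dict_to_camel_case(value)
            for key, value in obj.items()
        }
    if isinstance(obj, list):
        return [dict_to_camel_case(item) for item in obj]
    return obj

# {"start_time": ..., "group_id": ...} becomes {"startTime": ..., "groupId": ...}
```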
- The scheduler enforces UTC comparisons and normalizes naive timestamps. It publishes only currently active events and clears retained topics for groups with no active events. It also queries a future window (default: 7 days) and expands recurring events using RFC 5545 rules. Event exceptions are respected. Logging is concise and conversion lookups are cached.
- Use retained MQTT messages for state that clients must recover after reconnect (events per group, client group_id).
- Clients should parse `event_type` and then read the corresponding nested payload (`presentation`, `website`, `video`, etc.). `website` and `webuntis` use the same nested `website` payload with `type: browser` and a `url`. Video events include `autoplay`, `loop`, `volume`, and `muted`.
- The in-container DB host is `db`; do not use `localhost` inside services.
- No separate dev vs prod secret conventions: use the same env var keys across environments (e.g., `DB_CONN`, `MQTT_USER`, `MQTT_PASSWORD`).
- When adding a new route:
  1) Create a Blueprint in `server/routes/...`,
  2) Register it in `server/wsgi.py`,
  3) Manage the `Session()` lifecycle,
  4) Return JSON-safe values (serialize enums and datetimes), and
  5) Use `dict_to_camel_case()` for camelCase JSON responses.

Docs maintenance guardrails (solo-friendly): update this file alongside code changes (services/MQTT/API/UTC/env). Keep it concise (20–50 lines per section). Never include secrets.

- When extending media types, update `MediaType` and any logic in `eventmedia` and the dashboard that depends on it.
- Academic periods: events/media can optionally be associated with periods for educational organization. Only one period should be active at a time (`is_active=True`).
- Initialization scripts: legacy DB init scripts were removed; use Alembic and `initialize_database.py` going forward.

### Recurrence & holidays: conventions

- Do not pre-expand recurrences on the backend. Always send master events with `RecurrenceRule` + `RecurrenceException`.
- Ensure EXDATE tokens are RFC 5545 timestamps (`yyyyMMddTHHmmssZ`) matching the occurrence start time (UTC) so Syncfusion can exclude them natively.
- When `skip_holidays` or the recurrence changes, regenerate `EventException` rows so `RecurrenceException` stays in sync.
- Single occurrence detach: use `POST /api/events/<id>/occurrences/<date>/detach` to create standalone events and add EXDATE entries without modifying master events. The frontend persists edits via `actionComplete` (`requestType='eventChanged'`).

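
The EXDATE token format above reduces to a one-line `strftime` (sketch; occurrence times are assumed to already be UTC-aware):

```python
from datetime import datetime, timezone

def exdate_token(occurrence_start: datetime) -> str:
    """Format an occurrence start as an RFC 5545 UTC timestamp (yyyyMMddTHHmmssZ)."""
    return occurrence_start.astimezone(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
```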
## Quick examples

- Add client description: persists to the DB and publishes the group via MQTT; see `PUT /api/clients/<uuid>/description` in `routes/clients.py`.
- Bulk group assignment emits retained messages for each client: `PUT /api/clients/group`.
- Listener heartbeat path: `infoscreen/<uuid>/heartbeat` → sets `clients.last_alive` and captures process health data.
- Client monitoring flow: the client publishes to `infoscreen/{uuid}/logs/error` and `infoscreen/{uuid}/health` → the listener stores/updates monitoring state → the API serves `/api/client-logs/monitoring-overview`, `/api/client-logs/recent-errors`, and `/api/client-logs/<uuid>/logs` → the superadmin monitoring dashboard displays live status.
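
The listener's routing of the topic paths above can be sketched as follows (the regex helper is hypothetical; the real dispatch lives in `listener/listener.py`):

```python
import re

# Topics documented above: heartbeat, health, and error-log publishes per client UUID.
TOPIC_RE = re.compile(r"^infoscreen/(?P<uuid>[^/]+)/(?P<kind>heartbeat|health|logs/error)$")

def classify_topic(topic: str):
    """Return (uuid, kind) for a known infoscreen topic, else None."""
    match = TOPIC_RE.match(topic)
    return (match.group("uuid"), match.group("kind")) if match else None
```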

## Scheduler payloads: presentation extras

- Presentation event payloads now include `page_progress` and `auto_progress` in addition to `slide_interval` and media files. These are sourced from per-event fields in the database (with system defaults applied on event creation).

## Scheduler payloads: website & webuntis

- For both `website` and `webuntis`, the scheduler emits a nested `website` object:
  - `{ "type": "browser", "url": "https://..." }`
- The `event_type` remains `website` or `webuntis`. Clients should treat both identically for rendering.
- The WebUntis URL is set at event creation by reading the system `supplement_table_url`.
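
A sketch of that payload shape (field names come from the bullets above; the builder function itself is illustrative, not the scheduler's actual code):

```python
import json

def build_website_payload(event_type: str, url: str) -> dict:
    """Nested website payload shared by 'website' and 'webuntis' events."""
    assert event_type in ("website", "webuntis")
    return {
        "event_type": event_type,
        "website": {"type": "browser", "url": url},
    }

payload = json.dumps(build_website_payload("webuntis", "https://example.org/plan"))
# Published retained (per the conventions above) so clients recover it after reconnect.
```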

Questions or unclear areas? Tell us if you need: exact devcontainer debugging steps, a stricter Alembic workflow, or a seed dataset beyond `init_defaults.py`.

@@ -152,3 +436,14 @@ Questions or unclear areas? Tell us if you need: exact devcontainer debugging st

- Breaking changes must be prefixed with `BREAKING:`
- Keep ≤ 8–10 bullets; summarize or group micro-changes
- JSON hygiene: valid JSON, no trailing commas, don’t edit historical entries except typos

## Versioning Convention (Tech vs UI)

- Use one unified app version across technical and user-facing release notes.
- `dashboard/public/program-info.json` is user-facing and should list only user-visible changes.
- `TECH-CHANGELOG.md` can include deeper technical details for the same released version.
- If server/infrastructure work is implemented but not yet released or not user-visible, document it under the latest released section as:
  - `Backend technical work (post-release notes; no version bump)`
- Do not create a new version header in `TECH-CHANGELOG.md` for internal milestones alone.
- Bump version numbers when a release is actually cut/deployed (or when user-facing release notes are published), not for intermediate backend-only steps.
- When UI integration lands later, include the user-visible part in the next release version and reference prior post-release technical groundwork when useful.

137 .gitignore vendored

@@ -1,75 +1,7 @@

# OS/Editor
.DS_Store
Thumbs.db
.vscode/
.idea/

# Python
__pycache__/
*.pyc
.pytest_cache/

# Node
node_modules/
dashboard/node_modules/
dashboard/.vite/

# Env files (never commit secrets)
.env
.env.local

# Docker
*.log

# Python-related
__pycache__/
*.py[cod]
*.pyo
*.pyd
*.pdb
*.egg-info/
*.eggs/
*.env
.env

# Byte-compiled / optimized / DLL files
*.pyc
*.pyo
*.pyd

# Virtual environments
venv/
env/
.venv/
.env/

# Logs and databases
*.log
*.sqlite3
*.db

# Docker-related
*.pid
*.tar
docker-compose.override.yml
docker-compose.override.*.yml
docker-compose.override.*.yaml

# Node.js-related
node_modules/
npm-debug.log*
yarn-debug.log*
yarn-error.log*

# Dash and Flask cache
*.cache
*.pytest_cache/
instance/
*.mypy_cache/
*.hypothesis/
*.coverage
.coverage.*

# IDE and editor files
desktop.ini
.vscode/
.idea/
*.swp

@@ -77,24 +9,69 @@ instance/
*.bak
*.tmp

# OS-generated files
.DS_Store
Thumbs.db
desktop.ini

# Python
__pycache__/
*.py[cod]
*.pyc
*.pyo
*.pyd
*.pdb
*.egg-info/
*.eggs/
.pytest_cache/
*.mypy_cache/
*.hypothesis/
*.coverage
.coverage.*
*.cache
instance/

# Virtual environments
venv/
env/
.venv/
.env/

# Environment files
.env
.env.local

# Logs and databases
*.log
*.log.1
*.sqlite3
*.db

# Node.js
node_modules/
dashboard/node_modules/
dashboard/.vite/
npm-debug.log*
yarn-debug.log*
yarn-error.log*
.pnpm-store/

# Docker
*.pid
*.tar
docker-compose.override.yml
docker-compose.override.*.yml
docker-compose.override.*.yaml

# Devcontainer
.devcontainer/

# Project-specific
received_screenshots/
mosquitto/
alte/
screenshots/
media/
certs/
sync.ffs_db
dashboard/manitine_test.py
dashboard/pages/test.py
.gitignore
dashboard/sidebar_test.py
dashboard/assets/responsive-sidebar.css
.pnpm-store/
dashboard/src/nested_tabs.js
scheduler/scheduler.log.2

@@ -98,3 +98,6 @@ exit 0 # warn only; do not block commit

- MQTT workers: `listener/listener.py`, `scheduler/scheduler.py`, `server/mqtt_helper.py`
- Frontend: `dashboard/vite.config.ts`, `dashboard/package.json`, `dashboard/src/*`
- Dev/Prod docs: `deployment.md`, `.env.example`

## Documentation sync log

- 2026-03-24: Synced docs for the completed monitoring rollout and the presentation flag persistence fix (`page_progress` / `auto_progress`). Updated `.github/copilot-instructions.md`, `README.md`, `TECH-CHANGELOG.md`, `DEV-CHANGELOG.md`, and `CLIENT_MONITORING_IMPLEMENTATION_GUIDE.md` without a user-version bump.

264 AUTH_QUICKREF.md Normal file

@@ -0,0 +1,264 @@

# Authentication Quick Reference

## For Backend Developers

### Protecting a Route

```python
from flask import Blueprint
from server.permissions import require_role, admin_or_higher, editor_or_higher

my_bp = Blueprint("myroute", __name__, url_prefix="/api/myroute")

# Specific role(s)
@my_bp.route("/admin")
@require_role('admin', 'superadmin')
def admin_only():
    return {"message": "Admin only"}

# Convenience decorators
@my_bp.route("/settings")
@admin_or_higher
def settings():
    return {"message": "Admin or superadmin"}

@my_bp.route("/create", methods=["POST"])
@editor_or_higher
def create():
    return {"message": "Editor, admin, or superadmin"}
```

### Getting Current User in Route

```python
from flask import session

from server.permissions import require_auth

@my_bp.route("/profile")
@require_auth
def profile():
    user_id = session.get('user_id')
    username = session.get('username')
    role = session.get('role')
    return {
        "user_id": user_id,
        "username": username,
        "role": role
    }
```

## For Frontend Developers

### Using the Auth Hook

```typescript
import { useAuth } from './useAuth';

function MyComponent() {
  const { user, isAuthenticated, login, logout, loading } = useAuth();

  if (loading) return <div>Loading...</div>;

  if (!isAuthenticated) {
    return <button onClick={() => login('user', 'pass')}>Login</button>;
  }

  return (
    <div>
      <p>Welcome {user?.username}</p>
      <p>Role: {user?.role}</p>
      <button onClick={logout}>Logout</button>
    </div>
  );
}
```

### Conditional Rendering

```typescript
import { useCurrentUser } from './useAuth';
import { isAdminOrHigher, isEditorOrHigher } from './apiAuth';

function Navigation() {
  const user = useCurrentUser();

  return (
    <nav>
      <a href="/">Home</a>

      {/* Show for all authenticated users */}
      {user && <a href="/events">Events</a>}

      {/* Show for editor+ */}
      {isEditorOrHigher(user) && (
        <a href="/events/new">Create Event</a>
      )}

      {/* Show for admin+ */}
      {isAdminOrHigher(user) && (
        <a href="/admin">Admin Panel</a>
      )}
    </nav>
  );
}
```

### Making Authenticated API Calls

```typescript
// Always include credentials for session cookies
const response = await fetch('/api/protected-route', {
  credentials: 'include',
  headers: {
    'Content-Type': 'application/json',
  },
  // ... other options
});
```

## Role Hierarchy

```
superadmin > admin > editor > user
```

| Role | Can Do |
|------|--------|
| **user** | View events |
| **editor** | user + CRUD events/media |
| **admin** | editor + manage users/groups/settings |
| **superadmin** | admin + manage superadmins + system config |

## Environment Variables

```bash
# Required for sessions
FLASK_SECRET_KEY=your_secret_key_here

# Required for superadmin creation
DEFAULT_SUPERADMIN_USERNAME=superadmin
DEFAULT_SUPERADMIN_PASSWORD=your_password_here
```

Generate a secret key:
```bash
python -c 'import secrets; print(secrets.token_hex(32))'
```

## Testing Endpoints

```bash
# Login
curl -X POST http://localhost:8000/api/auth/login \
  -H "Content-Type: application/json" \
  -d '{"username":"superadmin","password":"your_password"}' \
  -c cookies.txt

# Check current user
curl http://localhost:8000/api/auth/me -b cookies.txt

# Check auth status (lightweight)
curl http://localhost:8000/api/auth/check -b cookies.txt

# Logout
curl -X POST http://localhost:8000/api/auth/logout -b cookies.txt

# Test protected route
curl http://localhost:8000/api/protected -b cookies.txt
```

## Common Patterns

### Backend: Optional Auth

```python
from flask import session

@my_bp.route("/public-with-extras")
def public_route():
    user_id = session.get('user_id')

    if user_id:
        # Show extra content for authenticated users
        return {"data": "...", "extras": "..."}
    else:
        # Public content only
        return {"data": "..."}
```

### Frontend: Redirect After Login

```typescript
const { login } = useAuth();

const handleLogin = async (username: string, password: string) => {
  try {
    await login(username, password);
    window.location.href = '/dashboard';
  } catch (err) {
    console.error('Login failed:', err);
  }
};
```

### Frontend: Protected Route Component

```typescript
import { useAuth } from './useAuth';
import { Navigate } from 'react-router-dom';

function ProtectedRoute({ children }: { children: React.ReactNode }) {
  const { isAuthenticated, loading } = useAuth();

  if (loading) return <div>Loading...</div>;

  if (!isAuthenticated) {
    return <Navigate to="/login" />;
  }

  return <>{children}</>;
}

// Usage in routes:
<Route path="/admin" element={
  <ProtectedRoute>
    <AdminPanel />
  </ProtectedRoute>
} />
```

## Troubleshooting

### "Authentication required" on /api/auth/me

✅ **Normal** - the user is not logged in. This is expected behavior.

### Session not persisting across requests

- Check `credentials: 'include'` in fetch calls
- Verify `FLASK_SECRET_KEY` is set
- Check that browser cookies are enabled

### 403 Forbidden on a decorated route

- Verify the user is logged in
- Check that the user's role matches a required role
- Inspect the response for `required_roles` and `your_role`

## Files Reference

| File | Purpose |
|------|---------|
| `server/routes/auth.py` | Auth endpoints (login, logout, /me) |
| `server/permissions.py` | Permission decorators |
| `dashboard/src/apiAuth.ts` | Frontend API client |
| `dashboard/src/useAuth.tsx` | React context/hooks |
| `models/models.py` | User model and UserRole enum |

## Full Documentation

See `AUTH_SYSTEM.md` for complete documentation including:
- Architecture details
- Security considerations
- API reference
- Testing guide
- Production checklist

522 AUTH_SYSTEM.md Normal file

@@ -0,0 +1,522 @@

# Authentication System Documentation

This document describes the authentication and authorization system implemented in the infoscreen_2025 project.

## Overview

The system provides session-based authentication with role-based access control (RBAC). It includes:

- **Backend**: Flask session-based auth with bcrypt password hashing
- **Frontend**: React context/hooks for managing authentication state
- **Permissions**: Decorators for protecting routes based on user roles
- **Roles**: Four levels (user, editor, admin, superadmin)

## Architecture

### Backend Components

#### 1. Auth Routes (`server/routes/auth.py`)

Provides authentication endpoints:

- **`POST /api/auth/login`** - Authenticate a user and create a session
- **`POST /api/auth/logout`** - End the user session
- **`GET /api/auth/me`** - Get current user info (protected)
- **`GET /api/auth/check`** - Quick auth status check

#### 2. Permission Decorators (`server/permissions.py`)

Decorators for protecting routes:

```python
from server.permissions import require_role, admin_or_higher, editor_or_higher

# Require specific role(s)
@app.route('/admin-settings')
@require_role('admin', 'superadmin')
def admin_settings():
    return "Admin only"

# Convenience decorators
@app.route('/settings')
@admin_or_higher  # admin or superadmin
def settings():
    return "Settings"

@app.route('/events', methods=['POST'])
@editor_or_higher  # editor, admin, or superadmin
def create_event():
    return "Create event"
```

Available decorators:
- `@require_auth` - Just require authentication
- `@require_role(*roles)` - Require any of the specified roles
- `@superadmin_only` - Superadmin only
- `@admin_or_higher` - Admin or superadmin
- `@editor_or_higher` - Editor, admin, or superadmin

#### 3. Session Configuration (`server/wsgi.py`)

Flask sessions are configured with:
- Secret key from the `FLASK_SECRET_KEY` environment variable
- HTTPOnly cookies (prevent XSS)
- SameSite=Lax (CSRF protection)
- Secure flag in production (HTTPS only)

### Frontend Components

#### 1. API Client (`dashboard/src/apiAuth.ts`)

TypeScript functions for auth operations:

```typescript
import { login, logout, fetchCurrentUser, checkAuth } from './apiAuth';

// Login
await login('username', 'password');

// Get current user
const user = await fetchCurrentUser();

// Logout
await logout();

// Check auth status (lightweight)
const { authenticated, role } = await checkAuth();
```

Helper functions:
```typescript
import { hasRole, hasAnyRole, isAdminOrHigher } from './apiAuth';

if (isAdminOrHigher(user)) {
  // Show admin UI
}
```

#### 2. Auth Context/Hooks (`dashboard/src/useAuth.tsx`)

React context for managing auth state:

```typescript
import { useAuth, useCurrentUser, useIsAuthenticated } from './useAuth';

function MyComponent() {
  // Full auth context
  const { user, login, logout, loading, error, isAuthenticated } = useAuth();

  // Or pull just what you need:
  // const user = useCurrentUser();
  // const isAuth = useIsAuthenticated();

  if (loading) return <div>Loading...</div>;

  if (!isAuthenticated) {
    return <LoginForm onLogin={login} />;
  }

  return <div>Welcome {user.username}!</div>;
}
```

## User Roles

Four hierarchical roles with increasing permissions:

| Role | Value | Description | Use Case |
|------|-------|-------------|----------|
| **User** | `user` | Read-only access | View events only |
| **Editor** | `editor` | Can CRUD events/media | Content managers |
| **Admin** | `admin` | Manage settings, users (except superadmin), groups | Organization staff |
| **Superadmin** | `superadmin` | Full system access | Developers, system admins |

### Permission Matrix

| Action | User | Editor | Admin | Superadmin |
|--------|------|--------|-------|------------|
| View events | ✅ | ✅ | ✅ | ✅ |
| Create/edit events | ❌ | ✅ | ✅ | ✅ |
| Manage media | ❌ | ✅ | ✅ | ✅ |
| Manage groups/clients | ❌ | ❌ | ✅ | ✅ |
| Manage users (non-superadmin) | ❌ | ❌ | ✅ | ✅ |
| Manage settings | ❌ | ❌ | ✅ | ✅ |
| Manage superadmins | ❌ | ❌ | ❌ | ✅ |
| System configuration | ❌ | ❌ | ❌ | ✅ |

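The matrix reduces to a simple rank comparison; a minimal sketch (illustrative only — the real checks are the decorators in `server/permissions.py`):

```python
ROLE_RANK = {"user": 0, "editor": 1, "admin": 2, "superadmin": 3}

def has_at_least(role: str, minimum: str) -> bool:
    """True if `role` is at or above `minimum` in the hierarchy above."""
    return ROLE_RANK.get(role, -1) >= ROLE_RANK[minimum]

# has_at_least("admin", "editor") -> True; has_at_least("user", "admin") -> False
```

Unknown roles rank below `user`, so they fail every check rather than raising.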
## Setup Instructions

### 1. Environment Configuration

Add to your `.env` file:

```bash
# Flask session secret key (REQUIRED)
# Generate with: python -c 'import secrets; print(secrets.token_hex(32))'
FLASK_SECRET_KEY=your_secret_key_here

# Superadmin account (REQUIRED for initial setup)
DEFAULT_SUPERADMIN_USERNAME=superadmin
DEFAULT_SUPERADMIN_PASSWORD=your_secure_password
```

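The generator one-liner in the comment above is plain stdlib; `secrets.token_hex(32)` draws 32 cryptographically secure random bytes and hex-encodes them into 64 characters, which is ample for Flask's session signing key:

```python
import secrets

# 32 bytes of cryptographically secure randomness, hex-encoded (64 chars)
key = secrets.token_hex(32)
print(len(key))  # 64
```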
### 2. Database Initialization

The superadmin user is created automatically when containers start. See `SUPERADMIN_SETUP.md` for details.

### 3. Frontend Integration

Wrap your app with `AuthProvider` in `main.tsx` or `App.tsx`:

```typescript
import { AuthProvider } from './useAuth';

function App() {
  return (
    <AuthProvider>
      {/* Your app components */}
    </AuthProvider>
  );
}
```

## Usage Examples

### Backend: Protecting Routes

```python
from flask import Blueprint
from server.permissions import require_role, admin_or_higher

users_bp = Blueprint("users", __name__, url_prefix="/api/users")

@users_bp.route("", methods=["GET"])
@admin_or_higher
def list_users():
    """List all users - admin+ only"""
    # Implementation
    pass

@users_bp.route("", methods=["POST"])
@require_role('superadmin')
def create_superadmin():
    """Create superadmin - superadmin only"""
    # Implementation
    pass
```

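The decorators above live in `server/permissions.py`; their core check can be sketched framework-free as below. This is a simplified, hypothetical version: the real decorators read the role from the Flask session and return 401/403 responses rather than taking a `current_user` argument and raising:

```python
from functools import wraps

ROLE_LEVELS = {"user": 0, "editor": 1, "admin": 2, "superadmin": 3}

def require_role(required):
    """Simplified sketch: reject callers below the required role level."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(current_user, *args, **kwargs):
            role = (current_user or {}).get("role")
            if ROLE_LEVELS.get(role, -1) < ROLE_LEVELS[required]:
                raise PermissionError(f"requires {required} or higher")
            return fn(current_user, *args, **kwargs)
        return wrapper
    return decorator

@require_role("superadmin")
def create_superadmin(current_user):
    return "created"

print(create_superadmin({"role": "superadmin"}))  # created
```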
### Frontend: Conditional Rendering

```typescript
import { useAuth } from './useAuth';
import { isAdminOrHigher, isEditorOrHigher } from './apiAuth';

function NavigationMenu() {
  const { user } = useAuth();

  return (
    <nav>
      <a href="/dashboard">Dashboard</a>
      <a href="/events">Events</a>

      {isEditorOrHigher(user) && (
        <a href="/events/new">Create Event</a>
      )}

      {isAdminOrHigher(user) && (
        <>
          <a href="/settings">Settings</a>
          <a href="/users">Manage Users</a>
          <a href="/groups">Manage Groups</a>
        </>
      )}
    </nav>
  );
}
```

### Frontend: Login Form Example

```typescript
import { useState } from 'react';
import { useAuth } from './useAuth';

function LoginPage() {
  const [username, setUsername] = useState('');
  const [password, setPassword] = useState('');
  const { login, loading, error } = useAuth();

  const handleSubmit = async (e: React.FormEvent) => {
    e.preventDefault();
    try {
      await login(username, password);
      // Redirect on success
      window.location.href = '/dashboard';
    } catch (err) {
      // Error is already in auth context
      console.error('Login failed:', err);
    }
  };

  return (
    <form onSubmit={handleSubmit}>
      <h1>Login</h1>
      {error && <div className="error">{error}</div>}

      <input
        type="text"
        placeholder="Username"
        value={username}
        onChange={(e) => setUsername(e.target.value)}
        disabled={loading}
      />

      <input
        type="password"
        placeholder="Password"
        value={password}
        onChange={(e) => setPassword(e.target.value)}
        disabled={loading}
      />

      <button type="submit" disabled={loading}>
        {loading ? 'Logging in...' : 'Login'}
      </button>
    </form>
  );
}
```

## Security Considerations

### Backend Security

1. **Password Hashing**: All passwords are hashed with bcrypt (default salt rounds)
2. **Session Security**:
   - HTTPOnly cookies (prevent XSS access)
   - SameSite=Lax (CSRF protection)
   - Secure flag in production (HTTPS only)
3. **Secret Key**: Must be set via environment variable, never hardcoded
4. **Role Checking**: Server-side validation on every protected route

### Frontend Security

1. **Credentials**: Always use `credentials: 'include'` in fetch calls
2. **No Password Storage**: Never store passwords in localStorage/sessionStorage
3. **Role Gating**: UI gating is a convenience, not security (always validate server-side)
4. **HTTPS**: Always use HTTPS in production

### Production Checklist

- [ ] Generate a strong `FLASK_SECRET_KEY` (32+ bytes)
- [ ] Set `SESSION_COOKIE_SECURE=True` (handled automatically by ENV=production)
- [ ] Use HTTPS with a valid TLS certificate
- [ ] Change the default superadmin password after first login
- [ ] Review and audit user roles regularly
- [ ] Enable audit logging (future enhancement)

## API Reference

### Authentication Endpoints

#### POST /api/auth/login

Authenticate user and create session.

**Request:**
```json
{
  "username": "string",
  "password": "string"
}
```

**Response (200):**
```json
{
  "message": "Login successful",
  "user": {
    "id": 1,
    "username": "admin",
    "role": "admin"
  }
}
```

**Errors:**
- `400` - Missing username or password
- `401` - Invalid credentials or account disabled

#### POST /api/auth/logout

End current session.

**Response (200):**
```json
{
  "message": "Logout successful"
}
```

#### GET /api/auth/me

Get current user information (requires authentication).

**Response (200):**
```json
{
  "id": 1,
  "username": "admin",
  "role": "admin",
  "is_active": true
}
```

**Errors:**
- `401` - Not authenticated or account disabled

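A consumer can defensively validate the documented `/api/auth/me` shape before trusting it. A small sketch, with a hypothetical helper name, checking only the fields listed above:

```python
import json

REQUIRED_ME_FIELDS = {"id", "username", "role", "is_active"}

def parse_me_response(raw: str) -> dict:
    """Parse a /api/auth/me body and verify the documented fields exist."""
    data = json.loads(raw)
    missing = REQUIRED_ME_FIELDS - data.keys()
    if missing:
        raise ValueError(f"unexpected /me payload, missing: {sorted(missing)}")
    return data

body = '{"id": 1, "username": "admin", "role": "admin", "is_active": true}'
user = parse_me_response(body)
print(user["role"])  # admin
```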
#### GET /api/auth/check

Quick authentication status check.

**Response (200):**
```json
{
  "authenticated": true,
  "role": "admin"
}
```

Or if not authenticated:
```json
{
  "authenticated": false
}
```

## Testing

### Manual Testing

1. **Create test users** (via database or future user management UI):
   ```sql
   INSERT INTO users (username, password_hash, role, is_active)
   VALUES ('testuser', '<bcrypt_hash>', 'user', 1);
   ```

2. **Test login**:
   ```bash
   curl -X POST http://localhost:8000/api/auth/login \
     -H "Content-Type: application/json" \
     -d '{"username":"superadmin","password":"your_password"}' \
     -c cookies.txt
   ```

3. **Test /me endpoint**:
   ```bash
   curl http://localhost:8000/api/auth/me -b cookies.txt
   ```

4. **Test protected route**:
   ```bash
   # Should fail without auth
   curl http://localhost:8000/api/protected

   # Should work with cookie
   curl http://localhost:8000/api/protected -b cookies.txt
   ```

### Automated Testing

Example test cases (to be implemented):

```python
def test_login_success():
    response = client.post('/api/auth/login', json={
        'username': 'testuser',
        'password': 'testpass'
    })
    assert response.status_code == 200
    assert 'user' in response.json

def test_login_invalid_credentials():
    response = client.post('/api/auth/login', json={
        'username': 'testuser',
        'password': 'wrongpass'
    })
    assert response.status_code == 401

def test_me_authenticated():
    # Login first
    client.post('/api/auth/login', json={'username': 'testuser', 'password': 'testpass'})
    response = client.get('/api/auth/me')
    assert response.status_code == 200
    assert response.json['username'] == 'testuser'

def test_me_not_authenticated():
    response = client.get('/api/auth/me')
    assert response.status_code == 401
```

## Troubleshooting

### Login Not Working

**Symptoms**: The login endpoint returns 401 even with correct credentials

**Solutions**:
1. Verify the user exists in the database: `SELECT * FROM users WHERE username='...'`
2. Check that the password hash is in valid bcrypt format
3. Verify the user has `is_active=1`
4. Check server logs for bcrypt errors

### Session Not Persisting

**Symptoms**: `/api/auth/me` returns 401 after a successful login

**Solutions**:
1. Verify `FLASK_SECRET_KEY` is set
2. Check that the frontend sends `credentials: 'include'` in fetch calls
3. Verify cookies are being set (check browser DevTools)
4. Check CORS settings if the frontend and backend are on different domains

### Permission Denied on Protected Route

**Symptoms**: 403 error on decorated routes

**Solutions**:
1. Verify the user is logged in (`/api/auth/me`)
2. Check that the user's role matches the required role
3. Verify the decorator is applied correctly
4. Check that the session hasn't expired

### TypeScript Errors in Frontend

**Symptoms**: Type errors when using the auth hooks

**Solutions**:
1. Ensure `AuthProvider` is wrapping your app
2. Import types correctly: `import type { User } from './apiAuth'`
3. Check the TypeScript config for `verbatimModuleSyntax`

## Next Steps

See `userrole-management.md` for the complete implementation roadmap:

1. ✅ **Extend User Model** - Done
2. ✅ **Seed Superadmin** - Done (`init_defaults.py`)
3. ✅ **Expose Current User Role** - Done (this document)
4. ⏳ **Implement Minimal Role Enforcement** - Apply decorators to existing routes
5. ⏳ **Test the Flow** - Verify permissions work correctly
6. ⏳ **Frontend Role Gating** - Update UI components
7. ⏳ **User Management UI** - Build admin interface

## References

- User model: `models/models.py`
- Auth routes: `server/routes/auth.py`
- Permissions: `server/permissions.py`
- API client: `dashboard/src/apiAuth.ts`
- Auth context: `dashboard/src/useAuth.tsx`
- Flask sessions: https://flask.palletsprojects.com/en/latest/api/#sessions
- Bcrypt: https://pypi.org/project/bcrypt/

757 CLIENT_MONITORING_IMPLEMENTATION_GUIDE.md (new file)
@@ -0,0 +1,757 @@

# 🚀 Client Monitoring Implementation Guide

**Phase-based implementation guide for basic monitoring in development phase**

---

## ✅ Phase 1: Server-Side Database Foundation
**Status:** ✅ COMPLETE
**Dependencies:** None - Already implemented
**Time estimate:** Completed

### ✅ Step 1.1: Database Migration
**File:** `server/alembic/versions/c1d2e3f4g5h6_add_client_monitoring.py`
**What it does:**
- Creates `client_logs` table for centralized logging
- Adds health monitoring columns to `clients` table
- Creates indexes for efficient querying

**To apply:**
```bash
cd /workspace/server
alembic upgrade head
```

### ✅ Step 1.2: Update Data Models
**File:** `models/models.py`
**What was added:**
- New enums: `LogLevel`, `ProcessStatus`, `ScreenHealthStatus`
- Updated `Client` model with health tracking fields
- New `ClientLog` model for log storage

---

## 🔧 Phase 2: Server-Side Backend Logic
**Status:** ✅ COMPLETE
**Dependencies:** Phase 1 complete
**Time estimate:** 2-3 hours

### Step 2.1: Extend MQTT Listener
**File:** `listener/listener.py`
**What to add:**

```python
# Add new topic subscriptions in on_connect():
client.subscribe("infoscreen/+/logs/error")
client.subscribe("infoscreen/+/logs/warn")
client.subscribe("infoscreen/+/logs/info")  # Dev mode only
client.subscribe("infoscreen/+/health")

# Add new handlers called from on_message():
def handle_log_message(uuid, level, payload):
    """Store client log in database"""
    from datetime import datetime, timezone
    from models.models import ClientLog, LogLevel
    from server.database import Session
    import json

    session = Session()
    try:
        log_entry = ClientLog(
            client_uuid=uuid,
            timestamp=payload.get('timestamp', datetime.now(timezone.utc)),
            level=LogLevel[level],
            message=payload.get('message', ''),
            context=json.dumps(payload.get('context', {}))
        )
        session.add(log_entry)
        session.commit()
        print(f"[LOG] {uuid} {level}: {payload.get('message', '')}")
    except Exception as e:
        print(f"Error saving log: {e}")
        session.rollback()
    finally:
        session.close()

def handle_health_message(uuid, payload):
    """Update client health status"""
    from models.models import Client, ProcessStatus
    from server.database import Session

    session = Session()
    try:
        client = session.query(Client).filter_by(uuid=uuid).first()
        if client:
            client.current_event_id = payload.get('expected_state', {}).get('event_id')
            client.current_process = payload.get('actual_state', {}).get('process')

            status_str = payload.get('actual_state', {}).get('status')
            if status_str:
                client.process_status = ProcessStatus[status_str]

            client.process_pid = payload.get('actual_state', {}).get('pid')
            session.commit()
    except Exception as e:
        print(f"Error updating health: {e}")
        session.rollback()
    finally:
        session.close()
```

**Topic routing logic:**
```python
# In on_message callback, add routing:
if topic.endswith('/logs/error'):
    handle_log_message(uuid, 'ERROR', payload)
elif topic.endswith('/logs/warn'):
    handle_log_message(uuid, 'WARN', payload)
elif topic.endswith('/logs/info'):
    handle_log_message(uuid, 'INFO', payload)
elif topic.endswith('/health'):
    handle_health_message(uuid, payload)
```

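The routing shown above relies on the client UUID embedded in the topic. The `extract_uuid_from_topic` helper referenced later in the heartbeat handler could look roughly like this, a sketch assuming the `infoscreen/<uuid>/...` topic layout used throughout this guide:

```python
def extract_uuid_from_topic(topic: str) -> str:
    """Pull the client UUID out of topics like infoscreen/<uuid>/logs/error."""
    parts = topic.split("/")
    if len(parts) >= 2 and parts[0] == "infoscreen":
        return parts[1]
    return ""

print(extract_uuid_from_topic("infoscreen/abc-123/logs/error"))  # abc-123
print(extract_uuid_from_topic("infoscreen/abc-123/health"))      # abc-123
```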
### Step 2.2: Create API Routes
**File:** `server/routes/client_logs.py` (NEW)

```python
from flask import Blueprint, jsonify, request
from server.database import Session
from server.permissions import admin_or_higher
from models.models import ClientLog, Client
from sqlalchemy import desc
import json

client_logs_bp = Blueprint("client_logs", __name__, url_prefix="/api/client-logs")

@client_logs_bp.route("/<uuid>/logs", methods=["GET"])
@admin_or_higher
def get_client_logs(uuid):
    """
    Get logs for a specific client

    Query params:
    - level: ERROR, WARN, INFO, DEBUG (optional)
    - limit: number of entries (default 50, max 500)
    - since: ISO timestamp (optional)
    """
    session = Session()
    try:
        level = request.args.get('level')
        limit = min(int(request.args.get('limit', 50)), 500)
        since = request.args.get('since')

        query = session.query(ClientLog).filter_by(client_uuid=uuid)

        if level:
            from models.models import LogLevel
            query = query.filter_by(level=LogLevel[level])

        if since:
            from datetime import datetime
            since_dt = datetime.fromisoformat(since.replace('Z', '+00:00'))
            query = query.filter(ClientLog.timestamp >= since_dt)

        logs = query.order_by(desc(ClientLog.timestamp)).limit(limit).all()

        result = []
        for log in logs:
            result.append({
                "id": log.id,
                "timestamp": log.timestamp.isoformat() if log.timestamp else None,
                "level": log.level.value if log.level else None,
                "message": log.message,
                "context": json.loads(log.context) if log.context else {}
            })

        session.close()
        return jsonify({"logs": result, "count": len(result)})

    except Exception as e:
        session.close()
        return jsonify({"error": str(e)}), 500

@client_logs_bp.route("/summary", methods=["GET"])
@admin_or_higher
def get_logs_summary():
    """Get summary of errors/warnings across all clients"""
    session = Session()
    try:
        from sqlalchemy import func
        from models.models import LogLevel
        from datetime import datetime, timedelta

        # Last 24 hours
        since = datetime.utcnow() - timedelta(hours=24)

        stats = session.query(
            ClientLog.client_uuid,
            ClientLog.level,
            func.count(ClientLog.id).label('count')
        ).filter(
            ClientLog.timestamp >= since
        ).group_by(
            ClientLog.client_uuid,
            ClientLog.level
        ).all()

        result = {}
        for stat in stats:
            uuid = stat.client_uuid
            if uuid not in result:
                result[uuid] = {"ERROR": 0, "WARN": 0, "INFO": 0}
            result[uuid][stat.level.value] = stat.count

        session.close()
        return jsonify({"summary": result, "period_hours": 24})

    except Exception as e:
        session.close()
        return jsonify({"error": str(e)}), 500
```

**Register in `server/wsgi.py`:**
```python
from server.routes.client_logs import client_logs_bp
app.register_blueprint(client_logs_bp)
```

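The `since` handling in the route above accepts trailing-`Z` timestamps, which `datetime.fromisoformat` rejected before Python 3.11; the `replace` trick makes it work on older interpreters too. Isolated for clarity (pure stdlib, with an assumed helper name):

```python
from datetime import datetime, timezone

def parse_since(value: str) -> datetime:
    """Accept ISO timestamps with either '+00:00' or a trailing 'Z' suffix."""
    return datetime.fromisoformat(value.replace('Z', '+00:00'))

dt = parse_since("2024-05-01T12:00:00Z")
print(dt.tzinfo == timezone.utc)  # True
```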
### Step 2.3: Add Health Data to Heartbeat Handler
**File:** `listener/listener.py` (extend existing heartbeat handler)

```python
# Modify existing heartbeat handler to capture health data
def on_message(client, userdata, message):
    topic = message.topic

    # Existing heartbeat logic...
    if '/heartbeat' in topic:
        uuid = extract_uuid_from_topic(topic)
        try:
            payload = json.loads(message.payload.decode())

            # Update last_alive (existing)
            session = Session()
            client_obj = session.query(Client).filter_by(uuid=uuid).first()
            if client_obj:
                client_obj.last_alive = datetime.now(timezone.utc)

                # NEW: Update health data if present in heartbeat
                if 'process_status' in payload:
                    client_obj.process_status = ProcessStatus[payload['process_status']]
                if 'current_process' in payload:
                    client_obj.current_process = payload['current_process']
                if 'process_pid' in payload:
                    client_obj.process_pid = payload['process_pid']
                if 'current_event_id' in payload:
                    client_obj.current_event_id = payload['current_event_id']

            session.commit()
            session.close()
        except Exception as e:
            print(f"Error processing heartbeat: {e}")
```

---

## 🖥️ Phase 3: Client-Side Implementation
**Status:** ✅ COMPLETE
**Dependencies:** Phase 2 complete
**Time estimate:** 3-4 hours

### Step 3.1: Create Client Watchdog Script
**File:** `client/watchdog.py` (NEW - on client device)

```python
#!/usr/bin/env python3
"""
Client-side process watchdog
Monitors VLC, Chromium, PDF viewer and reports health to server
"""
import psutil
import paho.mqtt.client as mqtt
import json
import time
from datetime import datetime, timezone
import sys
import os

class MediaWatchdog:
    def __init__(self, client_uuid, mqtt_broker, mqtt_port=1883):
        self.uuid = client_uuid
        self.mqtt_client = mqtt.Client()
        self.mqtt_client.connect(mqtt_broker, mqtt_port, 60)
        self.mqtt_client.loop_start()

        self.current_process = None
        self.current_event_id = None
        self.restart_attempts = 0
        self.MAX_RESTARTS = 3

    def send_log(self, level, message, context=None):
        """Send log message to server via MQTT"""
        topic = f"infoscreen/{self.uuid}/logs/{level.lower()}"
        payload = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "message": message,
            "context": context or {}
        }
        self.mqtt_client.publish(topic, json.dumps(payload), qos=1)
        print(f"[{level}] {message}")

    def send_health(self, process_name, pid, status, event_id=None):
        """Send health status to server"""
        topic = f"infoscreen/{self.uuid}/health"
        payload = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "expected_state": {
                "event_id": event_id
            },
            "actual_state": {
                "process": process_name,
                "pid": pid,
                "status": status  # 'running', 'crashed', 'starting', 'stopped'
            }
        }
        self.mqtt_client.publish(topic, json.dumps(payload), qos=1, retain=False)

    def is_process_running(self, process_name):
        """Check if a process is running; return its PID or None"""
        for proc in psutil.process_iter(['name', 'pid']):
            try:
                if process_name.lower() in proc.info['name'].lower():
                    return proc.info['pid']
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                pass
        return None

    def monitor_loop(self):
        """Main monitoring loop"""
        print(f"Watchdog started for client {self.uuid}")
        self.send_log("INFO", "Watchdog service started", {"uuid": self.uuid})

        while True:
            try:
                # Check expected process (would be set by main event handler)
                if self.current_process:
                    pid = self.is_process_running(self.current_process)

                    if pid:
                        # Process is running
                        self.send_health(
                            self.current_process,
                            pid,
                            "running",
                            self.current_event_id
                        )
                        self.restart_attempts = 0  # Reset on success
                    else:
                        # Process crashed
                        self.send_log(
                            "ERROR",
                            f"Process {self.current_process} crashed or stopped",
                            {
                                "event_id": self.current_event_id,
                                "process": self.current_process,
                                "restart_attempt": self.restart_attempts
                            }
                        )

                        if self.restart_attempts < self.MAX_RESTARTS:
                            self.send_log("WARN", f"Attempting restart ({self.restart_attempts + 1}/{self.MAX_RESTARTS})")
                            self.restart_attempts += 1
                            # TODO: Implement restart logic (call event handler)
                        else:
                            self.send_log("ERROR", "Max restart attempts exceeded", {
                                "event_id": self.current_event_id
                            })

                time.sleep(5)  # Check every 5 seconds

            except KeyboardInterrupt:
                print("Watchdog stopped by user")
                break
            except Exception as e:
                self.send_log("ERROR", f"Watchdog error: {str(e)}", {
                    "exception": str(e),
                    "traceback": str(sys.exc_info())
                })
                time.sleep(10)  # Wait longer on error

if __name__ == "__main__":
    # sys is already imported at the top of the module
    if len(sys.argv) < 3:
        print("Usage: python watchdog.py <client_uuid> <mqtt_broker>")
        sys.exit(1)

    uuid = sys.argv[1]
    broker = sys.argv[2]

    watchdog = MediaWatchdog(uuid, broker)
    watchdog.monitor_loop()
```

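The restart-attempt bookkeeping in `monitor_loop` can be pulled into a small policy object that is easy to unit-test in isolation. A sketch; the class name `RestartPolicy` is illustrative and not part of the watchdog script above:

```python
class RestartPolicy:
    """Track crash restarts: allow up to max_restarts, reset on success."""

    def __init__(self, max_restarts: int = 3):
        self.max_restarts = max_restarts
        self.attempts = 0

    def on_crash(self) -> str:
        """Return 'restart' while attempts remain, else 'give_up'."""
        if self.attempts < self.max_restarts:
            self.attempts += 1
            return "restart"
        return "give_up"

    def on_running(self):
        """Process confirmed healthy: reset the counter."""
        self.attempts = 0

policy = RestartPolicy(max_restarts=3)
print([policy.on_crash() for _ in range(4)])  # ['restart', 'restart', 'restart', 'give_up']
```

Resetting on every healthy check mirrors the `self.restart_attempts = 0` line in the watchdog, so a process that recovers earns back its full restart budget.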
### Step 3.2: Integrate with Existing Event Handler
**File:** `client/event_handler.py` (modify existing)

```python
# When starting a new event, notify watchdog
def play_event(event_data):
    event_type = event_data.get('event_type')
    event_id = event_data.get('id')

    if event_type == 'video':
        process_name = 'vlc'
        # Start VLC...
    elif event_type == 'website':
        process_name = 'chromium'
        # Start Chromium...
    elif event_type == 'presentation':
        process_name = 'pdf_viewer'  # or your PDF tool
        # Start PDF viewer...

    # Notify watchdog about expected process
    watchdog.current_process = process_name
    watchdog.current_event_id = event_id
    watchdog.restart_attempts = 0
```

### Step 3.3: Enhanced Heartbeat Payload
**File:** `client/heartbeat.py` (modify existing)

```python
# Modify existing heartbeat to include process status
def send_heartbeat(mqtt_client, uuid):
    # Get current process status
    current_process = None
    process_pid = None
    process_status = "stopped"

    # Check if expected process is running
    if watchdog.current_process:
        pid = watchdog.is_process_running(watchdog.current_process)
        if pid:
            current_process = watchdog.current_process
            process_pid = pid
            process_status = "running"

    payload = {
        "uuid": uuid,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # Existing fields...
        # NEW health fields:
        "current_process": current_process,
        "process_pid": process_pid,
        "process_status": process_status,
        "current_event_id": watchdog.current_event_id
    }

    mqtt_client.publish(f"infoscreen/{uuid}/heartbeat", json.dumps(payload))
```

---

## 🎨 Phase 4: Dashboard UI Integration
**Status:** ✅ COMPLETE
**Dependencies:** Phases 2 & 3 complete
**Time estimate:** 2-3 hours

### Step 4.1: Create Log Viewer Component
**File:** `dashboard/src/ClientLogs.tsx` (NEW)

```typescript
import React from 'react';
import { GridComponent, ColumnsDirective, ColumnDirective, Page, Inject } from '@syncfusion/ej2-react-grids';

interface LogEntry {
  id: number;
  timestamp: string;
  level: 'ERROR' | 'WARN' | 'INFO' | 'DEBUG';
  message: string;
  context: any;
}

interface ClientLogsProps {
  clientUuid: string;
}

export const ClientLogs: React.FC<ClientLogsProps> = ({ clientUuid }) => {
  const [logs, setLogs] = React.useState<LogEntry[]>([]);
  const [loading, setLoading] = React.useState(false);

  const loadLogs = async (level?: string) => {
    setLoading(true);
    try {
      const params = new URLSearchParams({ limit: '50' });
      if (level) params.append('level', level);

      const response = await fetch(`/api/client-logs/${clientUuid}/logs?${params}`);
      const data = await response.json();
      setLogs(data.logs);
    } catch (err) {
      console.error('Failed to load logs:', err);
    } finally {
      setLoading(false);
    }
  };

  React.useEffect(() => {
    loadLogs();
    const interval = setInterval(() => loadLogs(), 30000); // Refresh every 30s
    return () => clearInterval(interval);
  }, [clientUuid]);

  const levelTemplate = (props: any) => {
    const colors = {
      ERROR: 'text-red-600 bg-red-100',
      WARN: 'text-yellow-600 bg-yellow-100',
      INFO: 'text-blue-600 bg-blue-100',
      DEBUG: 'text-gray-600 bg-gray-100'
    };
    return (
      <span className={`px-2 py-1 rounded ${colors[props.level as keyof typeof colors]}`}>
        {props.level}
      </span>
    );
  };

  return (
    <div>
      <div className="mb-4 flex gap-2">
        <button onClick={() => loadLogs()} className="e-btn e-primary">All</button>
        <button onClick={() => loadLogs('ERROR')} className="e-btn e-danger">Errors</button>
        <button onClick={() => loadLogs('WARN')} className="e-btn e-warning">Warnings</button>
        <button onClick={() => loadLogs('INFO')} className="e-btn e-info">Info</button>
      </div>

      <GridComponent
        dataSource={logs}
        allowPaging={true}
        pageSettings={{ pageSize: 20 }}
      >
        <ColumnsDirective>
          <ColumnDirective field='timestamp' headerText='Time' width='180' format='yMd HH:mm:ss' />
          <ColumnDirective field='level' headerText='Level' width='100' template={levelTemplate} />
          <ColumnDirective field='message' headerText='Message' width='400' />
        </ColumnsDirective>
        <Inject services={[Page]} />
      </GridComponent>
    </div>
  );
};
```

### Step 4.2: Add Health Indicators to Client Cards
**File:** `dashboard/src/clients.tsx` (modify existing)

```typescript
// Add health indicator to client card
const getHealthBadge = (client: Client) => {
  if (!client.process_status) {
    return <span className="badge badge-secondary">Unknown</span>;
  }

  const badges = {
    running: <span className="badge badge-success">✓ Running</span>,
    crashed: <span className="badge badge-danger">✗ Crashed</span>,
    starting: <span className="badge badge-warning">⟳ Starting</span>,
    stopped: <span className="badge badge-secondary">■ Stopped</span>
  };

  return badges[client.process_status] || null;
};

// In client card render:
<div className="client-card">
  <h3>{client.hostname || client.uuid}</h3>
  <div>Status: {getHealthBadge(client)}</div>
  <div>Process: {client.current_process || 'None'}</div>
  <div>Event ID: {client.current_event_id || 'None'}</div>
  <button onClick={() => showLogs(client.uuid)}>View Logs</button>
</div>
```

### Step 4.3: Add System Health Dashboard (Superadmin)
**File:** `dashboard/src/SystemMonitor.tsx` (NEW)

```typescript
import React from 'react';
import { ClientLogs } from './ClientLogs';

export const SystemMonitor: React.FC = () => {
  const [summary, setSummary] = React.useState<any>({});

  const loadSummary = async () => {
    const response = await fetch('/api/client-logs/summary');
    const data = await response.json();
    setSummary(data.summary);
  };

  React.useEffect(() => {
    loadSummary();
    const interval = setInterval(loadSummary, 30000);
    return () => clearInterval(interval);
  }, []);

  return (
    <div className="system-monitor">
      <h2>System Health Monitor (Superadmin)</h2>

      <div className="alert-panel">
        <h3>Active Issues</h3>
        {Object.entries(summary).map(([uuid, stats]: [string, any]) => (
          stats.ERROR > 0 || stats.WARN > 5 ? (
            <div key={uuid} className="alert">
              🔴 {uuid}: {stats.ERROR} errors, {stats.WARN} warnings (24h)
            </div>
          ) : null
        ))}
      </div>

      {/* Real-time log stream */}
      <div className="log-stream">
        <h3>Recent Logs (All Clients)</h3>
        {/* Implement real-time log aggregation */}
      </div>
    </div>
  );
};
```

---

## 🧪 Phase 5: Testing & Validation
|
||||
**Status:** ✅ COMPLETE
|
||||
**Dependencies:** All previous phases
|
||||
**Time estimate:** 1-2 hours
|
||||
|
||||
### Step 5.1: Server-Side Tests
|
||||
|
||||
```bash
|
||||
# Test database migration
|
||||
cd /workspace/server
|
||||
alembic upgrade head
|
||||
alembic downgrade -1
|
||||
alembic upgrade head
|
||||
|
||||
# Test API endpoints
|
||||
curl -X GET "http://localhost:8000/api/client-logs/<uuid>/logs?limit=10"
|
||||
curl -X GET "http://localhost:8000/api/client-logs/summary"
|
||||
```
|
||||
|
||||
### Step 5.2: Client-Side Tests
|
||||
|
||||
```bash
|
||||
# On client device
|
||||
python3 watchdog.py <your-uuid> <mqtt-broker-ip>
|
||||
|
||||
# Simulate process crash
|
||||
pkill vlc # Should trigger error log and restart attempt
|
||||
|
||||
# Check MQTT messages
|
||||
mosquitto_sub -h <broker> -t "infoscreen/+/logs/#" -v
|
||||
mosquitto_sub -h <broker> -t "infoscreen/+/health" -v
|
||||
```
|
||||
|
||||
### Step 5.3: Dashboard Tests
|
||||
|
||||
1. Open dashboard and navigate to Clients page
|
||||
2. Verify health indicators show correct status
|
||||
3. Click "View Logs" and verify logs appear
|
||||
4. Navigate to System Monitor (superadmin)
|
||||
5. Verify summary statistics are correct
|
||||
|
||||
---

## 📝 Configuration Summary

### Environment Variables

**Server (docker-compose.yml):**
```yaml
- LOG_RETENTION_DAYS=90   # How long to keep logs
- DEBUG_MODE=true         # Enable INFO level logging via MQTT
```

**Client:**
```bash
export MQTT_BROKER="your-server-ip"
export CLIENT_UUID="abc-123-def"
export WATCHDOG_ENABLED=true
```

### MQTT Topics Reference

| Topic Pattern | Direction | Purpose |
|--------------|-----------|---------|
| `infoscreen/{uuid}/logs/error` | Client → Server | Error messages |
| `infoscreen/{uuid}/logs/warn` | Client → Server | Warning messages |
| `infoscreen/{uuid}/logs/info` | Client → Server | Info (dev only) |
| `infoscreen/{uuid}/health` | Client → Server | Health metrics |
| `infoscreen/{uuid}/heartbeat` | Client → Server | Enhanced heartbeat |

### Database Tables

**client_logs:**
- Stores all centralized logs
- Indexed by client_uuid, timestamp, level
- Auto-cleanup after 90 days (recommended)

**clients (extended):**
- `current_event_id`: Which event should be playing
- `current_process`: Expected process name
- `process_status`: running/crashed/starting/stopped
- `process_pid`: Process ID
- `screen_health_status`: OK/BLACK/FROZEN/UNKNOWN
- `last_screenshot_analyzed`: Last analysis time
- `last_screenshot_hash`: For frozen detection

---

## 🎯 Next Steps After Implementation

1. **Deploy Phase 1-2** to staging environment
2. **Test with 1-2 pilot clients** before full rollout
3. **Monitor traffic & performance** (should be minimal)
4. **Fine-tune log levels** based on actual noise
5. **Add alerting** (email/Slack when errors > threshold)
6. **Implement screenshot analysis** (Phase 2 enhancement)
7. **Add trending/analytics** (which clients are least reliable)

---

## 🚨 Troubleshooting

**Logs not appearing in database:**
- Check MQTT broker logs: `docker logs infoscreen-mqtt`
- Verify listener subscriptions: check `listener/listener.py` logs
- Test MQTT manually: `mosquitto_pub -h broker -t "infoscreen/test/logs/error" -m '{"message":"test"}'`

**High database growth:**
- Check the log_retention cleanup cronjob
- Reduce INFO level logging frequency
- Add sampling (log every 10th occurrence instead of all)
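
The retention cleanup mentioned in the first bullet can be sketched as a small job. This is a minimal illustration using SQLite from the standard library so it stays self-contained; the production database is MariaDB, and the column set shown for `client_logs` is assumed from the schema notes above.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 90  # mirrors LOG_RETENTION_DAYS

def cleanup_old_logs(conn, retention_days=RETENTION_DAYS):
    """Delete client_logs rows older than the retention window; return rows removed."""
    cutoff = (datetime.now(timezone.utc) - timedelta(days=retention_days)).isoformat()
    # ISO 8601 strings with the same offset compare correctly as text
    cur = conn.execute("DELETE FROM client_logs WHERE timestamp < ?", (cutoff,))
    conn.commit()
    return cur.rowcount

# Demo with an in-memory database: one stale row, one fresh row
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE client_logs (client_uuid TEXT, timestamp TEXT, level TEXT, message TEXT)")
old = (datetime.now(timezone.utc) - timedelta(days=120)).isoformat()
new = datetime.now(timezone.utc).isoformat()
conn.executemany("INSERT INTO client_logs VALUES ('abc', ?, 'INFO', 'x')", [(old,), (new,)])
removed = cleanup_old_logs(conn)
print(removed)  # 1
```

The same `DELETE ... WHERE timestamp < cutoff` statement works against MariaDB from the cronjob.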

**Client watchdog not detecting crashes:**
- Verify psutil can see processes: `ps aux | grep vlc`
- Check permissions (may need sudo for some process checks)
- Increase monitor loop frequency for faster detection

---

## ✅ Completion Checklist

- [x] Phase 1: Database migration applied
- [x] Phase 2: Listener extended for log topics
- [x] Phase 2: API endpoints created and tested
- [x] Phase 3: Client watchdog implemented
- [x] Phase 3: Enhanced heartbeat deployed
- [x] Phase 4: Dashboard log viewer working
- [x] Phase 4: Health indicators visible
- [x] Phase 5: End-to-end testing complete
- [x] Documentation updated with new features
- [x] Production deployment plan created

---

**Last Updated:** 2026-03-24
**Author:** GitHub Copilot
**For:** Infoscreen 2025 Project

---

**New file:** `CLIENT_MONITORING_SPECIFICATION.md` (979 lines)

# Client-Side Monitoring Specification

**Version:** 1.0
**Date:** 2026-03-10
**For:** Infoscreen Client Implementation
**Server Endpoint:** `192.168.43.201:8000` (or your production server)
**MQTT Broker:** `192.168.43.201:1883` (or your production MQTT broker)

---

## 1. Overview

Each infoscreen client must implement health monitoring and logging capabilities to report status to the central server via MQTT.

### 1.1 Goals
- **Detect failures:** Process crashes, frozen screens, content mismatches
- **Provide visibility:** Real-time health status visible on server dashboard
- **Enable remote diagnosis:** Centralized log storage for debugging
- **Auto-recovery:** Attempt automatic restart on failure

### 1.2 Architecture
```
┌─────────────────────────────────────────┐
│           Infoscreen Client             │
│                                         │
│  ┌──────────────┐    ┌──────────────┐   │
│  │ Media Player │    │   Watchdog   │   │
│  │ (VLC/Chrome) │◄───│   Monitor    │   │
│  └──────────────┘    └──────┬───────┘   │
│                             │           │
│  ┌──────────────┐           │           │
│  │  Event Mgr   │           │           │
│  │  (receives   │           │           │
│  │  schedule)   │◄──────────┘           │
│  └──────┬───────┘                       │
│         │                               │
│  ┌──────▼──────────────────────────┐    │
│  │        MQTT Client              │    │
│  │  - Heartbeat (every 60s)        │    │
│  │  - Logs (error/warn/info)       │    │
│  │  - Health metrics (every 5s)    │    │
│  └──────┬──────────────────────────┘    │
└─────────┼───────────────────────────────┘
          │
          │ MQTT over TCP
          ▼
   ┌─────────────┐
   │ MQTT Broker │
   │  (server)   │
   └─────────────┘
```

### 1.3 Current Compatibility Notes
- The server now accepts both the original specification payloads and the currently implemented Phase 3 client payloads.
- `infoscreen/{uuid}/health` may currently contain a reduced payload with only `expected_state.event_id` and `actual_state.process|pid|status`. Additional `health_metrics` fields from this specification remain recommended.
- `event_id` is still specified as an integer. For compatibility with the current Phase 3 client, the server also tolerates string values such as `event_123` and extracts the numeric suffix where possible.
- If the client sends `process_health` inside `infoscreen/{uuid}/dashboard`, the server treats it as a fallback source for `current_process`, `process_pid`, `process_status`, and `current_event_id`.
- Long term, the preferred client payload remains the structure in this specification so the server can surface richer monitoring data such as screen state and resource metrics.

---

## 2. MQTT Protocol Specification

### 2.1 Connection Parameters
```
Broker:            192.168.43.201 (or DNS hostname)
Port:              1883 (standard MQTT)
Protocol:          MQTT v3.1.1
Client ID:         "infoscreen-{client_uuid}"
Clean Session:     false (retain subscriptions)
Keep Alive:        60 seconds
Username/Password: (if configured on broker)
```

### 2.2 QoS Levels
- **Heartbeat:** QoS 0 (fire and forget, high frequency)
- **Logs (ERROR/WARN):** QoS 1 (at least once delivery, important)
- **Logs (INFO):** QoS 0 (optional, high volume)
- **Health metrics:** QoS 0 (frequent, latest value matters)
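
Taken together, the topic scheme (Section 3) and the QoS choices above reduce to a small lookup. A sketch; the helper name `topic_and_qos` is ours, not part of the spec:

```python
def topic_and_qos(client_uuid, kind):
    """Map a message kind to its MQTT topic and QoS level per the tables above."""
    base = "infoscreen/" + client_uuid
    table = {
        "error":     (base + "/logs/error", 1),  # QoS 1: must arrive at least once
        "warn":      (base + "/logs/warn", 1),
        "info":      (base + "/logs/info", 0),   # QoS 0: high volume, losable
        "health":    (base + "/health", 0),
        "heartbeat": (base + "/heartbeat", 0),
    }
    return table[kind]

print(topic_and_qos("9b8d1856-ff34-4864-a726-12de072d0f77", "error"))
# ('infoscreen/9b8d1856-ff34-4864-a726-12de072d0f77/logs/error', 1)
```

With `paho-mqtt`, the returned pair feeds directly into `client.publish(topic, payload, qos=qos)`.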

---

## 3. Topic Structure & Payload Formats

### 3.1 Log Messages

#### Topic Pattern:
```
infoscreen/{client_uuid}/logs/{level}
```

Where `{level}` is one of: `error`, `warn`, `info`

#### Payload Format (JSON):
```json
{
  "timestamp": "2026-03-10T07:30:00Z",
  "message": "Human-readable error description",
  "context": {
    "event_id": 42,
    "process": "vlc",
    "error_code": "NETWORK_TIMEOUT",
    "additional_key": "any relevant data"
  }
}
```

#### Field Specifications:
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `timestamp` | string (ISO 8601 UTC) | Yes | When the event occurred. Use `YYYY-MM-DDTHH:MM:SSZ` format |
| `message` | string | Yes | Human-readable description of the event (max 1000 chars) |
| `context` | object | No | Additional structured data (will be stored as JSON) |

#### Example Topics:
```
infoscreen/9b8d1856-ff34-4864-a726-12de072d0f77/logs/error
infoscreen/9b8d1856-ff34-4864-a726-12de072d0f77/logs/warn
infoscreen/9b8d1856-ff34-4864-a726-12de072d0f77/logs/info
```

#### When to Send Logs:

**ERROR (Always send):**
- Process crashed (VLC/Chromium/PDF viewer terminated unexpectedly)
- Content failed to load (404, network timeout, corrupt file)
- Hardware failure detected (display off, audio device missing)
- Exception caught in main event loop
- Maximum restart attempts exceeded

**WARN (Always send):**
- Process restarted automatically (after crash)
- High resource usage (CPU >80%, RAM >90%)
- Slow performance (frame drops, lag)
- Non-critical failures (screenshot capture failed, cache full)
- Fallback content displayed (primary source unavailable)

**INFO (Send in development, optional in production):**
- Process started successfully
- Event transition (switched from video to presentation)
- Content loaded successfully
- Watchdog service started/stopped
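
A minimal builder for the payload format above; the function name is illustrative, and the actual publish would go through `paho-mqtt` with the QoS from Section 2.2:

```python
import json
from datetime import datetime, timezone

def build_log_payload(message, context=None):
    """Build a Section 3.1 log payload as a JSON string."""
    payload = {
        "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "message": message[:1000],  # spec caps the message at 1000 chars
    }
    if context:
        payload["context"] = context  # optional structured data
    return json.dumps(payload)

raw = build_log_payload("VLC process crashed", {"event_id": 42, "process": "vlc"})
print(raw)
```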

---

### 3.2 Health Metrics

#### Topic Pattern:
```
infoscreen/{client_uuid}/health
```

#### Payload Format (JSON):
```json
{
  "timestamp": "2026-03-10T07:30:00Z",
  "expected_state": {
    "event_id": 42,
    "event_type": "video",
    "media_file": "presentation.mp4",
    "started_at": "2026-03-10T07:15:00Z"
  },
  "actual_state": {
    "process": "vlc",
    "pid": 1234,
    "status": "running",
    "uptime_seconds": 900,
    "position": 45.3,
    "duration": 180.0
  },
  "health_metrics": {
    "screen_on": true,
    "last_frame_update": "2026-03-10T07:29:58Z",
    "frames_dropped": 2,
    "network_errors": 0,
    "cpu_percent": 15.3,
    "memory_mb": 234
  }
}
```

#### Field Specifications:

**expected_state:**
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `event_id` | integer | Yes | Current event ID from scheduler |
| `event_type` | string | Yes | `presentation`, `video`, `website`, `webuntis`, `message` |
| `media_file` | string | No | Filename or URL of current content |
| `started_at` | string (ISO 8601) | Yes | When this event started playing |

**actual_state:**
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `process` | string | Yes | `vlc`, `chromium`, `pdf_viewer`, `none` |
| `pid` | integer | No | Process ID (if running) |
| `status` | string | Yes | `running`, `crashed`, `starting`, `stopped` |
| `uptime_seconds` | integer | No | How long the process has been running |
| `position` | float | No | Current playback position (seconds, for video/audio) |
| `duration` | float | No | Total content duration (seconds) |

**health_metrics:**
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `screen_on` | boolean | Yes | Is the display powered on? |
| `last_frame_update` | string (ISO 8601) | No | Last time screen content changed |
| `frames_dropped` | integer | No | Video frames dropped (performance indicator) |
| `network_errors` | integer | No | Count of network errors in the last interval |
| `cpu_percent` | float | No | CPU usage (0-100) |
| `memory_mb` | integer | No | RAM usage in megabytes |

#### Sending Frequency:
- **Normal operation:** Every 5 seconds
- **During startup/transition:** Every 1 second
- **After error:** Immediately + every 2 seconds until recovered
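
Assembling the three-part health message can be sketched as follows. The metric values are passed in as plain dicts so the example stays self-contained; in practice they would come from `psutil` and the player's status interface:

```python
import json
from datetime import datetime, timezone

def build_health_payload(expected, actual, metrics):
    """Assemble the Section 3.2 health message from its three parts."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "expected_state": expected,
        "actual_state": actual,
        "health_metrics": metrics,
    })

raw = build_health_payload(
    {"event_id": 42, "event_type": "video", "started_at": "2026-03-10T07:15:00Z"},
    {"process": "vlc", "pid": 1234, "status": "running"},
    {"screen_on": True, "cpu_percent": 15.3, "memory_mb": 234},
)
```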

---

### 3.3 Enhanced Heartbeat

The existing heartbeat topic should be enhanced to include process status.

#### Topic Pattern:
```
infoscreen/{client_uuid}/heartbeat
```

#### Enhanced Payload Format (JSON):
```json
{
  "uuid": "9b8d1856-ff34-4864-a726-12de072d0f77",
  "timestamp": "2026-03-10T07:30:00Z",
  "current_process": "vlc",
  "process_pid": 1234,
  "process_status": "running",
  "current_event_id": 42
}
```

#### New Fields (add to existing heartbeat):
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `current_process` | string | No | Name of active media player process |
| `process_pid` | integer | No | Process ID |
| `process_status` | string | No | `running`, `crashed`, `starting`, `stopped` |
| `current_event_id` | integer | No | Event ID currently being displayed |

#### Sending Frequency:
- Keep existing: **Every 60 seconds**
- Include new fields if available

---

## 4. Process Monitoring Requirements

### 4.1 Processes to Monitor

| Media Type | Process Name | How to Detect |
|------------|--------------|---------------|
| Video | `vlc` | `ps aux \| grep vlc` or `pgrep vlc` |
| Website/WebUntis | `chromium` or `chromium-browser` | `pgrep chromium` |
| PDF Presentation | `evince`, `okular`, or custom viewer | `pgrep {viewer_name}` |

### 4.2 Monitoring Checks (Every 5 seconds)

#### Check 1: Process Alive
```
Goal: Verify expected process is running
Method:
- Get list of running processes (psutil or `ps`)
- Check if expected process name exists
- Match PID if known
Result:
- If missing → status = "crashed"
- If found → status = "running"
Action on crash:
- Send ERROR log immediately
- Attempt restart (max 3 attempts)
- Send WARN log on each restart
- If max restarts exceeded → send ERROR log, display fallback
```
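
The decision part of Check 1 can be written as a pure function. Here the process snapshot is passed in as `(pid, name)` pairs so the logic stays testable; in production it would come from `psutil.process_iter(["pid", "name"])`:

```python
def check_process(expected_name, known_pid, snapshot):
    """Derive a Check 1 status from a snapshot of (pid, name) pairs."""
    for pid, name in snapshot:
        if name == expected_name and (known_pid is None or pid == known_pid):
            return "running"
    return "crashed"

procs = [(1234, "vlc"), (999, "bash")]
print(check_process("vlc", 1234, procs))       # running
print(check_process("vlc", 4321, procs))       # crashed: PID changed, old process died
print(check_process("chromium", None, procs))  # crashed: no such process at all
```

Matching the known PID, not just the name, catches the case where the original player died and an unrelated process with the same name took its place.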

#### Check 2: Process Responsive
```
Goal: Detect frozen processes
Method:
- For VLC: Query HTTP interface (status.json)
- For Chromium: Use DevTools Protocol (CDP)
- For custom viewers: Check last screen update time
Result:
- If same frame >30 seconds → likely frozen
- If playback position not advancing → frozen
Action on freeze:
- Send WARN log
- Force refresh (reload page, seek video, next slide)
- If refresh fails → restart process
```

#### Check 3: Content Match
```
Goal: Verify correct content is displayed
Method:
- Compare expected event_id with actual media/URL
- Check scheduled time window (is event still active?)
Result:
- Mismatch → content error
Action:
- Send WARN log
- Reload correct event from scheduler
```

---

## 5. Process Control Interface Requirements

### 5.1 VLC Control

**Requirement:** Enable VLC HTTP interface for monitoring

**Launch Command:**
```bash
vlc --intf http --http-host 127.0.0.1 --http-port 8080 --http-password "vlc_password" \
    --fullscreen --loop /path/to/video.mp4
```

**Status Query:**
```bash
curl http://127.0.0.1:8080/requests/status.json --user ":vlc_password"
```

**Response Fields to Monitor:**
```json
{
  "state": "playing",   // "playing", "paused", "stopped"
  "position": 0.25,     // 0.0-1.0 (25% through)
  "time": 45,           // seconds into playback
  "length": 180,        // total duration in seconds
  "volume": 256         // 0-512
}
```
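
One way to use this response is the freeze rule from Check 2: a player that reports `"playing"` but whose `time` has not advanced since the previous poll is likely frozen. A sketch (the function name is ours):

```python
import json

def vlc_playback_state(status, last_time):
    """Return (state, time); report "frozen" when a playing video's position is stuck."""
    state, t = status.get("state"), status.get("time", 0)
    if state == "playing" and last_time is not None and t == last_time:
        return "frozen", t
    return state or "unknown", t

# Sample status.json body as shown above (comments stripped)
sample = json.loads('{"state": "playing", "position": 0.25, "time": 45, "length": 180}')
print(vlc_playback_state(sample, last_time=44))  # ('playing', 45)
print(vlc_playback_state(sample, last_time=45))  # ('frozen', 45)
```

The caller would fetch `status` with `requests.get(...).json()` each poll and carry the returned `time` forward as `last_time`.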

---

### 5.2 Chromium Control

**Requirement:** Enable Chrome DevTools Protocol (CDP)

**Launch Command:**
```bash
chromium --remote-debugging-port=9222 --kiosk --app=https://example.com
```

**Status Query:**
```bash
curl http://127.0.0.1:9222/json
```

**Response Fields to Monitor:**
```json
[
  {
    "url": "https://example.com",
    "title": "Page Title",
    "type": "page"
  }
]
```

**Advanced:** Use the CDP WebSocket for events (page load, navigation, errors)

---

### 5.3 PDF Viewer (Custom or Standard)

**Option A: Standard Viewer (e.g., Evince)**
- No built-in API
- Monitor via process check + screenshot comparison

**Option B: Custom Python Viewer**
- Implement a REST API for status queries
- Track: current page, total pages, last transition time

---

## 6. Watchdog Service Architecture

### 6.1 Service Components

**Component 1: Process Monitor Thread**
```
Responsibilities:
- Check process alive every 5 seconds
- Detect crashes and frozen processes
- Attempt automatic restart
- Send health metrics via MQTT

State Machine:
IDLE → STARTING → RUNNING → (if crash) → RESTARTING → RUNNING
                                       → (if max restarts) → FAILED
```

**Component 2: MQTT Publisher Thread**
```
Responsibilities:
- Maintain MQTT connection
- Send heartbeat every 60 seconds
- Send logs on demand (queued from other components)
- Send health metrics every 5 seconds
- Reconnect on connection loss
```

**Component 3: Event Manager Integration**
```
Responsibilities:
- Receive event schedule from server
- Notify watchdog of expected process/content
- Launch media player processes
- Handle event transitions
```

### 6.2 Service Lifecycle

**On Startup:**
1. Load configuration (client UUID, MQTT broker, etc.)
2. Connect to MQTT broker
3. Send INFO log: "Watchdog service started"
4. Wait for first event from scheduler

**During Operation:**
1. Monitor loop runs every 5 seconds
2. Check expected vs actual process state
3. Send health metrics
4. Handle failures (log + restart)

**On Shutdown:**
1. Send INFO log: "Watchdog service stopping"
2. Gracefully stop monitored processes
3. Disconnect from MQTT
4. Exit cleanly

---

## 7. Auto-Recovery Logic

### 7.1 Restart Strategy

**Step 1: Detect Failure**
```
Trigger: Process not found in process list
Action:
- Log ERROR: "Process {name} crashed"
- Increment restart counter
- Check if within retry limit (max 3)
```

**Step 2: Attempt Restart**
```
If restart_attempts < MAX_RESTARTS:
- Log WARN: "Attempting restart ({attempt}/{MAX_RESTARTS})"
- Kill any zombie processes
- Wait 2 seconds (cooldown)
- Launch process with same parameters
- Wait 5 seconds for startup
- Verify process is running
- If success: reset restart counter, log INFO
- If fail: increment counter, repeat
```

**Step 3: Permanent Failure**
```
If restart_attempts >= MAX_RESTARTS:
- Log ERROR: "Max restart attempts exceeded, failing over"
- Display fallback content (static image with error message)
- Send notification to server (separate alert topic, optional)
- Wait for manual intervention or scheduler event change
```

### 7.2 Restart Cooldown

**Purpose:** Prevent rapid restart loops that waste resources

**Implementation:**
```
After each restart attempt:
- Wait 2 seconds before next restart
- After 3 failures: wait 30 seconds before trying again
- Reset counter on successful run >5 minutes
```
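
The restart and cooldown rules above fit naturally into a small state object. A sketch with the class name and method names our own; the caller sleeps for the returned cooldown, restarts, and calls `on_running()` on every healthy check:

```python
import time

MAX_RESTARTS = 3
COOLDOWN_S = 2       # between normal restart attempts
BACKOFF_S = 30       # after the counter reaches the limit
RESET_AFTER_S = 300  # 5 minutes of successful operation resets the counter

class RestartPolicy:
    """Track restart attempts and tell the caller how long to wait before the next one."""
    def __init__(self):
        self.attempts = 0
        self.last_ok = time.monotonic()

    def on_crash(self):
        """Return the cooldown (seconds) before the next restart, or None to fail over."""
        if time.monotonic() - self.last_ok >= RESET_AFTER_S:
            self.attempts = 0  # the process ran cleanly long enough; forgive old crashes
        self.attempts += 1
        if self.attempts > MAX_RESTARTS:
            return None  # give up: display fallback content
        return COOLDOWN_S if self.attempts < MAX_RESTARTS else BACKOFF_S

    def on_running(self):
        self.last_ok = time.monotonic()

p = RestartPolicy()
print([p.on_crash() for _ in range(4)])  # [2, 2, 30, None]
```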

---

## 8. Resource Monitoring

### 8.1 System Metrics to Track

**CPU Usage:**
```
Method: Read /proc/stat or use psutil.cpu_percent()
Frequency: Every 5 seconds
Threshold: Warn if >80% for >60 seconds
```

**Memory Usage:**
```
Method: Read /proc/meminfo or use psutil.virtual_memory()
Frequency: Every 5 seconds
Threshold: Warn if >90% for >30 seconds
```

**Display Status:**
```
Method: Check DPMS state or xset query
Frequency: Every 30 seconds
Threshold: Error if display off (unexpected)
```

**Network Connectivity:**
```
Method: Ping server or check MQTT connection
Frequency: Every 60 seconds
Threshold: Warn if no server connectivity
```
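
The "warn if above X for at least Y seconds" rules above all share one shape: the alert should fire only once the condition has been sustained, not on a single spike. A small tracker (class name ours) implementing that:

```python
class SustainedThreshold:
    """Fires once a metric stays above `limit` for at least `hold_s` seconds."""
    def __init__(self, limit, hold_s):
        self.limit, self.hold_s = limit, hold_s
        self.since = None  # timestamp when the metric first exceeded the limit

    def update(self, value, now):
        if value <= self.limit:
            self.since = None  # back under the limit: reset
            return False
        if self.since is None:
            self.since = now
        return now - self.since >= self.hold_s

cpu_warn = SustainedThreshold(limit=80.0, hold_s=60.0)
samples = [(0, 85), (30, 90), (60, 88), (65, 50)]  # (seconds, cpu%)
print([cpu_warn.update(v, t) for t, v in samples])  # [False, False, True, False]
```

The same class covers the memory rule with `SustainedThreshold(90.0, 30.0)`; the monitor loop feeds it fresh `psutil` readings plus `time.monotonic()`.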

---

## 9. Development vs Production Mode

### 9.1 Development Mode

**Enable via:** Environment variable `DEBUG=true` or `ENV=development`

**Behavior:**
- Send INFO level logs
- More verbose logging to console
- Shorter monitoring intervals (faster feedback)
- Screenshot capture every 30 seconds
- No rate limiting on logs

### 9.2 Production Mode

**Enable via:** `ENV=production`

**Behavior:**
- Send only ERROR and WARN logs
- Minimal console output
- Standard monitoring intervals
- Screenshot capture every 60 seconds
- Rate limiting: max 10 logs per minute per level

---

## 10. Configuration File Format

### 10.1 Recommended Config: JSON

**File:** `/etc/infoscreen/config.json` or `~/.config/infoscreen/config.json`

```json
{
  "client": {
    "uuid": "9b8d1856-ff34-4864-a726-12de072d0f77",
    "hostname": "infoscreen-room-101"
  },
  "mqtt": {
    "broker": "192.168.43.201",
    "port": 1883,
    "username": "",
    "password": "",
    "keepalive": 60
  },
  "monitoring": {
    "enabled": true,
    "health_interval_seconds": 5,
    "heartbeat_interval_seconds": 60,
    "max_restart_attempts": 3,
    "restart_cooldown_seconds": 2
  },
  "logging": {
    "level": "INFO",
    "send_info_logs": false,
    "console_output": true,
    "local_log_file": "/var/log/infoscreen/watchdog.log"
  },
  "processes": {
    "vlc": {
      "http_port": 8080,
      "http_password": "vlc_password"
    },
    "chromium": {
      "debug_port": 9222
    }
  }
}
```
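
Loading this file with defaults for any missing keys keeps the client code simple. A stdlib-only sketch; the `DEFAULTS` subset shown is illustrative, not the full schema:

```python
import json

DEFAULTS = {
    "monitoring": {"health_interval_seconds": 5, "heartbeat_interval_seconds": 60,
                   "max_restart_attempts": 3, "restart_cooldown_seconds": 2},
    "logging": {"level": "INFO", "send_info_logs": False},
}

def load_config(text):
    """Parse the JSON config and fill missing monitoring/logging keys from DEFAULTS."""
    cfg = json.loads(text)
    for section, defaults in DEFAULTS.items():
        merged = dict(defaults)         # start from the defaults...
        merged.update(cfg.get(section, {}))  # ...and let the file override them
        cfg[section] = merged
    return cfg

cfg = load_config('{"client": {"uuid": "abc"}, "monitoring": {"max_restart_attempts": 5}}')
print(cfg["monitoring"]["max_restart_attempts"], cfg["monitoring"]["health_interval_seconds"])  # 5 5
```

In practice `text` would come from `open("/etc/infoscreen/config.json").read()`.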

---

## 11. Error Scenarios & Expected Behavior

### Scenario 1: VLC Crashes Mid-Video
```
1. Watchdog detects: process_status = "crashed"
2. Send ERROR log: "VLC process crashed"
3. Attempt 1: Restart VLC with same video, seek to last position
4. If success: Send INFO log "VLC restarted successfully"
5. If fail: Repeat 2 more times
6. After 3 failures: Send ERROR "Max restarts exceeded", show fallback
```

### Scenario 2: Network Timeout Loading Website
```
1. Chromium fails to load page (CDP reports error)
2. Send WARN log: "Page load timeout"
3. Attempt reload (Chromium refresh)
4. If success after 10s: Continue monitoring
5. If timeout again: Send ERROR, try restarting Chromium
```

### Scenario 3: Display Powers Off (Hardware)
```
1. DPMS check detects display off
2. Send ERROR log: "Display powered off"
3. Attempt to wake display (xset dpms force on)
4. If success: Send INFO log
5. If fail: Hardware issue, alert admin
```

### Scenario 4: High CPU Usage
```
1. CPU >80% for 60 seconds
2. Send WARN log: "High CPU usage: 85%"
3. Check if expected (e.g., video playback is normal)
4. If unexpected: investigate process causing it
5. If critical (>95%): consider restarting offending process
```

---

## 12. Testing & Validation

### 12.1 Manual Tests (During Development)

**Test 1: Process Crash Simulation**
```bash
# Start video, then kill VLC manually
killall vlc
# Expected: ERROR log sent, automatic restart within 5 seconds
```

**Test 2: MQTT Connectivity**
```bash
# Subscribe to all client topics on server
mosquitto_sub -h 192.168.43.201 -t "infoscreen/{uuid}/#" -v
# Expected: See heartbeat every 60s, health every 5s
```

**Test 3: Log Levels**
```bash
# Trigger error condition and verify log appears in database
curl http://192.168.43.201:8000/api/client-logs/test
# Expected: See new log entry with correct level/message
```

### 12.2 Acceptance Criteria

✅ **Client must:**
1. Send heartbeat every 60 seconds without gaps
2. Send ERROR log within 5 seconds of process crash
3. Attempt automatic restart (max 3 times)
4. Report health metrics every 5 seconds
5. Survive MQTT broker restart (reconnect automatically)
6. Survive network interruption (buffer logs, send when reconnected)
7. Use correct timestamp format (ISO 8601 UTC)
8. Only send logs for a real client UUID (FK constraint)

---

## 13. Python Libraries (Recommended)

**For process monitoring:**
- `psutil` - Cross-platform process and system utilities

**For MQTT:**
- `paho-mqtt` - Official MQTT client (use v2.x with Callback API v2)

**For VLC control:**
- `requests` - HTTP client for status queries

**For Chromium control:**
- `websocket-client` or `pychrome` - Chrome DevTools Protocol

**For datetime:**
- `datetime` (stdlib) - Use `datetime.now(timezone.utc).isoformat()`

**Example requirements.txt:**
```
paho-mqtt>=2.0.0
psutil>=5.9.0
requests>=2.31.0
python-dateutil>=2.8.0
```

---

## 14. Security Considerations

### 14.1 MQTT Security
- If the broker requires auth, store credentials in a config file with restricted permissions (`chmod 600`)
- Consider TLS/SSL for MQTT (port 8883) if on an untrusted network
- Use a unique client ID to prevent impersonation

### 14.2 Process Control APIs
- The VLC HTTP password should be random, not a default
- The Chromium debug port should bind to `127.0.0.1` only (not `0.0.0.0`)
- Restrict file system access for media player processes

### 14.3 Log Content
- **Do not log:** Passwords, API keys, personal data
- **Sanitize:** File paths (strip user directories), URLs (remove query params with tokens)

---

## 15. Performance Targets

| Metric | Target | Acceptable | Critical |
|--------|--------|------------|----------|
| Health check interval | 5s | 10s | 30s |
| Crash detection time | <5s | <10s | <30s |
| Restart time | <10s | <20s | <60s |
| MQTT publish latency | <100ms | <500ms | <2s |
| CPU usage (watchdog) | <2% | <5% | <10% |
| RAM usage (watchdog) | <50MB | <100MB | <200MB |
| Log message size | <1KB | <10KB | <100KB |

---

## 16. Troubleshooting Guide (For Client Development)

### Issue: Logs not appearing in server database
**Check:**
1. Is the MQTT broker reachable? (`mosquitto_pub` test from client)
2. Is the client UUID correct, and does it exist in the `clients` table?
3. Is the timestamp format correct (ISO 8601 with 'Z')?
4. Check server listener logs for errors

### Issue: Health metrics not updating
**Check:**
1. Is the health loop running? (check watchdog service status)
2. Is MQTT connected? (check connection status in logs)
3. Is the payload valid JSON? (use a JSON validator)

### Issue: Process restarts in a loop
**Check:**
1. Is the media file/URL accessible?
2. Is the process command correct? (test manually)
3. Check the process exit code (crash reason)
4. Increase the restart cooldown to avoid rapid loops

---

## 17. Complete Message Flow Diagram

```
┌─────────────────────────────────────────────────────────┐
│                   Infoscreen Client                     │
│                                                         │
│  Event Occurs:                                          │
│  - Process crashed                                      │
│  - High CPU usage                                       │
│  - Content loaded                                       │
│                                                         │
│  ┌────────────────┐                                     │
│  │ Decision Logic │                                     │
│  │ - Is it ERROR? │                                     │
│  │ - Is it WARN?  │                                     │
│  │ - Is it INFO?  │                                     │
│  └────────┬───────┘                                     │
│           │                                             │
│           ▼                                             │
│  ┌────────────────────────────────────┐                 │
│  │ Build JSON Payload                 │                 │
│  │ {                                  │                 │
│  │   "timestamp": "...",              │                 │
│  │   "message": "...",                │                 │
│  │   "context": {...}                 │                 │
│  │ }                                  │                 │
│  └────────┬───────────────────────────┘                 │
│           │                                             │
│           ▼                                             │
│  ┌────────────────────────────────────┐                 │
│  │ MQTT Publish                       │                 │
│  │ Topic: infoscreen/{uuid}/logs/error│                 │
│  │ QoS: 1                             │                 │
│  └────────┬───────────────────────────┘                 │
└───────────┼─────────────────────────────────────────────┘
            │
            │ TCP/IP (MQTT Protocol)
            │
            ▼
     ┌──────────────┐
     │ MQTT Broker  │
     │ (Mosquitto)  │
     └──────┬───────┘
            │
            │ Topic: infoscreen/+/logs/#
            │
            ▼
┌──────────────────────────────┐
│ Listener Service (Python)    │
│                              │
│ - Parse JSON                 │
│ - Validate UUID              │
│ - Store in database          │
└──────┬───────────────────────┘
       │
       ▼
┌──────────────────────────────┐
│ MariaDB Database             │
│                              │
│ Table: client_logs           │
│ - client_uuid                │
│ - timestamp                  │
│ - level                      │
│ - message                    │
│ - context (JSON)             │
└──────┬───────────────────────┘
       │
       │ SQL Query
       │
       ▼
┌──────────────────────────────┐
│ API Server (Flask)           │
│                              │
│ GET /api/client-logs/{uuid}/logs
│ GET /api/client-logs/summary │
└──────┬───────────────────────┘
       │
       │ HTTP/JSON
       │
       ▼
┌──────────────────────────────┐
│ Dashboard (React)            │
│                              │
│ - Display logs               │
│ - Filter by level            │
│ - Show health status         │
└──────────────────────────────┘
```

---

## 18. Quick Reference Card

### MQTT Topics Summary
```
infoscreen/{uuid}/logs/error   → Critical failures
infoscreen/{uuid}/logs/warn    → Non-critical issues
infoscreen/{uuid}/logs/info    → Informational (dev mode)
infoscreen/{uuid}/health       → Health metrics (every 5s)
infoscreen/{uuid}/heartbeat    → Enhanced heartbeat (every 60s)
```

### JSON Timestamp Format
```python
from datetime import datetime, timezone

# strftime produces the spec's "Z" suffix; isoformat() alone yields "...+00:00"
timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
# Output: "2026-03-10T07:30:00Z"
```

### Process Status Values
```
"running"  - Process is alive and responding
"crashed"  - Process terminated unexpectedly
"starting" - Process is launching (startup phase)
"stopped"  - Process intentionally stopped
```

### Restart Logic
```
Max attempts: 3
Cooldown: 2 seconds between attempts
Reset: After 5 minutes of successful operation
```

---

## 19. Contact & Support

**Server API Documentation:**
- Base URL: `http://192.168.43.201:8000`
- Health check: `GET /health`
- Test logs: `GET /api/client-logs/test` (no auth)
- Full API docs: See `CLIENT_MONITORING_IMPLEMENTATION_GUIDE.md` on server

**MQTT Broker:**
- Host: `192.168.43.201`
- Port: `1883` (standard), `9001` (WebSocket)
- Test tool: `mosquitto_pub` / `mosquitto_sub`

**Database Schema:**
- Table: `client_logs`
- Foreign Key: `client_uuid` → `clients.uuid` (ON DELETE CASCADE)
- Constraint: UUID must exist in clients table before logging
|
||||
**Server-Side Logs:**
|
||||
```bash
|
||||
# View listener logs (processes MQTT messages)
|
||||
docker compose logs -f listener
|
||||
|
||||
# View server logs (API requests)
|
||||
docker compose logs -f server
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 20. Appendix: Example Implementations
|
||||
|
||||
### A. Minimal Python Watchdog (Pseudocode)
|
||||
|
||||
```python
|
||||
import time
|
||||
import json
|
||||
import psutil
|
||||
import paho.mqtt.client as mqtt
|
||||
from datetime import datetime, timezone
|
||||
|
||||
class MinimalWatchdog:
|
||||
def __init__(self, client_uuid, mqtt_broker):
|
||||
self.uuid = client_uuid
|
||||
self.mqtt_client = mqtt.Client(callback_api_version=mqtt.CallbackAPIVersion.VERSION2)
|
||||
self.mqtt_client.connect(mqtt_broker, 1883, 60)
|
||||
self.mqtt_client.loop_start()
|
||||
|
||||
self.expected_process = None
|
||||
self.restart_attempts = 0
|
||||
self.MAX_RESTARTS = 3
|
||||
|
||||
def send_log(self, level, message, context=None):
|
||||
topic = f"infoscreen/{self.uuid}/logs/{level}"
|
||||
payload = {
|
||||
"timestamp": datetime.now(timezone.utc).isoformat(),
|
||||
"message": message,
|
||||
"context": context or {}
|
||||
}
|
||||
self.mqtt_client.publish(topic, json.dumps(payload), qos=1)
|
||||
|
||||
def is_process_running(self, process_name):
|
||||
for proc in psutil.process_iter(['name']):
|
||||
if process_name in proc.info['name']:
|
||||
return True
|
||||
return False
|
||||
|
||||
def monitor_loop(self):
|
||||
while True:
|
||||
if self.expected_process:
|
||||
if not self.is_process_running(self.expected_process):
|
||||
self.send_log("error", f"{self.expected_process} crashed")
|
||||
if self.restart_attempts < self.MAX_RESTARTS:
|
||||
self.restart_process()
|
||||
else:
|
||||
self.send_log("error", "Max restarts exceeded")
|
||||
|
||||
time.sleep(5)
|
||||
|
||||
# Usage:
|
||||
watchdog = MinimalWatchdog("9b8d1856-ff34-4864-a726-12de072d0f77", "192.168.43.201")
|
||||
watchdog.expected_process = "vlc"
|
||||
watchdog.monitor_loop()
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
**END OF SPECIFICATION**
|
||||
|
||||
Questions? Refer to:
|
||||
- `CLIENT_MONITORING_IMPLEMENTATION_GUIDE.md` (server repo)
|
||||
- Server API: `http://192.168.43.201:8000/api/client-logs/test`
|
||||
- MQTT test: `mosquitto_sub -h 192.168.43.201 -t infoscreen/#`
|
||||
@@ -6,7 +6,7 @@ Your database has been successfully initialized! Here's what you need to know:
|
||||
|
||||
### ✅ Current Status
|
||||
- **Database**: MariaDB 11.2 running in Docker container `infoscreen-db`
|
||||
- **Schema**: Up to date (Alembic revision: `b5a6c3d4e7f8`)
|
||||
- **Schema**: Up to date (check with `alembic current` in `server/`)
|
||||
- **Default Data**: Admin user and client group created
|
||||
- **Academic Periods**: Austrian school years 2024/25 (active), 2025/26, 2026/27
|
||||
|
||||
@@ -82,8 +82,70 @@ session.close()
|
||||
- **`conversions`** - File conversion jobs (PPT → PDF)
|
||||
- **`academic_periods`** - School year/semester management
|
||||
- **`school_holidays`** - Holiday calendar
|
||||
- **`event_exceptions`** - Overrides and skips for recurring events (per occurrence)
|
||||
- **`system_settings`** - Key–value store for global settings
|
||||
- **`alembic_version`** - Migration tracking
|
||||
|
||||
### Key details and relationships
|
||||
|
||||
- Users (`users`)
|
||||
- Fields: `username` (unique), `password_hash`, `role` (enum: user|editor|admin|superadmin), `is_active`
|
||||
|
||||
- Client groups (`client_groups`)
|
||||
- Fields: `name` (unique), `description`, `is_active`
|
||||
|
||||
- Clients (`clients`)
|
||||
- Fields: `uuid` (PK), network/device metadata, `group_id` (FK→client_groups, default 1), `last_alive` (updated on heartbeat), `is_active`
|
||||
|
||||
- Academic periods (`academic_periods`)
|
||||
- Fields: `name` (unique), optional `display_name`, `start_date`, `end_date`, `period_type` (enum: schuljahr|semester|trimester), `is_active` (at most one should be active)
|
||||
- Indexes: `is_active`, dates
|
||||
|
||||
- Event media (`event_media`)
|
||||
- Fields: `media_type` (enum, see below), `url`, optional `file_path`, optional `message_content`, optional `academic_period_id`
|
||||
- Used by events of types: presentation, video, website, message, other
|
||||
|
||||
- Events (`events`)
|
||||
- Core: `group_id` (FK), optional `academic_period_id` (FK), `title`, optional `description`, `start`, `end`, `event_type` (enum), optional `event_media_id` (FK)
|
||||
- Presentation/video extras: `autoplay`, `loop`, `volume`, `slideshow_interval`, `page_progress`, `auto_progress`
|
||||
- Recurrence: `recurrence_rule` (RFC 5545 RRULE), `recurrence_end`, `skip_holidays` (bool)
|
||||
- Audit/state: `created_by` (FK→users), `updated_by` (FK→users), `is_active`
|
||||
- Indexes: `start`, `end`, `recurrence_rule`, `recurrence_end`
|
||||
- Relationships: `event_media`, `academic_period`, `exceptions` (one-to-many to `event_exceptions` with cascade delete)
|
||||
|
||||
- Event exceptions (`event_exceptions`)
|
||||
- Purpose: track per-occurrence skips or overrides for a recurring master event
|
||||
- Fields: `event_id` (FK→events, ondelete CASCADE), `exception_date` (Date), `is_skipped`, optional overrides (`title`, `description`, `start`, `end`)
|
||||
|
||||
- School holidays (`school_holidays`)
|
||||
- Unique: (`name`, `start_date`, `end_date`, `region`)
|
||||
- Used in combination with `events.skip_holidays`
|
||||
|
||||
- Conversions (`conversions`)
|
||||
- Purpose: track PPT/PPTX/ODP → PDF processing
|
||||
- Fields: `source_event_media_id` (FK→event_media, ondelete CASCADE), `target_format`, `target_path`, `status` (enum), `file_hash`, timestamps, `error_message`
|
||||
- Indexes: (`source_event_media_id`, `target_format`), (`status`, `target_format`)
|
||||
- Unique: (`source_event_media_id`, `target_format`, `file_hash`) — idempotency per content
|
||||
|
||||
- System settings (`system_settings`)
|
||||
- Key–value store: `key` (PK), `value`, optional `description`, `updated_at`
|
||||
- Notable keys used by the app: `presentation_interval`, `presentation_page_progress`, `presentation_auto_progress`
|
||||
|
||||
### Enums (reference)
|
||||
|
||||
- UserRole: `user`, `editor`, `admin`, `superadmin`
|
||||
- AcademicPeriodType: `schuljahr`, `semester`, `trimester`
|
||||
- EventType: `presentation`, `website`, `video`, `message`, `other`, `webuntis`
|
||||
- MediaType: `pdf`, `ppt`, `pptx`, `odp`, `mp4`, `avi`, `mkv`, `mov`, `wmv`, `flv`, `webm`, `mpg`, `mpeg`, `ogv`, `jpg`, `jpeg`, `png`, `gif`, `bmp`, `tiff`, `svg`, `html`, `website`
|
||||
- ConversionStatus: `pending`, `processing`, `ready`, `failed`
|
||||
|
||||
### Timezones, recurrence, and holidays
|
||||
|
||||
- All timestamps are stored/compared as timezone-aware UTC. Any naive datetimes are normalized to UTC before comparisons.
|
||||
- Recurrence is represented on events via `recurrence_rule` (RFC 5545 RRULE) and `recurrence_end`. Do not pre-expand series in the DB.
|
||||
- Per-occurrence exclusions/overrides are stored in `event_exceptions`. The API also emits EXDATE tokens matching occurrence start times (UTC) so the frontend can exclude instances natively.
|
||||
- When `skip_holidays` is true, occurrences that fall on school holidays are excluded via corresponding `event_exceptions`.
|
||||
|
||||
### Environment Variables:
|
||||
```bash
|
||||
DB_CONN=mysql+pymysql://infoscreen_admin:KqtpM7wmNdM1DamFKs@db/infoscreen_by_taa
|
||||
|
||||
25
DEV-CHANGELOG.md
Normal file
25
DEV-CHANGELOG.md
Normal file
@@ -0,0 +1,25 @@
|
||||
# DEV-CHANGELOG
|
||||
|
||||
This changelog tracks all changes made in the development workspace, including internal, experimental, and in-progress updates. Entries here may not be reflected in public releases or the user-facing changelog.
|
||||
|
||||
---
|
||||
|
||||
## Unreleased (development workspace)
|
||||
- Monitoring system completion: End-to-end monitoring pipeline is active (MQTT logs/health → listener persistence → monitoring APIs → superadmin dashboard).
|
||||
- Monitoring API: Added/active endpoints `GET /api/client-logs/monitoring-overview` and `GET /api/client-logs/recent-errors`; per-client logs via `GET /api/client-logs/<uuid>/logs`.
|
||||
- Dashboard monitoring UI: Superadmin monitoring page is integrated and displays client health status, screenshots, process metadata, and recent error activity.
|
||||
- Bugfix: Presentation flags `page_progress` and `auto_progress` now persist reliably across create/update and detached-occurrence flows.
|
||||
- Frontend (Settings → Events): Added Presentations defaults (slideshow interval, page-progress, auto-progress) with load/save via `/api/system-settings`; UI uses Syncfusion controls.
|
||||
- Backend defaults: Seeded `presentation_interval` ("10"), `presentation_page_progress` ("true"), `presentation_auto_progress` ("true") in `server/init_defaults.py` when missing.
|
||||
- Data model: Added per-event fields `page_progress` and `auto_progress` on `Event`; Alembic migration applied successfully.
|
||||
- Event modal (dashboard): Extended to show and persist presentation `pageProgress`/`autoProgress`; applies system defaults on create and preserves per-event values on edit; payload includes `page_progress`, `auto_progress`, and `slideshow_interval`.
|
||||
- Scheduler behavior: Now publishes only currently active events per group (at "now"); clears retained topics by publishing `[]` for groups with no active events; normalizes naive timestamps and compares times in UTC; presentation payloads include `page_progress` and `auto_progress`.
|
||||
- Recurrence handling: Still queries a 7‑day window to expand recurring events and apply exceptions; recurring events only deactivate after `recurrence_end` (UNTIL).
|
||||
- Logging: Temporarily added filter diagnostics during debugging; removed verbose logs after verification.
|
||||
- WebUntis event type: Implemented new `webuntis` type. Event creation resolves URL from system `supplement_table_url`; returns 400 if not configured. WebUntis behaves like Website on clients (shared website payload).
|
||||
- Settings consolidation: Removed separate `webuntis_url` (if present during dev); WebUntis and Vertretungsplan share `supplement_table_url`. Removed `/api/system-settings/webuntis-url` endpoints; use `/api/system-settings/supplement-table`.
|
||||
- Scheduler payloads: Added top-level `event_type` for all events; introduced unified nested `website` payload for both `website` and `webuntis` events: `{ "type": "browser", "url": "…" }`.
|
||||
- Frontend: Program info bumped to `2025.1.0-alpha.13`; changelog includes WebUntis/Website unification and settings update. Event modal shows no per-event URL for WebUntis.
|
||||
- Documentation: Added `MQTT_EVENT_PAYLOAD_GUIDE.md` and `WEBUNTIS_EVENT_IMPLEMENTATION.md`. Updated `.github/copilot-instructions.md` and `README.md` for unified Website/WebUntis handling and system settings usage.
|
||||
|
||||
Note: These changes are available in the development environment and may be included in future releases. For released changes, see TECH-CHANGELOG.md.
|
||||
308
MQTT_EVENT_PAYLOAD_GUIDE.md
Normal file
308
MQTT_EVENT_PAYLOAD_GUIDE.md
Normal file
@@ -0,0 +1,308 @@
|
||||
# MQTT Event Payload Guide
|
||||
|
||||
## Overview
|
||||
|
||||
This document describes the MQTT message structure used by the Infoscreen system to deliver event information from the scheduler to display clients. It covers best practices, payload formats, and versioning strategies.
|
||||
|
||||
## MQTT Topics
|
||||
|
||||
### Event Distribution
|
||||
- **Topic**: `infoscreen/events/{group_id}`
|
||||
- **Retained**: Yes
|
||||
- **Format**: JSON array of event objects
|
||||
- **Purpose**: Delivers active events to client groups
|
||||
|
||||
### Per-Client Configuration
|
||||
- **Topic**: `infoscreen/{uuid}/group_id`
|
||||
- **Retained**: Yes
|
||||
- **Format**: Integer (group ID)
|
||||
- **Purpose**: Assigns clients to groups
|
||||
|
||||
## Message Structure
|
||||
|
||||
### General Principles
|
||||
|
||||
1. **Type Safety**: Always include `event_type` to allow clients to parse appropriately
|
||||
2. **Backward Compatibility**: Add new fields without removing old ones
|
||||
3. **Extensibility**: Use nested objects for event-type-specific data
|
||||
4. **UTC Timestamps**: All times in ISO 8601 format with timezone info
|
||||
|
||||
### Base Event Structure
|
||||
|
||||
Every event includes these common fields:
|
||||
|
||||
```json
|
||||
{
|
||||
"id": 123,
|
||||
"title": "Event Title",
|
||||
"start": "2025-10-19T09:00:00+00:00",
|
||||
"end": "2025-10-19T09:30:00+00:00",
|
||||
"group_id": 1,
|
||||
"event_type": "presentation|website|webuntis|video|message|other",
|
||||
"recurrence_rule": "FREQ=WEEKLY;BYDAY=MO,WE,FR" or null,
|
||||
"recurrence_end": "2025-12-31T23:59:59+00:00" or null
|
||||
}
|
||||
```
|
||||
|
||||
### Event Type-Specific Payloads
|
||||
|
||||
#### Presentation Events
|
||||
|
||||
```json
|
||||
{
|
||||
"id": 123,
|
||||
"event_type": "presentation",
|
||||
"title": "Morning Announcements",
|
||||
"start": "2025-10-19T09:00:00+00:00",
|
||||
"end": "2025-10-19T09:30:00+00:00",
|
||||
"group_id": 1,
|
||||
"presentation": {
|
||||
"type": "slideshow",
|
||||
"files": [
|
||||
{
|
||||
"name": "slides.pdf",
|
||||
"url": "http://server:8000/api/files/converted/abc123.pdf",
|
||||
"checksum": null,
|
||||
"size": null
|
||||
}
|
||||
],
|
||||
"slide_interval": 10000,
|
||||
"auto_advance": true,
|
||||
"page_progress": true,
|
||||
"auto_progress": true
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Fields**:
|
||||
- `type`: Always "slideshow" for presentations
|
||||
- `files`: Array of file objects with download URLs
|
||||
- `slide_interval`: Milliseconds between slides (default: 5000)
|
||||
- `auto_advance`: Whether to automatically advance slides
|
||||
- `page_progress`: Show page number indicator
|
||||
- `auto_progress`: Enable automatic progression
|
||||
|
||||
#### Website Events
|
||||
|
||||
```json
|
||||
{
|
||||
"id": 124,
|
||||
"event_type": "website",
|
||||
"title": "School Website",
|
||||
"start": "2025-10-19T09:00:00+00:00",
|
||||
"end": "2025-10-19T09:30:00+00:00",
|
||||
"group_id": 1,
|
||||
"website": {
|
||||
"type": "browser",
|
||||
"url": "https://example.com/page"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Fields**:
|
||||
- `type`: Always "browser" for website display
|
||||
- `url`: Full URL to display in embedded browser
|
||||
|
||||
#### WebUntis Events
|
||||
|
||||
```json
|
||||
{
|
||||
"id": 125,
|
||||
"event_type": "webuntis",
|
||||
"title": "Schedule Display",
|
||||
"start": "2025-10-19T09:00:00+00:00",
|
||||
"end": "2025-10-19T09:30:00+00:00",
|
||||
"group_id": 1,
|
||||
"website": {
|
||||
"type": "browser",
|
||||
"url": "https://webuntis.example.com/schedule"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Note**: WebUntis events use the same payload structure as website events. The URL is fetched from system settings (`webuntis_url`) rather than being specified per-event. Clients treat `webuntis` and `website` event types identically—both display a website.
|
||||
|
||||
#### Video Events
|
||||
|
||||
```json
|
||||
{
|
||||
"id": 126,
|
||||
"event_type": "video",
|
||||
"title": "Video Playback",
|
||||
"start": "2025-10-19T09:00:00+00:00",
|
||||
"end": "2025-10-19T09:30:00+00:00",
|
||||
"group_id": 1,
|
||||
"video": {
|
||||
"type": "media",
|
||||
"url": "http://server:8000/api/eventmedia/stream/123/video.mp4",
|
||||
"autoplay": true,
|
||||
"loop": false,
|
||||
"volume": 0.8
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Fields**:
|
||||
- `type`: Always "media" for video playback
|
||||
- `url`: Video streaming URL with range request support
|
||||
- `autoplay`: Whether to start playing automatically (default: true)
|
||||
- `loop`: Whether to loop the video (default: false)
|
||||
- `volume`: Playback volume from 0.0 to 1.0 (default: 0.8)
|
||||
|
||||
#### Message Events (Future)
|
||||
|
||||
```json
|
||||
{
|
||||
"id": 127,
|
||||
"event_type": "message",
|
||||
"title": "Important Announcement",
|
||||
"start": "2025-10-19T09:00:00+00:00",
|
||||
"end": "2025-10-19T09:30:00+00:00",
|
||||
"group_id": 1,
|
||||
"message": {
|
||||
"type": "html",
|
||||
"content": "<h1>Important</h1><p>Message content</p>",
|
||||
"style": "default"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
### 1. Type-Based Parsing
|
||||
|
||||
Clients should:
|
||||
1. Read the `event_type` field first
|
||||
2. Switch/dispatch based on type
|
||||
3. Parse type-specific nested objects (`presentation`, `website`, etc.)
|
||||
|
||||
```javascript
|
||||
// Example client parsing
|
||||
function parseEvent(event) {
|
||||
switch (event.event_type) {
|
||||
case 'presentation':
|
||||
return handlePresentation(event.presentation);
|
||||
case 'website':
|
||||
case 'webuntis':
|
||||
return handleWebsite(event.website);
|
||||
case 'video':
|
||||
return handleVideo(event.video);
|
||||
// ...
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### 2. Graceful Degradation
|
||||
|
||||
- Always provide fallback values for optional fields
|
||||
- Validate URLs before attempting to load
|
||||
- Handle missing or malformed data gracefully
|
||||
|
||||
### 3. Performance Optimization
|
||||
|
||||
- Cache downloaded presentation files
|
||||
- Use checksums to avoid re-downloading unchanged content
|
||||
- Preload resources before event start time
|
||||
|
||||
### 4. Time Handling
|
||||
|
||||
- Always parse ISO 8601 timestamps with timezone awareness
|
||||
- Compare event start/end times in UTC
|
||||
- Account for clock drift on embedded devices
|
||||
|
||||
### 5. Error Recovery
|
||||
|
||||
- Retry failed downloads with exponential backoff
|
||||
- Log errors but continue operation
|
||||
- Display fallback content if event data is invalid
|
||||
|
||||
## Message Flow
|
||||
|
||||
1. **Scheduler** queries active events from database
|
||||
2. **Scheduler** formats events with type-specific payloads
|
||||
3. **Scheduler** publishes JSON array to `infoscreen/events/{group_id}` (retained)
|
||||
4. **Client** receives retained message on connect
|
||||
5. **Client** parses events and schedules display
|
||||
6. **Client** downloads resources (presentations, etc.)
|
||||
7. **Client** displays events at scheduled times
|
||||
|
||||
## Versioning Strategy
|
||||
|
||||
### Adding New Event Types
|
||||
|
||||
1. Add enum value to `EventType` in `models/models.py`
|
||||
2. Update scheduler's `format_event_with_media()` in `scheduler/db_utils.py`
|
||||
3. Update events API in `server/routes/events.py`
|
||||
4. Add icon mapping in `get_icon_for_type()`
|
||||
5. Document payload structure in this guide
|
||||
|
||||
### Adding Fields to Existing Types
|
||||
|
||||
- **Safe**: Add new optional fields to nested objects
|
||||
- **Unsafe**: Remove or rename existing fields
|
||||
- **Migration**: Provide both old and new field names during transition
|
||||
|
||||
### Example: Adding a New Field
|
||||
|
||||
```json
|
||||
{
|
||||
"event_type": "presentation",
|
||||
"presentation": {
|
||||
"type": "slideshow",
|
||||
"files": [...],
|
||||
"slide_interval": 10000,
|
||||
"transition_effect": "fade" // NEW FIELD (optional)
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Old clients ignore unknown fields; new clients use enhanced features.
|
||||
|
||||
## Common Pitfalls
|
||||
|
||||
1. **Hardcoding Event Types**: Use `event_type` field, not assumptions
|
||||
2. **Timezone Confusion**: Always use UTC internally
|
||||
3. **Missing Error Handling**: Network failures, malformed URLs, etc.
|
||||
4. **Resource Leaks**: Clean up downloaded files periodically
|
||||
5. **Not Handling Recurrence**: Events may repeat; check `recurrence_rule`
|
||||
|
||||
## System Settings Integration
|
||||
|
||||
Some event types rely on system-wide settings rather than per-event configuration:
|
||||
|
||||
### WebUntis / Supplement Table URL
|
||||
- **Setting Key**: `supplement_table_url`
|
||||
- **API Endpoint**: `GET/POST /api/system-settings/supplement-table`
|
||||
- **Usage**: Automatically applied when creating `webuntis` events
|
||||
- **Default**: Empty string (must be configured by admin)
|
||||
- **Description**: This URL is shared for both Vertretungsplan (supplement table) and WebUntis displays
|
||||
|
||||
### Presentation Defaults
|
||||
- `presentation_interval`: Default slide interval (seconds)
|
||||
- `presentation_page_progress`: Show page indicators by default
|
||||
- `presentation_auto_progress`: Auto-advance by default
|
||||
|
||||
These are applied when creating new events but can be overridden per-event.
|
||||
|
||||
## Testing Recommendations
|
||||
|
||||
1. **Unit Tests**: Validate payload serialization/deserialization
|
||||
2. **Integration Tests**: Full scheduler → MQTT → client flow
|
||||
3. **Edge Cases**: Empty event lists, missing URLs, malformed data
|
||||
4. **Performance Tests**: Large file downloads, many events
|
||||
5. **Time Tests**: Events across midnight, timezone boundaries, DST
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- `AUTH_SYSTEM.md` - Authentication and authorization
|
||||
- `DATABASE_GUIDE.md` - Database schema and models
|
||||
- `.github/copilot-instructions.md` - System architecture overview
|
||||
- `scheduler/scheduler.py` - Event publishing implementation
|
||||
- `scheduler/db_utils.py` - Event formatting logic
|
||||
|
||||
## Changelog
|
||||
|
||||
- **2025-10-19**: Initial documentation
|
||||
- Documented base event structure
|
||||
- Added presentation and website/webuntis payload formats
|
||||
- Established best practices and versioning strategy
|
||||
194
MQTT_PAYLOAD_MIGRATION_GUIDE.md
Normal file
194
MQTT_PAYLOAD_MIGRATION_GUIDE.md
Normal file
@@ -0,0 +1,194 @@
|
||||
# MQTT Payload Migration Guide
|
||||
|
||||
## Purpose
|
||||
This guide describes a practical migration from the current dashboard screenshot payload to a grouped schema, with client-side implementation first and server-side migration second.
|
||||
|
||||
## Scope
|
||||
- Environment: development and alpha systems (no production installs)
|
||||
- Message topic: infoscreen/<client_id>/dashboard
|
||||
- Capture types to preserve: periodic, event_start, event_stop
|
||||
|
||||
## Target Schema (v2)
|
||||
The canonical message should be grouped into four logical blocks in this order:
|
||||
|
||||
1. message
|
||||
2. content
|
||||
3. runtime
|
||||
4. metadata
|
||||
|
||||
Example shape:
|
||||
|
||||
```json
|
||||
{
|
||||
"message": {
|
||||
"client_id": "<uuid>",
|
||||
"status": "alive"
|
||||
},
|
||||
"content": {
|
||||
"screenshot": {
|
||||
"filename": "latest.jpg",
|
||||
"data": "<base64>",
|
||||
"timestamp": "2026-03-30T10:15:41.123456+00:00",
|
||||
"size": 183245
|
||||
}
|
||||
},
|
||||
"runtime": {
|
||||
"system_info": {
|
||||
"hostname": "pi-display-01",
|
||||
"ip": "192.168.1.42",
|
||||
"uptime": 123456.7
|
||||
},
|
||||
"process_health": {
|
||||
"event_id": "evt-123",
|
||||
"event_type": "presentation",
|
||||
"current_process": "impressive",
|
||||
"process_pid": 4123,
|
||||
"process_status": "running",
|
||||
"restart_count": 0
|
||||
}
|
||||
},
|
||||
"metadata": {
|
||||
"schema_version": "2.0",
|
||||
"producer": "simclient",
|
||||
"published_at": "2026-03-30T10:15:42.004321+00:00",
|
||||
"capture": {
|
||||
"type": "periodic",
|
||||
"captured_at": "2026-03-30T10:15:41.123456+00:00",
|
||||
"age_s": 0.9,
|
||||
"triggered": false,
|
||||
"send_immediately": false
|
||||
},
|
||||
"transport": {
|
||||
"qos": 0,
|
||||
"publisher": "simclient"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Step-by-Step: Client-Side First
|
||||
|
||||
1. Create a migration branch.
|
||||
- Example: feature/payload-v2
|
||||
|
||||
2. Freeze a baseline sample from MQTT.
|
||||
- Capture one payload via mosquitto_sub and store it for comparison.
|
||||
|
||||
3. Implement one canonical payload builder.
|
||||
- Centralize JSON assembly in one function only.
|
||||
- Do not duplicate payload construction across code paths.
|
||||
|
||||
4. Add versioned metadata.
|
||||
- Set metadata.schema_version = "2.0".
|
||||
- Add metadata.producer = "simclient".
|
||||
- Add metadata.published_at in UTC ISO format.
|
||||
|
||||
5. Map existing data into grouped blocks.
|
||||
- client_id/status -> message
|
||||
- screenshot object -> content.screenshot
|
||||
- system_info/process_health -> runtime
|
||||
- capture mode and freshness -> metadata.capture
|
||||
|
||||
6. Preserve existing capture semantics.
|
||||
- Keep type values unchanged: periodic, event_start, event_stop.
|
||||
- Keep UTC ISO timestamps.
|
||||
- Keep screenshot encoding and size behavior unchanged.
|
||||
|
||||
7. Optional short-term compatibility mode (recommended for one sprint).
|
||||
- Either:
|
||||
- Keep current legacy fields in parallel, or
|
||||
- Add a legacy block with old field names.
|
||||
- Goal: prevent immediate server breakage while parser updates are merged.
|
||||
|
||||
8. Improve publish logs for verification.
|
||||
- Log schema_version, metadata.capture.type, metadata.capture.age_s.
|
||||
|
||||
9. Validate all three capture paths end-to-end.
|
||||
- periodic capture
|
||||
- event_start trigger capture
|
||||
- event_stop trigger capture
|
||||
|
||||
10. Lock the client contract.
|
||||
- Save one validated JSON sample per capture type.
|
||||
- Use those samples in server parser tests.
|
||||
|
||||
## Step-by-Step: Server-Side Migration
|
||||
|
||||
1. Add support for grouped v2 parsing.
|
||||
- Parse from message/content/runtime/metadata first.
|
||||
|
||||
2. Add fallback parser for legacy payload (temporary).
|
||||
- If grouped keys are absent, parse old top-level keys.
|
||||
|
||||
3. Normalize to one internal server model.
|
||||
- Convert both parser paths into one DTO/entity used by dashboard logic.
|
||||
|
||||
4. Validate required fields.
|
||||
- Required:
|
||||
- message.client_id
|
||||
- message.status
|
||||
- metadata.schema_version
|
||||
- metadata.capture.type
|
||||
- Optional:
|
||||
- runtime.process_health
|
||||
- content.screenshot (if no screenshot available)
|
||||
|
||||
5. Update dashboard consumers.
|
||||
- Read grouped fields from internal model (not raw old keys).
|
||||
|
||||
6. Add migration observability.
|
||||
- Counters:
|
||||
- v2 parse success
|
||||
- legacy fallback usage
|
||||
- parse failures
|
||||
- Warning log for unknown schema_version.
|
||||
|
||||
7. Run mixed-format integration tests.
|
||||
- New client -> new server
|
||||
- Legacy client -> new server (fallback path)
|
||||
|
||||
8. Cut over to v2 preferred.
|
||||
- Keep fallback for short soak period only.
|
||||
|
||||
9. Remove fallback and legacy assumptions.
|
||||
- After stability window, remove old parser path.
|
||||
|
||||
10. Final cleanup.
|
||||
- Keep one schema doc and test fixtures.
|
||||
- Remove temporary compatibility switches.
|
||||
|
||||
## Legacy to v2 Field Mapping
|
||||
|
||||
| Legacy field | v2 field |
|
||||
|---|---|
|
||||
| client_id | message.client_id |
|
||||
| status | message.status |
|
||||
| screenshot | content.screenshot |
|
||||
| screenshot_type | metadata.capture.type |
|
||||
| screenshot_age_s | metadata.capture.age_s |
|
||||
| timestamp | metadata.published_at |
|
||||
| system_info | runtime.system_info |
|
||||
| process_health | runtime.process_health |
|
||||
|
||||
## Acceptance Criteria
|
||||
|
||||
1. All capture types parse and display correctly.
|
||||
- periodic
|
||||
- event_start
|
||||
- event_stop
|
||||
|
||||
2. Screenshot payload integrity is unchanged.
|
||||
- filename, data, timestamp, size remain valid.
|
||||
|
||||
3. Metadata is centrally visible at message end.
|
||||
- schema_version, capture metadata, transport metadata all inside metadata.
|
||||
|
||||
4. No regression in dashboard update timing.
|
||||
- Triggered screenshots still publish quickly.
|
||||
|
||||
## Suggested Timeline (Dev Only)
|
||||
|
||||
1. Day 1: client v2 payload implementation + local tests
|
||||
2. Day 2: server v2 parser + fallback
|
||||
3. Day 3-5: soak in dev, monitor parse metrics
|
||||
4. Day 6+: remove fallback and finalize v2-only
|
||||
533
PHASE_3_CLIENT_MONITORING_IMPLEMENTATION.md
Normal file
533
PHASE_3_CLIENT_MONITORING_IMPLEMENTATION.md
Normal file
@@ -0,0 +1,533 @@
|
||||
# Phase 3: Client-Side Monitoring Implementation
|
||||
|
||||
**Status**: ✅ COMPLETE
|
||||
**Date**: 11. März 2026
|
||||
**Architecture**: Two-process design with health-state bridge
|
||||
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
This document describes the **Phase 3** client-side monitoring implementation integrated into the existing infoscreen-dev codebase. The implementation adds:
|
||||
|
||||
1. ✅ **Health-state tracking** for all display processes (Impressive, Chromium, VLC)
|
||||
2. ✅ **Tiered logging**: Local rotating logs + selective MQTT transmission
|
||||
3. ✅ **Process crash detection** with bounded restart attempts
|
||||
4. ✅ **MQTT health/log topics** feeding the monitoring server
|
||||
5. ✅ **Impressive-aware process mapping** (presentations → impressive, websites → chromium, videos → vlc)
|
||||
|
||||
---
|
||||
|
||||
## Architecture

### Two-Process Design

```
┌─────────────────────────────────────────────────────────┐
│ simclient.py (MQTT Client)                              │
│ - Discovers device, sends heartbeat                     │
│ - Downloads presentation files                          │
│ - Reads health state from display_manager               │
│ - Publishes health/log messages to MQTT                 │
│ - Sends screenshots for dashboard                       │
└────────┬────────────────────────────────────┬───────────┘
         │                                    │
         │ reads: current_process_health.json │
         │                                    │
         │ writes: current_event.json         │
         │                                    │
┌────────▼────────────────────────────────────▼───────────┐
│ display_manager.py (Display Control)                    │
│ - Monitors events and manages displays                  │
│ - Launches Impressive (presentations)                   │
│ - Launches Chromium (websites)                          │
│ - Launches VLC (videos)                                 │
│ - Tracks process health and crashes                     │
│ - Detects and restarts crashed processes                │
│ - Writes health state to JSON bridge                    │
│ - Captures screenshots to shared folder                 │
└─────────────────────────────────────────────────────────┘
```

---

## Implementation Details

### 1. Health State Tracking (display_manager.py)

**File**: `src/display_manager.py`
**New Class**: `ProcessHealthState`

Tracks process health and persists it to JSON for simclient to read:

```python
class ProcessHealthState:
    """Track and persist process health state for monitoring integration"""
```

Tracked fields:

- event_id: Currently active event identifier
- event_type: presentation, website, video, or None
- process_name: impressive, chromium-browser, vlc, or None
- process_pid: Process ID or None for libvlc
- status: running, crashed, starting, stopped
- restart_count: Number of restart attempts
- max_restarts: Maximum allowed restarts (3)

Methods:

- `update_running()` - Mark process as started (logs to monitoring.log)
- `update_crashed()` - Mark process as crashed (warning to monitoring.log)
- `update_restart_attempt()` - Increment restart counter (logs attempt and checks max)
- `update_stopped()` - Mark process as stopped (info to monitoring.log)
- `save()` - Persist state to `src/current_process_health.json`

**New Health State File**: `src/current_process_health.json`

```json
{
  "event_id": "event_123",
  "event_type": "presentation",
  "current_process": "impressive",
  "process_pid": 1234,
  "process_status": "running",
  "restart_count": 0,
  "timestamp": "2026-03-11T10:30:45.123456+00:00"
}
```
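
The class and bridge file described above can be sketched roughly as follows. This is a minimal sketch, not the repository's actual implementation: method signatures and the logging calls follow the descriptions in this document, and the file path is the documented default.

```python
import json
import logging
from datetime import datetime, timezone

monitoring_logger = logging.getLogger("monitoring")


class ProcessHealthState:
    """Track and persist process health state for monitoring integration (sketch)."""

    def __init__(self, path="src/current_process_health.json", max_restarts=3):
        self.path = path
        self.max_restarts = max_restarts
        self.state = {
            "event_id": None,
            "event_type": None,
            "current_process": None,
            "process_pid": None,
            "process_status": "stopped",
            "restart_count": 0,
        }

    def update_running(self, event_id, event_type, process_name, pid):
        # A fresh successful start resets the restart budget.
        self.state.update(event_id=event_id, event_type=event_type,
                          current_process=process_name, process_pid=pid,
                          process_status="running", restart_count=0)
        monitoring_logger.info(
            "Process started: event_id=%s event_type=%s process=%s pid=%s",
            event_id, event_type, process_name, pid)
        self.save()

    def update_crashed(self):
        self.state["process_status"] = "crashed"
        monitoring_logger.warning("Process crashed: %s", self.state["current_process"])
        self.save()

    def update_restart_attempt(self):
        """Increment the restart counter; return False once the budget is exhausted."""
        self.state["restart_count"] += 1
        allowed = self.state["restart_count"] <= self.max_restarts
        monitoring_logger.warning("Restarting process: attempt %d/%d for %s",
                                  self.state["restart_count"], self.max_restarts,
                                  self.state["current_process"])
        self.save()
        return allowed

    def update_stopped(self):
        self.state.update(process_status="stopped", process_pid=None)
        self.save()

    def save(self):
        # Persist the state plus a UTC timestamp to the JSON bridge file.
        payload = dict(self.state, timestamp=datetime.now(timezone.utc).isoformat())
        with open(self.path, "w") as fh:
            json.dump(payload, fh)
```

simclient only ever reads this file, so a plain overwrite-on-save is sufficient for the bridge.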

### 2. Monitoring Logger (both files)

**Local Rotating Logs**: 5 files × 5 MB each = 25 MB max per device

**display_manager.py**:

```python
import logging
from logging.handlers import RotatingFileHandler

MONITORING_LOG_PATH = "logs/monitoring.log"
monitoring_logger = logging.getLogger("monitoring")
monitoring_handler = RotatingFileHandler(MONITORING_LOG_PATH, maxBytes=5*1024*1024, backupCount=5)
monitoring_logger.addHandler(monitoring_handler)
```

**simclient.py**:
- Shares the same `logs/monitoring.log` file
- Both processes write to the monitoring logger for health events
- Local logs are never transmitted in full; they are rotated on-device and kept for technician inspection

**Log Filtering** (tiered strategy):
- **ERROR**: Local + MQTT (published to `infoscreen/{uuid}/logs/error`)
- **WARN**: Local + MQTT (published to `infoscreen/{uuid}/logs/warn`)
- **INFO**: Local only (unless `DEBUG_MODE=1`)
- **DEBUG**: Local only (always)
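
The tiered routing decision can be captured in one small function. A sketch, assuming `DEBUG_MODE` is read from the environment as documented; the topic strings come from the list above:

```python
import os


def mqtt_topic_for(level, uuid, debug_mode=None):
    """Return the MQTT topic a log record should be published to, or None
    if the record stays in the local rotating log only (tiered strategy)."""
    if debug_mode is None:
        debug_mode = os.environ.get("DEBUG_MODE", "0") == "1"
    if level == "ERROR":
        return f"infoscreen/{uuid}/logs/error"
    if level == "WARN":
        return f"infoscreen/{uuid}/logs/warn"
    if level == "INFO" and debug_mode:
        return f"infoscreen/{uuid}/logs/info"
    return None  # INFO in production and all DEBUG records are local-only
```

Keeping the decision in one place makes it easy to test the filtering matrix without a broker.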

### 3. Process Mapping with Impressive Support

**display_manager.py** - When starting processes:

| Event Type | Process Name | Health Status |
|-----------|--------------|---------------|
| presentation | `impressive` | tracked with PID |
| website/webpage/webuntis | `chromium` or `chromium-browser` | tracked with PID |
| video | `vlc` | tracked (may have no PID if using libvlc) |

**Per-Process Updates**:
- Presentation: `health.update_running('event_id', 'presentation', 'impressive', pid)`
- Website: `health.update_running('event_id', 'website', browser_name, pid)`
- Video: `health.update_running('event_id', 'video', 'vlc', pid or None)`

### 4. Crash Detection and Restart Logic

**display_manager.py** - `process_events()` method:

```
If process not running AND same event_id:
├─ Check exit code
├─ If presentation with exit code 0: Normal completion (no restart)
├─ Else: Mark crashed
│   ├─ health.update_crashed()
│   └─ health.update_restart_attempt()
│       ├─ If restart_count > max_restarts: Give up
│       └─ Else: Restart display (loop back to start_display_for_event)
└─ Log to monitoring.log at each step
```

**Restart Logic**:
- Max 3 restart attempts per event
- Restarts only happen if the same event is still active
- Graceful exit (code 0) for Impressive auto-quit presentations is treated as normal
- All crashes are logged to monitoring.log with context
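
The decision tree above reduces to a pure classification function, which keeps the restart policy testable independently of process management. A sketch following the documented rules (presentation exit code 0 is the only "normal" exit; at most 3 restart attempts):

```python
def decide_on_exit(event_type, exit_code, restart_count, max_restarts=3):
    """Classify a process exit per the decision tree above.
    Returns one of: 'completed', 'restart', 'give_up'."""
    # Impressive quitting cleanly after an auto-quit presentation is normal completion.
    if event_type == "presentation" and exit_code == 0:
        return "completed"
    # Anything else counts as a crash; restart only while attempts remain.
    if restart_count + 1 > max_restarts:
        return "give_up"
    return "restart"
```

The caller would invoke `health.update_crashed()` / `update_restart_attempt()` around this decision, as described above.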

### 5. MQTT Health and Log Topics

**simclient.py** - New functions:

**`read_health_state()`**
- Reads `src/current_process_health.json` written by display_manager
- Returns dict or None if no active process

**`publish_health_message(client, client_id)`**
- Topic: `infoscreen/{uuid}/health`
- QoS: 1 (reliable)
- Payload:

```json
{
  "timestamp": "2026-03-11T10:30:45.123456+00:00",
  "expected_state": {
    "event_id": "event_123"
  },
  "actual_state": {
    "process": "impressive",
    "pid": 1234,
    "status": "running"
  }
}
```
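
These two functions might look roughly like the following sketch. The payload shape and topic are taken from this document; the exact publish call is an assumption modeled on paho-mqtt's `client.publish(topic, payload, qos=...)`:

```python
import json
from datetime import datetime, timezone

HEALTH_FILE = "src/current_process_health.json"


def read_health_state(path=HEALTH_FILE):
    """Return the health dict written by display_manager, or None if absent/invalid."""
    try:
        with open(path) as fh:
            return json.load(fh)
    except (OSError, ValueError):
        return None


def build_health_payload(state):
    """Shape the bridge-file contents into the documented MQTT health payload."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "expected_state": {"event_id": state.get("event_id")},
        "actual_state": {
            "process": state.get("current_process"),
            "pid": state.get("process_pid"),
            "status": state.get("process_status"),
        },
    }


def publish_health_message(client, client_id):
    """Publish to infoscreen/{uuid}/health with QoS 1, if a state exists."""
    state = read_health_state()
    if state is not None:
        client.publish(f"infoscreen/{client_id}/health",
                       json.dumps(build_health_payload(state)), qos=1)
```

Separating payload construction from publishing keeps the shape verifiable without a broker.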

**`publish_log_message(client, client_id, level, message, context)`**
- Topics: `infoscreen/{uuid}/logs/error` or `infoscreen/{uuid}/logs/warn`
- QoS: 1 (reliable)
- Log level filtering (only ERROR/WARN sent unless DEBUG_MODE=1)
- Payload:

```json
{
  "timestamp": "2026-03-11T10:30:45.123456+00:00",
  "message": "Process started: event_id=123 event_type=presentation process=impressive pid=1234",
  "context": {
    "event_id": "event_123",
    "process": "impressive",
    "event_type": "presentation"
  }
}
```

**Enhanced Dashboard Heartbeat**:
- Topic: `infoscreen/{uuid}/dashboard`
- Now includes a `process_health` block with event_id, process name, status, and restart count

### 6. Integration Points

**Existing Features Preserved**:
- ✅ Impressive PDF presentations with auto-advance and loop
- ✅ Chromium website display with auto-scroll injection
- ✅ VLC video playback (python-vlc preferred, binary fallback)
- ✅ Screenshot capture and transmission
- ✅ HDMI-CEC TV control
- ✅ Two-process architecture

**New Integration Points**:

| File | Function | Change |
|------|----------|--------|
| display_manager.py | `__init__()` | Initialize `ProcessHealthState()` |
| display_manager.py | `start_presentation()` | Call `health.update_running()` with impressive |
| display_manager.py | `start_video()` | Call `health.update_running()` with vlc |
| display_manager.py | `start_webpage()` | Call `health.update_running()` with chromium |
| display_manager.py | `process_events()` | Detect crashes, call `health.update_crashed()` and `update_restart_attempt()` |
| display_manager.py | `stop_current_display()` | Call `health.update_stopped()` |
| simclient.py | `screenshot_service_thread()` | (No changes to interval) |
| simclient.py | Main heartbeat loop | Call `publish_health_message()` after successful heartbeat |
| simclient.py | `send_screenshot_heartbeat()` | Read health state and include in dashboard payload |

---

## Logging Hierarchy

### Local Rotating Files

**`logs/display_manager.log`** (existing - updated):
- Display event processing
- Process lifecycle (start/stop)
- HDMI-CEC operations
- Presentation status
- Video/website startup

**`logs/simclient.log`** (existing - updated):
- MQTT connection/reconnection
- Discovery and heartbeat
- File downloads
- Group membership changes
- Dashboard payload info

**`logs/monitoring.log`** (NEW):
- Process health events (start, crash, restart, stop)
- Both display_manager and simclient write here
- Centralized health tracking
- Technician-focused: "What happened to the processes?"

```
# Example monitoring.log entries:
2026-03-11 10:30:45 [INFO] Process started: event_id=event_123 event_type=presentation process=impressive pid=1234
2026-03-11 10:35:20 [WARNING] Process crashed: event_id=event_123 event_type=presentation process=impressive restart_count=0/3
2026-03-11 10:35:20 [WARNING] Restarting process: attempt 1/3 for impressive
2026-03-11 10:35:25 [INFO] Process started: event_id=event_123 event_type=presentation process=impressive pid=1245
```

### MQTT Transmission (Selective)

**Always sent** (when an error occurs):
- `infoscreen/{uuid}/logs/error` - Critical failures
- `infoscreen/{uuid}/logs/warn` - Restarts, crashes, missing binaries

**Development mode only** (if DEBUG_MODE=1):
- `infoscreen/{uuid}/logs/info` - Event start/stop, process running status

**Never sent**:
- DEBUG messages (local-only debug details)
- INFO messages in production

---

## Environment Variables

No new required variables. The existing configuration supports monitoring:

```bash
# Existing (unchanged):
ENV=development|production
DEBUG_MODE=0|1                       # Enables INFO logs to MQTT
LOG_LEVEL=DEBUG|INFO|WARNING|ERROR   # Local log verbosity
HEARTBEAT_INTERVAL=5|60              # seconds
SCREENSHOT_INTERVAL=30|300           # seconds (display_manager screenshot capture)

# Recommended for monitoring:
SCREENSHOT_CAPTURE_INTERVAL=30       # How often display_manager captures screenshots
SCREENSHOT_MAX_WIDTH=800             # Downscale for bandwidth
SCREENSHOT_JPEG_QUALITY=70           # Balance quality/size

# File server (if different from MQTT broker):
FILE_SERVER_HOST=192.168.1.100
FILE_SERVER_PORT=8000
FILE_SERVER_SCHEME=http
```

---

## Testing Validation

### System-Level Test Sequence

**1. Start Services**:
```bash
# Terminal 1: Display Manager
./scripts/start-display-manager.sh

# Terminal 2: MQTT Client
./scripts/start-dev.sh

# Terminal 3: Monitor logs
tail -f logs/monitoring.log
```

**2. Trigger Each Event Type**:
```bash
# Via test menu or MQTT publish:
./scripts/test-display-manager.sh   # Options 1-3 trigger events
```

**3. Verify Health State File**:
```bash
# Check that the health state gets written immediately
cat src/current_process_health.json
# Should show: event_id, event_type, current_process (impressive/chromium/vlc), process_status=running
```

**4. Check MQTT Topics**:
```bash
# Monitor health messages:
mosquitto_sub -h localhost -t "infoscreen/+/health" -v

# Monitor log messages:
mosquitto_sub -h localhost -t "infoscreen/+/logs/#" -v

# Monitor dashboard heartbeat:
mosquitto_sub -h localhost -t "infoscreen/+/dashboard" -v | head -c 500 && echo "..."
```

**5. Simulate Process Crash**:
```bash
# Find the impressive/chromium/vlc PID:
ps aux | grep -E 'impressive|chromium|vlc'

# Kill the process:
kill -9 <pid>

# Watch monitoring.log for crash detection and restart
tail -f logs/monitoring.log
# Should see: [WARNING] Process crashed... [WARNING] Restarting process...
```

**6. Verify Server Integration**:
```bash
# Server receives health messages:
sqlite3 infoscreen.db "SELECT process_status, current_process, restart_count FROM clients WHERE uuid='...';"
# Should show the latest status from the health message

# Server receives logs:
sqlite3 infoscreen.db "SELECT level, message FROM client_logs WHERE client_uuid='...' ORDER BY timestamp DESC LIMIT 10;"
# Should show ERROR/WARN entries from crashes/restarts
```

---

## Troubleshooting

### Health State File Not Created

**Symptom**: `src/current_process_health.json` missing
**Causes**:
- No event active (the file is only created when a display starts)
- display_manager not running

**Check**:
```bash
ps aux | grep display_manager
tail -f logs/display_manager.log | grep "Process started\|Process stopped"
```

### MQTT Health Messages Not Arriving

**Symptom**: No health messages on the `infoscreen/{uuid}/health` topic
**Causes**:
- simclient not reading the health state file
- MQTT connection dropped
- Health update function not called

**Check**:
```bash
# Check that the health file exists and is recent:
ls -l src/current_process_health.json
stat src/current_process_health.json | grep Modify

# Monitor simclient logs:
tail -f logs/simclient.log | grep -E "Health|heartbeat|publish"

# Verify MQTT connection:
mosquitto_sub -h localhost -t "infoscreen/+/heartbeat" -v
```

### Restart Loop (Process Keeps Crashing)

**Symptom**: monitoring.log shows repeated crashes and restarts
**Check**:
```bash
# Read the last log lines of the process (stored by display_manager):
tail -f logs/impressive.out.log     # for presentations
tail -f logs/browser.out.log        # for websites
tail -f logs/video_player.out.log   # for videos
```

**Common Causes**:
- Missing binary (impressive not installed, chromium not found, vlc not available)
- Corrupt presentation file
- Invalid URL for a website
- Insufficient permissions for screenshots

### Log Messages Not Reaching Server

**Symptom**: The client_logs table in the server DB is empty
**Causes**:
- Log level filtering: INFO messages in production are local-only
- Logs are only published on ERROR/WARN
- MQTT publish failing silently

**Check**:
```bash
# Force DEBUG_MODE to see all logs:
export DEBUG_MODE=1
export LOG_LEVEL=DEBUG
# Restart simclient and trigger an event

# Monitor local logs first:
tail -f logs/monitoring.log | grep -i error
```

---

## Performance Considerations

**Bandwidth per Client**:
- Health message: ~200 bytes per heartbeat interval (every 5-60 s)
- Screenshot heartbeat: ~50-100 KB (every 30-300 s)
- Log messages: ~100-500 bytes per crash/error (rare)
- **Total**: ~0.5-2 MB/day per device (negligible)

**Disk Space on Client**:
- Monitoring logs: 5 files × 5 MB = 25 MB max
- Display manager logs: 5 files × 2 MB = 10 MB max
- MQTT client logs: 5 files × 2 MB = 10 MB max
- Screenshots: 20 files × 50-100 KB = 1-2 MB max
- **Total**: ~50 MB max (comfortably within typical Raspberry Pi SD/USB storage)

**Rotation Strategy**:
- Old files are automatically deleted when the size limit is reached
- A technician can SSH in and `tail -f` at any time
- No database overhead (file-based rotation costs minimal CPU)

---

## Integration with Server (Phase 2)

The client implementation sends data to the server's Phase 2 endpoints:

**Expected Server Implementation** (from CLIENT_MONITORING_SETUP.md):

1. **MQTT Listener** receives and stores:
   - `infoscreen/{uuid}/logs/error`, `/logs/warn`, `/logs/info`
   - `infoscreen/{uuid}/health` messages
   - Updates the `clients` table with health fields

2. **Database Tables**:
   - `clients.process_status`: running/crashed/starting/stopped
   - `clients.current_process`: impressive/chromium/vlc/None
   - `clients.process_pid`: PID value
   - `clients.current_event_id`: Active event
   - `client_logs`: stores logs with level/message/context

3. **API Endpoints**:
   - `GET /api/client-logs/{uuid}/logs?level=ERROR&limit=50`
   - `GET /api/client-logs/summary` (errors/warnings across all clients)
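
A dashboard script might build requests against these endpoints as sketched below. The paths and query parameters come from the list above; the base URL (API server on port 8000) and the default limit of 50 are assumptions for illustration:

```python
from urllib.parse import urlencode

API_BASE = "http://localhost:8000"  # assumption: local API server on its documented port


def client_logs_url(uuid, level=None, limit=50):
    """Build the URL for GET /api/client-logs/{uuid}/logs with optional filters."""
    query = {"limit": limit}
    if level:
        query["level"] = level
    return f"{API_BASE}/api/client-logs/{uuid}/logs?{urlencode(query)}"


def summary_url():
    """Build the URL for the cross-client errors/warnings summary."""
    return f"{API_BASE}/api/client-logs/summary"

# Usage, e.g.:
#   import urllib.request
#   urllib.request.urlopen(client_logs_url("abc-123", level="ERROR"))
```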

---

## Summary of Changes

### Files Modified

1. **`src/display_manager.py`**:
   - Added `psutil` import for future process monitoring
   - Added `ProcessHealthState` class (60 lines)
   - Added monitoring logger setup (8 lines)
   - Added `health.update_running()` calls in `start_presentation()`, `start_video()`, `start_webpage()`
   - Added crash detection and restart logic in `process_events()`
   - Added `health.update_stopped()` in `stop_current_display()`

2. **`src/simclient.py`**:
   - Added `timezone` import
   - Added monitoring logger setup (8 lines)
   - Added `read_health_state()` function
   - Added `publish_health_message()` function
   - Added `publish_log_message()` function (with level filtering)
   - Updated `send_screenshot_heartbeat()` to include health data
   - Updated the heartbeat loop to call `publish_health_message()`

### Files Created

1. **`src/current_process_health.json`** (at runtime):
   - Bridge file between display_manager and simclient
   - Shared-volume compatible (works in a container setup)

2. **`logs/monitoring.log`** (at runtime):
   - New rotating log file (5 × 5 MB)
   - Health events from both processes

---

## Next Steps

1. **Deploy to a test client** and run the validation sequence above
2. **Deploy server Phase 2** (if not yet done) to receive health/log messages
3. **Verify database updates** in the server-side `clients` and `client_logs` tables
4. **Test the dashboard UI** (Phase 4) to display health indicators
5. **Configure alerting** (email/Slack) for ERROR-level messages

---

**Implementation Date**: 11 March 2026
**Part of**: Infoscreen 2025 Client Monitoring System
**Status**: Production Ready (with server Phase 2 integration)

README.md (267 lines changed)
@@ -11,35 +11,47 @@ A comprehensive multi-service digital signage solution for educational instituti

## 🏗️ Architecture Overview

```
┌─────────────────┐      ┌─────────────────┐      ┌─────────────────┐
│    Dashboard    │      │   API Server    │      │    Listener     │
│  (React/Vite)   │◄──►│     (Flask)     │◄──►│  (MQTT Client)  │
└─────────────────┘      └─────────────────┘      └─────────────────┘
        │                        │                        │
        │                        ▼                        │
        │               ┌─────────────────┐               │
        │               │     MariaDB     │               │
        │               │   (Database)    │               │
        │               └─────────────────┘               │
        │                                                 │
        └───────────────────────┬─────────────────────────┘
                                ▼
                       ┌─────────────────┐
                       │   MQTT Broker   │
                       │   (Mosquitto)   │
                       └─────────────────┘
                                │
           ┌────────────────────┼────────────────────┐
           ▼                    ▼                    ▼
  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐
  │    Scheduler    │  │     Worker      │  │   Infoscreen    │
  │    (Events)     │  │  (Conversions)  │  │     Clients     │
  └─────────────────┘  └─────────────────┘  └─────────────────┘
┌───────────────┐        ┌──────────────────────────┐        ┌───────────────┐
│   Dashboard   │◄──────►│        API Server        │◄──────►│    Worker     │
│ (React/Vite)  │        │         (Flask)          │        │ (Conversions) │
└───────────────┘        └──────────────────────────┘        └───────────────┘
                              ▲                ▲
                              │                │
┌───────────────┐             │                │
│    MariaDB    │             │                │
│  (Database)   │             │                │
└───────────────┘             │                │ direct commands/results

   Reads events ▲         Interacts with API ▲
                │                    ┌───────┘
┌───────────────┐                    │          ┌───────────────┐
│   Scheduler   │────────────────────┘     ┌────│   Listener    │
│   (Events)    │                          │    │ (MQTT Client) │
└───────────────┘                          │    └───────────────┘
   │ Publishes events                      ▲ Consumes discovery/heartbeats
   ▼                                       │ and forwards to API
┌─────────────────┐◄───────────────────────┘
│   MQTT Broker   │────────────────────────────────────────► Clients
│   (Mosquitto)   │   Sends events to clients (send discovery/heartbeats)
└─────────────────┘
```


Data flow summary:
- Listener: consumes discovery and heartbeat messages from the MQTT Broker and updates the API Server (client registration/heartbeats).
- Listener screenshot flow: consumes `infoscreen/{uuid}/screenshot` and `infoscreen/{uuid}/dashboard`. Dashboard messages use the grouped v2 schema (`message`, `content`, `runtime`, `metadata`); screenshot data is read from `content.screenshot`, the capture type from `metadata.capture.type`, and forwarded to `POST /api/clients/{uuid}/screenshot`.
- Scheduler: reads events from the API Server and publishes only currently active content to the MQTT Broker (retained topics per group). When a group has no active events, the scheduler clears its retained topic by publishing an empty list. All time comparisons are done in UTC; any naive timestamps are normalized.
- Clients: send discovery/heartbeat via the MQTT Broker (handled by the Listener) and receive content from the Scheduler via MQTT.
- Worker: receives conversion commands directly from the API Server and reports results/status back to the API (no MQTT involved).
- MariaDB: accessed exclusively by the API Server. The Dashboard never talks to the database directly; it only communicates with the API.

## 🌟 Key Features

### 📊 **Dashboard Management**
- **User Management**: Comprehensive role-based access control (user → editor → admin → superadmin)
  - Admin panel for user CRUD operations with audit tracking
  - Self-service password change available to all users
  - Audit trail: login times, password changes, deactivation history
  - Soft-delete by default, hard-delete superadmin-only
- Modern React-based web interface with Syncfusion components
- Real-time client monitoring and group management
- Event scheduling with academic period support

@@ -47,13 +59,20 @@ A comprehensive multi-service digital signage solution for educational instituti
- Holiday calendar integration
- Visual indicators: a TentTree icon next to the main event icon marks events that skip holidays (icon color: black)

- **Event Deletion**: All event types (single, single-in-series, entire series) are handled with custom dialogs. The frontend intercepts Syncfusion's built-in RecurrenceAlert and DeleteAlert popups to provide a unified, user-friendly deletion flow:
  - Single (non-recurring) event: deleted directly after confirmation.
  - Single occurrence of a recurring series: the user can delete just that instance.
  - Entire recurring series: the user can delete all occurrences after a final custom confirmation dialog.
  - Detached occurrences (edited/broken out): treated as single events.

### 🎯 **Event System**
- **Presentations**: PowerPoint/LibreOffice → PDF conversion via Gotenberg
- **Websites**: URL-based content display
- **Videos**: Media file streaming
- **Videos**: Media file streaming with per-event playback settings (`autoplay`, `loop`, `volume`, `muted`); system-wide defaults configurable under Settings → Events → Videos
- **Messages**: Text announcements
- **WebUntis**: Educational schedule integration
- **Recurrence & Holidays**: Recurring events can be configured to skip holidays. The backend generates EXDATEs (RecurrenceException) for holiday occurrences so the calendar never shows those instances. The "Termine an Ferientagen erlauben" (allow events on holidays) toggle does not affect these events.
- Uses the system-wide Vertretungsplan/Supplement-Table URL (`supplement_table_url`) configured under Settings → Events. No separate per-event URL is required; WebUntis events display the same as Website events.
- **Recurrence & Holidays**: Recurring events can be configured to skip holidays. The backend generates EXDATEs (RecurrenceException) for holiday occurrences using RFC 5545 timestamps (yyyyMMddTHHmmssZ), so the calendar never shows those instances. The scheduler queries a 7-day window to expand recurring events and applies event exceptions, but only publishes events that are active at the current time (UTC). The "Termine an Ferientagen erlauben" toggle does not affect these events.
- **Single Occurrence Editing**: Users can edit individual occurrences of recurring events without affecting the master series. The system provides a confirmation dialog to choose between editing a single occurrence or the entire series.

### 🏫 **Academic Period Management**

@@ -101,6 +120,17 @@ A comprehensive multi-service digital signage solution for educational instituti
# or: docker compose up -d --build
```

Before running the dashboard dev server you may need to install the Syncfusion packages used by the UI. Example (install only the packages you use):

```bash
# from the repository root
cd dashboard
npm install --save @syncfusion/ej2-react-splitbuttons @syncfusion/ej2-splitbuttons \
  @syncfusion/ej2-react-grids @syncfusion/ej2-react-schedule @syncfusion/ej2-react-filemanager
```

License note: Syncfusion distributes components under a commercial license with a free community license for qualifying users. Verify licensing for your organization before using Syncfusion in production and document any license keys or compliance steps in this repository.

4. **Initialize the database (first run only)**
```bash
# One-shot: runs all Alembic migrations, creates default admin/group, and seeds academic periods
```

@@ -139,25 +169,33 @@ For detailed deployment instructions, see:
- **Styling**: Centralized Syncfusion Material 3 CSS imports in `dashboard/src/main.tsx`
- **Features**: Responsive design, real-time updates, file management
- **Port**: 5173 (dev), served via Nginx (prod)
- **Data access**: No direct database connection; communicates with the API Server only via HTTP.
- **Dev proxy tip**: In development, use relative paths like `/api/...` in the frontend to route through Vite's proxy to the API. Avoid absolute URLs with an extra `/api` segment to prevent CORS or double-path issues.

### 🔧 **API Server** (`server/`)
- **Technology**: Flask + SQLAlchemy + Alembic
- **Database**: MariaDB with timezone-aware timestamps
- **Features**: RESTful API, file uploads, MQTT integration
  - Recurrence/holidays: returns only master events with `RecurrenceRule` and `RecurrenceException` (EXDATEs) so clients render recurrences and skip holiday instances reliably.
  - Recurring events are only deactivated after their recurrence_end (UNTIL); non-recurring events deactivate after their end time. Event exceptions are respected and rendered in scheduler output.
  - Single occurrence detach: `POST /api/events/<id>/occurrences/<date>/detach` creates standalone events from a recurring series without modifying the master event.
- **Port**: 8000
- **Health Check**: `/health`

### 👂 **Listener** (`listener/`)
- **Technology**: Python + paho-mqtt
- **Purpose**: MQTT message processing, client discovery
- **Features**: Heartbeat monitoring, automatic client registration

### ⏰ **Scheduler** (`scheduler/`)
- **Technology**: Python + SQLAlchemy
- **Purpose**: Event publishing, group-based content distribution
- **Features**: Time-based event activation, MQTT publishing

**Technology**: Python + SQLAlchemy
**Purpose**: Event publishing, group-based content distribution
**Features**:
- Queries a future window (default: 7 days) to expand recurring events
- Expands recurrences using RFC 5545 rules
- Applies event exceptions (skipped dates, detached occurrences)
- Only deactivates recurring events after their recurrence_end (UNTIL)
- Publishes only currently active events to MQTT (per group)
- Clears retained topics by publishing an empty list when a group has no active events
- Normalizes naive timestamps and compares times in UTC
- Logging is concise; conversion lookups are cached and logged only once per media

### 🔄 **Worker** (Conversion Service)
- **Technology**: RQ (Redis Queue) + Gotenberg

@@ -175,7 +213,40 @@ For detailed deployment instructions, see:
- `infoscreen/discovery` - Client registration
- `infoscreen/{uuid}/heartbeat` - Client alive status
- `infoscreen/events/{group_id}` - Event distribution
- `infoscreen/{uuid}/group_id` - Client group assignment

## 🔗 Scheduler Event Payloads

- Presentations include a `presentation` object with `files`, `slide_interval`, `page_progress`, and `auto_progress`.
- Website and WebUntis events share a unified payload:
  - `website`: `{ "type": "browser", "url": "https://..." }`
  - The `event_type` field remains specific (e.g., `presentation`, `website`, `webuntis`) so clients can dispatch appropriately; however, `website` and `webuntis` should be handled identically in clients.
- Videos include a `video` payload with a stream URL and playback flags:
  - `video`: includes `url` (streaming endpoint) and `autoplay`, `loop`, `volume`, `muted`
  - The streaming endpoint supports byte-range requests (206) to enable seeking: `/api/eventmedia/stream/<media_id>/<filename>`
- Server-side UTC: all backend comparisons are performed in UTC; the API returns ISO strings without `Z`. The frontend appends `Z` before parsing.
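
A client-side dispatcher following these payload rules might look like the sketch below. It is illustrative only: the payload keys are taken from the bullets above, while the function names and the return shape are assumptions. Note that `website` and `webuntis` take the same branch, and that the timestamp helper appends the missing `Z` (as a `+00:00` offset) before parsing, as described above:

```python
from datetime import datetime


def dispatch(event):
    """Map a scheduler event payload to (process, target) for the display layer.
    website and webuntis share the unified browser payload."""
    etype = event.get("event_type")
    if etype == "presentation":
        return ("impressive", event["presentation"]["files"])
    if etype in ("website", "webuntis"):
        return ("chromium", event["website"]["url"])
    if etype == "video":
        return ("vlc", event["video"]["url"])
    return (None, None)


def parse_api_timestamp(iso_no_z):
    """The API returns ISO strings without 'Z'; attach UTC before parsing."""
    return datetime.fromisoformat(iso_no_z + "+00:00")
```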
|
||||
|
||||
## Recent changes since last commit

- Monitoring system: End-to-end monitoring is now implemented. The listener ingests `logs/*` and `health` MQTT topics, the API exposes monitoring endpoints (`/api/client-logs/monitoring-overview`, `/api/client-logs/recent-errors`, `/api/client-logs/<uuid>/logs`), and the superadmin dashboard page shows live client status, screenshots, and recent errors.
- Screenshot priority flow: Screenshot payloads now support `screenshot_type` (`periodic`, `event_start`, `event_stop`). `event_start` and `event_stop` are treated as high-priority screenshots; the API stores typed screenshots, maintains priority metadata, and serves active priority screenshots through `/screenshots/{uuid}/priority`.
- MQTT dashboard payload v2 cutover: Listener parsing is now v2-only for dashboard JSON payloads (`message/content/runtime/metadata`). Legacy top-level dashboard fallback has been removed after migration completion; parser observability tracks `v2_success` and `parse_failures`.
- Presentation persistence fix: Fixed persistence of presentation flags so `page_progress` and `auto_progress` are reliably stored and returned for create/update flows and detached occurrences.
- Additional improvements: Video/streaming, scheduler metadata, settings defaults, and UI refinements remain documented in the detailed sections below.

These changes are designed to be safe if metadata extraction or probes fail — clients should still attempt playback using the provided `url` and fall back to requesting/resolving richer metadata when available.

See `MQTT_EVENT_PAYLOAD_GUIDE.md` for details.

## 🧩 Developer Environment Notes (Dev Container)

- Extensions: UI-only `Dev Containers` runs on the host UI; not installed inside the container to avoid reinstallation loops. See `/.devcontainer/devcontainer.json` (`remote.extensionKind`).
- Installs: Dashboard uses `npm ci` on `postCreateCommand` for reproducible installs.
- Aliases: `postStartCommand` appends shell aliases idempotently to prevent duplicates across restarts.

## 📦 Versioning

- Unified app version: Use a single SemVer for the product (e.g., `2025.1.0-beta.3`) — simplest for users and release management.
- Pre-releases: Use identifiers like `-alpha.N`, `-beta.N`, `-rc.N` for stage tracking.
- Build metadata: Optionally include component build info (non-ordering), e.g., `+api.abcd123,dash.efgh456,sch.jkl789,wkr.mno012`.
- Component traceability: Document component SHAs or image tags under each TECH-CHANGELOG release entry rather than exposing separate user-facing versions.
- Hotfixes: For backend-only fixes, prefer a patch bump or pre-release increment, and record component metadata under the unified version.

## 📁 Project Structure

@@ -258,6 +329,8 @@ mosquitto_sub -h localhost -t "infoscreen/+/heartbeat" -v
- `GET /api/clients` - List all registered clients
- `PUT /api/clients/{uuid}/group` - Assign client to group
- `GET /api/groups` - List client groups with alive status
- `GET /api/groups/order` - Get saved group display order
- `POST /api/groups/order` - Save group display order (array of group IDs)
- `GET /api/events` - List events with filtering
- `POST /api/events` - Create new event
- `POST /api/events/{id}/occurrences/{date}/detach` - Detach single occurrence from recurring series
@@ -270,18 +343,61 @@ mosquitto_sub -h localhost -t "infoscreen/+/heartbeat" -v
- `GET /api/files/converted/{path}` - Download converted PDFs
- `POST /api/conversions/{media_id}/pdf` - Request conversion
- `GET /api/conversions/{media_id}/status` - Check conversion status
- `GET /api/eventmedia/stream/<media_id>/<filename>` - Stream media with byte-range support (206) for seeking
- `POST /api/clients/{uuid}/screenshot` - Upload screenshot for client (base64 JPEG, optional `timestamp`, optional `screenshot_type` = `periodic|event_start|event_stop`)
  - **Screenshot retention:** The API stores `{uuid}.jpg` as latest plus the last 20 timestamped screenshots per client; older timestamped files are deleted automatically.
  - **Priority screenshots:** For `event_start`/`event_stop`, the API also keeps `{uuid}_priority.jpg` and metadata (`{uuid}_meta.json`) used by monitoring priority selection.
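The retention rule above (latest plus the last 20 timestamped files) can be sketched as a pure pruning function over a client's stored filenames (a sketch of the policy, not the API's actual implementation; it assumes timestamps sort lexicographically):

```python
def prune_screenshots(filenames, uuid, keep=20):
    """Return the timestamped screenshots that should be deleted for one
    client, keeping `{uuid}.jpg` (latest), `{uuid}_priority.jpg` (priority
    copy), and the newest `keep` timestamped files `{uuid}_{timestamp}.jpg`.
    """
    prefix = f"{uuid}_"
    timestamped = sorted(
        f for f in filenames
        if f.startswith(prefix) and f.endswith(".jpg")
        and not f.endswith("_priority.jpg")   # priority copy kept separately
    )
    # Everything except the newest `keep` entries is eligible for deletion.
    return timestamped[:-keep] if len(timestamped) > keep else []
```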
### System Settings
- `GET /api/system-settings` - List all system settings (admin+)
- `GET /api/system-settings/{key}` - Get a specific setting (admin+)
- `POST /api/system-settings/{key}` - Create or update a setting (admin+)
- `DELETE /api/system-settings/{key}` - Delete a setting (admin+)
- `GET /api/system-settings/supplement-table` - Get WebUntis/Vertretungsplan settings (enabled, url)
- `POST /api/system-settings/supplement-table` - Update WebUntis/Vertretungsplan settings
- Presentation defaults stored as keys:
  - `presentation_interval` (seconds, default "10")
  - `presentation_page_progress` ("true"/"false", default "true")
  - `presentation_auto_progress` ("true"/"false", default "true")
- Video defaults stored as keys:
  - `video_autoplay` ("true"/"false", default "true")
  - `video_loop` ("true"/"false", default "true")
  - `video_volume` (0.0–1.0, default "0.8")
  - `video_muted` ("true"/"false", default "false")
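Because these settings are stored as strings, consumers need to coerce them with the documented defaults. A minimal sketch (helper names are illustrative, not the server's API):

```python
DEFAULTS = {
    "presentation_interval": "10",
    "presentation_page_progress": "true",
    "presentation_auto_progress": "true",
    "video_autoplay": "true",
    "video_loop": "true",
    "video_volume": "0.8",
    "video_muted": "false",
}

def setting_bool(settings, key):
    """Read a 'true'/'false' string setting, falling back to the default."""
    return settings.get(key, DEFAULTS[key]).strip().lower() == "true"

def setting_float(settings, key):
    """Read a numeric string setting (e.g. video_volume) as a float."""
    return float(settings.get(key, DEFAULTS[key]))
```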
### User Management (Admin+)
- `GET /api/users` - List all users (role-filtered by user's role)
- `POST /api/users` - Create new user with username, password (min 6 chars), role, and status
- `GET /api/users/<id>` - Get user details including audit information (login times, password changes, deactivation)
- `PUT /api/users/<id>` - Update user (cannot change own role or account status)
- `PUT /api/users/<id>/password` - Admin password reset (cannot reset own password this way; use `/api/auth/change-password` instead)
- `DELETE /api/users/<id>` - Delete user permanently (superadmin only; cannot delete self)

### Authentication
- `POST /api/auth/login` - User login (tracks last login time and failed attempts)
- `POST /api/auth/logout` - User logout
- `PUT /api/auth/change-password` - Self-service password change (all authenticated users; requires current password verification)

### Health & Monitoring
- `GET /health` - Service health check
- `GET /api/screenshots/{uuid}.jpg` - Client screenshots
- `GET /screenshots/{uuid}.jpg` - Latest client screenshot
- `GET /screenshots/{uuid}/priority` - Active high-priority screenshot (falls back to latest)
- `GET /api/client-logs/monitoring-overview` - Aggregated monitoring overview for dashboard (superadmin)
- `GET /api/client-logs/recent-errors` - Recent error feed across clients (admin+)
- `GET /api/client-logs/{uuid}/logs` - Filtered per-client logs (admin+)
## 🎨 Frontend Features

### API Response Format
- **JSON Convention**: All API endpoints return camelCase JSON (e.g., `startTime`, `endTime`, `groupId`). Frontend consumes camelCase directly.
- **UTC Time Parsing**: API returns ISO strings without 'Z' suffix. Frontend appends 'Z' before parsing to ensure UTC interpretation: `const utcString = dateStr.endsWith('Z') ? dateStr : dateStr + 'Z'; new Date(utcString);`. Display uses `toLocaleTimeString('de-DE')` for German format.
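The same append-'Z' convention applies to any consumer of the API, not just the dashboard. A sketch in Python (the function name is illustrative):

```python
from datetime import datetime

def parse_api_utc(date_str):
    """Parse an API timestamp that lacks the 'Z' suffix as UTC,
    mirroring the frontend convention of appending 'Z' before parsing."""
    if not date_str.endswith("Z"):
        date_str += "Z"
    # fromisoformat only accepts a literal 'Z' from Python 3.11 onward;
    # substituting '+00:00' keeps this compatible with older versions.
    return datetime.fromisoformat(date_str.replace("Z", "+00:00"))
```

Without this step, naive parsing would interpret the string in the local timezone and shift every displayed event time.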
### Recurrence & holidays
- Recurrence is handled natively by Syncfusion. The API returns master events with `RecurrenceRule` and `RecurrenceException` (EXDATE) in RFC 5545 format (yyyyMMddTHHmmssZ, UTC) so the Scheduler excludes holiday instances reliably.
- Events with "skip holidays" display a TentTree icon next to the main event icon (icon color: black). The Scheduler's native lower-right recurrence badge indicates series membership.
- Single occurrence editing: Users can edit either a single occurrence or the entire series. A confirmation dialog precedes detaching; the UI persists changes using `onActionCompleted (requestType='eventChanged')`:
  - Single occurrence → `POST /api/events/<id>/occurrences/<date>/detach` (creates a standalone event and adds an EXDATE to the master, preserving the master series)
  - Series/single event → `PUT /api/events/<id>`
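The EXDATE format the API emits can be sketched as a small formatter (an illustration of the RFC 5545 `yyyyMMddTHHmmssZ` convention described above, not the server's actual code):

```python
from datetime import datetime, timezone

def to_exdate(dt):
    """Format one occurrence start time as an RFC 5545 EXDATE entry
    (yyyyMMddTHHmmssZ, UTC) as consumed by the Syncfusion Scheduler."""
    return dt.astimezone(timezone.utc).strftime("%Y%m%dT%H%M%SZ")

def recurrence_exception(occurrences):
    """Join multiple excluded occurrence start times into one
    RecurrenceException value."""
    return ",".join(to_exdate(d) for d in occurrences)
```

The excluded timestamps must match the occurrence start times exactly (same second, UTC), otherwise the Scheduler will not suppress the instance.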
### Syncfusion Components Used (Material 3)
- **Schedule**: Event calendar with drag-drop support
@@ -292,19 +408,67 @@ mosquitto_sub -h localhost -t "infoscreen/+/heartbeat" -v
- **Notifications**: Toast messages and alerts
- **Pager**: Used on Programinfo changelog for pagination
- **Cards (layouts)**: Programinfo sections styled with Syncfusion card classes
- **SplitButtons**: Header user menu (top-right) using Syncfusion DropDownButton to show current user and role, with actions "Passwort ändern", "Profil", and "Abmelden".

### Pages Overview
- **Dashboard**: Card-based overview of all Raumgruppen (room groups) with real-time status monitoring. Features include:
  - Global statistics: total infoscreens, online/offline counts, warning groups
  - Filter buttons: All / Online / Offline / Warnings with dynamic counts
  - Per-group cards showing currently active event (title, type, date/time in local timezone)
  - Health bar with online/offline ratio and color-coded status
  - Expandable client list with last alive timestamps
  - Bulk restart button for offline clients
  - Auto-refresh every 15 seconds; manual refresh button available
- **Clients**: Device management and monitoring
- **Groups**: Client group organization
- **Events**: Schedule management
- **Media**: File upload and conversion
- **Users**: Comprehensive user management (admin+ only in menu)
  - Full CRUD interface with sortable GridComponent (20 per page)
  - Statistics cards: total, active, inactive user counts
  - Create, edit, delete, and password reset dialogs
  - User details modal showing audit information (login times, password changes, deactivation)
  - Role badges with color coding (user: gray, editor: blue, admin: green, superadmin: red)
  - Self-protection: cannot modify own account (cannot change role/status or delete self)
  - Superadmin-only hard delete; other users soft-deactivate
- **Settings**: Central configuration (tabbed)
  - 📅 Academic Calendar (all users):
    - 📥 Import & Liste: CSV/TXT import combined with holidays list
    - 🗂️ Perioden: Academic Periods (set active period)
  - 🖥️ Display & Clients (admin+): Defaults placeholders and quick links to Clients/Groups
  - 🎬 Media & Files (admin+): Upload settings placeholders and Conversion status overview
  - 🗓️ Events (admin+): WebUntis/Vertretungsplan URL enable/disable, save, preview. Presentations: general defaults for slideshow interval, page-progress, and auto-progress; persisted via `/api/system-settings` keys and applied on create in the event modal. Videos: system-wide defaults for `autoplay`, `loop`, `volume`, and `muted`; persisted via `/api/system-settings` keys and applied on create in the event modal.
  - ⚙️ System (superadmin): Organization info and Advanced configuration placeholders
- **Holidays**: Academic calendar management
- **Ressourcen**: Timeline view of active events across all room groups
  - Parallel timeline display showing all groups and their current events simultaneously
  - Compact visualization: 65px row height per group with color-coded event bars
  - Day and week views for flexible time range inspection
  - Customizable group ordering with visual drag controls (order persisted to backend)
  - Real-time event status: shows currently running events with type, title, and time window
  - Filters out unassigned groups for focused view
  - Resource-based Syncfusion timeline scheduler with resize and drag-drop support
- **Monitoring**: Superadmin-only monitoring dashboard
  - Live client health states (`healthy`, `warning`, `critical`, `offline`) from heartbeat/process/log data
  - Latest screenshot preview with screenshot-type badges (`periodic`, `event_start`, `event_stop`) and process metadata per client
  - Active priority screenshots are surfaced immediately and polled faster while priority items are active
  - System-wide recent error stream and per-client log drill-down
- **Program info**: Version, build info, tech stack and paginated changelog (reads `dashboard/public/program-info.json`)
## 🔒 Security & Authentication

- **Role-Based Access Control (RBAC)**: 4-tier hierarchy (user → editor → admin → superadmin) with privilege escalation protection
  - Admin cannot see, manage, or create superadmin accounts
  - Admin can create and manage user/editor/admin roles only
  - Superadmin can manage all roles including other superadmins
  - Role-gated menu visibility: users only see menu items they have permission for
- **Account Management**:
  - Soft-delete by default (`deactivated_at`, `deactivated_by` timestamps)
  - Hard-delete superadmin-only (permanent removal from database)
  - Self-account protections: cannot change own role/status, cannot delete self via admin panel
  - Self-service password change available to all authenticated users (requires current password verification)
  - Admin password reset available for other users (no current password required)
- **Audit Tracking**: All user accounts track login times, password changes, failed login attempts, and deactivation history
- **Environment Variables**: Sensitive data via `.env`
- **SSL/TLS**: HTTPS support with custom certificates
- **MQTT Security**: Username/password authentication
@@ -315,11 +479,12 @@ mosquitto_sub -h localhost -t "infoscreen/+/heartbeat" -v

## 📊 Monitoring & Logging

### Health Checks
All services include Docker health checks:
- API: HTTP endpoint monitoring
- Database: Connection and initialization status
- MQTT: Pub/sub functionality test
- Dashboard: Nginx availability
- **Scheduler**: Logging is concise; conversion lookups are cached and logged only once per media.
- Monitoring API: `/api/client-logs/monitoring-overview` and `/api/client-logs/recent-errors` for live diagnostics
- Monitoring overview includes screenshot priority state (`latestScreenshotType`, `priorityScreenshotType`, `priorityScreenshotReceivedAt`, `hasActivePriorityScreenshot`) and `summary.activePriorityScreenshots`

### Logging Strategy
- **Development**: Docker Compose logs with service prefixes
@@ -394,6 +559,22 @@ docker exec -it infoscreen-db mysqladmin ping
# Restart dependent services
```
**Vite import-analysis errors (Syncfusion splitbuttons)**
```bash
# Symptom
[plugin:vite:import-analysis] Failed to resolve import "@syncfusion/ej2-react-splitbuttons"

# Fix
# 1) Ensure dependencies are added in dashboard/package.json:
#    - @syncfusion/ej2-react-splitbuttons, @syncfusion/ej2-splitbuttons
# 2) In dashboard/vite.config.ts, add to optimizeDeps.include:
#    '@syncfusion/ej2-react-splitbuttons', '@syncfusion/ej2-splitbuttons'
# 3) If dashboard uses a named node_modules volume, recreate it so npm ci runs inside the container:
docker compose rm -sf dashboard
docker volume rm <project>_dashboard-node-modules <project>_dashboard-vite-cache || true
docker compose up -d --build dashboard
```

**MQTT communication issues**
```bash
# Test MQTT broker
94 SCREENSHOT_IMPLEMENTATION.md Normal file
@@ -0,0 +1,94 @@
# Screenshot Transmission Implementation

## Overview
Clients send screenshots via MQTT during heartbeat intervals. The listener service receives these screenshots and forwards them to the server API for storage.

## Architecture

### MQTT Topic
- **Topic**: `infoscreen/{uuid}/screenshot`
- **Payload Format**:
  - Raw binary image data (JPEG/PNG), OR
  - JSON with base64-encoded image: `{"image": "<base64-string>"}`
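The dual payload format means the receiver must detect which variant arrived. A sketch of that detection (illustrative; the real logic lives in the listener's `handle_screenshot`):

```python
import base64
import json

def normalize_screenshot_payload(payload: bytes) -> str:
    """Return the screenshot as a base64 string regardless of whether the
    MQTT payload was raw binary image data or JSON with a base64 field.
    Detection is heuristic: try JSON first, fall back to treating the
    bytes as a raw image."""
    try:
        doc = json.loads(payload.decode("utf-8"))
        if isinstance(doc, dict) and "image" in doc:
            return doc["image"]
    except (UnicodeDecodeError, json.JSONDecodeError):
        pass  # not UTF-8 JSON, so assume raw binary image data
    return base64.b64encode(payload).decode("ascii")
```

Raw JPEG bytes fail UTF-8 decoding almost immediately, so the JSON attempt is cheap for binary payloads.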
### Components

#### 1. Listener Service (`listener/listener.py`)
- **Subscribes to**: `infoscreen/+/screenshot`
- **Function**: `handle_screenshot(uuid, payload)`
  - Detects payload format (binary or JSON)
  - Converts binary to base64 if needed
  - Forwards to API via HTTP POST

#### 2. Server API (`server/routes/clients.py`)
- **Endpoint**: `POST /api/clients/<uuid>/screenshot`
- **Authentication**: No authentication required (internal service call)
- **Accepts**:
  - JSON: `{"image": "<base64-encoded-image>"}`
  - Binary: raw image data
- **Storage**:
  - Saves to `server/screenshots/{uuid}_{timestamp}.jpg` (with timestamp)
  - Saves to `server/screenshots/{uuid}.jpg` (latest, for quick retrieval)
#### 3. Retrieval (`server/wsgi.py`)
- **Endpoint**: `GET /screenshots/<uuid>`
- **Returns**: Latest screenshot for the given client UUID
- **Nginx**: Exposes `/screenshots/{uuid}.jpg` in production

## Unified Identification Method

Screenshots are identified by **client UUID**:
- Each client has a unique UUID stored in the `clients` table
- Screenshots are stored as `{uuid}.jpg` (latest) and `{uuid}_{timestamp}.jpg` (historical)
- The API endpoint requires UUID validation against the database
- Retrieval is done via `GET /screenshots/<uuid>`, which returns the latest screenshot

## Data Flow

```
Client → MQTT (infoscreen/{uuid}/screenshot)
  ↓
Listener Service
  ↓ (validates client exists)
  ↓ (converts binary → base64 if needed)
  ↓
API POST /api/clients/{uuid}/screenshot
  ↓ (validates client UUID)
  ↓ (decodes base64 → binary)
  ↓
Filesystem: server/screenshots/{uuid}.jpg
  ↓
Dashboard/Nginx: GET /screenshots/{uuid}
```
## Configuration

### Environment Variables
- **Listener**: `API_BASE_URL` (default: `http://server:8000`)
- **Server**: Screenshots stored in `server/screenshots/` directory

### Dependencies
- Listener: Added `requests>=2.31.0` to `listener/requirements.txt`
- Server: Uses built-in Flask and base64 libraries

## Error Handling

- **Client Not Found**: Returns 404 if UUID doesn't exist in database
- **Invalid Payload**: Returns 400 if image data is missing or invalid
- **API Timeout**: Listener logs error and continues (timeout: 10s)
- **Network Errors**: Listener logs and continues operation

## Security Considerations

- Screenshot endpoint does not require authentication (internal service-to-service)
- Client UUID must exist in database before screenshot is accepted
- Base64 encoding prevents binary data issues in JSON transport
- File size is tracked and logged for monitoring
## Future Enhancements

- Screenshot retention policy (auto-delete of old timestamped files): implemented — the API now keeps the latest plus the last 20 timestamped screenshots per client
- Add compression before transmission
- Add screenshot quality settings
- Add authentication between listener and API
- Add screenshot history API endpoint
159 SUPERADMIN_SETUP.md Normal file
@@ -0,0 +1,159 @@
# Superadmin User Setup

This document describes the superadmin user initialization system implemented in the infoscreen_2025 project.

## Overview

The system automatically creates a default superadmin user during database initialization if one doesn't already exist. This ensures there's always an initial administrator account available for system setup and configuration.

## Implementation Details

### Files Modified

1. **`server/init_defaults.py`**
   - Updated to create a superadmin user with role `superadmin` (from the `UserRole` enum)
   - Password is securely hashed using bcrypt
   - Only creates the user if not already present in the database
   - Provides clear feedback about creation status

2. **`.env.example`**
   - Updated with new environment variables
   - Includes documentation for required variables

3. **`docker-compose.yml`** and **`docker-compose.prod.yml`**
   - Added environment variable passthrough for superadmin credentials

4. **`userrole-management.md`**
   - Marked stage 1, step 2 as completed
## Environment Variables

### Required

- **`DEFAULT_SUPERADMIN_PASSWORD`**: The password for the superadmin user
  - **IMPORTANT**: This must be set for the superadmin user to be created
  - Should be a strong, secure password
  - If not set, the script will skip superadmin creation with a warning

### Optional

- **`DEFAULT_SUPERADMIN_USERNAME`**: The username for the superadmin user
  - Default: `superadmin`
  - Can be customized if needed
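The decision logic these variables drive can be sketched as a pure function (a sketch only; the real `init_defaults.py` additionally hashes the password with bcrypt before inserting the row):

```python
def superadmin_bootstrap_action(existing_usernames, env):
    """Decide what the bootstrap script should do for the superadmin user.

    Returns 'skip_exists' if the user is already present,
    'skip_no_password' if DEFAULT_SUPERADMIN_PASSWORD is unset,
    otherwise a dict describing the user to create.
    """
    username = env.get("DEFAULT_SUPERADMIN_USERNAME", "superadmin")
    password = env.get("DEFAULT_SUPERADMIN_PASSWORD")
    if username in existing_usernames:
        return "skip_exists"
    if not password:
        return "skip_no_password"
    return {"username": username, "role": "superadmin", "password": password}
```

In the container this would be called with `os.environ` and the usernames already in the `users` table.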
## Setup Instructions

### Development

1. Copy `.env.example` to `.env`:
   ```bash
   cp .env.example .env
   ```

2. Edit `.env` and set a secure password:
   ```bash
   DEFAULT_SUPERADMIN_USERNAME=superadmin
   DEFAULT_SUPERADMIN_PASSWORD=your_secure_password_here
   ```

3. Run the initialization (happens automatically on container startup):
   ```bash
   docker-compose up -d
   ```

### Production

1. Set environment variables in your deployment configuration:
   ```bash
   export DEFAULT_SUPERADMIN_USERNAME=superadmin
   export DEFAULT_SUPERADMIN_PASSWORD=your_very_secure_password
   ```

2. Deploy with docker-compose:
   ```bash
   docker-compose -f docker-compose.prod.yml up -d
   ```
## Behavior

The `init_defaults.py` script runs automatically during container initialization and:

1. Checks if the username already exists in the database
2. If it exists: prints an info message and skips creation
3. If it doesn't exist and `DEFAULT_SUPERADMIN_PASSWORD` is set:
   - Hashes the password with bcrypt
   - Creates the user with role `superadmin`
   - Prints a success message
4. If `DEFAULT_SUPERADMIN_PASSWORD` is not set:
   - Prints a warning and skips creation

## Security Considerations

1. **Never commit the `.env` file** to version control
2. Use a strong password (minimum 12 characters, mixed case, numbers, special characters)
3. Change the default password after first login
4. In production, consider using secrets management (Docker secrets, Kubernetes secrets, etc.)
5. Rotate passwords regularly
6. The password is hashed with bcrypt (industry standard) before storage
## Testing

To verify the superadmin user was created:

```bash
# Connect to the database container
docker exec -it infoscreen-db mysql -u root -p

# Check the users table
USE infoscreen_by_taa;
SELECT username, role, is_active FROM users WHERE role = 'superadmin';
```

Expected output:
```
+------------+------------+-----------+
| username   | role       | is_active |
+------------+------------+-----------+
| superadmin | superadmin |         1 |
+------------+------------+-----------+
```
## Troubleshooting

### Superadmin not created

**Symptoms**: No superadmin user in database

**Solutions**:
1. Check if `DEFAULT_SUPERADMIN_PASSWORD` is set in the environment
2. Check container logs: `docker logs infoscreen-api`
3. Look for the warning message: "⚠️ DEFAULT_SUPERADMIN_PASSWORD nicht gesetzt" ("DEFAULT_SUPERADMIN_PASSWORD not set")

### User already exists message

**Symptoms**: Script says the user already exists, but you can't log in

**Solutions**:
1. Verify the username is correct
2. Reset the password manually in the database
3. Or delete the user and restart the containers to recreate it

### Permission denied errors

**Symptoms**: Database connection errors during initialization

**Solutions**:
1. Verify the `DB_USER`, `DB_PASSWORD`, and `DB_NAME` environment variables
2. Check the database container is healthy: `docker ps`
3. Verify database connectivity: `docker exec infoscreen-api ping -c 1 db`

## Next Steps

After setting up the superadmin user:

1. Implement the `/api/me` endpoint (Stage 1, Step 3)
2. Add authentication/session management
3. Create permission decorators (Stage 1, Step 4)
4. Build user management UI (Stage 2)

See `userrole-management.md` for the complete roadmap.
337 TECH-CHANGELOG.md Normal file
@@ -0,0 +1,337 @@
# TECH-CHANGELOG

This changelog documents technical and developer-relevant changes included in public releases; for development workspace changes, see DEV-CHANGELOG.md. Backend refactoring, API adjustments, infrastructure, developer tooling, and internal logic may appear only here, while UI/feature changes are tracked in the user-facing changelog, `dashboard/public/program-info.json`.
## 2026.1.0-alpha.14 (2026-01-28)
- 🗓️ **Ressourcen Page (Timeline View)**:
  - New frontend page: `dashboard/src/ressourcen.tsx` (357 lines) – Parallel timeline view showing active events for all room groups
  - Uses Syncfusion ScheduleComponent with TimelineViews module for resource-based scheduling
  - Compact visualization: 65px row height per group, dynamically calculated total container height
  - Real-time event loading: Fetches events per group for the current date range on mount and on view/date changes
  - Timeline modes: Day (default) and Week views with date range calculation
  - Color-coded event bars: Uses `getGroupColor()` from `groupColors.ts` for group theme matching
  - Displays first active event per group with type, title, and time window
  - Filters out "Nicht zugeordnet" group from timeline display
  - Resource mapping: Each group becomes a timeline resource row; events mapped via `ResourceId`
  - Syncfusion modules: TimelineViews, Resize, DragAndDrop injected for rich interaction
- 🎨 **Ressourcen Styling**:
  - New CSS file: `dashboard/src/ressourcen.css` (178 lines) with modern Material 3 design
  - Fixed CSS lint errors: Converted `rgba()` to modern `rgb()` notation with percentage alpha values (`rgb(0 0 0 / 10%)`)
  - Removed unnecessary quotes from font-family names (Roboto, Oxygen, Ubuntu, Cantarell)
  - Fixed CSS selector specificity ordering (`.e-schedule` before `.ressourcen-timeline-wrapper .e-schedule`)
  - Card-based controls layout with shadow and rounded corners
  - Group ordering panel with scrollable list and action buttons
  - Responsive timeline wrapper with flex layout
- 🔌 **Group Order API**:
  - New backend endpoints in `server/routes/groups.py`:
    - `GET /api/groups/order` – Retrieve saved group display order (returns JSON with `order` array of group IDs)
    - `POST /api/groups/order` – Persist group display order (accepts JSON with `order` array)
  - Order persistence: Stored in `system_settings` table with key `group_display_order` (JSON array of integers)
  - Automatic synchronization: Missing group IDs added to order, removed IDs filtered out
  - Frontend integration: Group order panel with drag up/down buttons, real-time reordering with backend sync
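The "automatic synchronization" step can be expressed as a small pure function (a sketch mirroring the described behavior, not the route's actual code):

```python
def sync_group_order(saved_order, current_group_ids):
    """Reconcile the persisted display order with the groups that actually
    exist: drop IDs of deleted groups, append newly created groups at the
    end, and preserve the saved relative order for everything else."""
    current = set(current_group_ids)
    order = [gid for gid in saved_order if gid in current]
    seen = set(order)
    order += [gid for gid in current_group_ids if gid not in seen]
    return order
```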
- 🖥️ **Frontend Technical**:
  - State management: React hooks with unused setters removed (`setTimelineView`, `setViewDate`) to resolve lint warnings
  - TypeScript: Changed `let` to `const` for the immutable end date calculation
  - UTC date parsing: Uses a `parseUTCDate` callback to append 'Z' and ensure UTC interpretation
  - Event formatting: Capitalizes first letter of the event type for display (e.g., "Website - Title")
  - Loading state: Shows a loading indicator while fetching group/event data
  - Schedule height: Dynamic calculation based on `groups.length * 65px + 100px` for the header
- 📖 **Documentation**:
  - Updated `.github/copilot-instructions.md`:
    - Added Ressourcen page to "Recent changes" section (January 2026)
    - Added `ressourcen.tsx` and `ressourcen.css` to "Important files" list
    - Added Groups API order endpoints documentation
    - Added comprehensive Ressourcen page section to "Frontend patterns"
  - Updated `README.md`:
    - Added Ressourcen page to "Pages Overview" section with feature details
    - Added `GET/POST /api/groups/order` to Core Resources API section
  - Bumped version in `dashboard/public/program-info.json` to `2026.1.0-alpha.14` with user-facing changelog

Notes for integrators:
- Group order API returns JSON with a `{ "order": [1, 2, 3, ...] }` structure (array of group IDs)
- Timeline view automatically filters the "Nicht zugeordnet" group for a cleaner display
- CSS follows modern Material 3 color-function notation (`rgb(r g b / alpha%)`)
- Syncfusion ScheduleComponent requires the TimelineViews, Resize, and DragAndDrop modules to be injected

Backend technical work (post-release notes; no version bump):
- 📊 **Client Monitoring Infrastructure (Server-Side) (2026-03-10)**:
|
||||
- Database schema: New Alembic migration `c1d2e3f4g5h6_add_client_monitoring.py` (idempotent) adds:
|
||||
- `client_logs` table: Stores centralized logs with columns (id, client_uuid, timestamp, level, message, context, created_at)
- Foreign key: `client_logs.client_uuid` → `clients.uuid` (ON DELETE CASCADE)
- Health monitoring columns added to `clients` table: `current_event_id`, `current_process`, `process_status`, `process_pid`, `last_screenshot_analyzed`, `screen_health_status`, `last_screenshot_hash`
- Indexes for performance: (client_uuid, timestamp DESC), (level, timestamp DESC), (created_at DESC)
- Data models (`models/models.py`):
  - New enums: `LogLevel` (ERROR, WARN, INFO, DEBUG), `ProcessStatus` (running, crashed, starting, stopped), `ScreenHealthStatus` (OK, BLACK, FROZEN, UNKNOWN)
  - New model: `ClientLog` with foreign key to `Client` (CASCADE on delete)
  - Extended `Client` model with 7 health monitoring fields
- MQTT listener extensions (`listener/listener.py`):
  - New topic subscriptions: `infoscreen/+/logs/error`, `infoscreen/+/logs/warn`, `infoscreen/+/logs/info`, `infoscreen/+/health`
  - Log handler: Parses JSON payloads, creates `ClientLog` entries, validates that the client UUID exists (FK constraint)
  - Health handler: Updates client state from MQTT health messages
  - Enhanced heartbeat handler: Captures `process_status`, `current_process`, `process_pid`, `current_event_id` from the payload
- API endpoints (`server/routes/client_logs.py`):
  - `GET /api/client-logs/<uuid>/logs` – Retrieve client logs with filters (level, limit, since); authenticated (admin_or_higher)
  - `GET /api/client-logs/summary` – Get log counts by level per client for the last 24h; authenticated (admin_or_higher)
  - `GET /api/client-logs/monitoring-overview` – Aggregated monitoring overview for dashboard clients/statuses; authenticated (admin_or_higher)
  - `GET /api/client-logs/recent-errors` – System-wide error monitoring; authenticated (admin_or_higher)
  - `GET /api/client-logs/test` – Infrastructure validation endpoint (no auth required)
  - Blueprint registered in `server/wsgi.py` as `client_logs_bp`
- Dev environment fix: Updated the `docker-compose.override.yml` listener service to use `working_dir: /workspace` and a direct command path for live code reload
- 🖥️ **Monitoring Dashboard Integration (2026-03-24)**:
  - Frontend monitoring dashboard (`dashboard/src/monitoring.tsx`) is active and wired to the monitoring APIs
  - Superadmin-only route/menu integration completed in `dashboard/src/App.tsx`
  - Added dashboard monitoring API client (`dashboard/src/apiClientMonitoring.ts`) for overview and recent errors
- 🐛 **Presentation Flags Persistence Fix (2026-03-24)**:
  - Fixed persistence of the presentation flags `page_progress` and `auto_progress` across create/update and detached-occurrence flows
  - API serialization now reliably returns the stored values for presentation behavior fields
- 📡 **MQTT Protocol Extensions**:
  - New log topics: `infoscreen/{uuid}/logs/{error|warn|info}` with JSON payload (timestamp, message, context)
  - New health topic: `infoscreen/{uuid}/health` with metrics (expected_state, actual_state, health_metrics)
  - Enhanced heartbeat: `infoscreen/{uuid}/heartbeat` now includes `current_process`, `process_pid`, `process_status`, `current_event_id`
  - QoS levels: ERROR/WARN logs use QoS 1 (at least once); INFO/health use QoS 0 (fire and forget)
- 📖 **Documentation**:
  - New file: `CLIENT_MONITORING_SPECIFICATION.md` – Comprehensive 20-section technical spec for the client-side implementation (MQTT protocol, process monitoring, auto-recovery, payload formats, testing guide)
  - New file: `CLIENT_MONITORING_IMPLEMENTATION_GUIDE.md` – 5-phase implementation guide (database, backend, client watchdog, dashboard UI, testing)
  - Updated `.github/copilot-instructions.md`: Added an MQTT topics section and client monitoring integration notes
- ✅ **Validation**:
  - End-to-end testing completed: MQTT message → listener → database → API confirmed working
  - Test flow: Published a message to `infoscreen/{real-uuid}/logs/error` → listener logs showed receipt → database stored the entry → test API returned the log data
  - Known client UUIDs validated: 9b8d1856-ff34-4864-a726-12de072d0f77, 7f65c615-5827-4ada-9ac8-4727c2e8ee55, bdbfff95-0b2b-4265-8cc7-b0284509540a

Notes for integrators:

- Tiered logging strategy: ERROR/WARN always centralized (QoS 1), INFO dev-only (QoS 0), DEBUG local-only
- The monitoring dashboard is implemented and consumes `/api/client-logs/monitoring-overview`, `/api/client-logs/recent-errors`, and `/api/client-logs/<uuid>/logs`
- The foreign key constraint prevents logging for non-existent clients (data integrity enforced)
- The migration is idempotent and can safely be rerun after an interruption
- Use `GET /api/client-logs/test` for quick infrastructure validation without authentication

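The log-topic and QoS conventions above can be sketched from the client side as follows. This is a minimal sketch, not the client's actual code: the helper name is illustrative, and publishing would go through an MQTT client library (e.g. paho-mqtt) using the returned topic, payload, and QoS.

```python
import json
from datetime import datetime, timezone

# Sketch of a client-side log message builder for the documented topics
# infoscreen/{uuid}/logs/{error|warn|info}. QoS mapping follows the
# changelog: ERROR/WARN -> QoS 1 (at least once), INFO -> QoS 0.
def build_log_message(client_uuid, level, message, context=None):
    level = level.lower()
    if level not in ("error", "warn", "info"):
        raise ValueError(f"unsupported log level: {level}")
    topic = f"infoscreen/{client_uuid}/logs/{level}"
    payload = json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "message": message,
        "context": context or {},
    })
    qos = 1 if level in ("error", "warn") else 0
    return topic, payload, qos
```

A caller would then pass the three values to `client.publish(topic, payload, qos=qos)`.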
## 2025.1.0-beta.1 (TBD)

- 🔐 **User Management & Role-Based Access Control**:
  - Backend: Implemented a comprehensive user management API (`server/routes/users.py`) with 6 endpoints (GET, POST, PUT, DELETE users + password reset).
  - Data model: Extended `User` with 7 audit/security fields via an Alembic migration (`4f0b8a3e5c20_add_user_audit_fields.py`):
    - `last_login_at`, `last_password_change_at`: TIMESTAMP (UTC) for auth event tracking
    - `failed_login_attempts`, `last_failed_login_at`: Security monitoring for brute-force detection
    - `locked_until`: TIMESTAMP placeholder for account lockout (infrastructure in place, not yet enforced)
    - `deactivated_at`, `deactivated_by`: Soft-delete audit trail (self-referencing FK)
  - Role hierarchy: 4-tier privilege escalation (user → editor → admin → superadmin) enforced at both API and UI levels:
    - Admin cannot see, create, or manage superadmin accounts
    - Admin can manage user/editor/admin roles only
    - Superadmin can manage all roles, including other superadmins
  - Auth routes enhanced (`server/routes/auth.py`):
    - Login: Sets `last_login_at` and resets `failed_login_attempts` on success; increments `failed_login_attempts` and sets `last_failed_login_at` on failure
    - Password change: Sets `last_password_change_at` on both self-service change and admin reset
    - New endpoint: `PUT /api/auth/change-password` for self-service password change (all authenticated users; requires current-password verification)
  - User API security:
    - Admin cannot reset superadmin passwords
    - Self-account protections: cannot change own role/status, cannot delete self
    - Admin cannot use the password reset endpoint for their own account (a backend check enforces the self-service requirement)
    - All user responses include audit fields in camelCase (lastLoginAt, lastPasswordChangeAt, failedLoginAttempts, deactivatedAt, deactivatedBy)
  - Soft-delete pattern: Deactivation by default (sets `deactivated_at` and `deactivated_by`); hard delete is superadmin-only
- 🖥️ **Frontend User Management**:
  - New page: `dashboard/src/users.tsx` – Full CRUD interface (820 lines) built with Syncfusion components
  - GridComponent: 20 rows per page (configurable), sortable columns (ID, username, role), custom action-button template with role-based visibility
  - Statistics cards: Total users, active (non-deactivated), and inactive (deactivated) counts
  - Dialogs: Create (username/password/role/status), Edit (with self-edit protections), Password Reset (admin only, no current password required), Delete (superadmin only, self-check), Details (read-only audit info with formatted timestamps)
  - Role badges: Color-coded display (user: gray, editor: blue, admin: green, superadmin: red)
  - Audit information display: last login, password change, last failed login, deactivation timestamp, and deactivating user
  - Self-protection: Delete button hidden for the current user (prevents accidental self-deletion)
  - Menu visibility: "Benutzer" sidebar item only visible to admin+ (role-gated in App.tsx)
- 💬 **Header User Menu**:
  - Enhanced top-right dropdown with "Passwort ändern" (lock icon), "Profil", and "Abmelden"
  - Self-service password change dialog: Available to all authenticated users; requires current-password verification, a new password of at least 6 characters, and a matching confirm field
  - Implemented with the Syncfusion DropDownButton (`@syncfusion/ej2-react-splitbuttons`)
- 🔌 **API Client**:
  - New file: `dashboard/src/apiUsers.ts` – Type-safe TypeScript client (143 lines) for user operations
  - Functions: listUsers(), getUser(), createUser(), updateUser(), resetUserPassword(), deleteUser()
  - All functions include proper error handling and camelCase JSON mapping
- 📖 **Documentation**:
  - Updated `.github/copilot-instructions.md`: Added comprehensive sections on user model audit fields, user management API routes, auth routes, the header menu, and the user management page implementation
  - Updated `README.md`: Added user management to Key Features, API endpoints (User Management + Authentication sections), Pages Overview, and Security & Authentication sections with RBAC details
  - Updated `TECH-CHANGELOG.md`: Documented all technical changes and integration notes

Notes for integrators:

- User CRUD endpoints accept and return all audit fields in camelCase
- The admin password reset (`PUT /api/users/<id>/password`) cannot be used for the admin's own account; users must use the self-service endpoint
- The frontend enforces role-gated menu visibility; the backend validates all role transitions to prevent privilege escalation
- Soft delete is the default; hard delete (superadmin-only) requires explicit confirmation
- Audit fields are populated automatically on login/logout/password-change/deactivation events

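The 4-tier role hierarchy and its management rules can be sketched as a small predicate. This is an illustration of the rules listed above, not the actual check in `server/routes/users.py`; the function and constant names are hypothetical.

```python
# Rank the four roles so management rights can be expressed as comparisons.
ROLE_RANK = {"user": 0, "editor": 1, "admin": 2, "superadmin": 3}

def can_manage(actor_role: str, target_role: str) -> bool:
    if actor_role == "superadmin":
        # Superadmin manages all roles, including other superadmins.
        return True
    if actor_role == "admin":
        # Admin manages user/editor/admin but never superadmin accounts.
        return ROLE_RANK[target_role] < ROLE_RANK["superadmin"]
    # user and editor have no user-management rights at all.
    return False
```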
Backend rework (post-release notes; no version bump):

- 🧩 Dev Container hygiene: Remote Containers runs on the UI side (`remote.extensionKind`); removed the in-container install to prevent reappearance loops; switched `postCreateCommand` to `npm ci` for reproducible dashboard installs; made `postStartCommand` aliases idempotent.
- 🔄 Serialization: Consolidated snake_case→camelCase conversion via `server/serializers.py` for all JSON outputs; ensured enums and UTC datetimes serialize consistently across routes.
- 🕒 Time handling: Normalized naive timestamps to UTC in all backend comparisons (events, scheduler, groups) and kept ISO strings without `Z` in API responses; the frontend appends `Z`.
- 📡 Streaming: Stabilized the range-capable endpoint (`/api/eventmedia/stream/<media_id>/<filename>`) and clarified client handling; the scheduler emits basic HEAD-probe metadata (`mime_type`, `size`, `accept_ranges`).
- 📅 Recurrence/exceptions: Ensured EXDATE tokens (RFC 5545 UTC) align with the occurrence start; detached-occurrence flow confirmed via `POST /api/events/<id>/occurrences/<date>/detach`.
- 🧰 Routes cleanup: Applied `dict_to_camel_case()` before `jsonify()` uniformly; verified consistent Session lifecycle (open/commit/close) across blueprints.
- 🔄 **API Naming Convention Standardization**:
  - Created `server/serializers.py` with `dict_to_camel_case()` and `dict_to_snake_case()` utilities for consistent JSON serialization
  - Events API refactored: `GET /api/events` and `GET /api/events/<id>` now return camelCase JSON (`id`, `subject`, `startTime`, `endTime`, `type`, `groupId`, etc.) instead of PascalCase
  - Internal event dictionaries use snake_case keys and are converted to camelCase via `dict_to_camel_case()` before `jsonify()`
  - **Breaking**: External API consumers must update field names from PascalCase to camelCase
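The conversion the serializer performs can be sketched as follows. This is an assumed minimal implementation of the behavior described above; the actual code in `server/serializers.py` may differ in details (e.g. enum and datetime handling).

```python
def snake_to_camel(key: str) -> str:
    # "start_time" -> "startTime"; keys without underscores pass through.
    head, *rest = key.split("_")
    return head + "".join(part.capitalize() for part in rest)

def dict_to_camel_case(obj):
    # Recurse into dicts and lists so nested payloads are converted too.
    if isinstance(obj, dict):
        return {snake_to_camel(k): dict_to_camel_case(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [dict_to_camel_case(v) for v in obj]
    return obj
```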
- ⏰ **UTC Time Handling**:
  - Standardized datetime handling: The database stores timestamps in UTC (naive timestamps are normalized by the backend)
  - The API returns ISO strings without a 'Z' suffix: `"2025-11-27T20:03:00"`
  - The frontend appends 'Z' to parse the value as UTC and displays it in the user's local timezone via `toLocaleTimeString('de-DE')`
  - All time comparisons use UTC; `date.toISOString()` sends UTC back to the API
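On the backend side, the convention above amounts to two small steps, sketched here under the stated assumption that naive timestamps are already UTC (the helper names are illustrative):

```python
from datetime import datetime, timezone

def ensure_utc(dt: datetime) -> datetime:
    # Naive timestamps are assumed to already be UTC; aware ones are converted.
    if dt.tzinfo is None:
        return dt.replace(tzinfo=timezone.utc)
    return dt.astimezone(timezone.utc)

def to_api_string(dt: datetime) -> str:
    # ISO string without a 'Z' suffix; the frontend appends 'Z' before parsing.
    return ensure_utc(dt).replace(tzinfo=None).isoformat()
```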
- 🖥️ **Dashboard Major Redesign**:
  - Completely redesigned dashboard with a card-based layout for Raumgruppen (room groups)
  - Global statistics summary card: total infoscreens, online/offline counts, warning groups
  - Filter buttons with dynamic counts: All, Online, Offline, Warnings
  - Active event display per group: shows currently playing content with type icon, title, date ("Heute"/"Morgen"/date), and time range
  - Health visualization: color-coded progress bars showing the online/offline ratio per group
  - Expandable client details: shows last-alive timestamps in human-readable form ("vor X Min.", "vor X Std.", "vor X Tagen")
  - Bulk restart functionality: restart all offline clients in a group
  - Manual refresh button with toast notifications
  - 15-second auto-refresh interval
  - "Nicht zugeordnet" group always appears last in the sorted list
- 🎨 **Frontend Technical**:
  - Dashboard (`dashboard/src/dashboard.tsx`): Uses Syncfusion ButtonComponent, ToastComponent, and card CSS classes
  - Appointments page updated to map camelCase API responses to internal PascalCase for Syncfusion compatibility
  - Time formatting functions (`formatEventTime`, `formatEventDate`) handle UTC string parsing with 'Z' appending
  - TypeScript lint errors resolved: unused error variables removed, null-safety checks added with optional chaining
- 📖 **Documentation**:
  - Updated `.github/copilot-instructions.md` with comprehensive sections on:
    - API patterns: JSON serialization, datetime handling conventions
    - Frontend patterns: API response format, UTC time parsing
    - Dashboard page overview with features
    - Conventions & gotchas: datetime and JSON naming guidelines
  - Updated `README.md` with recent changes, an API response format section, and dashboard page details

Notes for integrators:

- **Breaking change**: All Events API endpoints now return camelCase field names. Update client code accordingly.
- The frontend must append 'Z' to API datetime strings before parsing: `const utcStr = dateStr.endsWith('Z') ? dateStr : dateStr + 'Z'; new Date(utcStr);`
- Use `dict_to_camel_case()` from `server/serializers.py` for any new API endpoints returning JSON
- Dev container: prefer `npm ci` and UI-only Remote Containers to avoid extension drift in-container.

---

### Component build metadata template (for traceability)

Record component builds under the unified app version when releasing:

```
Component builds for this release
- API: image tag `ghcr.io/robbstarkaustria/api:<short-sha>` (commit `<sha>`)
- Dashboard: image tag `ghcr.io/robbstarkaustria/dashboard:<short-sha>` (commit `<sha>`)
- Scheduler: image tag `ghcr.io/robbstarkaustria/scheduler:<short-sha>` (commit `<sha>`)
- Listener: image tag `ghcr.io/robbstarkaustria/listener:<short-sha>` (commit `<sha>`)
- Worker: image tag `ghcr.io/robbstarkaustria/worker:<short-sha>` (commit `<sha>`)
```

This is informational (build metadata) and does not change the user-facing version number.

## 2025.1.0-alpha.11 (2025-11-05)

- 🗃️ Data model & API:
  - Added `muted` (Boolean) to `Event` with an Alembic migration; create/update and GET endpoints now accept, persist, and return `muted` alongside `autoplay`, `loop`, and `volume` for video events.
  - Video event fields consolidated: `event_media_id`, `autoplay`, `loop`, `volume`, `muted`.
- 🔗 Streaming:
  - Added a range-capable streaming endpoint: `GET /api/eventmedia/stream/<media_id>/<filename>` (supports byte-range requests with 206 responses for seeking).
  - Scheduler: Performs a best-effort HEAD probe for video stream URLs and includes basic metadata in the emitted payload (`mime_type`, `size`, `accept_ranges`). Placeholders added for `duration`, `resolution`, `bitrate`, `qualities`, `thumbnails`, `checksum`.
- 🖥️ Frontend/Dashboard:
  - Settings page refactored to nested tabs with controlled tab selection (`selectedItem`) to prevent sub-tab jumps.
  - Settings → Events → Videos: Added system-wide defaults with load/save via the system settings keys `video_autoplay`, `video_loop`, `video_volume`, `video_muted`.
  - Event modal (CustomEventModal): Exposes per-event video options including "Ton aus" (`muted`) and initializes all video fields from system defaults when creating new events.
  - Academic Calendar (Settings): Merged "Schulferien Import" and "Liste" into a single sub-tab "📥 Import & Liste".
- 📖 Documentation:
  - Updated `README.md` and `.github/copilot-instructions.md` for the video payload (incl. `muted`), the streaming endpoint (206), nested Settings tabs, and the video defaults keys; clarified client handling of `video` payloads.
  - Updated `dashboard/public/program-info.json` (user-facing changelog) and bumped the version to `2025.1.0-alpha.11` with corresponding UI/UX notes.

Notes for integrators:

- Clients should parse `event_type` and handle the nested `video` payload, honoring `autoplay`, `loop`, `volume`, and `muted`. Use the streaming endpoint with HTTP Range for seeking.
- System settings keys for video defaults: `video_autoplay`, `video_loop`, `video_volume`, `video_muted`.

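Seeking against the range-capable endpoint works with standard HTTP Range semantics; the helpers below sketch building a `Range` request header and parsing the `Content-Range` header of a 206 response. The parsing helper is illustrative, and any host name used with it would be an assumption.

```python
def range_header(start: int, end: int = None) -> dict:
    # Open-ended "bytes=N-" requests the rest of the file from offset N.
    value = f"bytes={start}-" if end is None else f"bytes={start}-{end}"
    return {"Range": value}

def parse_content_range(header: str):
    # Parse a 206 response header like "bytes 100-199/5000"
    # into (first_byte, last_byte, total_size).
    unit, _, spec = header.partition(" ")
    span, _, total = spec.partition("/")
    start, _, end = span.partition("-")
    return int(start), int(end), int(total)
```

A client would send `range_header(...)` with its GET to `/api/eventmedia/stream/<media_id>/<filename>` and expect status 206 with a matching `Content-Range`.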
## 2025.1.0-alpha.10 (2025-10-25)

- No new developer-facing changes in this release.
- UI/UX updates are documented in `dashboard/public/program-info.json`:
  - Event modal: Surfaced video options (Autoplay, Loop, Volume).
  - FileManager: Increased upload limits (Full HD); client-side duration validation (max 10 minutes).

## 2025.1.0-alpha.9 (2025-10-19)

- 🗓️ Events/API:
  - Implemented the new `webuntis` event type. Event creation now resolves the URL from the system setting `supplement_table_url` and returns 400 if it is unset.
  - Removed the obsolete `webuntis-url` settings endpoints. Use `GET/POST /api/system-settings/supplement-table` for the URL and enabled state (shared for WebUntis/Vertretungsplan).
  - Initialization defaults: dropped `webuntis_url`; updated the `supplement_table_url` description to "Vertretungsplan / WebUntis".
- 🚦 Scheduler payloads:
  - Unified Website/WebUntis payload: both emit a nested `website` object `{ "type": "browser", "url": "…" }`; `event_type` remains either `website` or `webuntis` for dispatch.
  - Payloads now include a top-level `event_type` string for all events to aid client dispatch.
- 🖥️ Frontend/Dashboard:
  - Program info updated to `2025.1.0-alpha.13` with release notes.
  - Settings → Events: WebUntis now uses the existing Supplement-Table URL; there is no separate WebUntis URL field.
  - Event modal: The WebUntis type behaves like Website (no per-event URL input).
- 📖 Documentation:
  - Added `MQTT_EVENT_PAYLOAD_GUIDE.md` (message structure, client best practices, versioning).
  - Added `WEBUNTIS_EVENT_IMPLEMENTATION.md` (design notes, admin setup, testing checklist).
  - Updated `.github/copilot-instructions.md` and `README.md` for the unified Website/WebUntis handling and settings usage.

Notes for integrators:

- If you previously integrated against `/api/system-settings/webuntis-url`, migrate to `/api/system-settings/supplement-table`.
- Clients should now parse `event_type` and use the corresponding nested payload (`presentation`, `website`, …). `webuntis` and `website` should be handled identically (nested `website` payload).

## 2025.1.0-alpha.8 (2025-10-18)

- 🛠️ Backend: Seeded presentation defaults (`presentation_interval`, `presentation_page_progress`, `presentation_auto_progress`) in system settings; applied on event creation.
- 🗃️ Data model: Added `page_progress` and `auto_progress` fields to `Event` (with Alembic migration).
- 🗓️ Scheduler: Now publishes only the currently active events per group (at "now"); clears retained topics by publishing `[]` for groups with no active events; normalizes naive timestamps and compares times in UTC; presentation payloads include `page_progress` and `auto_progress`.
- 🖥️ Dashboard: The Settings → Events tab now includes presentation defaults (interval, page-progress, auto-progress) with load/save via the API; the event modal applies defaults on create and persists per-event values on edit.
- 📖 Docs: Updated README and Copilot instructions for the new scheduler behavior, UTC handling, presentation defaults, and per-event flags.
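Publishing `[]` as a retained message is what lets late-joining clients see "no active events" instead of a stale retained payload. The sketch below illustrates the idea; the topic pattern and function name are assumptions, not taken from the scheduler code.

```python
import json

# Illustrative builder for the per-group retained publish described above.
# An empty event list serializes to "[]", which overwrites any previously
# retained payload on the topic, effectively clearing it for new subscribers.
def build_group_publish(group_id, active_events):
    topic = f"infoscreen/group/{group_id}/events"  # topic pattern is an assumption
    payload = json.dumps(active_events)
    return topic, payload, {"retain": True}
```

With paho-mqtt, the result would feed `client.publish(topic, payload, retain=True)`.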

---

## 2025.1.0-alpha.11 (2025-10-16)

- ✨ Settings page: New tab layout (Syncfusion) with role-based visibility – tabs: 📅 Academic Calendar, 🖥️ Display & Clients, 🎬 Media & Files, 🗓️ Events, ⚙️ System.
- 🛠️ Settings (technical): API calls now use relative /api paths via the Vite proxy (prevents CORS issues and a doubled /api prefix).
- 📖 Docs: README updated for the settings page (tabs) and the system settings API.

## 2025.1.0-alpha.10 (2025-10-15)

- 🔐 Auth: Login and user management implemented (role-based, persistent sessions).
- 🧩 Frontend: Syncfusion SplitButtons integrated (react-splitbuttons) and Vite config updated for pre-bundling.
- 🐛 Fix: Import error '@syncfusion/ej2-react-splitbuttons' – instructions added to the README (optimizeDeps + volume reset).

## 2025.1.0-alpha.9 (2025-10-14)

- ✨ UI: Unified deletion workflow for appointments – all types (single, single instance, entire series) handled with custom dialogs.
- 🔧 Frontend: Syncfusion RecurrenceAlert and DeleteAlert intercepted and replaced with custom dialogs (including a final confirmation for series deletion).
- 📖 Docs: README and Copilot instructions expanded for the deletion workflow and dialog handling.

## 2025.1.0-alpha.8 (2025-10-11)

- 🎨 Theme: Migrated to Syncfusion Material 3; centralized CSS imports in main.tsx
- 🧹 Cleanup: Tailwind CSS completely removed (packages, PostCSS, Stylelint, config files)
- 🧩 Group management: "infoscreen_groups" migrated to Syncfusion components (Buttons, Dialogs, DropDownList, TextBox); improved spacing
- 🔔 Notifications: Unified toast/dialog wording; remaining alert usage replaced
- 📖 Docs: README and Copilot instructions updated (Material 3, centralized styles, no Tailwind)

## 2025.1.0-alpha.7 (2025-09-21)

- 🧭 UI: Period selection (Syncfusion) next to the group selection; compact layout
- ✅ Display: Badge for an existing holiday plan + counter "Holidays in view"
- 🛠️ API: Endpoints for academic periods (list, active GET/POST, for_date)
- 📅 Scheduler: By default, no scheduling during holidays; the holiday block displays like an all-day event; black text color
- 📤 Holidays: Upload from TXT/CSV (headerless TXT uses columns 2–4)
- 🔧 UX: Switches arranged in a row; dropdown widths optimized

## 2025.1.0-alpha.6 (2025-09-20)

- 🗓️ NEW: Academic periods system – support for school years, semesters, trimesters
- 🏗️ DATABASE: New 'academic_periods' table for time-based organization
- 🔗 EXTENDED: Events and media can now optionally be linked to an academic period
- 📊 ARCHITECTURE: Fully backward-compatible implementation for gradual rollout
- ⚙️ TOOLS: Automatic creation of standard school years for Austrian schools

## 2025.1.0-alpha.5 (2025-09-14)

- Backend: Complete redesign of the backend handling for group assignments of new clients and the steps for changing a group assignment.

## 2025.1.0-alpha.4 (2025-09-01)

- Deployment: Base structure for deployment tested and optimized.
- FIX: Program error when switching the view on the media page fixed.

## 2025.1.0-alpha.3 (2025-08-30)

- NEW: Program info page with dynamic data, build info, and changelog.
- NEW: Logout functionality implemented.
- FIX: Sidebar width corrected in the collapsed state.

## 2025.1.0-alpha.2 (2025-08-29)

- INFO: Analysis and display of used open-source libraries.

## 2025.1.0-alpha.1 (2025-08-28)

- Initial project setup and base structure.

WEBUNTIS_EVENT_IMPLEMENTATION.md (new file, 324 lines)
@@ -0,0 +1,324 @@

# WebUntis Event Type Implementation

**Date**: 2025-10-19
**Status**: Completed

## Summary

Implemented support for a new `webuntis` event type that displays a centrally configured WebUntis website on infoscreen clients. This event type follows the same client-side behavior as `website` events but sources its URL from system settings rather than per-event configuration.

## Changes Made

### 1. Database & Models

The `webuntis` event type was already defined in the `EventType` enum in `models/models.py`:

```python
class EventType(enum.Enum):
    presentation = "presentation"
    website = "website"
    video = "video"
    message = "message"
    other = "other"
    webuntis = "webuntis"  # Already present
```

### 2. System Settings

#### Default Initialization (`server/init_defaults.py`)

Updated the `supplement_table_url` description to indicate it is used for both Vertretungsplan and WebUntis:

```python
('supplement_table_url', '', 'URL für Vertretungsplan / WebUntis (Stundenplan-Änderungstabelle)')
```

This setting is automatically seeded during database initialization.

**Note**: The same URL (`supplement_table_url`) is used for both:
- Vertretungsplan (supplement table) displays
- WebUntis event displays

#### API Endpoints (`server/routes/system_settings.py`)

WebUntis events use the existing supplement table endpoints:

- **`GET /api/system-settings/supplement-table`** (Admin+)
  - Returns: `{"url": "https://...", "enabled": true/false}`
- **`POST /api/system-settings/supplement-table`** (Admin+)
  - Body: `{"url": "https://...", "enabled": true/false}`
  - Updates the URL used for both supplement table and WebUntis events

No separate WebUntis URL endpoint is needed; the supplement table URL serves both purposes.

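As a rough illustration of the request body these endpoints exchange, here is a client-side validation sketch. The checks shown are assumptions for the example, not the endpoint's actual rules.

```python
def validate_supplement_table_body(body: dict) -> dict:
    # Illustrative validation of {"url": ..., "enabled": ...} before POSTing
    # to /api/system-settings/supplement-table.
    url = body.get("url", "")
    enabled = bool(body.get("enabled", False))
    if enabled and not url.startswith(("http://", "https://")):
        raise ValueError("an enabled supplement table requires an http(s) URL")
    return {"url": url, "enabled": enabled}
```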
### 3. Event Creation (`server/routes/events.py`)

Added handling for the `webuntis` event type in `create_event()`:

```python
# WebUntis: fetch the URL from system settings and create an EventMedia entry
if event_type == "webuntis":
    # Read the WebUntis URL from system settings (uses supplement_table_url)
    webuntis_setting = session.query(SystemSetting).filter_by(key='supplement_table_url').first()
    webuntis_url = webuntis_setting.value if webuntis_setting else ''

    if not webuntis_url:
        return jsonify({"error": "WebUntis / Supplement table URL not configured in system settings"}), 400

    # Create EventMedia for WebUntis
    media = EventMedia(
        media_type=MediaType.website,
        url=webuntis_url,
        file_path=webuntis_url
    )
    session.add(media)
    session.commit()
    event_media_id = media.id
```

**Workflow**:
1. Check whether `supplement_table_url` is configured in system settings
2. Return an error if it is not configured
3. Create an `EventMedia` with `MediaType.website` using the supplement table URL
4. Associate the media with the event

### 4. Scheduler Payload (`scheduler/db_utils.py`)

Modified `format_event_with_media()` to handle both `website` and `webuntis` events:

```python
# Handle website and webuntis events (both display a website)
elif event.event_type.value in ("website", "webuntis"):
    event_dict["website"] = {
        "type": "browser",
        "url": media.url if media.url else None
    }
    if media.id not in _media_decision_logged:
        logging.debug(
            f"[Scheduler] Using website URL for event_media_id={media.id} (type={event.event_type.value}): {media.url}")
        _media_decision_logged.add(media.id)
```

**Key Points**:
- Both event types use the same `website` payload structure
- Clients interpret `event_type` but handle display identically
- The URL is already resolved from system settings during event creation

### 5. Documentation

Created comprehensive documentation in `MQTT_EVENT_PAYLOAD_GUIDE.md` covering:
- MQTT message structure
- Event type-specific payloads
- Best practices for client implementation
- Versioning strategy
- System settings integration

## MQTT Message Format

### WebUntis Event Payload

```json
{
  "id": 125,
  "event_type": "webuntis",
  "title": "Schedule Display",
  "start": "2025-10-19T09:00:00+00:00",
  "end": "2025-10-19T09:30:00+00:00",
  "group_id": 1,
  "website": {
    "type": "browser",
    "url": "https://webuntis.example.com/schedule"
  }
}
```

### Website Event Payload (for comparison)

```json
{
  "id": 124,
  "event_type": "website",
  "title": "School Website",
  "start": "2025-10-19T09:00:00+00:00",
  "end": "2025-10-19T09:30:00+00:00",
  "group_id": 1,
  "website": {
    "type": "browser",
    "url": "https://example.com/page"
  }
}
```

## Client Implementation Guide

Clients should handle both `website` and `webuntis` event types identically:

```javascript
function parseEvent(event) {
  switch (event.event_type) {
    case 'presentation':
      return handlePresentation(event.presentation);

    case 'website':
    case 'webuntis':
      // Both types use the same display logic
      return handleWebsite(event.website);

    case 'video':
      return handleVideo(event.video);

    default:
      console.warn(`Unknown event type: ${event.event_type}`);
  }
}

function handleWebsite(websiteData) {
  // websiteData = { type: "browser", url: "https://..." }
  if (!websiteData.url) {
    console.error('Website event missing URL');
    return;
  }

  // Display the URL in an embedded browser/webview
  displayInBrowser(websiteData.url);
}
```

## Best Practices

### 1. Type-Based Dispatch
Always check `event_type` first and dispatch to the appropriate handler. The nested payload structure (`presentation`, `website`, etc.) provides the type-specific details.

### 2. Graceful Error Handling
- Validate URLs before displaying
- Handle missing or empty URLs gracefully
- Provide user-friendly error messages

### 3. Unified Website Display
Both `website` and `webuntis` events trigger the same browser/webview component. The only difference is in event creation (per-event URL vs. system-wide URL).

### 4. Extensibility
The message structure supports adding new event types without breaking existing clients:
- Old clients ignore unknown `event_type` values
- New fields in existing payloads are optional
- Nested objects isolate type-specific changes

## Administrative Setup

### Setting the WebUntis / Supplement Table URL

The same URL is used for both Vertretungsplan (supplement table) and WebUntis displays.

1. **Via API** (recommended for UI integration):
   ```bash
   POST /api/system-settings/supplement-table
   {
     "url": "https://webuntis.example.com/schedule",
     "enabled": true
   }
   ```

2. **Via Database** (for initial setup):
   ```sql
   INSERT INTO system_settings (`key`, value, description)
   VALUES ('supplement_table_url', 'https://webuntis.example.com/schedule',
           'URL für Vertretungsplan / WebUntis (Stundenplan-Änderungstabelle)');
   ```

3. **Via Dashboard**:
   Settings → Events → WebUntis / Vertretungsplan

### Creating a WebUntis Event

Once the URL is configured, events can be created through:

1. **Dashboard UI**: Select "WebUntis" as the event type
2. **API**:
   ```json
   POST /api/events
   {
     "group_id": 1,
     "title": "Daily Schedule",
     "description": "Current class schedule",
     "start": "2025-10-19T08:00:00Z",
     "end": "2025-10-19T16:00:00Z",
     "event_type": "webuntis",
     "created_by": 1
   }
   ```

No `website_url` is required; it is automatically fetched from the `supplement_table_url` system setting.

## Migration Notes

### From Presentation-Only System

This implementation extends the existing event system without breaking presentation events:

- **Presentation events**: Still use the `presentation` payload with a `files` array
- **Website/WebUntis events**: Use the new `website` payload with a `url` field
- **Message structure**: Includes `event_type` for client-side dispatch

### Future Event Types

The pattern established here can be extended to other event types:

- **Video**: `event_dict["video"] = { "type": "media", "url": "...", "autoplay": true }`
- **Message**: `event_dict["message"] = { "type": "html", "content": "..." }`
- **Custom**: Any new type with its own nested payload
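One way to keep such extensions uniform is a registry of payload builders keyed by event type. This is a sketch of the pattern, not code from the repository:

```python
from typing import Callable, Dict

# Each builder turns event data into the nested payload for that type.
PAYLOAD_BUILDERS: Dict[str, Callable[[dict], dict]] = {
    "website": lambda e: {"type": "website", "url": e["url"]},
    "video": lambda e: {"type": "media", "url": e["url"], "autoplay": True},
    "message": lambda e: {"type": "html", "content": e["content"]},
}

def build_payload(event_type: str, event: dict) -> dict:
    """Look up the builder for event_type and produce the nested payload."""
    builder = PAYLOAD_BUILDERS.get(event_type)
    if builder is None:
        raise ValueError(f"unknown event_type: {event_type}")
    return builder(event)

video_payload = build_payload("video", {"url": "https://cdn.example.com/clip.mp4"})
print(video_payload)
```

Adding a new event type then only requires registering one more builder, instead of branching in the scheduler.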
## Testing Checklist

- [x] Database migration includes the `webuntis` enum value
- [x] System setting `supplement_table_url` description updated to include WebUntis
- [x] Event creation validates that `supplement_table_url` is configured
- [x] Event creation creates `EventMedia` with the supplement table URL
- [x] Scheduler includes the `website` payload for `webuntis` events
- [x] MQTT message structure documented
- [x] No duplicate `webuntis_url` setting (reuses `supplement_table_url`)
- [ ] Dashboard UI shows that the supplement table URL is used for WebUntis (documentation)
- [ ] Client implementation tested with WebUntis events (client-side)

## Related Files

### Modified
- `scheduler/db_utils.py` - Event formatting logic
- `server/routes/events.py` - Event creation handling
- `server/routes/system_settings.py` - WebUntis URL endpoints
- `server/init_defaults.py` - System setting defaults

### Created
- `MQTT_EVENT_PAYLOAD_GUIDE.md` - Comprehensive message format documentation
- `WEBUNTIS_EVENT_IMPLEMENTATION.md` - This file

### Existing (Not Modified)
- `models/models.py` - Already had the `webuntis` enum value
- `dashboard/src/components/CustomEventModal.tsx` - Already supports the `webuntis` type

## Further Enhancements

### Short-term
1. Add WebUntis URL configuration to the dashboard Settings page
2. Update the event creation UI to explain that the WebUntis URL comes from settings
3. Add validation/preview for the WebUntis URL in settings

### Long-term
1. Support multiple WebUntis instances (per-school in a multi-tenant setup)
2. Add WebUntis-specific metadata (class filter, room filter, etc.)
3. Implement iframe sandboxing options for security
4. Add refresh intervals for dynamic WebUntis content

## Conclusion

The `webuntis` event type is now fully integrated into the infoscreen system. It uses the existing `supplement_table_url` system setting, which serves two purposes:

1. **Vertretungsplan (supplement table)** displays in the existing settings UI
2. **WebUntis schedule** displays via the `webuntis` event type

This provides a clean separation between system-wide URL configuration and per-event scheduling, while maintaining backward compatibility and following established patterns for event payload structure.

The implementation demonstrates best practices:

- **Reuse existing infrastructure**: Uses `supplement_table_url` instead of creating a duplicate setting
- **Consistency**: Follows the same patterns as existing event types
- **Extensibility**: New event types can be added following this model
- **Documentation**: Comprehensive guides for both developers and clients
24 dashboard/.gitignore vendored Normal file
@@ -0,0 +1,24 @@
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
lerna-debug.log*

node_modules
dist
dist-ssr
*.local

# Editor directories and files
.vscode/*
!.vscode/extensions.json
.idea
.DS_Store
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?
2877 dashboard/package-lock.json generated
File diff suppressed because it is too large
@@ -10,33 +10,36 @@
     "preview": "vite preview"
   },
   "dependencies": {
-    "@syncfusion/ej2-base": "^30.2.0",
-    "@syncfusion/ej2-buttons": "^30.2.0",
-    "@syncfusion/ej2-calendars": "^30.2.0",
-    "@syncfusion/ej2-dropdowns": "^30.2.0",
-    "@syncfusion/ej2-grids": "^30.2.0",
-    "@syncfusion/ej2-icons": "^30.2.0",
-    "@syncfusion/ej2-inputs": "^30.2.0",
-    "@syncfusion/ej2-kanban": "^30.2.0",
-    "@syncfusion/ej2-layouts": "^30.2.0",
-    "@syncfusion/ej2-lists": "^30.2.0",
-    "@syncfusion/ej2-navigations": "^30.2.0",
-    "@syncfusion/ej2-notifications": "^30.2.0",
-    "@syncfusion/ej2-popups": "^30.2.0",
-    "@syncfusion/ej2-react-base": "^30.2.0",
-    "@syncfusion/ej2-react-buttons": "^30.2.0",
-    "@syncfusion/ej2-react-calendars": "^30.2.0",
-    "@syncfusion/ej2-react-dropdowns": "^30.2.0",
-    "@syncfusion/ej2-react-filemanager": "^30.2.0",
-    "@syncfusion/ej2-react-grids": "^30.2.0",
-    "@syncfusion/ej2-react-inputs": "^30.2.0",
-    "@syncfusion/ej2-react-kanban": "^30.2.0",
-    "@syncfusion/ej2-react-layouts": "^30.2.0",
-    "@syncfusion/ej2-react-navigations": "^30.2.0",
-    "@syncfusion/ej2-react-notifications": "^30.2.0",
-    "@syncfusion/ej2-react-popups": "^30.2.0",
-    "@syncfusion/ej2-react-schedule": "^30.2.0",
-    "@syncfusion/ej2-splitbuttons": "^30.2.0",
+    "@syncfusion/ej2-base": "^30.2.0",
+    "@syncfusion/ej2-buttons": "^30.2.0",
+    "@syncfusion/ej2-calendars": "^30.2.0",
+    "@syncfusion/ej2-dropdowns": "^30.2.0",
+    "@syncfusion/ej2-gantt": "^32.1.23",
+    "@syncfusion/ej2-grids": "^30.2.0",
+    "@syncfusion/ej2-icons": "^30.2.0",
+    "@syncfusion/ej2-inputs": "^30.2.0",
+    "@syncfusion/ej2-kanban": "^30.2.0",
+    "@syncfusion/ej2-layouts": "^30.2.0",
+    "@syncfusion/ej2-lists": "^30.2.0",
+    "@syncfusion/ej2-navigations": "^30.2.0",
+    "@syncfusion/ej2-notifications": "^30.2.0",
+    "@syncfusion/ej2-popups": "^30.2.0",
+    "@syncfusion/ej2-react-base": "^30.2.0",
+    "@syncfusion/ej2-react-buttons": "^30.2.0",
+    "@syncfusion/ej2-react-calendars": "^30.2.0",
+    "@syncfusion/ej2-react-dropdowns": "^30.2.0",
+    "@syncfusion/ej2-react-filemanager": "^30.2.0",
+    "@syncfusion/ej2-react-gantt": "^32.1.23",
+    "@syncfusion/ej2-react-grids": "^30.2.0",
+    "@syncfusion/ej2-react-inputs": "^30.2.0",
+    "@syncfusion/ej2-react-kanban": "^30.2.0",
+    "@syncfusion/ej2-react-layouts": "^30.2.0",
+    "@syncfusion/ej2-react-navigations": "^30.2.0",
+    "@syncfusion/ej2-react-notifications": "^30.2.0",
+    "@syncfusion/ej2-react-popups": "^30.2.0",
+    "@syncfusion/ej2-react-schedule": "^30.2.0",
+    "@syncfusion/ej2-react-splitbuttons": "^30.2.0",
+    "@syncfusion/ej2-splitbuttons": "^30.2.0",
     "cldr-data": "^36.0.4",
     "lucide-react": "^0.522.0",
     "react": "^19.1.0",
@@ -1,11 +1,11 @@
 {
   "appName": "Infoscreen-Management",
-  "version": "2025.1.0-alpha.8",
-  "copyright": "© 2025 Third-Age-Applications",
+  "version": "2026.1.0-alpha.14",
+  "copyright": "© 2026 Third-Age-Applications",
   "supportContact": "support@third-age-applications.com",
   "description": "Eine zentrale Verwaltungsoberfläche für digitale Informationsbildschirme.",
   "techStack": {
-    "Frontend": "React, Vite, TypeScript",
+    "Frontend": "React, Vite, TypeScript, Syncfusion UI Components (Material 3)",
     "Backend": "Python (Flask), SQLAlchemy",
     "Database": "MariaDB",
     "Realtime": "Mosquitto (MQTT)",
@@ -26,81 +26,146 @@
     ]
   },
   "buildInfo": {
-    "buildDate": "2025-09-20T11:00:00Z",
-    "commitId": "8d1df7199cb7"
+    "buildDate": "2025-12-29T12:00:00Z",
+    "commitId": "9f2ae8b44c3a"
   },
   "changelog": [
     {
-      "version": "2025.1.0-alpha.8",
-      "date": "2025-10-11",
+      "version": "2026.1.0-alpha.14",
+      "date": "2026-01-28",
       "changes": [
-        "🎨 Theme: Umstellung auf Syncfusion Material 3; zentrale CSS-Imports in main.tsx",
-        "🧹 Cleanup: Tailwind CSS komplett entfernt (Pakete, PostCSS, Stylelint, Konfigurationsdateien)",
-        "🧩 Gruppenverwaltung: \"infoscreen_groups\" auf Syncfusion-Komponenten (Buttons, Dialoge, DropDownList, TextBox) umgestellt; Abstände verbessert",
-        "🔔 Benachrichtigungen: Vereinheitlichte Toast-/Dialog-Texte; letzte Alert-Verwendung ersetzt",
-        "📖 Doku: README und Copilot-Anweisungen angepasst (Material 3, zentrale Styles, kein Tailwind)"
+        "✨ UI: Neue 'Ressourcen'-Seite mit Timeline-Ansicht zeigt aktive Events für alle Raumgruppen parallel.",
+        "📊 Ressourcen: Kompakte Zeitachsen-Darstellung.",
+        "🎯 Ressourcen: Zeigt aktuell laufende Events mit Typ, Titel und Zeitfenster in Echtzeit.",
+        "🔄 Ressourcen: Gruppensortierung anpassbar mit visueller Reihenfolgen-Verwaltung.",
+        "🎨 Ressourcen: Farbcodierte Event-Balken entsprechend dem Gruppen-Theme."
       ]
     },
+    {
+      "version": "2025.1.0-alpha.13",
+      "date": "2025-12-29",
+      "changes": [
+        "👥 UI: Neue 'Benutzer'-Seite mit vollständiger Benutzerverwaltung (CRUD) für Admins und Superadmins.",
+        "🔐 Benutzer-Seite: Sortierbare Gitter-Tabelle mit Benutzer-ID, Benutzername und Rolle; 20 Einträge pro Seite.",
+        "📊 Benutzer-Seite: Statistik-Karten zeigen Gesamtanzahl, aktive und inaktive Benutzer.",
+        "➕ Benutzer-Seite: Dialog zum Erstellen neuer Benutzer (Benutzername, Passwort, Rolle, Status).",
+        "✏️ Benutzer-Seite: Dialog zum Bearbeiten von Benutzer-Details mit Schutz vor Selbst-Änderungen.",
+        "🔑 Benutzer-Seite: Dialog zum Zurücksetzen von Passwörtern durch Admins (ohne alte Passwort-Anfrage).",
+        "❌ Benutzer-Seite: Dialog zum Löschen von Benutzern (nur für Superadmins; verhindert Selbst-Löschung).",
+        "📋 Benutzer-Seite: Details-Modal zeigt Audit-Informationen (letzte Anmeldung, Passwort-Änderung, Abmeldungen).",
+        "🎨 Benutzer-Seite: Rollen-Abzeichen mit Farb-Kodierung (Benutzer: grau, Editor: blau, Admin: grün, Superadmin: rot).",
+        "🔒 Header-Menü: Neue 'Passwort ändern'-Option im Benutzer-Dropdown für Selbstbedienung (alle Benutzer).",
+        "🔐 Passwort-Dialog: Authentifizierung mit aktuellem Passwort erforderlich (min. 6 Zeichen für neues Passwort).",
+        "🎯 Rollenbasiert: Menu-Einträge werden basierend auf Benutzer-Rolle gefiltert (z.B. 'Benutzer' nur für Admin+)."
+      ]
+    },
+    {
+      "version": "2025.1.0-alpha.12",
+      "date": "2025-11-27",
+      "changes": [
+        "✨ Dashboard: Komplett überarbeitetes Dashboard mit Karten-Design für alle Raumgruppen.",
+        "📊 Dashboard: Globale Statistik-Übersicht zeigt Gesamt-Infoscreens, Online/Offline-Anzahl und Warnungen.",
+        "🔍 Dashboard: Filter-Buttons (Alle, Online, Offline, Warnungen) mit dynamischen Zählern.",
+        "🎯 Dashboard: Anzeige des aktuell laufenden Events pro Gruppe (Titel, Typ, Datum, Uhrzeit in lokaler Zeitzone).",
+        "📈 Dashboard: Farbcodierte Health-Bars zeigen Online/Offline-Verhältnis je Gruppe.",
+        "👥 Dashboard: Ausklappbare Client-Details mit 'Zeit seit letztem Lebenszeichen' (z.B. 'vor 5 Min.').",
+        "🔄 Dashboard: Sammel-Neustart-Funktion für alle offline Clients einer Gruppe.",
+        "⏱️ Dashboard: Auto-Aktualisierung alle 15 Sekunden; manueller Aktualisierungs-Button verfügbar."
      ]
+    },
+    {
+      "version": "2025.1.0-alpha.11",
+      "date": "2025-11-05",
+      "changes": [
+        "🎬 Client: Clients können jetzt Video-Events aus dem Terminplaner abspielen (Streaming mit Seek via Byte-Range).",
+        "🧭 Einstellungen: Neues verschachteltes Tab-Layout mit kontrollierter Tab-Auswahl (keine Sprünge in Unter-Tabs).",
+        "📅 Einstellungen › Akademischer Kalender: ‘Schulferien Import’ und ‘Liste’ zusammengeführt in ‘📥 Import & Liste’.",
+        "🗓️ Events-Modal: Video-Optionen erweitert (Autoplay, Loop, Lautstärke, Ton aus). Werte werden bei neuen Terminen aus System-Defaults initialisiert.",
+        "⚙️ Einstellungen › Events › Videos: Globale Defaults für Autoplay, Loop, Lautstärke und Mute (Keys: video_autoplay, video_loop, video_volume, video_muted)."
+      ]
+    },
+    {
+      "version": "2025.1.0-alpha.10",
+      "date": "2025-10-25",
+      "changes": [
+        "🎬 Client: Client kann jetzt Videos wiedergeben (Playback/UI surface) — Benutzerseitige Präsentation wurde ergänzt.",
+        "🧩 UI: Event-Modal ergänzt um Video-Auswahl und Wiedergabe-Optionen (Autoplay, Loop, Lautstärke).",
+        "📁 Medien-UI: FileManager erlaubt größere Uploads für Full-HD-Videos; Client-seitige Validierung begrenzt Videolänge auf 10 Minuten."
+      ]
+    },
+    {
+      "version": "2025.1.0-alpha.9",
+      "date": "2025-10-19",
+      "changes": [
+        "🆕 Events: Darstellung für ‘WebUntis’ harmonisiert mit ‘Website’ (UI/representation).",
+        "🛠️ Einstellungen › Events: WebUntis verwendet jetzt die bestehende Supplement-Table-Einstellung (Settings UI updated)."
+      ]
+    },
+    {
+      "version": "2025.1.0-alpha.8",
+      "date": "2025-10-18",
+      "changes": [
+        "✨ Einstellungen › Events › Präsentationen: Neue UI-Felder für Slide-Show Intervall, Page-Progress und Auto-Progress.",
+        "UI: Event-Modal lädt Präsentations-Einstellungen aus Global-Defaults bzw. Event-Daten (behaviour surfaced in UI)."
+      ]
+    },
     {
       "version": "2025.1.0-alpha.7",
-      "date": "2025-09-21",
+      "date": "2025-10-16",
       "changes": [
-        "🧭 UI: Periode-Auswahl (Syncfusion) neben Gruppenauswahl; kompaktes Layout",
-        "✅ Anzeige: Abzeichen für vorhandenen Ferienplan + Zähler ‘Ferien im Blick’",
-        "🛠️ API: Endpunkte für akademische Perioden (list, active GET/POST, for_date)",
-        "📅 Scheduler: Standardmäßig keine Terminierung in Ferien; Block-Darstellung wie Ganztagesereignis; schwarze Textfarbe",
-        "📤 Ferien: Upload von TXT/CSV (headless TXT nutzt Spalten 2–4)",
-        "🔧 UX: Schalter in einer Reihe; Dropdown-Breiten optimiert"
+        "✨ Einstellungen-Seite: Neues Tab-Layout (Syncfusion) mit rollenbasierter Sichtbarkeit.",
+        "🗓️ Einstellungen › Events: WebUntis/Vertretungsplan in Events-Tab (enable/preview in UI).",
+        "📅 UI: Akademische Periode kann in der Einstellungen-Seite direkt gesetzt werden."
      ]
     },
     {
       "version": "2025.1.0-alpha.6",
-      "date": "2025-09-20",
+      "date": "2025-10-15",
       "changes": [
-        "🗓️ NEU: Akademische Perioden System - Unterstützung für Schuljahre, Semester und Trimester",
-        "🏗️ DATENBANK: Neue 'academic_periods' Tabelle für zeitbasierte Organisation",
-        "🔗 ERWEITERT: Events und Medien können jetzt optional einer akademischen Periode zugeordnet werden",
-        "📊 ARCHITEKTUR: Vollständig rückwärtskompatible Implementierung für schrittweise Einführung",
-        "🎯 BILDUNG: Fokus auf Schulumgebung mit Erweiterbarkeit für Hochschulen",
-        "⚙️ TOOLS: Automatische Erstellung von Standard-Schuljahren für österreichische Schulen"
+        "✨ UI: Benutzer-Menü (top-right) mit Name/Rolle und Einträgen 'Profil' und 'Abmelden'."
       ]
     },
     {
       "version": "2025.1.0-alpha.5",
-      "date": "2025-09-14",
+      "date": "2025-10-14",
       "changes": [
-        "Komplettes Redesign des Backend-Handlings der Gruppenzuordnungen von neuen Clients und der Schritte bei Änderung der Gruppenzuordnung."
+        "✨ UI: Einheitlicher Lösch-Workflow für Termine mit benutzerfreundlichen Dialogen (Einzeltermin, Einzelinstanz, Serie).",
+        "🔧 Frontend: RecurrenceAlert/DeleteAlert werden abgefangen und durch eigene Dialoge ersetzt (Verbesserung der UX).",
+        "✅ Bugfix (UX): Keine doppelten oder verwirrenden Bestätigungsdialoge mehr beim Löschen von Serienterminen."
       ]
     },
     {
       "version": "2025.1.0-alpha.4",
-      "date": "2025-09-01",
+      "date": "2025-10-11",
       "changes": [
-        "Grundstruktur für Deployment getestet und optimiert.",
-        "FIX: Programmfehler beim Umschalten der Ansicht auf der Medien-Seite behoben."
+        "🎨 Theme: Umstellung auf Syncfusion Material 3; zentrale CSS-Imports (UI theme update).",
+        "🧩 UI: Gruppenverwaltung ('infoscreen_groups') auf Syncfusion-Komponenten umgestellt.",
+        "🔔 UI: Vereinheitlichte Notifications / Toast-Texte für konsistente UX."
      ]
     },
     {
       "version": "2025.1.0-alpha.3",
-      "date": "2025-08-30",
+      "date": "2025-09-21",
       "changes": [
-        "NEU: Programminfo-Seite mit dynamischen Daten, Build-Infos und Changelog.",
-        "NEU: Logout-Funktionalität implementiert.",
-        "FIX: Breite der Sidebar im eingeklappten Zustand korrigiert."
+        "🧭 UI: Periode-Auswahl (Syncfusion) neben Gruppenauswahl; kompakte Layout-Verbesserung.",
+        "✅ Anzeige: Abzeichen für vorhandenen Ferienplan + 'Ferien im Blick' Zähler (UI indicator).",
+        "📤 UI: Ferien-Upload (TXT/CSV) Benutzer-Workflow ergänzt."
      ]
     },
     {
       "version": "2025.1.0-alpha.2",
-      "date": "2025-08-29",
+      "date": "2025-09-01",
       "changes": [
-        "INFO: Analyse und Anzeige der verwendeten Open-Source-Bibliotheken."
+        "UI Fix: Fehler beim Umschalten der Ansicht auf der Medien-Seite behoben."
      ]
     },
     {
       "version": "2025.1.0-alpha.1",
-      "date": "2025-08-28",
+      "date": "2025-08-30",
       "changes": [
-        "Initiales Setup des Projekts und der Grundstruktur."
+        "🆕 UI: Programminfo-Seite mit dynamischen Daten, Build-Infos und Changelog.",
+        "✨ UI: Logout-Funktionalität (Frontend) implementiert.",
+        "🐛 UI Fix: Breite der Sidebar im eingeklappten Zustand korrigiert."
      ]
     }
   ]
@@ -1,8 +1,11 @@
 import React, { useState } from 'react';
-import { BrowserRouter as Router, Routes, Route, Link, Outlet } from 'react-router-dom';
+import { BrowserRouter as Router, Routes, Route, Link, Outlet, useNavigate, Navigate } from 'react-router-dom';
 import { SidebarComponent } from '@syncfusion/ej2-react-navigations';
 import { ButtonComponent } from '@syncfusion/ej2-react-buttons';
-import { TooltipComponent } from '@syncfusion/ej2-react-popups';
+import { DropDownButtonComponent } from '@syncfusion/ej2-react-splitbuttons';
+import type { MenuEventArgs } from '@syncfusion/ej2-splitbuttons';
+import { TooltipComponent, DialogComponent } from '@syncfusion/ej2-react-popups';
+import { TextBoxComponent } from '@syncfusion/ej2-react-inputs';
 import logo from './assets/logo.png';
 import './App.css';
@@ -16,6 +19,7 @@ import {
   Settings,
   Monitor,
   MonitorDotIcon,
+  Activity,
   LogOut,
   Wrench,
   Info,
@@ -23,16 +27,17 @@ import {
 import { ToastProvider } from './components/ToastProvider';

 const sidebarItems = [
-  { name: 'Dashboard', path: '/', icon: LayoutDashboard },
-  { name: 'Termine', path: '/termine', icon: Calendar },
-  { name: 'Ressourcen', path: '/ressourcen', icon: Boxes },
-  { name: 'Raumgruppen', path: '/infoscr_groups', icon: MonitorDotIcon },
-  { name: 'Infoscreen-Clients', path: '/clients', icon: Monitor },
-  { name: 'Erweiterungsmodus', path: '/setup', icon: Wrench },
-  { name: 'Medien', path: '/medien', icon: Image },
-  { name: 'Benutzer', path: '/benutzer', icon: User },
-  { name: 'Einstellungen', path: '/einstellungen', icon: Settings },
-  { name: 'Programminfo', path: '/programminfo', icon: Info },
+  { name: 'Dashboard', path: '/', icon: LayoutDashboard, minRole: 'user' },
+  { name: 'Termine', path: '/termine', icon: Calendar, minRole: 'user' },
+  { name: 'Ressourcen', path: '/ressourcen', icon: Boxes, minRole: 'editor' },
+  { name: 'Raumgruppen', path: '/infoscr_groups', icon: MonitorDotIcon, minRole: 'admin' },
+  { name: 'Infoscreen-Clients', path: '/clients', icon: Monitor, minRole: 'admin' },
+  { name: 'Monitor-Dashboard', path: '/monitoring', icon: Activity, minRole: 'superadmin' },
+  { name: 'Erweiterungsmodus', path: '/setup', icon: Wrench, minRole: 'admin' },
+  { name: 'Medien', path: '/medien', icon: Image, minRole: 'editor' },
+  { name: 'Benutzer', path: '/benutzer', icon: User, minRole: 'admin' },
+  { name: 'Einstellungen', path: '/einstellungen', icon: Settings, minRole: 'admin' },
+  { name: 'Programminfo', path: '/programminfo', icon: Info, minRole: 'user' },
 ];

 // Dummy Components (können in eigene Dateien ausgelagert werden)
@@ -42,11 +47,16 @@ import Ressourcen from './ressourcen';
 import Infoscreens from './clients';
 import Infoscreen_groups from './infoscreen_groups';
 import Media from './media';
-import Benutzer from './benutzer';
-import Einstellungen from './einstellungen';
+import Benutzer from './users';
+import Einstellungen from './settings';
 import SetupMode from './SetupMode';
 import Programminfo from './programminfo';
+import MonitoringDashboard from './monitoring';
 import Logout from './logout';
+import Login from './login';
+import { useAuth } from './useAuth';
+import { changePassword } from './apiAuth';
+import { useToast } from './components/ToastProvider';

 // ENV aus .env holen (Platzhalter, im echten Projekt über process.env oder API)
 // const ENV = import.meta.env.VITE_ENV || 'development';
@@ -54,7 +64,18 @@ import Logout from './logout';
 const Layout: React.FC = () => {
   const [version, setVersion] = useState('');
   const [isCollapsed, setIsCollapsed] = useState(false);
+  const [organizationName, setOrganizationName] = useState('');
   let sidebarRef: SidebarComponent | null;
+  const { user } = useAuth();
+  const toast = useToast();
+  const navigate = useNavigate();
+
+  // Change password dialog state
+  const [showPwdDialog, setShowPwdDialog] = useState(false);
+  const [pwdCurrent, setPwdCurrent] = useState('');
+  const [pwdNew, setPwdNew] = useState('');
+  const [pwdConfirm, setPwdConfirm] = useState('');
+  const [pwdBusy, setPwdBusy] = useState(false);

   React.useEffect(() => {
     fetch('/program-info.json')
@@ -63,6 +84,25 @@ const Layout: React.FC = () => {
       .catch(err => console.error('Failed to load version info:', err));
   }, []);

+  // Load organization name
+  React.useEffect(() => {
+    const loadOrgName = async () => {
+      try {
+        const { getOrganizationName } = await import('./apiSystemSettings');
+        const data = await getOrganizationName();
+        setOrganizationName(data.name || '');
+      } catch (err) {
+        console.error('Failed to load organization name:', err);
+      }
+    };
+    loadOrgName();
+
+    // Listen for organization name updates from Settings page
+    const handleUpdate = () => loadOrgName();
+    window.addEventListener('organizationNameUpdated', handleUpdate);
+    return () => window.removeEventListener('organizationNameUpdated', handleUpdate);
+  }, []);
+
   const toggleSidebar = () => {
     if (sidebarRef) {
       sidebarRef.toggle();
@@ -81,6 +121,33 @@ const Layout: React.FC = () => {
     }
   };

+  const submitPasswordChange = async () => {
+    if (!pwdCurrent || !pwdNew || !pwdConfirm) {
+      toast.show({ content: 'Bitte alle Felder ausfüllen', cssClass: 'e-toast-warning' });
+      return;
+    }
+    if (pwdNew.length < 6) {
+      toast.show({ content: 'Neues Passwort muss mindestens 6 Zeichen haben', cssClass: 'e-toast-warning' });
+      return;
+    }
+    if (pwdNew !== pwdConfirm) {
+      toast.show({ content: 'Passwörter stimmen nicht überein', cssClass: 'e-toast-warning' });
+      return;
+    }
+
+    setPwdBusy(true);
+    try {
+      await changePassword(pwdCurrent, pwdNew);
+      toast.show({ content: 'Passwort erfolgreich geändert', cssClass: 'e-toast-success' });
+      setShowPwdDialog(false);
+    } catch (e) {
+      const msg = e instanceof Error ? e.message : 'Fehler beim Ändern des Passworts';
+      toast.show({ content: msg, cssClass: 'e-toast-danger' });
+    } finally {
+      setPwdBusy(false);
+    }
+  };
+
   const sidebarTemplate = () => (
     <div
       className={`sidebar-theme ${isCollapsed ? 'collapsed' : 'expanded'}`}
@@ -126,7 +193,16 @@ const Layout: React.FC = () => {
           minHeight: 0, // Wichtig für Flex-Shrinking
         }}
       >
-        {sidebarItems.map(item => {
+        {sidebarItems
+          .filter(item => {
+            // Only show items the current user is allowed to see
+            if (!user) return false;
+            const roleHierarchy = ['user', 'editor', 'admin', 'superadmin'];
+            const userRoleIndex = roleHierarchy.indexOf(user.role);
+            const itemRoleIndex = roleHierarchy.indexOf(item.minRole || 'user');
+            return userRoleIndex >= itemRoleIndex;
+          })
+          .map(item => {
           const Icon = item.icon;
           const linkContent = (
             <Link
@@ -292,10 +368,103 @@ const Layout: React.FC = () => {
           <span className="text-2xl font-bold mr-8" style={{ color: '#78591c' }}>
             Infoscreen-Management
           </span>
-          <span className="ml-auto text-lg font-medium" style={{ color: '#78591c' }}>
-            [Organisationsname]
-          </span>
+          <div style={{ marginLeft: 'auto', display: 'inline-flex', alignItems: 'center', gap: 16 }}>
+            {organizationName && (
+              <span className="text-lg font-medium" style={{ color: '#78591c' }}>
+                {organizationName}
+              </span>
+            )}
+            {user && (
+              <DropDownButtonComponent
+                items={[
+                  { text: 'Passwort ändern', id: 'change-password', iconCss: 'e-icons e-lock' },
+                  { separator: true },
+                  { text: 'Abmelden', id: 'logout', iconCss: 'e-icons e-logout' },
+                ]}
+                select={(args: MenuEventArgs) => {
+                  if (args.item.id === 'change-password') {
+                    setPwdCurrent('');
+                    setPwdNew('');
+                    setPwdConfirm('');
+                    setShowPwdDialog(true);
+                  } else if (args.item.id === 'logout') {
+                    navigate('/logout');
+                  }
+                }}
+                cssClass="e-inherit"
+              >
+                <div style={{ display: 'inline-flex', alignItems: 'center', gap: 8 }}>
+                  <User size={18} />
+                  <span style={{ fontWeight: 600 }}>{user.username}</span>
+                  <span
+                    style={{
+                      fontSize: '0.8rem',
+                      textTransform: 'uppercase',
+                      opacity: 0.85,
+                      border: '1px solid rgba(120, 89, 28, 0.25)',
+                      borderRadius: 6,
+                      padding: '2px 6px',
+                      backgroundColor: 'rgba(255, 255, 255, 0.6)',
+                    }}
+                  >
+                    {user.role}
+                  </span>
+                </div>
+              </DropDownButtonComponent>
+            )}
+          </div>
         </header>
+        <DialogComponent
+          isModal={true}
+          visible={showPwdDialog}
+          width="480px"
+          header="Passwort ändern"
+          showCloseIcon={true}
+          close={() => setShowPwdDialog(false)}
+          footerTemplate={() => (
+            <div style={{ display: 'flex', justifyContent: 'flex-end', gap: 8 }}>
+              <ButtonComponent cssClass="e-flat" onClick={() => setShowPwdDialog(false)} disabled={pwdBusy}>
+                Abbrechen
+              </ButtonComponent>
+              <ButtonComponent cssClass="e-primary" onClick={submitPasswordChange} disabled={pwdBusy}>
+                {pwdBusy ? 'Speichere...' : 'Speichern'}
+              </ButtonComponent>
+            </div>
+          )}
+        >
+          <div style={{ padding: 16, display: 'flex', flexDirection: 'column', gap: 16 }}>
+            <div>
+              <label style={{ display: 'block', marginBottom: 6, fontWeight: 500 }}>Aktuelles Passwort *</label>
+              <TextBoxComponent
+                type="password"
+                placeholder="Aktuelles Passwort"
+                value={pwdCurrent}
+                input={(e: { value?: string }) => setPwdCurrent(e.value ?? '')}
+                disabled={pwdBusy}
+              />
+            </div>
+            <div>
+              <label style={{ display: 'block', marginBottom: 6, fontWeight: 500 }}>Neues Passwort *</label>
+              <TextBoxComponent
+                type="password"
+                placeholder="Mindestens 6 Zeichen"
+                value={pwdNew}
+                input={(e: { value?: string }) => setPwdNew(e.value ?? '')}
+                disabled={pwdBusy}
+              />
+            </div>
+            <div>
+              <label style={{ display: 'block', marginBottom: 6, fontWeight: 500 }}>Neues Passwort bestätigen *</label>
+              <TextBoxComponent
+                type="password"
+                placeholder="Wiederholen"
+                value={pwdConfirm}
+                input={(e: { value?: string }) => setPwdConfirm(e.value ?? '')}
+                disabled={pwdBusy}
+              />
+            </div>
+          </div>
+        </DialogComponent>
         <main className="page-content">
           <Outlet />
         </main>
@@ -307,10 +476,32 @@ const Layout: React.FC = () => {
 const App: React.FC = () => {
   // Automatische Navigation zu /clients bei leerer Beschreibung entfernt

+  const RequireAuth: React.FC<{ children: React.ReactNode }> = ({ children }) => {
+    const { isAuthenticated, loading } = useAuth();
+    if (loading) return <div style={{ padding: 24 }}>Lade ...</div>;
+    if (!isAuthenticated) return <Login />;
+    return <>{children}</>;
+  };
+
+  const RequireSuperadmin: React.FC<{ children: React.ReactNode }> = ({ children }) => {
+    const { isAuthenticated, loading, user } = useAuth();
+    if (loading) return <div style={{ padding: 24 }}>Lade ...</div>;
+    if (!isAuthenticated) return <Login />;
+    if (user?.role !== 'superadmin') return <Navigate to="/" replace />;
+    return <>{children}</>;
+  };
+
   return (
     <ToastProvider>
       <Routes>
-        <Route path="/" element={<Layout />}>
+        <Route
+          path="/"
+          element={
+            <RequireAuth>
+              <Layout />
+            </RequireAuth>
+          }
+        >
           <Route index element={<Dashboard />} />
           <Route path="termine" element={<Appointments />} />
           <Route path="ressourcen" element={<Ressourcen />} />
@@ -319,10 +510,19 @@ const App: React.FC = () => {
           <Route path="benutzer" element={<Benutzer />} />
           <Route path="einstellungen" element={<Einstellungen />} />
           <Route path="clients" element={<Infoscreens />} />
+          <Route
+            path="monitoring"
+            element={
+              <RequireSuperadmin>
+                <MonitoringDashboard />
+              </RequireSuperadmin>
+            }
+          />
           <Route path="setup" element={<SetupMode />} />
           <Route path="programminfo" element={<Programminfo />} />
         </Route>
         <Route path="/logout" element={<Logout />} />
+        <Route path="/login" element={<Login />} />
       </Routes>
     </ToastProvider>
   );
182
dashboard/src/apiAuth.ts
Normal file
182
dashboard/src/apiAuth.ts
Normal file
@@ -0,0 +1,182 @@
/**
 * Authentication API client for the dashboard.
 *
 * Provides functions to interact with auth endpoints including login,
 * logout, and fetching current user information.
 */

export interface User {
  id: number;
  username: string;
  role: 'user' | 'editor' | 'admin' | 'superadmin';
  is_active: boolean;
}

export interface LoginRequest {
  username: string;
  password: string;
}

export interface LoginResponse {
  message: string;
  user: {
    id: number;
    username: string;
    role: string;
  };
}

export interface AuthCheckResponse {
  authenticated: boolean;
  role?: string;
}

/**
 * Change password for the currently authenticated user.
 */
export async function changePassword(currentPassword: string, newPassword: string): Promise<{ message: string }> {
  const res = await fetch('/api/auth/change-password', {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    credentials: 'include',
    body: JSON.stringify({ current_password: currentPassword, new_password: newPassword }),
  });

  const data = await res.json();

  if (!res.ok) {
    throw new Error(data.error || 'Failed to change password');
  }

  return data as { message: string };
}

/**
 * Authenticate a user with username and password.
 *
 * @param username - The user's username
 * @param password - The user's password
 * @returns Promise<LoginResponse>
 * @throws Error if login fails
 */
export async function login(username: string, password: string): Promise<LoginResponse> {
  const res = await fetch('/api/auth/login', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    credentials: 'include', // Important for session cookies
    body: JSON.stringify({ username, password }),
  });

  const data = await res.json();

  if (!res.ok || data.error) {
    throw new Error(data.error || 'Login failed');
  }

  return data;
}

/**
 * Log out the current user.
 *
 * @returns Promise<void>
 * @throws Error if logout fails
 */
export async function logout(): Promise<void> {
  const res = await fetch('/api/auth/logout', {
    method: 'POST',
    credentials: 'include',
  });

  const data = await res.json();

  if (!res.ok || data.error) {
    throw new Error(data.error || 'Logout failed');
  }
}

/**
 * Fetch the current authenticated user's information.
 *
 * @returns Promise<User>
 * @throws Error if not authenticated or request fails
 */
export async function fetchCurrentUser(): Promise<User> {
  const res = await fetch('/api/auth/me', {
    method: 'GET',
    credentials: 'include',
  });

  const data = await res.json();

  if (!res.ok || data.error) {
    throw new Error(data.error || 'Failed to fetch current user');
  }

  return data as User;
}

/**
 * Quick check if user is authenticated (lighter than fetchCurrentUser).
 *
 * @returns Promise<AuthCheckResponse>
 */
export async function checkAuth(): Promise<AuthCheckResponse> {
  const res = await fetch('/api/auth/check', {
    method: 'GET',
    credentials: 'include',
  });

  const data = await res.json();

  if (!res.ok) {
    throw new Error('Failed to check authentication status');
  }

  return data;
}

/**
 * Helper function to check if a user has a specific role.
 *
 * @param user - The user object
 * @param role - The role to check for
 * @returns boolean
 */
export function hasRole(user: User | null, role: string): boolean {
  if (!user) return false;
  return user.role === role;
}

/**
 * Helper function to check if a user has any of the specified roles.
 *
 * @param user - The user object
 * @param roles - Array of roles to check for
 * @returns boolean
 */
export function hasAnyRole(user: User | null, roles: string[]): boolean {
  if (!user) return false;
  return roles.includes(user.role);
}

/**
 * Helper function to check if user is superadmin.
 */
export function isSuperadmin(user: User | null): boolean {
  return hasRole(user, 'superadmin');
}

/**
 * Helper function to check if user is admin or higher.
 */
export function isAdminOrHigher(user: User | null): boolean {
  return hasAnyRole(user, ['admin', 'superadmin']);
}

/**
 * Helper function to check if user is editor or higher.
 */
export function isEditorOrHigher(user: User | null): boolean {
  return hasAnyRole(user, ['editor', 'admin', 'superadmin']);
}
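The role helpers above build a small hierarchy check out of plain array membership. A minimal, self-contained sketch of how a caller might use them (the helpers are re-declared here so the snippet runs on its own; the `User` shape mirrors the interface in this file):

```typescript
// Re-declaration of the role helpers for illustration only.
interface User {
  id: number;
  username: string;
  role: 'user' | 'editor' | 'admin' | 'superadmin';
  is_active: boolean;
}

function hasAnyRole(user: User | null, roles: string[]): boolean {
  if (!user) return false;
  return roles.includes(user.role);
}

function isEditorOrHigher(user: User | null): boolean {
  // "or higher" is expressed by listing every role at or above 'editor'.
  return hasAnyRole(user, ['editor', 'admin', 'superadmin']);
}

const editor: User = { id: 1, username: 'jane', role: 'editor', is_active: true };
const viewer: User = { id: 2, username: 'bob', role: 'user', is_active: true };

console.log(isEditorOrHigher(editor)); // true
console.log(isEditorOrHigher(viewer)); // false
console.log(isEditorOrHigher(null));   // false
```

Listing role sets explicitly (rather than comparing numeric rank values) keeps each guard readable, at the cost of repeating the hierarchy in every `...OrHigher` helper.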
111 dashboard/src/apiClientMonitoring.ts (new file)
@@ -0,0 +1,111 @@
export interface MonitoringLogEntry {
  id: number;
  timestamp: string | null;
  level: 'ERROR' | 'WARN' | 'INFO' | 'DEBUG' | null;
  message: string;
  context: Record<string, unknown>;
  client_uuid?: string;
}

export interface MonitoringClient {
  uuid: string;
  hostname?: string | null;
  description?: string | null;
  ip?: string | null;
  model?: string | null;
  groupId?: number | null;
  groupName?: string | null;
  registrationTime?: string | null;
  lastAlive?: string | null;
  isAlive: boolean;
  status: 'healthy' | 'warning' | 'critical' | 'offline';
  currentEventId?: number | null;
  currentProcess?: string | null;
  processStatus?: string | null;
  processPid?: number | null;
  screenHealthStatus?: string | null;
  lastScreenshotAnalyzed?: string | null;
  lastScreenshotHash?: string | null;
  latestScreenshotType?: 'periodic' | 'event_start' | 'event_stop' | null;
  priorityScreenshotType?: 'event_start' | 'event_stop' | null;
  priorityScreenshotReceivedAt?: string | null;
  hasActivePriorityScreenshot?: boolean;
  screenshotUrl: string;
  logCounts24h: {
    error: number;
    warn: number;
    info: number;
    debug: number;
  };
  latestLog?: MonitoringLogEntry | null;
  latestError?: MonitoringLogEntry | null;
}

export interface MonitoringOverview {
  summary: {
    totalClients: number;
    onlineClients: number;
    offlineClients: number;
    healthyClients: number;
    warningClients: number;
    criticalClients: number;
    errorLogs: number;
    warnLogs: number;
    activePriorityScreenshots: number;
  };
  periodHours: number;
  gracePeriodSeconds: number;
  since: string;
  timestamp: string;
  clients: MonitoringClient[];
}

export interface ClientLogsResponse {
  client_uuid: string;
  logs: MonitoringLogEntry[];
  count: number;
  limit: number;
}

async function parseJsonResponse<T>(response: Response, fallbackMessage: string): Promise<T> {
  const data = await response.json();
  if (!response.ok) {
    throw new Error(data.error || fallbackMessage);
  }
  return data as T;
}

export async function fetchMonitoringOverview(hours = 24): Promise<MonitoringOverview> {
  const response = await fetch(`/api/client-logs/monitoring-overview?hours=${hours}`, {
    credentials: 'include',
  });
  return parseJsonResponse<MonitoringOverview>(response, 'Fehler beim Laden der Monitoring-Übersicht');
}

export async function fetchRecentClientErrors(limit = 20): Promise<MonitoringLogEntry[]> {
  const response = await fetch(`/api/client-logs/recent-errors?limit=${limit}`, {
    credentials: 'include',
  });
  const data = await parseJsonResponse<{ errors: MonitoringLogEntry[] }>(
    response,
    'Fehler beim Laden der letzten Fehler'
  );
  return data.errors;
}

export async function fetchClientMonitoringLogs(
  uuid: string,
  options: { level?: string; limit?: number } = {}
): Promise<MonitoringLogEntry[]> {
  const params = new URLSearchParams();
  if (options.level && options.level !== 'ALL') {
    params.set('level', options.level);
  }
  params.set('limit', String(options.limit ?? 100));

  const response = await fetch(`/api/client-logs/${uuid}/logs?${params.toString()}`, {
    credentials: 'include',
  });
  const data = await parseJsonResponse<ClientLogsResponse>(response, 'Fehler beim Laden der Client-Logs');
  return data.logs;
}
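The shared `parseJsonResponse` helper centralizes the "read the body, prefer the server-provided `error`, fall back to a generic message" pattern that every fetcher in this file repeats. A runnable sketch of that behavior, using a hand-rolled stand-in for the parts of `Response` the helper actually touches (`ok` and `json()`; the `FakeResponse` type is illustrative, not the real Fetch API type):

```typescript
// Minimal stand-in for the pieces of Response that the helper reads.
interface FakeResponse {
  ok: boolean;
  json(): Promise<any>;
}

async function parseJsonResponse<T>(response: FakeResponse, fallbackMessage: string): Promise<T> {
  const data = await response.json();
  if (!response.ok) {
    // Prefer the server-provided error message, fall back to the caller's text.
    throw new Error(data.error || fallbackMessage);
  }
  return data as T;
}

async function demo(): Promise<void> {
  const okRes: FakeResponse = { ok: true, json: async () => ({ count: 3 }) };
  const failRes: FakeResponse = { ok: false, json: async () => ({}) };

  const data = await parseJsonResponse<{ count: number }>(okRes, 'request failed');
  console.log(data.count); // 3

  try {
    await parseJsonResponse(failRes, 'request failed');
  } catch (err) {
    console.log((err as Error).message); // "request failed" (no server error in body)
  }
}

demo();
```

Note that the helper calls `response.json()` even on error responses, so it assumes the API always returns a JSON body; a non-JSON error page (e.g. from a proxy) would make `json()` itself reject.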
@@ -32,8 +32,12 @@ export async function fetchEventById(eventId: string) {
   return data;
 }

-export async function deleteEvent(eventId: string) {
-  const res = await fetch(`/api/events/${encodeURIComponent(eventId)}`, {
+export async function deleteEvent(eventId: string, force: boolean = false) {
+  const url = force
+    ? `/api/events/${encodeURIComponent(eventId)}?force=1`
+    : `/api/events/${encodeURIComponent(eventId)}`;
+
+  const res = await fetch(url, {
     method: 'DELETE',
   });
   const data = await res.json();
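The new `force` flag only changes the request URL, so the branch can be isolated as a pure helper that is easy to test without any network access (the function name `buildDeleteEventUrl` is illustrative; the real code inlines this logic in `deleteEvent`):

```typescript
// Build the delete URL; `?force=1` signals the server to skip its safety check.
function buildDeleteEventUrl(eventId: string, force = false): string {
  // encodeURIComponent protects against IDs containing '/', '?', spaces, etc.
  const base = `/api/events/${encodeURIComponent(eventId)}`;
  return force ? `${base}?force=1` : base;
}

console.log(buildDeleteEventUrl('abc'));        // "/api/events/abc"
console.log(buildDeleteEventUrl('abc', true));  // "/api/events/abc?force=1"
console.log(buildDeleteEventUrl('a b'));        // "/api/events/a%20b"
```

Keeping `force` as a defaulted parameter means all existing call sites of `deleteEvent(eventId)` continue to compile unchanged.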
168 dashboard/src/apiSystemSettings.ts (new file)
@@ -0,0 +1,168 @@
/**
 * API client for system settings
 */

export interface SystemSetting {
  key: string;
  value: string | null;
  description: string | null;
  updated_at: string | null;
}

export interface SupplementTableSettings {
  url: string;
  enabled: boolean;
}

/**
 * Get all system settings
 */
export async function getAllSettings(): Promise<{ settings: SystemSetting[] }> {
  const response = await fetch(`/api/system-settings`, {
    credentials: 'include',
  });
  if (!response.ok) {
    throw new Error(`Failed to fetch settings: ${response.statusText}`);
  }
  return response.json();
}

/**
 * Get a specific setting by key
 */
export async function getSetting(key: string): Promise<SystemSetting> {
  const response = await fetch(`/api/system-settings/${key}`, {
    credentials: 'include',
  });
  if (!response.ok) {
    throw new Error(`Failed to fetch setting: ${response.statusText}`);
  }
  return response.json();
}

/**
 * Update or create a setting
 */
export async function updateSetting(
  key: string,
  value: string,
  description?: string
): Promise<SystemSetting> {
  const response = await fetch(`/api/system-settings/${key}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    credentials: 'include',
    body: JSON.stringify({ value, description }),
  });
  if (!response.ok) {
    throw new Error(`Failed to update setting: ${response.statusText}`);
  }
  return response.json();
}

/**
 * Delete a setting
 */
export async function deleteSetting(key: string): Promise<{ message: string }> {
  const response = await fetch(`/api/system-settings/${key}`, {
    method: 'DELETE',
    credentials: 'include',
  });
  if (!response.ok) {
    throw new Error(`Failed to delete setting: ${response.statusText}`);
  }
  return response.json();
}

/**
 * Get supplement table settings
 */
export async function getSupplementTableSettings(): Promise<SupplementTableSettings> {
  const response = await fetch(`/api/system-settings/supplement-table`, {
    credentials: 'include',
  });
  if (!response.ok) {
    throw new Error(`Failed to fetch supplement table settings: ${response.statusText}`);
  }
  return response.json();
}

/**
 * Update supplement table settings
 */
export async function updateSupplementTableSettings(
  url: string,
  enabled: boolean
): Promise<SupplementTableSettings & { message: string }> {
  const response = await fetch(`/api/system-settings/supplement-table`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    credentials: 'include',
    body: JSON.stringify({ url, enabled }),
  });
  if (!response.ok) {
    throw new Error(`Failed to update supplement table settings: ${response.statusText}`);
  }
  return response.json();
}

/**
 * Get holiday banner setting
 */
export async function getHolidayBannerSetting(): Promise<{ enabled: boolean }> {
  const response = await fetch(`/api/system-settings/holiday-banner`, {
    credentials: 'include',
  });
  if (!response.ok) {
    throw new Error(`Failed to fetch holiday banner setting: ${response.statusText}`);
  }
  return response.json();
}

/**
 * Update holiday banner setting
 */
export async function updateHolidayBannerSetting(
  enabled: boolean
): Promise<{ enabled: boolean; message: string }> {
  const response = await fetch(`/api/system-settings/holiday-banner`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    credentials: 'include',
    body: JSON.stringify({ enabled }),
  });
  if (!response.ok) {
    throw new Error(`Failed to update holiday banner setting: ${response.statusText}`);
  }
  return response.json();
}

/**
 * Get organization name (public endpoint)
 */
export async function getOrganizationName(): Promise<{ name: string }> {
  const response = await fetch(`/api/system-settings/organization-name`, {
    credentials: 'include',
  });
  if (!response.ok) {
    throw new Error(`Failed to fetch organization name: ${response.statusText}`);
  }
  return response.json();
}

/**
 * Update organization name (superadmin only)
 */
export async function updateOrganizationName(name: string): Promise<{ name: string; message: string }> {
  const response = await fetch(`/api/system-settings/organization-name`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    credentials: 'include',
    body: JSON.stringify({ name }),
  });
  if (!response.ok) {
    throw new Error(`Failed to update organization name: ${response.statusText}`);
  }
  return response.json();
}
161 dashboard/src/apiUsers.ts (new file)
@@ -0,0 +1,161 @@
/**
 * User management API client.
 *
 * Provides functions to manage users (CRUD operations).
 * Access is role-based: admin can manage user/editor/admin, superadmin can manage all.
 */

export interface UserData {
  id: number;
  username: string;
  role: 'user' | 'editor' | 'admin' | 'superadmin';
  isActive: boolean;
  lastLoginAt?: string;
  lastPasswordChangeAt?: string;
  lastFailedLoginAt?: string;
  failedLoginAttempts?: number;
  lockedUntil?: string;
  deactivatedAt?: string;
  createdAt?: string;
  updatedAt?: string;
}

export interface CreateUserRequest {
  username: string;
  password: string;
  role: 'user' | 'editor' | 'admin' | 'superadmin';
  isActive?: boolean;
}

export interface UpdateUserRequest {
  username?: string;
  role?: 'user' | 'editor' | 'admin' | 'superadmin';
  isActive?: boolean;
}

export interface ResetPasswordRequest {
  password: string;
}

/**
 * List all users (filtered by current user's role).
 * Admin sees: user, editor, admin
 * Superadmin sees: all including superadmin
 */
export async function listUsers(): Promise<UserData[]> {
  const res = await fetch('/api/users', {
    method: 'GET',
    credentials: 'include',
  });

  if (!res.ok) {
    const data = await res.json();
    throw new Error(data.error || 'Failed to fetch users');
  }

  return res.json();
}

/**
 * Get a single user by ID.
 */
export async function getUser(userId: number): Promise<UserData> {
  const res = await fetch(`/api/users/${userId}`, {
    method: 'GET',
    credentials: 'include',
  });

  if (!res.ok) {
    const data = await res.json();
    throw new Error(data.error || 'Failed to fetch user');
  }

  return res.json();
}

/**
 * Create a new user.
 * Admin: can create user, editor, admin
 * Superadmin: can create any role including superadmin
 */
export async function createUser(userData: CreateUserRequest): Promise<UserData & { message: string }> {
  const res = await fetch('/api/users', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    credentials: 'include',
    body: JSON.stringify(userData),
  });

  const data = await res.json();

  if (!res.ok) {
    throw new Error(data.error || 'Failed to create user');
  }

  return data;
}

/**
 * Update a user's details.
 * Restrictions:
 * - Cannot change own role
 * - Cannot change own active status
 * - Admin cannot edit superadmin users
 */
export async function updateUser(userId: number, userData: UpdateUserRequest): Promise<UserData & { message: string }> {
  const res = await fetch(`/api/users/${userId}`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    credentials: 'include',
    body: JSON.stringify(userData),
  });

  const data = await res.json();

  if (!res.ok) {
    throw new Error(data.error || 'Failed to update user');
  }

  return data;
}

/**
 * Reset a user's password.
 * Admin: cannot reset superadmin passwords
 * Superadmin: can reset any password
 */
export async function resetUserPassword(userId: number, password: string): Promise<{ message: string }> {
  const res = await fetch(`/api/users/${userId}/password`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    credentials: 'include',
    body: JSON.stringify({ password }),
  });

  const data = await res.json();

  if (!res.ok) {
    throw new Error(data.error || 'Failed to reset password');
  }

  return data;
}

/**
 * Permanently delete a user (superadmin only).
 * Cannot delete own account.
 */
export async function deleteUser(userId: number): Promise<{ message: string }> {
  const res = await fetch(`/api/users/${userId}`, {
    method: 'DELETE',
    credentials: 'include',
  });

  const data = await res.json();

  if (!res.ok) {
    throw new Error(data.error || 'Failed to delete user');
  }

  return data;
}
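The doc comment on `listUsers` describes role-based visibility that is enforced server-side (admin sees user/editor/admin; superadmin sees everything). A small client-side sketch that only mirrors that documented rule, useful for e.g. filtering role options in a form (`visibleRoles` is a hypothetical helper, not part of this file):

```typescript
type Role = 'user' | 'editor' | 'admin' | 'superadmin';

// Mirrors the documented server-side rule: which roles a viewer may see/manage.
function visibleRoles(viewer: Role): Role[] {
  return viewer === 'superadmin'
    ? ['user', 'editor', 'admin', 'superadmin']
    : ['user', 'editor', 'admin'];
}

const allRoles: Role[] = ['user', 'editor', 'admin', 'superadmin'];

// An admin's role picker would drop 'superadmin':
console.log(allRoles.filter(r => visibleRoles('admin').includes(r))); // [ 'user', 'editor', 'admin' ]
```

This duplication of the rule on the client is purely cosmetic; the API remains the authority, so the server must still reject requests that violate it.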
@@ -1,4 +1,4 @@
-import React, { useEffect, useMemo, useState } from 'react';
+import React, { useEffect, useMemo, useRef, useState } from 'react';
 import {
   ScheduleComponent,
   Day,
@@ -63,7 +63,14 @@ type Event = {
   isHoliday?: boolean; // marker for styling/logic
   MediaId?: string | number;
   SlideshowInterval?: number;
   PageProgress?: boolean;
   AutoProgress?: boolean;
   WebsiteUrl?: string;
+  // Video-specific fields
+  Autoplay?: boolean;
+  Loop?: boolean;
+  Volume?: number;
+  Muted?: boolean;
   Icon?: string; // <-- add Icon!
   Type?: string; // <-- add Type, if needed
   OccurrenceOfId?: string; // series occurrence
@@ -75,22 +82,6 @@ type Event = {
   RecurrenceException?: string;
 };

-type RawEvent = {
-  Id: string;
-  Subject: string;
-  StartTime: string;
-  EndTime: string;
-  IsAllDay: boolean;
-  MediaId?: string | number;
-  Icon?: string; // <-- add Icon!
-  Type?: string;
-  OccurrenceOfId?: string;
-  RecurrenceRule?: string | null;
-  RecurrenceEnd?: string | null;
-  SkipHolidays?: boolean;
-  RecurrenceException?: string;
-};

 // Load CLDR data (pass the JSON objects directly)
 loadCldr(
   caGregorian as object,
@@ -207,6 +198,18 @@ const Appointments: React.FC = () => {
   const [hasSchoolYearPlan, setHasSchoolYearPlan] = React.useState<boolean>(false);
   const [periods, setPeriods] = React.useState<{ id: number; label: string }[]>([]);
   const [activePeriodId, setActivePeriodId] = React.useState<number | null>(null);
+  const getWeekMonday = (date: Date): Date => {
+    const d = new Date(date);
+    const day = d.getDay();
+    const diffToMonday = (day + 6) % 7; // Monday = 0
+    d.setDate(d.getDate() - diffToMonday);
+    d.setHours(12, 0, 0, 0); // use noon to avoid TZ shifting back a day
+    return d;
+  };
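`getWeekMonday` maps any date to the Monday of its week. Because JavaScript's `Date.getDay()` returns 0 for Sunday through 6 for Saturday, the `(day + 6) % 7` trick rotates the week so that Monday becomes offset 0 and Sunday offset 6. The same helper in isolation:

```typescript
function getWeekMonday(date: Date): Date {
  const d = new Date(date);
  const day = d.getDay();             // 0 = Sunday ... 6 = Saturday
  const diffToMonday = (day + 6) % 7; // Monday = 0, Sunday = 6
  d.setDate(d.getDate() - diffToMonday);
  d.setHours(12, 0, 0, 0);            // noon keeps DST/TZ math from shifting the day
  return d;
}

// Jan 2024: the 1st is a Monday, so the 8th starts the second week.
console.log(getWeekMonday(new Date(2024, 0, 10)).getDate()); // 8 (Wed -> Mon)
console.log(getWeekMonday(new Date(2024, 0, 14)).getDate()); // 8 (Sun -> Mon of same ISO week)
```

Setting the time to noon rather than midnight is a common defensive choice: a daylight-saving transition can make midnight arithmetic land on the previous calendar day in some time zones.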
+  const [selectedDate, setSelectedDate] = useState<Date>(() => getWeekMonday(new Date()));
+  const navigationSynced = useRef(false);

   // Confirmation dialog state
   const [confirmDialogOpen, setConfirmDialogOpen] = React.useState(false);
@@ -217,6 +220,44 @@ const Appointments: React.FC = () => {
     onCancel: () => void;
   } | null>(null);

+  // Recurring deletion dialog state
+  const [recurringDeleteDialogOpen, setRecurringDeleteDialogOpen] = React.useState(false);
+  const [recurringDeleteData, setRecurringDeleteData] = React.useState<{
+    event: Event;
+    onChoice: (choice: 'series' | 'occurrence' | 'cancel') => void;
+  } | null>(null);
+
+  // Series deletion final confirmation dialog (after choosing 'series')
+  const [seriesConfirmDialogOpen, setSeriesConfirmDialogOpen] = React.useState(false);
+  const [seriesConfirmData, setSeriesConfirmData] = React.useState<{
+    event: Event;
+    onConfirm: () => void;
+    onCancel: () => void;
+  } | null>(null);
+
+  const showSeriesConfirmDialog = (event: Event): Promise<boolean> => {
+    return new Promise(resolve => {
+      console.log('[Delete] showSeriesConfirmDialog invoked for event', event.Id);
+      // Defer open to next tick to avoid race with closing previous dialog
+      setSeriesConfirmData({
+        event,
+        onConfirm: () => {
+          console.log('[Delete] Series confirm dialog: confirmed');
+          setSeriesConfirmDialogOpen(false);
+          resolve(true);
+        },
+        onCancel: () => {
+          console.log('[Delete] Series confirm dialog: cancelled');
+          setSeriesConfirmDialogOpen(false);
+          resolve(false);
+        }
+      });
+      setTimeout(() => {
+        setSeriesConfirmDialogOpen(true);
+      }, 0);
+    });
+  };

   // Helper function to show confirmation dialog
   const showConfirmDialog = (title: string, message: string): Promise<boolean> => {
     return new Promise((resolve) => {
@@ -236,6 +277,20 @@ const Appointments: React.FC = () => {
     });
   };

+  // Helper function to show recurring event deletion dialog
+  const showRecurringDeleteDialog = (event: Event): Promise<'series' | 'occurrence' | 'cancel'> => {
+    return new Promise((resolve) => {
+      setRecurringDeleteData({
+        event,
+        onChoice: (choice: 'series' | 'occurrence' | 'cancel') => {
+          setRecurringDeleteDialogOpen(false);
+          resolve(choice);
+        }
+      });
+      setRecurringDeleteDialogOpen(true);
+    });
+  };
   // Load groups
   useEffect(() => {
     fetchGroups()
@@ -325,11 +380,11 @@ const Appointments: React.FC = () => {
     const expandedEvents: Event[] = [];

     for (const e of data) {
-      if (e.RecurrenceRule) {
+      if (e.recurrenceRule) {
         // Parse EXDATE list
         const exdates = new Set<string>();
-        if (e.RecurrenceException) {
-          e.RecurrenceException.split(',').forEach((dateStr: string) => {
+        if (e.recurrenceException) {
+          e.recurrenceException.split(',').forEach((dateStr: string) => {
             const trimmed = dateStr.trim();
             exdates.add(trimmed);
           });
@@ -337,37 +392,53 @@ const Appointments: React.FC = () => {

         // Let Syncfusion handle ALL recurrence patterns natively for proper badge display
         expandedEvents.push({
-          Id: e.Id,
-          Subject: e.Subject,
-          StartTime: parseEventDate(e.StartTime),
-          EndTime: parseEventDate(e.EndTime),
-          IsAllDay: e.IsAllDay,
-          MediaId: e.MediaId,
-          Icon: e.Icon,
-          Type: e.Type,
-          OccurrenceOfId: e.OccurrenceOfId,
+          Id: e.id,
+          Subject: e.subject,
+          StartTime: parseEventDate(e.startTime),
+          EndTime: parseEventDate(e.endTime),
+          IsAllDay: e.isAllDay,
+          MediaId: e.mediaId,
+          SlideshowInterval: e.slideshowInterval,
+          PageProgress: e.pageProgress,
+          AutoProgress: e.autoProgress,
+          WebsiteUrl: e.websiteUrl,
+          Autoplay: e.autoplay,
+          Loop: e.loop,
+          Volume: e.volume,
+          Muted: e.muted,
+          Icon: e.icon,
+          Type: e.type,
+          OccurrenceOfId: e.occurrenceOfId,
           Recurrence: true,
-          RecurrenceRule: e.RecurrenceRule,
-          RecurrenceEnd: e.RecurrenceEnd ?? null,
-          SkipHolidays: e.SkipHolidays ?? false,
-          RecurrenceException: e.RecurrenceException || undefined,
+          RecurrenceRule: e.recurrenceRule,
+          RecurrenceEnd: e.recurrenceEnd ?? null,
+          SkipHolidays: e.skipHolidays ?? false,
+          RecurrenceException: e.recurrenceException || undefined,
         });
       } else {
         // Non-recurring event - add as-is
         expandedEvents.push({
-          Id: e.Id,
-          Subject: e.Subject,
-          StartTime: parseEventDate(e.StartTime),
-          EndTime: parseEventDate(e.EndTime),
-          IsAllDay: e.IsAllDay,
-          MediaId: e.MediaId,
-          Icon: e.Icon,
-          Type: e.Type,
-          OccurrenceOfId: e.OccurrenceOfId,
+          Id: e.id,
+          Subject: e.subject,
+          StartTime: parseEventDate(e.startTime),
+          EndTime: parseEventDate(e.endTime),
+          IsAllDay: e.isAllDay,
+          MediaId: e.mediaId,
+          SlideshowInterval: e.slideshowInterval,
+          PageProgress: e.pageProgress,
+          AutoProgress: e.autoProgress,
+          WebsiteUrl: e.websiteUrl,
+          Autoplay: e.autoplay,
+          Loop: e.loop,
+          Volume: e.volume,
+          Muted: e.muted,
+          Icon: e.icon,
+          Type: e.type,
+          OccurrenceOfId: e.occurrenceOfId,
           Recurrence: false,
           RecurrenceRule: null,
           RecurrenceEnd: null,
-          SkipHolidays: e.SkipHolidays ?? false,
+          SkipHolidays: e.skipHolidays ?? false,
           RecurrenceException: undefined,
         });
       }
@@ -452,28 +523,10 @@ const Appointments: React.FC = () => {
   }, [holidays, allowScheduleOnHolidays]);

   const dataSource = useMemo(() => {
-    // Filter: Events with SkipHolidays=true are never shown on holidays, regardless of toggle
-    const filteredEvents = events.filter(ev => {
-      if (ev.SkipHolidays) {
-        // If event falls within a holiday, hide it
-        const s = ev.StartTime instanceof Date ? ev.StartTime : new Date(ev.StartTime);
-        const e = ev.EndTime instanceof Date ? ev.EndTime : new Date(ev.EndTime);
-        for (const h of holidays) {
-          const hs = new Date(h.start_date + 'T00:00:00');
-          const he = new Date(h.end_date + 'T23:59:59');
-          if (
-            (s >= hs && s <= he) ||
-            (e >= hs && e <= he) ||
-            (s <= hs && e >= he)
-          ) {
-            return false;
-          }
-        }
-      }
-      return true;
-    });
-    return [...filteredEvents, ...holidayDisplayEvents, ...holidayBlockEvents];
-  }, [events, holidayDisplayEvents, holidayBlockEvents, holidays]);
+    // Existing events should always be visible; holiday skipping for recurring events
+    // is handled via RecurrenceException from the backend.
+    return [...events, ...holidayDisplayEvents, ...holidayBlockEvents];
+  }, [events, holidayDisplayEvents, holidayBlockEvents]);
   // Removed dataSource logging

@@ -563,6 +616,22 @@ const Appointments: React.FC = () => {
     updateHolidaysInView();
   }, [holidays, updateHolidaysInView]);

+  // Inject global z-index fixes for dialogs (only once)
+  React.useEffect(() => {
+    if (typeof document !== 'undefined' && !document.getElementById('series-dialog-zfix')) {
+      const style = document.createElement('style');
+      style.id = 'series-dialog-zfix';
+      style.textContent = `\n .final-series-dialog.e-dialog { z-index: 25000 !important; }\n .final-series-dialog + .e-dlg-overlay { z-index: 24990 !important; }\n .recurring-delete-dialog.e-dialog { z-index: 24000 !important; }\n .recurring-delete-dialog + .e-dlg-overlay { z-index: 23990 !important; }\n `;
+      document.head.appendChild(style);
+    }
+  }, []);
+
+  React.useEffect(() => {
+    if (seriesConfirmDialogOpen) {
+      console.log('[Delete] Series confirm dialog now visible');
+    }
+  }, [seriesConfirmDialogOpen]);

   return (
     <div>
       <h1 style={{ fontSize: '1.5rem', fontWeight: 700, marginBottom: 16 }}>Terminmanagement</h1>
@@ -605,6 +674,7 @@ const Appointments: React.FC = () => {
         change={async (e: { value: number }) => {
           const id = Number(e.value);
           if (!id) return;
+          if (activePeriodId === id) return; // avoid firing on initial mount
           try {
             const updated = await setActiveAcademicPeriod(id);
             setActivePeriodId(updated.id);
@@ -616,6 +686,7 @@ const Appointments: React.FC = () => {
             scheduleRef.current.selectedDate = target;
             scheduleRef.current.dataBind?.();
           }
+          setSelectedDate(target);
           updateHolidaysInView();
         } catch (err) {
           console.error('Aktive Periode setzen fehlgeschlagen:', err);
@@ -733,11 +804,15 @@ const Appointments: React.FC = () => {
           setModalOpen(false);
           setEditMode(false); // reset edit mode
         }}
-        onSave={async () => {
+        onSave={async (eventData) => {
+          console.log('Modal saved event data:', eventData);
+
+          // The CustomEventModal already handled the API calls internally
+          // For now, just refresh the data (the recurring event logic is handled in the modal itself)
           setModalOpen(false);
           setEditMode(false);

-          // Force immediate data refresh
+          // Refresh the data and scheduler
           await fetchAndSetEvents();

           // Defer refresh to avoid interfering with current React commit
@@ -749,14 +824,23 @@ const Appointments: React.FC = () => {
         groupName={groups.find(g => g.id === selectedGroupId) ?? { id: selectedGroupId, name: '' }}
         groupColor={selectedGroupId ? getGroupColor(selectedGroupId, groups) : undefined}
         editMode={editMode} // NEW: prop for edit mode
+        blockHolidays={!allowScheduleOnHolidays}
+        isHolidayRange={(s, e) => isWithinHolidayRange(s, e)}
       />
       <ScheduleComponent
+        key={`scheduler-${selectedDate.toISOString().slice(0, 10)}`}
         ref={scheduleRef}
         height="750px"
         locale="de"
         currentView="Week"
         firstDayOfWeek={1}
+        enablePersistence={false}
         selectedDate={selectedDate}
+        created={() => {
+          const inst = scheduleRef.current;
+          if (inst && selectedDate) {
+            inst.selectedDate = selectedDate;
+            inst.dataBind?.();
+          }
+        }}
         eventSettings={{
           dataSource: dataSource,
           fields: {
@@ -775,12 +859,24 @@ const Appointments: React.FC = () => {
           updateHolidaysInView();
           // Reload events on navigation or view change (for range-based expansion)
           if (args && (args.requestType === 'dateNavigate' || args.requestType === 'viewNavigate')) {
+            if (!navigationSynced.current) {
+              navigationSynced.current = true;
+              if (scheduleRef.current && selectedDate) {
+                scheduleRef.current.selectedDate = selectedDate;
+                scheduleRef.current.dataBind?.();
+              }
+              return;
+            }
+            if (scheduleRef.current?.selectedDate) {
+              setSelectedDate(new Date(scheduleRef.current.selectedDate));
+            }
             fetchAndSetEvents();
             return;
           }

           // Persist UI-driven changes (drag/resize/editor fallbacks)
           if (args && args.requestType === 'eventChanged') {
+            console.log('actionComplete: Processing eventChanged from direct UI interaction (drag/resize)');
             try {
               type SchedulerEvent = Partial<Event> & {
                 Id?: string | number;
@@ -810,18 +906,64 @@ const Appointments: React.FC = () => {
                 payload.end = e.toISOString();
               }

-              // Single occurrence change from a recurring master (our manual expansion marks OccurrenceOfId)
-              if (changed.OccurrenceOfId) {
-                if (!changed.StartTime) return; // cannot determine occurrence date
+              // Check if this is a single occurrence edit by looking at the original master event
+              const eventId = String(changed.Id);
+
+              // Debug logging to understand what Syncfusion sends
+              console.log('actionComplete eventChanged - Debug info:', {
+                eventId,
+                changedRecurrenceRule: changed.RecurrenceRule,
+                changedRecurrenceID: changed.RecurrenceID,
+                changedStartTime: changed.StartTime,
+                changedSubject: changed.Subject,
+                payload,
+                fullChangedObject: JSON.stringify(changed, null, 2)
+              });
+
+              // First, fetch the master event to check if it has a RecurrenceRule
+              let masterEvent = null;
let isMasterRecurring = false;
|
||||
try {
|
||||
masterEvent = await fetchEventById(eventId);
|
||||
isMasterRecurring = !!masterEvent.recurrenceRule;
|
||||
console.log('Master event info:', {
|
||||
masterRecurrenceRule: masterEvent.recurrenceRule,
|
||||
masterStartTime: masterEvent.startTime,
|
||||
isMasterRecurring
|
||||
});
|
||||
} catch (err) {
|
||||
console.error('Failed to fetch master event:', err);
|
||||
}
|
||||
|
||||
// KEY DETECTION: Syncfusion sets RecurrenceID when editing a single occurrence
|
||||
const hasRecurrenceID = 'RecurrenceID' in changed && !!(changed as Record<string, unknown>).RecurrenceID;
|
||||
|
||||
// When dragging a single occurrence, Syncfusion may not provide RecurrenceID
|
||||
// but it won't provide RecurrenceRule on the changed object
|
||||
const isRecurrenceRuleStripped = isMasterRecurring && !changed.RecurrenceRule;
|
||||
|
||||
console.log('FINAL Edit detection:', {
|
||||
isMasterRecurring,
|
||||
hasRecurrenceID,
|
||||
isRecurrenceRuleStripped,
|
||||
masterHasRule: masterEvent?.RecurrenceRule ? 'YES' : 'NO',
|
||||
changedHasRule: changed.RecurrenceRule ? 'YES' : 'NO',
|
||||
decision: (hasRecurrenceID || isRecurrenceRuleStripped) ? 'DETACH' : 'UPDATE'
|
||||
});
|
||||
|
||||
// SINGLE OCCURRENCE EDIT detection:
|
||||
// 1. RecurrenceID is set (explicit single occurrence marker)
|
||||
// 2. OR master has RecurrenceRule but changed object doesn't (stripped during single edit)
|
||||
if (isMasterRecurring && (hasRecurrenceID || isRecurrenceRuleStripped) && changed.StartTime) {
|
||||
// This is a single occurrence edit - detach it
|
||||
console.log('Detaching single occurrence...');
|
||||
const occStart = changed.StartTime instanceof Date ? changed.StartTime : new Date(changed.StartTime as string);
|
||||
const occDate = occStart.toISOString().split('T')[0];
|
||||
await detachEventOccurrence(Number(changed.OccurrenceOfId), occDate, payload);
|
||||
} else if (changed.RecurrenceRule) {
|
||||
// Change to master series (non-manually expanded recurrences)
|
||||
await updateEvent(String(changed.Id), payload);
|
||||
} else if (changed.Id) {
|
||||
// Regular single event
|
||||
await updateEvent(String(changed.Id), payload);
|
||||
await detachEventOccurrence(Number(eventId), occDate, payload);
|
||||
} else {
|
||||
// This is a series edit or regular single event
|
||||
console.log('Updating event directly...');
|
||||
await updateEvent(eventId, payload);
|
||||
}
|
||||
|
||||
// Refresh events and scheduler cache after persisting
|
||||
@@ -860,6 +1002,96 @@ const Appointments: React.FC = () => {
setModalOpen(true);
}}
popupOpen={async args => {
// Intercept Syncfusion's recurrence choice dialog (RecurrenceAlert) and replace with custom
if (args.type === 'RecurrenceAlert') {
// Prevent default Syncfusion dialog
args.cancel = true;
const event = args.data;
console.log('[RecurrenceAlert] Intercepted for event', event?.Id);
if (!event) return;

// Show our custom recurring delete dialog
const choice = await showRecurringDeleteDialog(event);
let didDelete = false;
try {
if (choice === 'series') {
const confirmed = await showSeriesConfirmDialog(event);
if (confirmed) {
await deleteEvent(event.Id, true);
didDelete = true;
}
} else if (choice === 'occurrence') {
const occurrenceDate = event.StartTime instanceof Date
? event.StartTime.toISOString().split('T')[0]
: new Date(event.StartTime).toISOString().split('T')[0];
// If this is the master being edited for a single occurrence, treat as occurrence delete
if (event.OccurrenceOfId) {
await deleteEventOccurrence(event.OccurrenceOfId, occurrenceDate);
} else {
await deleteEventOccurrence(event.Id, occurrenceDate);
}
didDelete = true;
}
} catch (e) {
console.error('Fehler bei RecurrenceAlert Löschung:', e);
}
if (didDelete) {
await fetchAndSetEvents();
setTimeout(() => scheduleRef.current?.refreshEvents?.(), 0);
}
return; // handled
}
if (args.type === 'DeleteAlert') {
// Handle delete confirmation directly here to avoid multiple dialogs
args.cancel = true;
const event = args.data;
let didDelete = false;

try {
// 1) Single occurrence of a recurring event → delete occurrence only
if (event.OccurrenceOfId && event.StartTime) {
console.log('[Delete] Deleting single occurrence via OccurrenceOfId path', {
eventId: event.Id,
masterId: event.OccurrenceOfId,
start: event.StartTime
});
const occurrenceDate = event.StartTime instanceof Date
? event.StartTime.toISOString().split('T')[0]
: new Date(event.StartTime).toISOString().split('T')[0];
await deleteEventOccurrence(event.OccurrenceOfId, occurrenceDate);
didDelete = true;
}
// 2) Recurring master event deletion → show deletion choice dialog
else if (event.RecurrenceRule) {
// For recurring events the RecurrenceAlert should have been intercepted.
console.log('[DeleteAlert] Recurring event delete without RecurrenceAlert (fallback)');
const confirmed = await showSeriesConfirmDialog(event);
if (confirmed) {
await deleteEvent(event.Id, true);
didDelete = true;
}
}
// 3) Single non-recurring event → delete normally with simple confirmation
else {
console.log('Deleting single non-recurring event:', event.Id);
await deleteEvent(event.Id, false);
didDelete = true;
}

// Refresh events only if a deletion actually occurred
if (didDelete) {
await fetchAndSetEvents();
setTimeout(() => {
scheduleRef.current?.refreshEvents?.();
}, 0);
}

} catch (err) {
console.error('Fehler beim Löschen:', err);
}
return; // Exit early for delete operations
}

if (args.type === 'Editor') {
args.cancel = true;
const event = args.data;
@@ -943,9 +1175,11 @@ const Appointments: React.FC = () => {
}
}

// Fixed: Ensure OccurrenceOfId is set for recurring events in native recurrence mode

const modalData = {
Id: (event.OccurrenceOfId && !isSingleOccurrence) ? event.OccurrenceOfId : event.Id, // Use master ID for series edit, occurrence ID for single edit
OccurrenceOfId: event.OccurrenceOfId, // Master event ID if this is an occurrence
OccurrenceOfId: event.OccurrenceOfId || (event.RecurrenceRule ? event.Id : undefined), // Master event ID - use current ID if it's a recurring master
occurrenceDate: isSingleOccurrence ? event.StartTime : null, // Store occurrence date for single occurrence editing
isSingleOccurrence,
title: eventDataToUse.Subject,
@@ -960,7 +1194,13 @@ const Appointments: React.FC = () => {
skipHolidays: isSingleOccurrence ? false : (eventDataToUse.SkipHolidays ?? false),
media,
slideshowInterval: eventDataToUse.SlideshowInterval ?? 10,
pageProgress: eventDataToUse.PageProgress ?? true,
autoProgress: eventDataToUse.AutoProgress ?? true,
websiteUrl: eventDataToUse.WebsiteUrl ?? '',
autoplay: eventDataToUse.Autoplay ?? true,
loop: eventDataToUse.Loop ?? true,
volume: eventDataToUse.Volume ?? 0.8,
muted: eventDataToUse.Muted ?? false,
};

setModalInitialData(modalData);
@@ -969,37 +1209,6 @@ const Appointments: React.FC = () => {
}
}}
eventRendered={(args: EventRenderedArgs) => {
// Always hide events that skip holidays when they fall on holidays, regardless of toggle
if (args.data) {
const ev = args.data as unknown as Partial<Event>;
if (ev.SkipHolidays && !args.data.isHoliday) {
const s =
args.data.StartTime instanceof Date
? args.data.StartTime
: new Date(args.data.StartTime);
const e =
args.data.EndTime instanceof Date ? args.data.EndTime : new Date(args.data.EndTime);
if (isWithinHolidayRange(s, e)) {
args.cancel = true;
return;
}
}
}

// Blende Nicht-Ferien-Events aus, falls sie in Ferien fallen und Terminieren nicht erlaubt ist
// Hide events on holidays if not allowed
if (!allowScheduleOnHolidays && args.data && !args.data.isHoliday) {
const s =
args.data.StartTime instanceof Date
? args.data.StartTime
: new Date(args.data.StartTime);
const e =
args.data.EndTime instanceof Date ? args.data.EndTime : new Date(args.data.EndTime);
if (isWithinHolidayRange(s, e)) {
args.cancel = true;
return;
}
}

if (selectedGroupId && args.data && args.data.Id) {
const groupColor = getGroupColor(selectedGroupId, groups);
@@ -1029,54 +1238,14 @@ const Appointments: React.FC = () => {
}
}}
actionBegin={async (args: ActionEventArgs) => {
// Delete operations are now handled in popupOpen to avoid multiple dialogs
if (args.requestType === 'eventRemove') {
// args.data ist ein Array von zu löschenden Events
const toDelete = Array.isArray(args.data) ? args.data : [args.data];
for (const ev of toDelete) {
try {
// 1) Single occurrence of a recurring event → delete occurrence only
if (ev.OccurrenceOfId && ev.StartTime) {
const occurrenceDate = ev.StartTime instanceof Date
? ev.StartTime.toISOString().split('T')[0]
: new Date(ev.StartTime).toISOString().split('T')[0];
await deleteEventOccurrence(ev.OccurrenceOfId, occurrenceDate);
continue;
}

// 2) Recurring master being removed unexpectedly → block deletion (safety)
// Syncfusion can sometimes raise eventRemove during edits; do NOT delete the series here.
if (ev.RecurrenceRule) {
console.warn('Blocked deletion of recurring master event via eventRemove.');
// If the user truly wants to delete the series, provide an explicit UI path.
continue;
}

// 3) Single non-recurring event → delete normally
await deleteEvent(ev.Id);
} catch (err) {
console.error('Fehler beim Löschen:', err);
}
}
// Events nach Löschen neu laden
if (selectedGroupId) {
fetchEvents(selectedGroupId, showInactive)
.then((data: RawEvent[]) => {
const mapped: Event[] = data.map((e: RawEvent) => ({
Id: e.Id,
Subject: e.Subject,
StartTime: parseEventDate(e.StartTime),
EndTime: parseEventDate(e.EndTime),
IsAllDay: e.IsAllDay,
MediaId: e.MediaId,
SkipHolidays: e.SkipHolidays ?? false,
}));
setEvents(mapped);
})
.catch(console.error);
}
// Syncfusion soll das Event nicht selbst löschen
// Cancel all delete operations here - they're handled in popupOpen
args.cancel = true;
} else if (
return;
}

if (
(args.requestType === 'eventCreate' || args.requestType === 'eventChange') &&
!allowScheduleOnHolidays
) {
@@ -1097,7 +1266,6 @@ const Appointments: React.FC = () => {
}
}
}}
firstDayOfWeek={1}
renderCell={(args: RenderCellEventArgs) => {
// Nur für Arbeitszellen (Stunden-/Tageszellen)
if (args.elementType === 'workCells') {
@@ -1156,6 +1324,167 @@ const Appointments: React.FC = () => {
</div>
</DialogComponent>
)}

{/* Recurring Event Deletion Dialog */}
{recurringDeleteDialogOpen && recurringDeleteData && (
<DialogComponent
target="#root"
visible={recurringDeleteDialogOpen}
width="500px"
zIndex={18000}
cssClass="recurring-delete-dialog"
header={() => (
<div style={{
padding: '12px 20px',
background: '#dc3545',
color: 'white',
fontWeight: 600,
borderRadius: '6px 6px 0 0'
}}>
🗑️ Wiederkehrenden Termin löschen
</div>
)}
showCloseIcon={true}
close={() => recurringDeleteData.onChoice('cancel')}
isModal={true}
footerTemplate={() => (
<div style={{ padding: '12px 20px', display: 'flex', gap: '12px', justifyContent: 'flex-end' }}>
<button
className="e-btn e-outline"
onClick={() => recurringDeleteData.onChoice('cancel')}
style={{ minWidth: '100px' }}
>
Abbrechen
</button>
<button
className="e-btn e-warning"
onClick={() => recurringDeleteData.onChoice('occurrence')}
style={{ minWidth: '140px' }}
>
Nur diesen Termin
</button>
<button
className="e-btn e-danger"
onClick={() => recurringDeleteData.onChoice('series')}
style={{ minWidth: '140px' }}
>
Gesamte Serie
</button>
</div>
)}
>
<div style={{ padding: '24px', fontSize: '14px', lineHeight: 1.5 }}>
<div style={{ marginBottom: '16px', fontSize: '16px', fontWeight: 500 }}>
Sie möchten einen wiederkehrenden Termin löschen:
</div>
<div style={{
background: '#f8f9fa',
border: '1px solid #e9ecef',
borderRadius: '6px',
padding: '12px',
marginBottom: '20px',
fontWeight: 500
}}>
📅 {recurringDeleteData.event.Subject}
</div>

<div style={{ marginBottom: '16px' }}>
<strong>Was möchten Sie löschen?</strong>
</div>

<div style={{ marginBottom: '12px' }}>
<div style={{ display: 'flex', alignItems: 'flex-start', gap: '8px' }}>
<span style={{ color: '#fd7e14', fontSize: '16px' }}>📝</span>
<div>
<strong>Nur diesen Termin:</strong> Löscht nur den ausgewählten Termin. Die anderen Termine der Serie bleiben bestehen.
</div>
</div>
</div>

<div style={{ marginBottom: '20px' }}>
<div style={{ display: 'flex', alignItems: 'flex-start', gap: '8px' }}>
<span style={{ color: '#dc3545', fontSize: '16px' }}>⚠️</span>
<div>
<strong>Gesamte Serie:</strong> Löscht <u>alle Termine</u> dieser Wiederholungsserie. Diese Aktion kann nicht rückgängig gemacht werden!
</div>
</div>
</div>
</div>
</DialogComponent>
)}

{/* Final Series Deletion Confirmation Dialog */}
{seriesConfirmDialogOpen && seriesConfirmData && (
<DialogComponent
target="#root"
visible={seriesConfirmDialogOpen}
width="520px"
zIndex={19000}
cssClass="final-series-dialog"
header={() => (
<div style={{
padding: '12px 20px',
background: '#b91c1c',
color: 'white',
fontWeight: 600,
borderRadius: '6px 6px 0 0'
}}>
⚠️ Serie endgültig löschen
</div>
)}
showCloseIcon={true}
close={() => seriesConfirmData.onCancel()}
isModal={true}
footerTemplate={() => (
<div style={{ padding: '12px 20px', display: 'flex', gap: '12px', justifyContent: 'flex-end' }}>
<button
className="e-btn e-outline"
onClick={seriesConfirmData.onCancel}
style={{ minWidth: '110px' }}
>
Abbrechen
</button>
<button
className="e-btn e-danger"
onClick={seriesConfirmData.onConfirm}
style={{ minWidth: '180px' }}
>
Serie löschen
</button>
</div>
)}
>
<div style={{ padding: '24px', fontSize: '14px', lineHeight: 1.55 }}>
<div style={{ marginBottom: '14px' }}>
Sie sind dabei die <strong>gesamte Terminserie</strong> zu löschen:
</div>
<div style={{
background: '#fef2f2',
border: '1px solid #fecaca',
borderRadius: 6,
padding: '10px 14px',
marginBottom: 18,
fontWeight: 500
}}>
📅 {seriesConfirmData.event.Subject}
</div>
<ul style={{ margin: '0 0 18px 18px', padding: 0 }}>
<li>Alle zukünftigen und vergangenen Vorkommen werden entfernt.</li>
<li>Dieser Vorgang kann nicht rückgängig gemacht werden.</li>
<li>Einzelne bereits abgetrennte Einzeltermine bleiben bestehen.</li>
</ul>
<div style={{
background: '#fff7ed',
border: '1px solid #ffedd5',
borderRadius: 6,
padding: '10px 14px',
fontSize: 13
}}>
Wenn Sie nur einen einzelnen Termin entfernen möchten, schließen Sie diesen Dialog und wählen Sie im vorherigen Dialog "Nur diesen Termin".
</div>
</div>
</DialogComponent>
)}
</div>
);
};

@@ -1,8 +0,0 @@
import React from 'react';
const Benutzer: React.FC = () => (
<div>
<h2 className="text-xl font-bold mb-4">Benutzer</h2>
<p>Willkommen im Infoscreen-Management Benutzer.</p>
</div>
);
export default Benutzer;
@@ -19,9 +19,16 @@ type CustomEventData = {
weekdays: number[];
repeatUntil: Date | null;
skipHolidays: boolean;
media?: { id: string; path: string; name: string } | null; // <--- ergänzt
slideshowInterval?: number; // <--- ergänzt
websiteUrl?: string; // <--- ergänzt
media?: { id: string; path: string; name: string } | null;
slideshowInterval?: number;
pageProgress?: boolean;
autoProgress?: boolean;
websiteUrl?: string;
// Video-specific fields
autoplay?: boolean;
loop?: boolean;
volume?: number;
muted?: boolean;
};

// Typ für initialData erweitern, damit Id unterstützt wird
@@ -38,8 +45,7 @@ type CustomEventModalProps = {
groupName: string | { id: string | null; name: string };
groupColor?: string;
editMode?: boolean;
blockHolidays?: boolean;
isHolidayRange?: (start: Date, end: Date) => boolean;
// Removed unused blockHolidays and isHolidayRange
};

const weekdayOptions = [
@@ -68,8 +74,6 @@ const CustomEventModal: React.FC<CustomEventModalProps> = ({
groupName,
groupColor,
editMode,
blockHolidays,
isHolidayRange,
}) => {
const [title, setTitle] = React.useState(initialData.title || '');
const [startDate, setStartDate] = React.useState(initialData.startDate || null);
@@ -98,12 +102,66 @@ const CustomEventModal: React.FC<CustomEventModalProps> = ({
path: string;
name: string;
} | null>(null);
// General settings state for presentation
// Removed unused generalLoaded and setGeneralLoaded
// Removed unused generalLoaded/generalSlideshowInterval/generalPageProgress/generalAutoProgress

// Per-event state
const [slideshowInterval, setSlideshowInterval] = React.useState<number>(
initialData.slideshowInterval ?? 10
);
const [pageProgress, setPageProgress] = React.useState<boolean>(
initialData.pageProgress ?? true
);
const [autoProgress, setAutoProgress] = React.useState<boolean>(
initialData.autoProgress ?? true
);
const [websiteUrl, setWebsiteUrl] = React.useState<string>(initialData.websiteUrl ?? '');

// Video-specific state with system defaults loading
const [autoplay, setAutoplay] = React.useState<boolean>(initialData.autoplay ?? true);
const [loop, setLoop] = React.useState<boolean>(initialData.loop ?? true);
const [volume, setVolume] = React.useState<number>(initialData.volume ?? 0.8);
const [muted, setMuted] = React.useState<boolean>(initialData.muted ?? false);
const [videoDefaultsLoaded, setVideoDefaultsLoaded] = React.useState<boolean>(false);

const [mediaModalOpen, setMediaModalOpen] = React.useState(false);

// Load system video defaults once when opening for a new video event
React.useEffect(() => {
if (open && !editMode && !videoDefaultsLoaded) {
(async () => {
try {
const api = await import('../apiSystemSettings');
const keys = ['video_autoplay', 'video_loop', 'video_volume', 'video_muted'] as const;
const [autoplayRes, loopRes, volumeRes, mutedRes] = await Promise.all(
keys.map(k => api.getSetting(k).catch(() => ({ value: null } as { value: string | null })))
);

// Only apply defaults if not already set from initialData
if (initialData.autoplay === undefined) {
setAutoplay(autoplayRes.value == null ? true : autoplayRes.value === 'true');
}
if (initialData.loop === undefined) {
setLoop(loopRes.value == null ? true : loopRes.value === 'true');
}
if (initialData.volume === undefined) {
const volParsed = volumeRes.value == null ? 0.8 : parseFloat(String(volumeRes.value));
setVolume(Number.isFinite(volParsed) ? volParsed : 0.8);
}
if (initialData.muted === undefined) {
setMuted(mutedRes.value == null ? false : mutedRes.value === 'true');
}

setVideoDefaultsLoaded(true);
} catch {
// Silently fall back to hard-coded defaults
setVideoDefaultsLoaded(true);
}
})();
}
}, [open, editMode, videoDefaultsLoaded, initialData]);

React.useEffect(() => {
if (open) {
const isSingleOccurrence = initialData.isSingleOccurrence || false;
@@ -131,9 +189,19 @@ const CustomEventModal: React.FC<CustomEventModalProps> = ({
// --- KORREKTUR: Media, SlideshowInterval, WebsiteUrl aus initialData übernehmen ---
setMedia(initialData.media ?? null);
setSlideshowInterval(initialData.slideshowInterval ?? 10);
setPageProgress(initialData.pageProgress ?? true);
setAutoProgress(initialData.autoProgress ?? true);
setWebsiteUrl(initialData.websiteUrl ?? '');

// Video fields - use initialData values when editing
if (editMode) {
setAutoplay(initialData.autoplay ?? true);
setLoop(initialData.loop ?? true);
setVolume(initialData.volume ?? 0.8);
setMuted(initialData.muted ?? false);
}
}
}, [open, initialData]);
}, [open, initialData, editMode]);

React.useEffect(() => {
if (!mediaModalOpen && pendingMedia) {
@@ -182,42 +250,16 @@ const CustomEventModal: React.FC<CustomEventModalProps> = ({
if (type === 'website') {
if (!websiteUrl.trim()) newErrors.websiteUrl = 'Webseiten-URL ist erforderlich';
}

// Holiday blocking: prevent creating when range overlaps
if (
!editMode &&
blockHolidays &&
startDate &&
startTime &&
endTime &&
typeof isHolidayRange === 'function'
) {
const s = new Date(
startDate.getFullYear(),
startDate.getMonth(),
startDate.getDate(),
startTime.getHours(),
startTime.getMinutes()
);
const e = new Date(
startDate.getFullYear(),
startDate.getMonth(),
startDate.getDate(),
endTime.getHours(),
endTime.getMinutes()
);
if (isHolidayRange(s, e)) {
newErrors.startDate = 'Dieser Zeitraum liegt in den Ferien und ist gesperrt.';
}
if (type === 'video') {
if (!media) newErrors.media = 'Bitte ein Video auswählen';
}
// Holiday blocking logic removed (blockHolidays, isHolidayRange no longer used)

if (Object.keys(newErrors).length > 0) {
setErrors(newErrors);
return;
}

setErrors({});

const group_id = typeof groupName === 'object' && groupName !== null ? groupName.id : groupName;

// Build recurrence rule if repeat is enabled
@@ -269,7 +311,6 @@ const CustomEventModal: React.FC<CustomEventModalProps> = ({
startDate,
startTime,
endTime,
// Initialize required fields
repeat: isSingleOccurrence ? false : repeat,
weekdays: isSingleOccurrence ? [] : weekdays,
repeatUntil: isSingleOccurrence ? null : repeatUntil,
@@ -282,14 +323,24 @@ const CustomEventModal: React.FC<CustomEventModalProps> = ({
};

if (type === 'presentation') {
payload.event_media_id = media?.id;
payload.event_media_id = media?.id ? Number(media.id) : undefined;
payload.slideshow_interval = slideshowInterval;
payload.page_progress = pageProgress;
payload.auto_progress = autoProgress;
}

if (type === 'website') {
payload.website_url = websiteUrl;
}

if (type === 'video') {
payload.event_media_id = media?.id ? Number(media.id) : undefined;
payload.autoplay = autoplay;
payload.loop = loop;
payload.volume = volume;
payload.muted = muted;
}

try {
let res;
if (editMode && initialData && typeof initialData.Id === 'string') {
@@ -596,6 +647,20 @@ const CustomEventModal: React.FC<CustomEventModalProps> = ({
value={String(slideshowInterval)}
change={e => setSlideshowInterval(Number(e.value))}
/>
<div style={{ marginTop: 8 }}>
<CheckBoxComponent
label="Seitenfortschritt anzeigen"
checked={pageProgress}
change={e => setPageProgress(e.checked || false)}
/>
</div>
<div style={{ marginTop: 8 }}>
<CheckBoxComponent
label="Automatischer Fortschritt"
checked={autoProgress}
change={e => setAutoProgress(e.checked || false)}
/>
</div>
</div>
)}
{type === 'website' && (
@@ -608,6 +673,61 @@ const CustomEventModal: React.FC<CustomEventModalProps> = ({
/>
</div>
)}
{type === 'video' && (
<div>
<div style={{ marginBottom: 8, marginTop: 16 }}>
<button
className="e-btn"
onClick={() => setMediaModalOpen(true)}
style={{ width: '100%' }}
>
Video auswählen/hochladen
</button>
</div>
<div style={{ marginBottom: 8 }}>
<b>Ausgewähltes Video:</b>{' '}
{media ? (
media.path
) : (
<span style={{ color: '#888' }}>Kein Video ausgewählt</span>
)}
</div>
<div style={{ marginTop: 8 }}>
<CheckBoxComponent
label="Automatisch abspielen"
checked={autoplay}
change={e => setAutoplay(e.checked || false)}
/>
</div>
<div style={{ marginTop: 8 }}>
<CheckBoxComponent
label="In Schleife abspielen"
checked={loop}
change={e => setLoop(e.checked || false)}
/>
</div>
<div style={{ marginTop: 8 }}>
<label style={{ display: 'block', marginBottom: 4, fontWeight: 500, fontSize: '14px' }}>
Lautstärke
</label>
<div style={{ display: 'flex', alignItems: 'center', gap: 12 }}>
<TextBoxComponent
placeholder="0.0 - 1.0"
floatLabelType="Never"
type="number"
value={String(volume)}
change={e => setVolume(Math.max(0, Math.min(1, Number(e.value))))}
style={{ flex: 1 }}
/>
<CheckBoxComponent
label="Ton aus"
checked={muted}
change={e => setMuted(e.checked || false)}
/>
</div>
</div>
</div>
)}
</div>
</div>
</div>

@@ -1,4 +1,5 @@
import React, { useState } from 'react';
import React, { useMemo, useState } from 'react';
import { useAuth } from '../useAuth';
import { DialogComponent } from '@syncfusion/ej2-react-popups';
import {
FileManagerComponent,
@@ -19,6 +20,8 @@ type CustomSelectUploadEventModalProps = {

const CustomSelectUploadEventModal: React.FC<CustomSelectUploadEventModalProps> = props => {
const { open, onClose, onSelect } = props;
const { user } = useAuth();
const isSuperadmin = useMemo(() => user?.role === 'superadmin', [user]);

const [selectedFile, setSelectedFile] = useState<{
id: string;
@@ -63,6 +66,23 @@ const CustomSelectUploadEventModal: React.FC<CustomSelectUploadEventModalProps>
}
};

type FileItem = { name: string; isFile: boolean };
type ReadSuccessArgs = { action: string; result?: { files?: FileItem[] } };
type FileOpenArgs = { fileDetails?: FileItem; cancel?: boolean };

const handleSuccess = (args: ReadSuccessArgs) => {
if (isSuperadmin) return;
if (args && args.action === 'read' && args.result && Array.isArray(args.result.files)) {
args.result.files = args.result.files.filter((f: FileItem) => !(f.name === 'converted' && !f.isFile));
}
};

const handleFileOpen = (args: FileOpenArgs) => {
if (!isSuperadmin && args && args.fileDetails && args.fileDetails.name === 'converted' && !args.fileDetails.isFile) {
args.cancel = true;
}
};

return (
<DialogComponent
target="#root"
@@ -84,6 +104,9 @@ const CustomSelectUploadEventModal: React.FC<CustomSelectUploadEventModalProps>
)}
>
<FileManagerComponent
cssClass="e-bigger media-icons-xl"
success={handleSuccess}
fileOpen={handleFileOpen}
ajaxSettings={{
url: hostUrl + 'operations',
getImageUrl: hostUrl + 'get-image',
File diff suppressed because it is too large
@@ -1,87 +0,0 @@
import React from 'react';
import { listHolidays, uploadHolidaysCsv, type Holiday } from './apiHolidays';

const Einstellungen: React.FC = () => {
const [file, setFile] = React.useState<File | null>(null);
const [busy, setBusy] = React.useState(false);
const [message, setMessage] = React.useState<string | null>(null);
const [holidays, setHolidays] = React.useState<Holiday[]>([]);

const refresh = React.useCallback(async () => {
try {
const data = await listHolidays();
setHolidays(data.holidays);
} catch (e) {
const msg = e instanceof Error ? e.message : 'Fehler beim Laden der Ferien';
setMessage(msg);
}
}, []);

React.useEffect(() => {
refresh();
}, [refresh]);

const onUpload = async () => {
if (!file) return;
setBusy(true);
setMessage(null);
try {
const res = await uploadHolidaysCsv(file);
setMessage(`Import erfolgreich: ${res.inserted} neu, ${res.updated} aktualisiert.`);
await refresh();
} catch (e) {
const msg = e instanceof Error ? e.message : 'Fehler beim Import.';
setMessage(msg);
} finally {
setBusy(false);
}
};

return (
<div>
<h2 className="text-xl font-bold mb-4">Einstellungen</h2>
<div className="space-y-4">
<section className="p-4 border rounded-md">
<h3 className="font-semibold mb-2">Schulferien importieren</h3>
<p className="text-sm text-gray-600 mb-2">
Unterstützte Formate:
<br />• CSV mit Kopfzeile: <code>name</code>, <code>start_date</code>,{' '}
<code>end_date</code>, optional <code>region</code>
<br />• TXT/CSV ohne Kopfzeile mit Spalten: interner Name, <strong>Name</strong>,{' '}
<strong>Start (YYYYMMDD)</strong>, <strong>Ende (YYYYMMDD)</strong>, optional interne
Info (ignoriert)
</p>
<div className="flex items-center gap-3">
<input
type="file"
accept=".csv,text/csv,.txt,text/plain"
onChange={e => setFile(e.target.files?.[0] ?? null)}
/>
<button className="e-btn e-primary" onClick={onUpload} disabled={!file || busy}>
{busy ? 'Importiere…' : 'CSV/TXT importieren'}
</button>
</div>
{message && <div className="mt-2 text-sm">{message}</div>}
</section>

<section className="p-4 border rounded-md">
<h3 className="font-semibold mb-2">Importierte Ferien</h3>
{holidays.length === 0 ? (
<div className="text-sm text-gray-600">Keine Einträge vorhanden.</div>
) : (
<ul className="text-sm list-disc pl-6">
{holidays.slice(0, 20).map(h => (
<li key={h.id}>
{h.name}: {h.start_date} – {h.end_date}
{h.region ? ` (${h.region})` : ''}
</li>
))}
</ul>
)}
</section>
</div>
</div>
);
};

export default Einstellungen;
@@ -1,5 +1,7 @@
/* Tailwind removed: base/components/utilities directives no longer used. */

/* Custom overrides moved to theme-overrides.css to load after Syncfusion styles */

/* :root {
font-family: system-ui, Avenir, Helvetica, Arial, sans-serif;
line-height: 1.5;
@@ -141,6 +141,25 @@ const Infoscreen_groups: React.FC = () => {
]);
setNewGroupName('');
setShowDialog(false);

// Update group order to include the new group
try {
const orderResponse = await fetch('/api/groups/order');
if (orderResponse.ok) {
const orderData = await orderResponse.json();
const currentOrder = orderData.order || [];
// Add new group ID to the end if not already present
if (!currentOrder.includes(newGroup.id)) {
await fetch('/api/groups/order', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ order: [...currentOrder, newGroup.id] }),
});
}
}
} catch (err) {
console.error('Failed to update group order:', err);
}
} catch (err) {
toast.show({
content: (err as Error).message,
@@ -154,6 +173,10 @@ const Infoscreen_groups: React.FC = () => {
// Delete a group
const handleDeleteGroup = async (groupName: string) => {
try {
// Find the group ID before deleting
const groupToDelete = groups.find(g => g.headerText === groupName);
const deletedGroupId = groupToDelete?.id;

// Move the group's clients to "Nicht zugeordnet" (unassigned)
const groupClients = clients.filter(c => c.Status === groupName);
if (groupClients.length > 0) {
@@ -172,6 +195,27 @@ const Infoscreen_groups: React.FC = () => {
timeOut: 5000,
showCloseButton: false,
});

// Update group order to remove the deleted group
if (deletedGroupId) {
try {
const orderResponse = await fetch('/api/groups/order');
if (orderResponse.ok) {
const orderData = await orderResponse.json();
const currentOrder = orderData.order || [];
// Remove deleted group ID from order
const updatedOrder = currentOrder.filter((id: number) => id !== deletedGroupId);
await fetch('/api/groups/order', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ order: updatedOrder }),
});
}
} catch (err) {
console.error('Failed to update group order:', err);
}
}

// Reload groups and clients
const groupData = await fetchGroups();
const groupMap = Object.fromEntries(groupData.map((g: Group) => [g.id, g.name]));
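The two Infoscreen_groups hunks above both perform a read-modify-write on `/api/groups/order`. The list manipulation itself reduces to two pure helpers (a sketch; the helper names are illustrative, not from the codebase):

```typescript
// Append a group ID to the order unless it is already present
// (mirrors the "add new group" hunk).
function addToOrder(order: number[], id: number): number[] {
  return order.includes(id) ? order : [...order, id];
}

// Remove a group ID from the order; a no-op if the ID is absent
// (mirrors the "delete group" hunk).
function removeFromOrder(order: number[], id: number): number[] {
  return order.filter(existing => existing !== id);
}

const order = [3, 1, 7];
const withNew = addToOrder(order, 9);         // appended at the end
const unchanged = addToOrder(order, 1);       // already present, same array returned
const withoutOne = removeFromOrder(order, 1); // 1 removed
```

Keeping the helpers pure makes the two fetch call sites symmetric: GET the order, transform it, POST it back.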
98
dashboard/src/login.tsx
Normal file
@@ -0,0 +1,98 @@
import React, { useState } from 'react';
import { useNavigate } from 'react-router-dom';
import { useAuth } from './useAuth';

export default function Login() {
const { login, loading, error, logout } = useAuth();
const [username, setUsername] = useState('');
const [password, setPassword] = useState('');
const [message, setMessage] = useState<string | null>(null);
const isDev = import.meta.env.MODE !== 'production';
const navigate = useNavigate();

const handleSubmit = async (e: React.FormEvent) => {
e.preventDefault();
setMessage(null);
try {
await login(username, password);
setMessage('Login erfolgreich');
// Redirect to dashboard after successful login
navigate('/');
} catch (err) {
setMessage(err instanceof Error ? err.message : 'Login fehlgeschlagen');
}
};

return (
<div style={{ display: 'flex', justifyContent: 'center', alignItems: 'center', minHeight: '100vh' }}>
<form onSubmit={handleSubmit} style={{ width: 360, padding: 24, border: '1px solid #ddd', borderRadius: 8, background: '#fff' }}>
<h2 style={{ marginTop: 0 }}>Anmeldung</h2>
{message && <div style={{ color: message.includes('erfolgreich') ? 'green' : 'crimson', marginBottom: 12 }}>{message}</div>}
{error && <div style={{ color: 'crimson', marginBottom: 12 }}>{error}</div>}
<div style={{ marginBottom: 12 }}>
<label style={{ display: 'block', marginBottom: 4 }}>Benutzername</label>
<input
type="text"
value={username}
onChange={(e) => setUsername(e.target.value)}
disabled={loading}
style={{ width: '100%', padding: 8 }}
autoFocus
/>
</div>
<div style={{ marginBottom: 12 }}>
<label style={{ display: 'block', marginBottom: 4 }}>Passwort</label>
<input
type="password"
value={password}
onChange={(e) => setPassword(e.target.value)}
disabled={loading}
style={{ width: '100%', padding: 8 }}
/>
</div>
<button type="submit" disabled={loading} style={{ width: '100%', padding: 10 }}>
{loading ? 'Anmelden ...' : 'Anmelden'}
</button>
{isDev && (
<button
type="button"
onClick={async () => {
setMessage(null);
try {
const res = await fetch('/api/auth/dev-login-superadmin', {
method: 'POST',
credentials: 'include',
});
const data = await res.json();
if (!res.ok || data.error) throw new Error(data.error || 'Dev-Login fehlgeschlagen');
setMessage('Dev-Login erfolgreich (Superadmin)');
// Refresh the page/state; the RequireAuth will render the app
window.location.href = '/';
} catch (err) {
setMessage(err instanceof Error ? err.message : 'Dev-Login fehlgeschlagen');
}
}}
disabled={loading}
style={{ width: '100%', padding: 10, marginTop: 10 }}
>
Dev-Login (Superadmin)
</button>
)}
<button
type="button"
onClick={async () => {
try {
await logout();
setMessage('Abgemeldet.');
} catch {
// ignore
}
}}
style={{ width: '100%', padding: 10, marginTop: 10, background: '#f5f5f5' }}
>
Abmelden & zurück zur Anmeldung
</button>
</form>
</div>
);
}
@@ -1,12 +1,41 @@
import React from 'react';
import React, { useEffect, useState } from 'react';
import { useNavigate } from 'react-router-dom';
import { useAuth } from './useAuth';

const Logout: React.FC = () => (
<div className="flex items-center justify-center h-screen">
<div className="text-center">
<h2 className="text-2xl font-bold mb-4">Abmeldung</h2>
<p>Sie haben sich erfolgreich abgemeldet.</p>
const Logout: React.FC = () => {
const navigate = useNavigate();
const { logout } = useAuth();
const [error, setError] = useState<string | null>(null);

useEffect(() => {
let mounted = true;
(async () => {
try {
await logout();
} catch (err) {
if (mounted) {
const msg = err instanceof Error ? err.message : 'Logout fehlgeschlagen';
setError(msg);
}
} finally {
// Continue to the login page even if the logout request fails
navigate('/login', { replace: true });
}
})();
return () => {
mounted = false;
};
}, [logout, navigate]);

return (
<div className="flex items-center justify-center h-screen">
<div className="text-center">
<h2 className="text-2xl font-bold mb-4">Abmeldung</h2>
<p>{error ? `Hinweis: ${error}` : 'Sie werden abgemeldet …'}</p>
<p style={{ marginTop: 16 }}>Falls nichts passiert: <a href="/login">Zur Login-Seite</a></p>
</div>
</div>
</div>
);
);
};

export default Logout;
@@ -2,6 +2,7 @@ import { StrictMode } from 'react';
import { createRoot } from 'react-dom/client';
import './index.css';
import App from './App.tsx';
import { AuthProvider } from './useAuth';
import { registerLicense } from '@syncfusion/ej2-base';
import '@syncfusion/ej2-base/styles/material3.css';
import '@syncfusion/ej2-navigations/styles/material3.css';
@@ -20,6 +21,7 @@ import '@syncfusion/ej2-lists/styles/material3.css';
import '@syncfusion/ej2-calendars/styles/material3.css';
import '@syncfusion/ej2-splitbuttons/styles/material3.css';
import '@syncfusion/ej2-icons/styles/material3.css';
import './theme-overrides.css';

// Insert your license key here
registerLicense(
@@ -28,6 +30,8 @@ registerLicense(

createRoot(document.getElementById('root')!).render(
<StrictMode>
<App />
<AuthProvider>
<App />
</AuthProvider>
</StrictMode>
);
@@ -1,4 +1,5 @@
import React, { useState, useRef } from 'react';
/* eslint-disable @typescript-eslint/no-explicit-any */
import React, { useState, useRef, useMemo } from 'react';
import CustomMediaInfoPanel from './components/CustomMediaInfoPanel';
import {
FileManagerComponent,
@@ -7,10 +8,13 @@ import {
DetailsView,
Toolbar,
} from '@syncfusion/ej2-react-filemanager';
import { useAuth } from './useAuth';

const hostUrl = '/api/eventmedia/filemanager/'; // Your backend endpoint for the FileManager

const Media: React.FC = () => {
const { user } = useAuth();
const isSuperadmin = useMemo(() => user?.role === 'superadmin', [user]);
// State for the displayed file details
const [fileDetails] = useState<null | {
name: string;
@@ -43,6 +47,25 @@ const Media: React.FC = () => {
}
}, [viewMode]);

type FileItem = { name: string; isFile: boolean };
type ReadSuccessArgs = { action: string; result?: { files?: FileItem[] } };
type FileOpenArgs = { fileDetails?: FileItem; cancel?: boolean };

// Hide "converted" for non-superadmins after data load
const handleSuccess = (args: ReadSuccessArgs) => {
if (isSuperadmin) return;
if (args && args.action === 'read' && args.result && Array.isArray(args.result.files)) {
args.result.files = args.result.files.filter((f: FileItem) => !(f.name === 'converted' && !f.isFile));
}
};

// Prevent opening the "converted" folder for non-superadmins
const handleFileOpen = (args: FileOpenArgs) => {
if (!isSuperadmin && args && args.fileDetails && args.fileDetails.name === 'converted' && !args.fileDetails.isFile) {
args.cancel = true;
}
};

return (
<div>
<h2 className="text-xl font-bold mb-4">Medien</h2>
@@ -65,12 +88,98 @@ const Media: React.FC = () => {
{/* Debug output removed since a ReactNode is expected */}
<FileManagerComponent
ref={fileManagerRef}
cssClass="e-bigger media-icons-xl"
success={handleSuccess}
fileOpen={handleFileOpen}
ajaxSettings={{
url: hostUrl + 'operations',
getImageUrl: hostUrl + 'get-image',
uploadUrl: hostUrl + 'upload',
downloadUrl: hostUrl + 'download',
}}
// Increase upload limits: the default maxFileSize for the Syncfusion FileManager is ~30_000_000 (30 MB).
// Set `maxFileSize` in bytes and `allowedExtensions` for the video types you want to accept.
// We disable autoUpload so we can validate duration client-side before sending.
uploadSettings={{
maxFileSize: 1.5 * 1024 * 1024 * 1024, // 1.5 GB - enough for a 10min Full HD video at high bitrate
allowedExtensions: '.pdf,.ppt,.pptx,.odp,.mp4,.webm,.ogg,.mov,.mkv,.avi,.wmv,.flv,.mpg,.mpeg,.jpg,.jpeg,.png,.gif,.bmp,.tiff,.svg',
autoUpload: false,
minFileSize: 0, // Allow all file sizes (no minimum)
// chunkSize can be added later once the server supports chunk assembly
}}
// Validate video duration (max 10 minutes) before starting the upload.
created={() => {
try {
const el = fileManagerRef.current?.element as any;
const inst = el && el.ej2_instances && el.ej2_instances[0];
const maxSeconds = 10 * 60; // 10 minutes
if (inst && inst.uploadObj) {
// Override the selected handler to validate files before upload
const originalSelected = inst.uploadObj.selected;
inst.uploadObj.selected = async (args: any) => {
const filesData = args && (args.filesData || args.files) ? (args.filesData || args.files) : [];
const tooLong: string[] = [];
// Helper to get the native File object
const getRawFile = (fd: any) => fd && (fd.rawFile || fd.file || fd) as File;

const checks = Array.from(filesData).map((fd: any) => {
const file = getRawFile(fd);
if (!file) return Promise.resolve(true);
// Only check video MIME types or common extensions
if (!file.type.startsWith('video') && !/\.(mp4|webm|ogg|mov|mkv)$/i.test(file.name)) {
return Promise.resolve(true);
}
return new Promise<boolean>((resolve) => {
const url = URL.createObjectURL(file);
const video = document.createElement('video');
video.preload = 'metadata';
video.src = url;
const clean = () => {
try { URL.revokeObjectURL(url); } catch { /* noop */ }
};
video.onloadedmetadata = function () {
clean();
if (video.duration && video.duration <= maxSeconds) {
resolve(true);
} else {
tooLong.push(`${file.name} (${Math.round(video.duration || 0)}s)`);
resolve(false);
}
};
video.onerror = function () {
clean();
// If metadata can't be read, allow the upload and let the server verify
resolve(true);
};
});
});

const results = await Promise.all(checks);
const allOk = results.every(Boolean);
if (!allOk) {
// Cancel the automatic upload and show an error to the user
args.cancel = true;
const msg = `Upload blocked: the following videos exceed ${maxSeconds} seconds:\n` + tooLong.join('\n');
// Use alert for now; replace with the project's toast system if available
alert(msg);
return;
}
// All files OK — proceed with the original selected handler if present,
// otherwise start the upload programmatically
if (typeof originalSelected === 'function') {
try { originalSelected.call(inst.uploadObj, args); } catch { /* noop */ }
}
// If autoUpload is false we need to start the upload manually
try {
inst.uploadObj.upload(args && (args.filesData || args.files));
} catch { /* ignore — uploader may handle starting itself */ }
};
}
} catch (e) {
// Non-fatal: if we can't hook the uploader, uploads will behave normally
console.error('Could not attach video-duration hook to uploader', e);
}
}}
toolbarSettings={{
items: [
'NewFolder',
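The duration hook in the media.tsx diff above only probes files that look like videos. That gating predicate is easy to pull out and check on its own (a sketch; `needsDurationCheck` is an illustrative name, not from the codebase):

```typescript
// Mirror of the guard in the uploader hook: a file is probed for duration
// only if its MIME type starts with "video" or its name carries one of the
// extensions in the regex.
function needsDurationCheck(name: string, mimeType: string): boolean {
  return mimeType.startsWith('video') || /\.(mp4|webm|ogg|mov|mkv)$/i.test(name);
}

const probedExt = needsDurationCheck('clip.MP4', '');                 // extension match, case-insensitive
const probedMime = needsDurationCheck('blob', 'video/webm');          // MIME match
const skipped = needsDurationCheck('slides.pdf', 'application/pdf');  // not a video
const aviGap = needsDurationCheck('movie.avi', '');                   // .avi is NOT in the regex
```

Worth noting: `.avi`, `.wmv`, `.flv`, and `.mpg` are accepted by `allowedExtensions` but not matched by the regex, so without a video MIME type they skip the client-side duration check and rely on server-side verification, which the hook's error path anticipates anyway.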
373
dashboard/src/monitoring.css
Normal file
@@ -0,0 +1,373 @@
.monitoring-page {
display: flex;
flex-direction: column;
gap: 1.25rem;
padding: 0.5rem 0.25rem 1rem;
}

.monitoring-header-row {
display: flex;
justify-content: space-between;
align-items: flex-start;
gap: 1rem;
flex-wrap: wrap;
}

.monitoring-title {
margin: 0;
font-size: 1.75rem;
font-weight: 700;
color: #5c4318;
}

.monitoring-subtitle {
margin: 0.35rem 0 0;
color: #6b7280;
max-width: 60ch;
}

.monitoring-toolbar {
display: flex;
align-items: end;
gap: 0.75rem;
flex-wrap: wrap;
}

.monitoring-toolbar-field {
display: flex;
flex-direction: column;
gap: 0.35rem;
min-width: 190px;
}

.monitoring-toolbar-field-compact {
min-width: 160px;
}

.monitoring-toolbar-field label {
font-size: 0.875rem;
font-weight: 600;
color: #5b4b32;
}

.monitoring-meta-row {
display: flex;
gap: 1rem;
flex-wrap: wrap;
color: #6b7280;
font-size: 0.92rem;
}

.monitoring-summary-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(180px, 1fr));
gap: 1rem;
}

.monitoring-metric-card {
overflow: hidden;
}

.monitoring-metric-content {
display: flex;
flex-direction: column;
gap: 0.35rem;
}

.monitoring-metric-title {
font-size: 0.9rem;
font-weight: 600;
color: #6b7280;
}

.monitoring-metric-value {
font-size: 2rem;
font-weight: 700;
color: #1f2937;
line-height: 1;
}

.monitoring-metric-subtitle {
font-size: 0.85rem;
color: #64748b;
}

.monitoring-main-grid {
display: grid;
grid-template-columns: minmax(0, 2fr) minmax(320px, 1fr);
gap: 1rem;
align-items: start;
}

.monitoring-sidebar-column {
display: flex;
flex-direction: column;
gap: 1rem;
}

.monitoring-panel {
background: #fff;
border: 1px solid #e5e7eb;
border-radius: 16px;
padding: 1.1rem;
box-shadow: 0 12px 40px rgb(120 89 28 / 8%);
}

.monitoring-clients-panel {
min-width: 0;
}

.monitoring-panel-header {
display: flex;
justify-content: space-between;
align-items: center;
gap: 0.75rem;
margin-bottom: 0.85rem;
}

.monitoring-panel-header-stacked {
align-items: end;
flex-wrap: wrap;
}

.monitoring-panel-header h3 {
margin: 0;
font-size: 1.1rem;
font-weight: 700;
}

.monitoring-panel-header span {
color: #6b7280;
font-size: 0.9rem;
}

.monitoring-detail-card .e-card-content {
padding-top: 0;
}

.monitoring-detail-list {
display: flex;
flex-direction: column;
gap: 0.75rem;
}

.monitoring-detail-row {
display: flex;
justify-content: space-between;
gap: 1rem;
align-items: flex-start;
border-bottom: 1px solid #f1f5f9;
padding-bottom: 0.55rem;
}

.monitoring-detail-row span {
color: #64748b;
font-size: 0.9rem;
}

.monitoring-detail-row strong {
text-align: right;
color: #111827;
}

.monitoring-status-badge {
display: inline-flex;
align-items: center;
justify-content: center;
padding: 0.22rem 0.6rem;
border-radius: 999px;
font-weight: 700;
font-size: 0.78rem;
letter-spacing: 0.01em;
}

.monitoring-screenshot {
width: 100%;
border-radius: 12px;
border: 1px solid #e5e7eb;
background: linear-gradient(135deg, #f8fafc, #e2e8f0);
min-height: 180px;
object-fit: cover;
}

.monitoring-screenshot-meta {
margin-top: 0.55rem;
font-size: 0.88rem;
color: #64748b;
display: flex;
flex-direction: column;
gap: 0.35rem;
}

.monitoring-shot-type {
display: inline-flex;
align-items: center;
border-radius: 999px;
padding: 0.15rem 0.55rem;
font-size: 0.78rem;
font-weight: 700;
}

.monitoring-shot-type-periodic {
background: #e2e8f0;
color: #334155;
}

.monitoring-shot-type-event {
background: #ffedd5;
color: #9a3412;
}

.monitoring-shot-type-active {
box-shadow: 0 0 0 2px #fdba74;
}

.monitoring-error-box {
display: flex;
flex-direction: column;
gap: 0.5rem;
padding: 0.85rem;
border-radius: 12px;
background: linear-gradient(135deg, #fff1f2, #fee2e2);
border: 1px solid #fecdd3;
}

.monitoring-error-time {
color: #9f1239;
font-size: 0.85rem;
font-weight: 600;
}

.monitoring-error-message {
color: #4c0519;
font-weight: 600;
}

.monitoring-mono {
font-family: ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, 'Liberation Mono', 'Courier New', monospace;
font-size: 0.85rem;
}

.monitoring-log-detail-row {
display: flex;
justify-content: space-between;
gap: 1rem;
align-items: flex-start;
border-bottom: 1px solid #f1f5f9;
padding-bottom: 0.55rem;
}

.monitoring-log-detail-row span {
color: #64748b;
font-size: 0.9rem;
}

.monitoring-log-detail-row strong {
text-align: right;
color: #111827;
}

.monitoring-log-context {
margin: 0;
background: #f8fafc;
border: 1px solid #e2e8f0;
border-radius: 10px;
padding: 0.75rem;
white-space: pre-wrap;
overflow-wrap: anywhere;
max-height: 280px;
overflow: auto;
font-family: ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, 'Liberation Mono', 'Courier New', monospace;
font-size: 0.84rem;
color: #0f172a;
}

.monitoring-log-dialog-content {
display: flex;
flex-direction: column;
gap: 1rem;
padding: 0.9rem 1rem 0.55rem;
}

.monitoring-log-dialog-body {
min-height: 340px;
display: flex;
flex-direction: column;
justify-content: space-between;
}

.monitoring-log-dialog-actions {
margin-top: 0.5rem;
padding: 0 1rem 0.9rem;
display: flex;
justify-content: flex-end;
}

.monitoring-log-context-title {
font-weight: 600;
margin-bottom: 0.55rem;
}

.monitoring-log-dialog-content .monitoring-log-detail-row {
padding: 0.1rem 0 0.75rem;
}

.monitoring-log-dialog-content .monitoring-log-context {
padding: 0.95rem;
border-radius: 12px;
}

.monitoring-lower-grid {
display: grid;
grid-template-columns: repeat(2, minmax(0, 1fr));
gap: 1rem;
}

@media (width <= 1200px) {
.monitoring-main-grid,
.monitoring-lower-grid {
grid-template-columns: 1fr;
}
}

@media (width <= 720px) {
.monitoring-page {
padding: 0.25rem 0 0.75rem;
}

.monitoring-title {
font-size: 1.5rem;
}

.monitoring-header-row,
.monitoring-panel-header,
.monitoring-detail-row,
.monitoring-log-detail-row {
flex-direction: column;
align-items: flex-start;
}

.monitoring-detail-row strong,
.monitoring-log-detail-row strong {
text-align: left;
}

.monitoring-toolbar,
.monitoring-toolbar-field,
.monitoring-toolbar-field-compact {
width: 100%;
}

.monitoring-log-dialog-content {
padding: 0.4rem 0.2rem 0.1rem;
gap: 0.75rem;
}

.monitoring-log-dialog-body {
min-height: 300px;
}

.monitoring-log-dialog-actions {
padding: 0 0.2rem 0.4rem;
}
}
||||
573
dashboard/src/monitoring.tsx
Normal file
@@ -0,0 +1,573 @@
import React from 'react';
import {
fetchClientMonitoringLogs,
fetchMonitoringOverview,
fetchRecentClientErrors,
type MonitoringClient,
type MonitoringLogEntry,
type MonitoringOverview,
} from './apiClientMonitoring';
import { useAuth } from './useAuth';
import { ButtonComponent } from '@syncfusion/ej2-react-buttons';
import { DropDownListComponent } from '@syncfusion/ej2-react-dropdowns';
import {
GridComponent,
ColumnsDirective,
ColumnDirective,
Inject,
Page,
Search,
Sort,
Toolbar,
} from '@syncfusion/ej2-react-grids';
import { MessageComponent } from '@syncfusion/ej2-react-notifications';
import { DialogComponent } from '@syncfusion/ej2-react-popups';
import './monitoring.css';

const REFRESH_INTERVAL_MS = 15000;
const PRIORITY_REFRESH_INTERVAL_MS = 3000;

const hourOptions = [
{ text: 'Letzte 6 Stunden', value: 6 },
{ text: 'Letzte 24 Stunden', value: 24 },
{ text: 'Letzte 72 Stunden', value: 72 },
{ text: 'Letzte 168 Stunden', value: 168 },
];

const logLevelOptions = [
{ text: 'Alle Logs', value: 'ALL' },
{ text: 'ERROR', value: 'ERROR' },
{ text: 'WARN', value: 'WARN' },
{ text: 'INFO', value: 'INFO' },
{ text: 'DEBUG', value: 'DEBUG' },
];

const statusPalette: Record<string, { label: string; color: string; background: string }> = {
healthy: { label: 'Stabil', color: '#166534', background: '#dcfce7' },
warning: { label: 'Warnung', color: '#92400e', background: '#fef3c7' },
critical: { label: 'Kritisch', color: '#991b1b', background: '#fee2e2' },
offline: { label: 'Offline', color: '#334155', background: '#e2e8f0' },
};

function parseUtcDate(value?: string | null): Date | null {
if (!value) return null;
const trimmed = value.trim();
if (!trimmed) return null;

const hasTimezone = /[zZ]$|[+-]\d{2}:?\d{2}$/.test(trimmed);
const utcValue = hasTimezone ? trimmed : `${trimmed}Z`;
const parsed = new Date(utcValue);
if (Number.isNaN(parsed.getTime())) return null;
return parsed;
}
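`parseUtcDate` exists because timestamps may arrive without an explicit timezone suffix; appending `Z` makes `new Date()` read them as UTC instead of the browser's local zone. The same logic in isolation:

```typescript
// Parse a timestamp as UTC: strings without an explicit timezone
// (e.g. "2024-05-01T12:00:00") get a trailing "Z" so new Date() does not
// interpret them in the local zone. Invalid input yields null.
function parseUtcDate(value?: string | null): Date | null {
  if (!value) return null;
  const trimmed = value.trim();
  if (!trimmed) return null;
  const hasTimezone = /[zZ]$|[+-]\d{2}:?\d{2}$/.test(trimmed);
  const utcValue = hasTimezone ? trimmed : `${trimmed}Z`;
  const parsed = new Date(utcValue);
  return Number.isNaN(parsed.getTime()) ? null : parsed;
}

const naive = parseUtcDate('2024-05-01T12:00:00');        // treated as 12:00 UTC
const explicit = parseUtcDate('2024-05-01T12:00:00Z');    // already UTC
const offset = parseUtcDate('2024-05-01T14:00:00+02:00'); // also 12:00 UTC
const bad = parseUtcDate('not a date');                   // null
```

All three valid inputs above resolve to the same instant, which is exactly the invariant the helper guarantees.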

function formatTimestamp(value?: string | null): string {
if (!value) return 'Keine Daten';
const date = parseUtcDate(value);
if (!date) return value;
return date.toLocaleString('de-DE');
}

function formatRelative(value?: string | null): string {
if (!value) return 'Keine Daten';
const date = parseUtcDate(value);
if (!date) return 'Unbekannt';

const diffMs = Date.now() - date.getTime();
const diffMinutes = Math.floor(diffMs / 60000);
const diffHours = Math.floor(diffMinutes / 60);
const diffDays = Math.floor(diffHours / 24);

if (diffMinutes < 1) return 'gerade eben';
if (diffMinutes < 60) return `vor ${diffMinutes} Min.`;
if (diffHours < 24) return `vor ${diffHours} Std.`;
return `vor ${diffDays} Tag${diffDays === 1 ? '' : 'en'}`;
}
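The bucketing in `formatRelative` (minutes, then hours, then days, with German pluralization of "Tag"/"Tagen") can be exercised independently of the wall clock by passing the millisecond difference directly (a sketch; `relativeLabel` is an illustrative name, not from the codebase):

```typescript
// Same thresholds as formatRelative, but on a precomputed difference in ms.
function relativeLabel(diffMs: number): string {
  const diffMinutes = Math.floor(diffMs / 60000);
  const diffHours = Math.floor(diffMinutes / 60);
  const diffDays = Math.floor(diffHours / 24);
  if (diffMinutes < 1) return 'gerade eben';
  if (diffMinutes < 60) return `vor ${diffMinutes} Min.`;
  if (diffHours < 24) return `vor ${diffHours} Std.`;
  return `vor ${diffDays} Tag${diffDays === 1 ? '' : 'en'}`;
}

const justNow = relativeLabel(30 * 1000);            // 'gerade eben'
const minutes = relativeLabel(5 * 60 * 1000);        // 'vor 5 Min.'
const hours = relativeLabel(3 * 60 * 60 * 1000);     // 'vor 3 Std.'
const oneDay = relativeLabel(26 * 60 * 60 * 1000);   // 'vor 1 Tag'
const days = relativeLabel(3 * 24 * 60 * 60 * 1000); // 'vor 3 Tagen'
```

Separating the clock read (`Date.now()`) from the pure bucketing is what makes this kind of formatter deterministic to test.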

function statusBadge(status: string) {
const palette = statusPalette[status] || statusPalette.offline;
return (
<span
className="monitoring-status-badge"
style={{ color: palette.color, backgroundColor: palette.background }}
>
{palette.label}
</span>
);
}

function screenshotTypeBadge(type?: string | null, hasPriority = false) {
const normalized = (type || 'periodic').toLowerCase();
const map: Record<string, { label: string; className: string }> = {
periodic: { label: 'Periodisch', className: 'monitoring-shot-type-periodic' },
event_start: { label: 'Event-Start', className: 'monitoring-shot-type-event' },
event_stop: { label: 'Event-Stopp', className: 'monitoring-shot-type-event' },
};

const info = map[normalized] || map.periodic;
const classes = `monitoring-shot-type ${info.className}${hasPriority ? ' monitoring-shot-type-active' : ''}`;
return <span className={classes}>{info.label}</span>;
}

function renderMetricCard(title: string, value: number, subtitle: string, accent: string) {
return (
<div className="e-card monitoring-metric-card" style={{ borderTop: `4px solid ${accent}` }}>
<div className="e-card-content monitoring-metric-content">
<div className="monitoring-metric-title">{title}</div>
<div className="monitoring-metric-value">{value}</div>
<div className="monitoring-metric-subtitle">{subtitle}</div>
</div>
</div>
);
}

function renderContext(context?: Record<string, unknown>): string {
if (!context || Object.keys(context).length === 0) {
return 'Kein Kontext vorhanden';
}
try {
return JSON.stringify(context, null, 2);
} catch {
return 'Kontext konnte nicht formatiert werden';
}
}

function buildScreenshotUrl(client: MonitoringClient, overviewTimestamp?: string | null): string {
const refreshKey = client.lastScreenshotHash || client.lastScreenshotAnalyzed || overviewTimestamp;
if (!refreshKey) {
return client.screenshotUrl;
}

const separator = client.screenshotUrl.includes('?') ? '&' : '?';
return `${client.screenshotUrl}${separator}v=${encodeURIComponent(refreshKey)}`;
}
|
||||
|
||||
const MonitoringDashboard: React.FC = () => {
|
||||
const { user } = useAuth();
|
||||
const [hours, setHours] = React.useState<number>(24);
|
||||
const [logLevel, setLogLevel] = React.useState<string>('ALL');
|
||||
const [overview, setOverview] = React.useState<MonitoringOverview | null>(null);
|
||||
const [recentErrors, setRecentErrors] = React.useState<MonitoringLogEntry[]>([]);
|
||||
const [clientLogs, setClientLogs] = React.useState<MonitoringLogEntry[]>([]);
|
||||
const [selectedClientUuid, setSelectedClientUuid] = React.useState<string | null>(null);
|
||||
const [loading, setLoading] = React.useState<boolean>(true);
|
||||
const [error, setError] = React.useState<string | null>(null);
|
||||
const [logsLoading, setLogsLoading] = React.useState<boolean>(false);
|
||||
const [screenshotErrored, setScreenshotErrored] = React.useState<boolean>(false);
|
||||
const selectedClientUuidRef = React.useRef<string | null>(null);
|
||||
const [selectedLogEntry, setSelectedLogEntry] = React.useState<MonitoringLogEntry | null>(null);
|
||||
|
||||
const selectedClient = React.useMemo<MonitoringClient | null>(() => {
|
||||
if (!overview || !selectedClientUuid) return null;
|
||||
return overview.clients.find(client => client.uuid === selectedClientUuid) || null;
|
||||
}, [overview, selectedClientUuid]);
|
||||
|
||||
const selectedClientScreenshotUrl = React.useMemo<string | null>(() => {
|
||||
if (!selectedClient) return null;
|
||||
return buildScreenshotUrl(selectedClient, overview?.timestamp || null);
|
||||
}, [selectedClient, overview?.timestamp]);
|
||||
|
||||
React.useEffect(() => {
|
||||
selectedClientUuidRef.current = selectedClientUuid;
|
||||
}, [selectedClientUuid]);
|
||||
|
||||
const loadOverview = React.useCallback(async (requestedHours: number, preserveSelection = true) => {
|
||||
setLoading(true);
|
||||
setError(null);
|
||||
try {
|
||||
const [overviewData, errorsData] = await Promise.all([
|
||||
fetchMonitoringOverview(requestedHours),
|
||||
fetchRecentClientErrors(25),
|
||||
]);
|
||||
setOverview(overviewData);
|
||||
setRecentErrors(errorsData);
|
||||
|
||||
const currentSelection = selectedClientUuidRef.current;
|
||||
const nextSelectedUuid =
|
||||
preserveSelection && currentSelection && overviewData.clients.some(client => client.uuid === currentSelection)
|
||||
? currentSelection
|
||||
: overviewData.clients[0]?.uuid || null;
|
||||
|
||||
setSelectedClientUuid(nextSelectedUuid);
|
||||
setScreenshotErrored(false);
|
||||
} catch (loadError) {
|
||||
setError(loadError instanceof Error ? loadError.message : 'Monitoring-Daten konnten nicht geladen werden');
|
||||
} finally {
|
||||
setLoading(false);
|
||||
}
|
||||
}, []);
|
||||
|
||||
React.useEffect(() => {
|
||||
loadOverview(hours, false);
|
||||
}, [hours, loadOverview]);
|
||||
|
||||
React.useEffect(() => {
|
||||
const hasActivePriorityScreenshots = (overview?.summary.activePriorityScreenshots || 0) > 0;
|
||||
const intervalMs = hasActivePriorityScreenshots ? PRIORITY_REFRESH_INTERVAL_MS : REFRESH_INTERVAL_MS;
|
||||
const intervalId = window.setInterval(() => {
|
||||
loadOverview(hours);
|
||||
}, intervalMs);
|
||||
|
||||
return () => window.clearInterval(intervalId);
|
||||
}, [hours, loadOverview, overview?.summary.activePriorityScreenshots]);
|
||||
|
||||
React.useEffect(() => {
|
||||
if (!selectedClientUuid) {
|
||||
setClientLogs([]);
|
||||
return;
|
||||
}
|
||||
|
||||
let active = true;
|
||||
const loadLogs = async () => {
|
||||
setLogsLoading(true);
|
||||
try {
|
||||
const logs = await fetchClientMonitoringLogs(selectedClientUuid, { level: logLevel, limit: 100 });
|
||||
if (active) {
|
||||
setClientLogs(logs);
|
||||
}
|
||||
} catch (loadError) {
|
||||
if (active) {
|
||||
setClientLogs([]);
|
||||
setError(loadError instanceof Error ? loadError.message : 'Client-Logs konnten nicht geladen werden');
|
||||
}
|
||||
} finally {
|
||||
if (active) {
|
||||
setLogsLoading(false);
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
loadLogs();
|
||||
return () => {
|
||||
active = false;
|
||||
};
|
||||
}, [selectedClientUuid, logLevel]);
|
||||
|
||||
React.useEffect(() => {
|
||||
setScreenshotErrored(false);
|
||||
}, [selectedClientUuid]);
|
||||
|
||||
if (!user || user.role !== 'superadmin') {
|
||||
return (
|
||||
<MessageComponent severity="Error" content="Dieses Monitoring-Dashboard ist nur für Superadministratoren sichtbar." />
|
||||
);
|
||||
}
|
||||
|
||||
const clientGridData = (overview?.clients || []).map(client => ({
|
||||
...client,
|
||||
displayName: client.description || client.hostname || client.uuid,
|
||||
lastAliveDisplay: formatTimestamp(client.lastAlive),
|
||||
currentProcessDisplay: client.currentProcess || 'kein Prozess',
|
||||
processStatusDisplay: client.processStatus || 'unbekannt',
|
||||
errorCount: client.logCounts24h.error,
|
||||
warnCount: client.logCounts24h.warn,
|
||||
}));
|
||||
|
||||
return (
|
||||
<div className="monitoring-page">
|
||||
<div className="monitoring-header-row">
|
||||
<div>
|
||||
<h2 className="monitoring-title">Monitor-Dashboard</h2>
|
||||
<p className="monitoring-subtitle">
|
||||
Live-Zustand der Infoscreen-Clients, Prozessstatus und zentrale Fehlerprotokolle.
|
||||
</p>
|
||||
</div>
|
||||
<div className="monitoring-toolbar">
|
||||
<div className="monitoring-toolbar-field">
|
||||
<label>Zeitraum</label>
|
||||
<DropDownListComponent
|
||||
dataSource={hourOptions}
|
||||
fields={{ text: 'text', value: 'value' }}
|
||||
value={hours}
|
||||
change={(args: { value: number }) => setHours(Number(args.value))}
|
||||
/>
|
||||
</div>
|
||||
<ButtonComponent cssClass="e-primary" onClick={() => loadOverview(hours)} disabled={loading}>
|
||||
Aktualisieren
|
||||
</ButtonComponent>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
{error && <MessageComponent severity="Error" content={error} />}
|
||||
|
||||
{overview && (
|
||||
<div className="monitoring-meta-row">
|
||||
<span>Stand: {formatTimestamp(overview.timestamp)}</span>
|
||||
<span>Alive-Fenster: {overview.gracePeriodSeconds} Sekunden</span>
|
||||
<span>Betrachtungszeitraum: {overview.periodHours} Stunden</span>
|
||||
</div>
|
||||
)}
|
||||
|
||||
<div className="monitoring-summary-grid">
|
||||
{renderMetricCard('Clients gesamt', overview?.summary.totalClients || 0, 'Registrierte Displays', '#7c3aed')}
|
||||
{renderMetricCard('Online', overview?.summary.onlineClients || 0, 'Heartbeat innerhalb der Grace-Periode', '#15803d')}
|
||||
{renderMetricCard('Warnungen', overview?.summary.warningClients || 0, 'Warn-Logs oder Übergangszustände', '#d97706')}
|
||||
{renderMetricCard('Kritisch', overview?.summary.criticalClients || 0, 'Crashs oder Fehler-Logs', '#dc2626')}
|
||||
{renderMetricCard('Offline', overview?.summary.offlineClients || 0, 'Keine frischen Signale', '#475569')}
|
||||
{renderMetricCard('Prioritäts-Screens', overview?.summary.activePriorityScreenshots || 0, 'Event-Start/Stop aktiv', '#ea580c')}
|
||||
{renderMetricCard('Fehler-Logs', overview?.summary.errorLogs || 0, 'Im gewählten Zeitraum', '#b91c1c')}
|
||||
</div>
|
||||
|
||||
{loading && !overview ? (
|
||||
<MessageComponent severity="Info" content="Monitoring-Daten werden geladen ..." />
|
||||
) : (
|
||||
<div className="monitoring-main-grid">
|
||||
<div className="monitoring-panel monitoring-clients-panel">
|
||||
<div className="monitoring-panel-header">
|
||||
<h3>Client-Zustand</h3>
|
||||
<span>{overview?.clients.length || 0} Einträge</span>
|
||||
</div>
|
||||
<GridComponent
|
||||
dataSource={clientGridData}
|
||||
allowPaging={true}
|
||||
pageSettings={{ pageSize: 10 }}
|
||||
allowSorting={true}
|
||||
toolbar={['Search']}
|
||||
height={460}
|
||||
rowSelected={(args: { data: MonitoringClient }) => {
|
||||
setSelectedClientUuid(args.data.uuid);
|
||||
}}
|
||||
>
|
||||
<ColumnsDirective>
|
||||
<ColumnDirective
|
||||
field="status"
|
||||
headerText="Status"
|
||||
width="120"
|
||||
template={(props: MonitoringClient) => statusBadge(props.status)}
|
||||
/>
|
||||
<ColumnDirective field="displayName" headerText="Client" width="190" />
|
||||
<ColumnDirective field="groupName" headerText="Gruppe" width="150" />
|
||||
<ColumnDirective field="currentProcessDisplay" headerText="Prozess" width="130" />
|
||||
<ColumnDirective field="processStatusDisplay" headerText="Prozessstatus" width="130" />
|
||||
<ColumnDirective field="errorCount" headerText="ERROR" textAlign="Right" width="90" />
|
||||
<ColumnDirective field="warnCount" headerText="WARN" textAlign="Right" width="90" />
|
||||
<ColumnDirective field="lastAliveDisplay" headerText="Letztes Signal" width="170" />
|
||||
</ColumnsDirective>
|
||||
<Inject services={[Page, Search, Sort, Toolbar]} />
|
||||
</GridComponent>
|
||||
</div>
|
||||
|
||||
<div className="monitoring-sidebar-column">
|
||||
<div className="e-card monitoring-detail-card">
|
||||
<div className="e-card-header">
|
||||
<div className="e-card-header-caption">
|
||||
<div className="e-card-title">Aktiver Client</div>
|
||||
</div>
|
||||
</div>
|
||||
<div className="e-card-content">
|
||||
{selectedClient ? (
|
||||
<div className="monitoring-detail-list">
|
||||
<div className="monitoring-detail-row">
|
||||
<span>Name</span>
|
||||
<strong>{selectedClient.description || selectedClient.hostname || selectedClient.uuid}</strong>
|
||||
</div>
|
||||
<div className="monitoring-detail-row">
|
||||
<span>Status</span>
|
||||
<strong>{statusBadge(selectedClient.status)}</strong>
|
||||
</div>
|
||||
<div className="monitoring-detail-row">
|
||||
<span>UUID</span>
|
||||
<strong className="monitoring-mono">{selectedClient.uuid}</strong>
|
||||
</div>
|
||||
<div className="monitoring-detail-row">
|
||||
<span>Raumgruppe</span>
|
||||
<strong>{selectedClient.groupName || 'Nicht zugeordnet'}</strong>
|
||||
</div>
|
||||
<div className="monitoring-detail-row">
|
||||
<span>Prozess</span>
|
||||
<strong>{selectedClient.currentProcess || 'kein Prozess'}</strong>
|
||||
</div>
|
||||
<div className="monitoring-detail-row">
|
||||
<span>PID</span>
|
||||
<strong>{selectedClient.processPid || 'keine PID'}</strong>
|
||||
</div>
|
||||
<div className="monitoring-detail-row">
|
||||
<span>Event-ID</span>
|
||||
<strong>{selectedClient.currentEventId || 'keine Zuordnung'}</strong>
|
||||
</div>
|
||||
<div className="monitoring-detail-row">
|
||||
<span>Letztes Signal</span>
|
||||
<strong>{formatRelative(selectedClient.lastAlive)}</strong>
|
||||
</div>
|
||||
<div className="monitoring-detail-row">
|
||||
<span>Bildschirmstatus</span>
|
||||
<strong>{selectedClient.screenHealthStatus || 'UNKNOWN'}</strong>
|
||||
</div>
|
||||
<div className="monitoring-detail-row">
|
||||
<span>Letzte Analyse</span>
|
||||
<strong>{formatTimestamp(selectedClient.lastScreenshotAnalyzed)}</strong>
|
||||
</div>
|
||||
<div className="monitoring-detail-row">
|
||||
<span>Screenshot-Typ</span>
|
||||
<strong>
|
||||
{screenshotTypeBadge(
|
||||
selectedClient.latestScreenshotType,
|
||||
!!selectedClient.hasActivePriorityScreenshot
|
||||
)}
|
||||
</strong>
|
||||
</div>
|
||||
{selectedClient.priorityScreenshotReceivedAt && (
|
||||
<div className="monitoring-detail-row">
|
||||
<span>Priorität empfangen</span>
|
||||
<strong>{formatTimestamp(selectedClient.priorityScreenshotReceivedAt)}</strong>
|
||||
</div>
|
||||
)}
|
||||
</div>
|
||||
) : (
|
||||
<MessageComponent severity="Info" content="Wählen Sie links einen Client aus." />
|
||||
)}
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div className="e-card monitoring-detail-card">
|
||||
<div className="e-card-header">
|
||||
<div className="e-card-header-caption">
|
||||
<div className="e-card-title">Der letzte Screenshot</div>
|
||||
</div>
|
||||
</div>
|
||||
<div className="e-card-content">
|
||||
{selectedClient ? (
|
||||
<>
|
||||
{screenshotErrored ? (
|
||||
<MessageComponent severity="Warning" content="Für diesen Client liegt noch kein Screenshot vor." />
|
||||
) : (
|
||||
<img
|
||||
src={selectedClientScreenshotUrl || selectedClient.screenshotUrl}
|
||||
alt={`Screenshot ${selectedClient.uuid}`}
|
||||
className="monitoring-screenshot"
|
||||
onError={() => setScreenshotErrored(true)}
|
||||
/>
|
||||
)}
|
||||
<div className="monitoring-screenshot-meta">
|
||||
<span>Empfangen: {formatTimestamp(selectedClient.lastScreenshotAnalyzed)}</span>
|
||||
<span>
|
||||
Typ:{' '}
|
||||
{screenshotTypeBadge(
|
||||
selectedClient.latestScreenshotType,
|
||||
!!selectedClient.hasActivePriorityScreenshot
|
||||
)}
|
||||
</span>
|
||||
</div>
|
||||
</>
|
||||
) : (
|
||||
<MessageComponent severity="Info" content="Kein Client ausgewählt." />
|
||||
)}
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div className="e-card monitoring-detail-card">
|
||||
<div className="e-card-header">
|
||||
<div className="e-card-header-caption">
|
||||
<div className="e-card-title">Letzter Fehler</div>
|
||||
</div>
|
||||
</div>
|
||||
<div className="e-card-content">
|
||||
{selectedClient?.latestError ? (
|
||||
<div className="monitoring-error-box">
|
||||
<div className="monitoring-error-time">{formatTimestamp(selectedClient.latestError.timestamp)}</div>
|
||||
<div className="monitoring-error-message">{selectedClient.latestError.message}</div>
|
||||
</div>
|
||||
) : (
|
||||
<MessageComponent severity="Success" content="Kein ERROR-Log für den ausgewählten Client gefunden." />
|
||||
)}
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
)}
|
||||
|
||||
<div className="monitoring-lower-grid">
|
||||
<div className="monitoring-panel">
|
||||
<div className="monitoring-panel-header monitoring-panel-header-stacked">
|
||||
<div>
|
||||
<h3>Client-Logs</h3>
|
||||
<span>{selectedClient ? `Client ${selectedClient.uuid}` : 'Kein Client ausgewählt'}</span>
|
||||
</div>
|
||||
<div className="monitoring-toolbar-field monitoring-toolbar-field-compact">
|
||||
<label>Level</label>
|
||||
<DropDownListComponent
|
||||
dataSource={logLevelOptions}
|
||||
fields={{ text: 'text', value: 'value' }}
|
||||
value={logLevel}
|
||||
change={(args: { value: string }) => setLogLevel(String(args.value))}
|
||||
/>
|
||||
</div>
|
||||
</div>
|
||||
{logsLoading && <MessageComponent severity="Info" content="Client-Logs werden geladen ..." />}
|
||||
<GridComponent
|
||||
dataSource={clientLogs}
|
||||
allowPaging={true}
|
||||
pageSettings={{ pageSize: 8 }}
|
||||
allowSorting={true}
|
||||
height={320}
|
||||
rowSelected={(args: { data: MonitoringLogEntry }) => {
|
||||
setSelectedLogEntry(args.data);
|
||||
}}
|
||||
>
|
||||
<ColumnsDirective>
|
||||
<ColumnDirective field="timestamp" headerText="Zeit" width="180" template={(props: MonitoringLogEntry) => formatTimestamp(props.timestamp)} />
|
||||
<ColumnDirective field="level" headerText="Level" width="90" />
|
||||
<ColumnDirective field="message" headerText="Nachricht" width="360" />
|
||||
</ColumnsDirective>
|
||||
<Inject services={[Page, Sort]} />
|
||||
</GridComponent>
|
||||
</div>
|
||||
|
||||
<div className="monitoring-panel">
|
||||
<div className="monitoring-panel-header">
|
||||
<h3>Letzte Fehler systemweit</h3>
|
||||
<span>{recentErrors.length} Einträge</span>
|
||||
</div>
|
||||
<GridComponent dataSource={recentErrors} allowPaging={true} pageSettings={{ pageSize: 8 }} allowSorting={true} height={320}>
|
||||
<ColumnsDirective>
|
||||
<ColumnDirective field="timestamp" headerText="Zeit" width="180" template={(props: MonitoringLogEntry) => formatTimestamp(props.timestamp)} />
|
||||
<ColumnDirective field="client_uuid" headerText="Client" width="220" />
|
||||
<ColumnDirective field="message" headerText="Nachricht" width="360" />
|
||||
</ColumnsDirective>
|
||||
<Inject services={[Page, Sort]} />
|
||||
</GridComponent>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<DialogComponent
|
||||
isModal={true}
|
||||
visible={!!selectedLogEntry}
|
||||
width="860px"
|
||||
minHeight="420px"
|
||||
header="Log-Details"
|
||||
animationSettings={{ effect: 'None' }}
|
||||
buttons={[]}
|
||||
showCloseIcon={true}
|
||||
close={() => setSelectedLogEntry(null)}
|
||||
>
|
||||
{selectedLogEntry && (
|
||||
<div className="monitoring-log-dialog-body">
|
||||
<div className="monitoring-log-dialog-content">
|
||||
<div className="monitoring-log-detail-row">
|
||||
<span>Zeit</span>
|
||||
<strong>{formatTimestamp(selectedLogEntry.timestamp)}</strong>
|
||||
</div>
|
||||
<div className="monitoring-log-detail-row">
|
||||
<span>Level</span>
|
||||
<strong>{selectedLogEntry.level || 'Unbekannt'}</strong>
|
||||
</div>
|
||||
<div className="monitoring-log-detail-row">
|
||||
<span>Nachricht</span>
|
||||
<strong style={{ whiteSpace: 'normal', textAlign: 'left' }}>{selectedLogEntry.message}</strong>
|
||||
</div>
|
||||
<div>
|
||||
<div className="monitoring-log-context-title">Kontext</div>
|
||||
<pre className="monitoring-log-context">{renderContext(selectedLogEntry.context)}</pre>
|
||||
</div>
|
||||
</div>
|
||||
<div className="monitoring-log-dialog-actions">
|
||||
<ButtonComponent onClick={() => setSelectedLogEntry(null)}>Schließen</ButtonComponent>
|
||||
</div>
|
||||
</div>
|
||||
)}
|
||||
</DialogComponent>
|
||||
</div>
|
||||
);
|
||||
};
|
||||
|
||||
export default MonitoringDashboard;
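The `buildScreenshotUrl` helper above busts the browser image cache by appending a `v=` query parameter derived from the screenshot hash (falling back to the analysis timestamp, then the overview timestamp). The query-string logic can be exercised in isolation; in this sketch the client type is reduced to just the fields involved (illustrative only, not the real `MonitoringClient`):

```typescript
// Reduced client shape for illustration — only the fields the URL builder reads.
interface ScreenshotSource {
  screenshotUrl: string;
  lastScreenshotHash?: string | null;
  lastScreenshotAnalyzed?: string | null;
}

function withCacheBuster(client: ScreenshotSource, overviewTimestamp?: string | null): string {
  // Prefer a content hash, then the analysis timestamp, then the overview timestamp.
  const refreshKey = client.lastScreenshotHash || client.lastScreenshotAnalyzed || overviewTimestamp;
  if (!refreshKey) return client.screenshotUrl;

  // Append ?v=… or &v=… depending on whether a query string already exists.
  const separator = client.screenshotUrl.includes('?') ? '&' : '?';
  return `${client.screenshotUrl}${separator}v=${encodeURIComponent(refreshKey)}`;
}

console.log(withCacheBuster({ screenshotUrl: '/shots/a.png', lastScreenshotHash: 'abc' }, null));
// → /shots/a.png?v=abc
console.log(withCacheBuster({ screenshotUrl: '/shots/a.png?w=640' }, 'k1'));
// → /shots/a.png?w=640&v=k1
```

Because the key changes whenever a new screenshot arrives, the `<img src>` changes too, which is what forces the browser to re-fetch instead of serving the cached image.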
177 dashboard/src/ressourcen.css (new file)
@@ -0,0 +1,177 @@
/* Ressourcen - Timeline Schedule Styles */

.ressourcen-container {
  padding: 20px;
  background-color: #f5f5f5;
  min-height: 100vh;
}

.ressourcen-title {
  font-size: 28px;
  font-weight: 600;
  margin-bottom: 20px;
  color: #333;
}

.ressourcen-controls {
  display: flex;
  flex-wrap: wrap;
  gap: 15px;
  margin-bottom: 30px;
  align-items: center;
  background-color: white;
  padding: 15px;
  border-radius: 8px;
  box-shadow: 0 2px 4px rgb(0 0 0 / 10%);
}

.ressourcen-control-group {
  display: flex;
  align-items: center;
  gap: 10px;
}

.ressourcen-label {
  font-weight: 500;
  color: #555;
  white-space: nowrap;
}

.ressourcen-button-group {
  display: flex;
  gap: 8px;
}

.ressourcen-button {
  border-radius: 4px !important;
  font-weight: 500;
}

/* Group Order Panel */
.ressourcen-order-panel {
  background: white;
  padding: 15px;
  margin-bottom: 15px;
  border-radius: 8px;
  box-shadow: 0 2px 4px rgb(0 0 0 / 10%);
}

.ressourcen-order-header {
  width: 100%;
}

.ressourcen-order-list {
  display: flex;
  flex-direction: column;
  gap: 8px;
  max-height: 250px;
  overflow-y: auto;
  padding: 8px;
  background-color: #f9f9f9;
  border-radius: 4px;
}

.ressourcen-order-item {
  display: flex;
  align-items: center;
  gap: 12px;
  padding: 8px;
  background: white;
  border: 1px solid #e0e0e0;
  border-radius: 4px;
  font-size: 13px;
}

.ressourcen-order-position {
  font-weight: 600;
  color: #666;
  min-width: 24px;
  text-align: right;
}

.ressourcen-order-name {
  flex: 1;
  color: #333;
}

.ressourcen-order-buttons {
  display: flex;
  gap: 4px;
}

.ressourcen-order-buttons .e-btn {
  min-width: 32px !important;
}

.ressourcen-loading {
  text-align: center;
  padding: 40px;
  background-color: white;
  border-radius: 8px;
  box-shadow: 0 2px 4px rgb(0 0 0 / 10%);
}

.ressourcen-loading p {
  font-size: 16px;
  color: #666;
}

.ressourcen-timeline-wrapper {
  background-color: white;
  border-radius: 8px;
  box-shadow: 0 2px 8px rgb(0 0 0 / 10%);
  overflow: hidden;
  display: flex;
  flex-direction: column;
}

/* Scheduler Timeline Styling */
.e-schedule {
  font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu,
    Cantarell, 'Fira Sans', 'Droid Sans', 'Helvetica Neue', sans-serif;
}

.e-schedule .e-timeline-view {
  border: none;
}

.e-schedule .e-date-header {
  background-color: #f9f9f9;
  border-bottom: 1px solid #e0e0e0;
}

.e-schedule .e-header-cells {
  font-weight: 600;
  color: #333;
}

.ressourcen-timeline-wrapper .e-schedule {
  flex: 1;
  height: 100% !important;
}

.e-schedule .e-work-cells {
  background-color: #fafafa;
  border-color: #f0f0f0;
}

/* Set compact row height */
.e-schedule .e-timeline-view .e-content-wrap table tbody tr {
  height: 65px;
}

.e-schedule .e-timeline-view .e-content-wrap .e-work-cells {
  height: 65px;
}

/* Event bar styling */
.e-schedule .e-appointment {
  border-radius: 4px;
  color: white;
  line-height: normal;
}

.e-schedule .e-appointment .e-subject {
  font-size: 12px;
  font-weight: 500;
}
@@ -1,8 +1,373 @@
import React from 'react';
const Ressourcen: React.FC = () => (
  <div>
    <h2 className="text-xl font-bold mb-4">Ressourcen</h2>
    <p>Willkommen im Infoscreen-Management Ressourcen.</p>
  </div>
);
import React, { useEffect, useState } from 'react';
import {
  ScheduleComponent,
  ViewsDirective,
  ViewDirective,
  Inject,
  TimelineViews,
  Resize,
  DragAndDrop,
  ResourcesDirective,
  ResourceDirective,
} from '@syncfusion/ej2-react-schedule';
import { ButtonComponent } from '@syncfusion/ej2-react-buttons';
import { fetchGroupsWithClients, type Group } from './apiClients';
import { fetchEvents } from './apiEvents';
import { getGroupColor } from './groupColors';
import './ressourcen.css';

interface ScheduleEvent {
  Id: number;
  Subject: string;
  StartTime: Date;
  EndTime: Date;
  ResourceId: number;
  EventType?: string;
}

type TimelineView = 'day' | 'week';

const Ressourcen: React.FC = () => {
  const [scheduleData, setScheduleData] = useState<ScheduleEvent[]>([]);
  const [groups, setGroups] = useState<Group[]>([]);
  const [groupOrder, setGroupOrder] = useState<number[]>([]);
  const [showOrderPanel, setShowOrderPanel] = useState<boolean>(false);
  const [timelineView] = useState<TimelineView>('day');
  const [viewDate, setViewDate] = useState<Date>(() => {
    const now = new Date();
    now.setHours(0, 0, 0, 0);
    return now;
  });
  const [loading, setLoading] = useState<boolean>(false);
  const scheduleRef = React.useRef<ScheduleComponent>(null);

  // Calculate dynamic height based on number of groups
  const calculatedHeight = React.useMemo(() => {
    const rowHeight = 65; // px per row
    const headerHeight = 100; // approx header height
    const totalHeight = groups.length * rowHeight + headerHeight;
    return `${totalHeight}px`;
  }, [groups.length]);

  // Load groups on mount
  useEffect(() => {
    const loadGroups = async () => {
      try {
        console.log('[Ressourcen] Loading groups...');
        const fetchedGroups = await fetchGroupsWithClients();
        console.log('[Ressourcen] Fetched groups:', fetchedGroups);
        // Filter out "Nicht zugeordnet" but show all other groups even if empty
        const filteredGroups = fetchedGroups.filter(
          (group) => group.name !== 'Nicht zugeordnet'
        );
        console.log('[Ressourcen] Filtered groups:', filteredGroups);
        setGroups(filteredGroups);
      } catch (error) {
        console.error('Fehler beim Laden der Gruppen:', error);
      }
    };
    loadGroups();
  }, []);

  // Helper: Parse ISO date string
  const parseUTCDate = React.useCallback((dateStr: string): Date => {
    const utcStr = dateStr.endsWith('Z') ? dateStr : dateStr + 'Z';
    return new Date(utcStr);
  }, []);

  // Calculate date range based on view
  const getDateRange = React.useCallback((): { start: Date; end: Date } => {
    const start = new Date(viewDate);
    start.setHours(0, 0, 0, 0);

    const end = new Date(start);
    if (timelineView === 'day') {
      end.setHours(23, 59, 59, 999);
    } else if (timelineView === 'week') {
      end.setDate(start.getDate() + 6);
      end.setHours(23, 59, 59, 999);
    }
    return { start, end };
  }, [viewDate, timelineView]);

  // Load events for all groups
  useEffect(() => {
    if (groups.length === 0) {
      console.log('[Ressourcen] No groups to load events for');
      setScheduleData([]);
      return;
    }

    const loadEventsForAllGroups = async () => {
      setLoading(true);
      console.log('[Ressourcen] Loading events for', groups.length, 'groups');
      try {
        const { start, end } = getDateRange();
        const events: ScheduleEvent[] = [];
        let eventId = 1;

        // Create events for each group
        for (const group of groups) {
          try {
            console.log(`[Ressourcen] Fetching events for group "${group.name}" (ID: ${group.id})`);
            const apiEvents = await fetchEvents(group.id.toString(), true, {
              start,
              end,
            });
            console.log(`[Ressourcen] Got ${apiEvents?.length || 0} events for group "${group.name}"`);

            if (Array.isArray(apiEvents) && apiEvents.length > 0) {
              for (const event of apiEvents) {
                const eventTitle = event.subject || event.title || 'Unnamed Event';
                const eventType = event.type || event.event_type || 'other';
                const eventStart = event.startTime || event.start;
                const eventEnd = event.endTime || event.end;

                if (!eventStart || !eventEnd) {
                  continue;
                }

                const parsedStart = parseUTCDate(eventStart);
                const parsedEnd = parseUTCDate(eventEnd);

                // Keep only events that overlap the visible range.
                if (parsedEnd < start || parsedStart > end) {
                  continue;
                }

                // Capitalize first letter of event type
                const formattedType = eventType.charAt(0).toUpperCase() + eventType.slice(1);

                events.push({
                  Id: eventId++,
                  Subject: `${formattedType} - ${eventTitle}`,
                  StartTime: parsedStart,
                  EndTime: parsedEnd,
                  ResourceId: group.id,
                  EventType: eventType,
                });
              }
            }
          } catch (error) {
            console.error(`Fehler beim Laden von Ereignissen für Gruppe ${group.name}:`, error);
          }
        }

        console.log('[Ressourcen] Final events:', events);
        setScheduleData(events);
      } finally {
        setLoading(false);
      }
    };

    loadEventsForAllGroups();
  }, [groups, timelineView, viewDate, parseUTCDate, getDateRange]);
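The filter `parsedEnd < start || parsedStart > end` in the loop above is the classic interval-overlap test: two ranges overlap unless one ends before the other begins. A minimal standalone sketch of that predicate (`overlaps` is an illustrative name, not part of the codebase):

```typescript
// Two intervals overlap unless one ends before the other begins —
// the negation of the condition the event loop uses to skip events.
function overlaps(aStart: Date, aEnd: Date, bStart: Date, bEnd: Date): boolean {
  return !(aEnd < bStart || aStart > bEnd);
}

const rangeStart = new Date('2024-05-01T00:00:00Z');
const rangeEnd = new Date('2024-05-01T23:59:59Z');

// An event spilling over midnight into the range still counts as visible.
console.log(overlaps(
  new Date('2024-04-30T22:00:00Z'), new Date('2024-05-01T02:00:00Z'),
  rangeStart, rangeEnd
)); // true

// An event entirely before the range is filtered out.
console.log(overlaps(
  new Date('2024-04-29T10:00:00Z'), new Date('2024-04-29T12:00:00Z'),
  rangeStart, rangeEnd
)); // false
```

`Date` objects compare via their numeric timestamps here, so `<` and `>` behave as expected; this is why an event that merely starts before `start` but ends inside the window is kept rather than dropped.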

  // Load saved group order from backend on mount
  useEffect(() => {
    const loadGroupOrder = async () => {
      try {
        console.log('[Ressourcen] Loading saved group order from backend...');
        const response = await fetch('/api/groups/order');
        if (response.ok) {
          const data = await response.json();
          console.log('[Ressourcen] Retrieved group order:', data);
          if (data.order && Array.isArray(data.order)) {
            // Filter order to only include IDs that exist in current groups
            const existingGroupIds = groups.map(g => g.id);
            const validOrder = data.order.filter((id: number) => existingGroupIds.includes(id));

            // Add any missing group IDs that aren't in the saved order
            const missingIds = existingGroupIds.filter(id => !validOrder.includes(id));
            const finalOrder = [...validOrder, ...missingIds];

            console.log('[Ressourcen] Synced order:', finalOrder);
            setGroupOrder(finalOrder);
          } else {
            // No saved order, use default (current group order)
            setGroupOrder(groups.map(g => g.id));
          }
        } else {
          console.log('[Ressourcen] No saved order found, using default');
          setGroupOrder(groups.map(g => g.id));
        }
      } catch (error) {
        console.error('[Ressourcen] Error loading group order:', error);
        // Fall back to default order
        setGroupOrder(groups.map(g => g.id));
      }
    };

    if (groups.length > 0 && groupOrder.length === 0) {
      loadGroupOrder();
    }
  }, [groups, groupOrder.length]);
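The sync step in the effect above — keep saved IDs that still exist, in their saved order, then append any groups missing from the saved order — is a small pure function that can be sketched and tested on its own (`reconcileOrder` is an illustrative name under those assumptions, not part of the codebase):

```typescript
// Merge a saved ordering with the current set of group IDs:
// stale saved IDs are dropped, newly created groups are appended at the end.
function reconcileOrder(savedOrder: number[], currentIds: number[]): number[] {
  const current = new Set(currentIds);
  const validOrder = savedOrder.filter(id => current.has(id));
  const missing = currentIds.filter(id => !validOrder.includes(id));
  return [...validOrder, ...missing];
}

console.log(reconcileOrder([3, 1, 99], [1, 2, 3])); // [ 3, 1, 2 ] — 99 dropped, 2 appended
```

Keeping this reconciliation idempotent means the saved order never has to be migrated when groups are added or deleted; the next save simply persists the merged result.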
|
||||
|
||||
// Move group up in order
|
||||
const moveGroupUp = (groupId: number) => {
|
||||
const index = groupOrder.indexOf(groupId);
|
||||
if (index > 0) {
|
||||
const newOrder = [...groupOrder];
|
||||
[newOrder[index - 1], newOrder[index]] = [newOrder[index], newOrder[index - 1]];
|
||||
setGroupOrder(newOrder);
|
||||
}
|
||||
};
|
||||
|
||||
// Move group down in order
|
||||
const moveGroupDown = (groupId: number) => {
|
||||
const index = groupOrder.indexOf(groupId);
|
||||
if (index < groupOrder.length - 1) {
|
||||
const newOrder = [...groupOrder];
|
||||
[newOrder[index], newOrder[index + 1]] = [newOrder[index + 1], newOrder[index]];
|
||||
setGroupOrder(newOrder);
|
||||
}
|
||||
};
|
||||
|
||||
// Save group order to backend
|
||||
const saveGroupOrder = async () => {
|
||||
try {
|
||||
console.log('[Ressourcen] Saving group order:', groupOrder);
|
||||
const response = await fetch('/api/groups/order', {
|
||||
method: 'POST',
|
||||
headers: { 'Content-Type': 'application/json' },
|
||||
body: JSON.stringify({ order: groupOrder }),
|
||||
});
|
||||
if (!response.ok) throw new Error('Failed to save group order');
|
||||
console.log('[Ressourcen] Group order saved successfully');
|
||||
} catch (error) {
|
||||
console.error('Fehler beim Speichern der Reihenfolge:', error);
|
||||
}
|
||||
};
|
||||
|
||||
// Get sorted groups based on current order
|
||||
const sortedGroups = React.useMemo(() => {
|
||||
if (groupOrder.length === 0) return groups;
|
||||
|
||||
// Map order to actual groups
|
||||
const ordered = groupOrder
|
||||
.map(id => groups.find(g => g.id === id))
|
||||
.filter((g): g is Group => g !== undefined);
|
||||
|
||||
// Add any groups not in the order (new groups)
|
||||
const orderedIds = new Set(ordered.map(g => g.id));
|
||||
const unordered = groups.filter(g => !orderedIds.has(g.id));
|
||||
|
||||
return [...ordered, ...unordered];
|
||||
}, [groups, groupOrder]);

  return (
    <div className="ressourcen-container">
      <h1 className="ressourcen-title">📊 Ressourcen - Übersicht</h1>

      <div style={{ marginBottom: '15px' }}>
        <ButtonComponent
          cssClass={showOrderPanel ? 'e-success' : 'e-outline'}
          onClick={() => setShowOrderPanel(!showOrderPanel)}
        >
          {showOrderPanel ? '✓ Reihenfolge' : 'Reihenfolge ändern'}
        </ButtonComponent>
      </div>

      {/* Group Order Control Panel */}
      {showOrderPanel && (
        <div className="ressourcen-order-panel">
          <div className="ressourcen-order-header">
            <h3 style={{ margin: '0 0 12px 0', fontSize: '14px', fontWeight: 600 }}>
              📋 Reihenfolge der Gruppen
            </h3>
            <div className="ressourcen-order-list">
              {sortedGroups.map((group, index) => (
                <div key={group.id} className="ressourcen-order-item">
                  <span className="ressourcen-order-position">{index + 1}.</span>
                  <span className="ressourcen-order-name">{group.name}</span>
                  <div className="ressourcen-order-buttons">
                    <ButtonComponent
                      cssClass="e-outline e-small"
                      onClick={() => moveGroupUp(group.id)}
                      disabled={index === 0}
                      title="Nach oben"
                      style={{ padding: '4px 8px', minWidth: '32px' }}
                    >
                      ↑
                    </ButtonComponent>
                    <ButtonComponent
                      cssClass="e-outline e-small"
                      onClick={() => moveGroupDown(group.id)}
                      disabled={index === sortedGroups.length - 1}
                      title="Nach unten"
                      style={{ padding: '4px 8px', minWidth: '32px' }}
                    >
                      ↓
                    </ButtonComponent>
                  </div>
                </div>
              ))}
            </div>
            <ButtonComponent
              cssClass="e-success"
              onClick={saveGroupOrder}
              style={{ marginTop: '12px', width: '100%' }}
            >
              💾 Reihenfolge speichern
            </ButtonComponent>
          </div>
        </div>
      )}

      {/* Timeline Schedule */}
      {loading ? (
        <div className="ressourcen-loading">
          <p>Wird geladen...</p>
        </div>
      ) : (
        <div className="ressourcen-timeline-wrapper">
          <ScheduleComponent
            ref={scheduleRef}
            height={calculatedHeight}
            width="100%"
            eventSettings={{ dataSource: scheduleData }}
            selectedDate={viewDate}
            currentView={timelineView === 'day' ? 'TimelineDay' : 'TimelineWeek'}
            group={{ resources: ['Groups'], allowGroupEdit: false }}
            timeScale={{ interval: 60, slotCount: 1 }}
            rowAutoHeight={false}
            actionComplete={(args) => {
              if (args.requestType === 'dateNavigate' || args.requestType === 'viewNavigate') {
                const selected = scheduleRef.current?.selectedDate;
                if (selected) {
                  const normalized = new Date(selected);
                  normalized.setHours(0, 0, 0, 0);
                  setViewDate(normalized);
                }
              }
            }}
          >
            <ViewsDirective>
              <ViewDirective option="TimelineDay" displayName="Tag"></ViewDirective>
              <ViewDirective option="TimelineWeek" displayName="Woche"></ViewDirective>
            </ViewsDirective>
            <ResourcesDirective>
              <ResourceDirective
                field="ResourceId"
                title="Gruppe"
                name="Groups"
                allowMultiple={false}
                dataSource={sortedGroups.map((g) => ({
                  text: g.name,
                  id: g.id,
                  color: getGroupColor(g.id.toString(), groups.map(grp => ({ id: grp.id.toString() }))),
                }))}
                textField="text"
                idField="id"
                colorField="color"
              />
            </ResourcesDirective>
            <Inject services={[TimelineViews, Resize, DragAndDrop]} />
          </ScheduleComponent>
        </div>
      )}
    </div>
  );
};

export default Ressourcen;
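The `sortedGroups` memo above reduces to a pure merge of the persisted id order with the live group list: ids present in the saved order come first, new groups are appended, and stale ids are dropped. A minimal standalone sketch of that merge (the `Group` shape is simplified here for illustration):

```typescript
type Group = { id: number; name: string };

// Merge a persisted id order with the live group list:
// known ids keep their saved position, unknown new groups
// are appended, and ids with no matching group are dropped.
function orderGroups(groups: Group[], order: number[]): Group[] {
  const ordered = order
    .map(id => groups.find(g => g.id === id))
    .filter((g): g is Group => g !== undefined);
  const orderedIds = new Set(ordered.map(g => g.id));
  return [...ordered, ...groups.filter(g => !orderedIds.has(g.id))];
}
```

Because the merge is total (every live group appears exactly once), the panel never hides a newly created group even before its order has been saved.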
954 dashboard/src/settings.tsx Normal file
@@ -0,0 +1,954 @@
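The new `settings.tsx` below reads every system setting as a string (`'true'`/`'false'`, numeric text) and falls back to a default when a value is missing or unparseable. That convention, repeated in `loadVideoSettings` and `loadRefreshSeconds`, can be captured in two small helpers (a sketch; these helper names are illustrative and do not exist in the `apiSystemSettings` module):

```typescript
// The settings API stores every value as a string; these helpers mirror
// the fallback parsing used throughout the settings component.
function parseBoolSetting(value: string | null, fallback: boolean): boolean {
  return value == null ? fallback : value === 'true';
}

function parseNumberSetting(value: string | null, fallback: number): number {
  if (value == null) return fallback;
  const n = Number(value);
  return Number.isFinite(n) ? n : fallback;
}
```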
import React from 'react';
import { TabComponent, TabItemDirective, TabItemsDirective } from '@syncfusion/ej2-react-navigations';
import { NumericTextBoxComponent, TextBoxComponent } from '@syncfusion/ej2-react-inputs';
import { ButtonComponent, CheckBoxComponent } from '@syncfusion/ej2-react-buttons';
import { ToastComponent } from '@syncfusion/ej2-react-notifications';
import { listHolidays, uploadHolidaysCsv, type Holiday } from './apiHolidays';
import { getSupplementTableSettings, updateSupplementTableSettings, getHolidayBannerSetting, updateHolidayBannerSetting } from './apiSystemSettings';
import { useAuth } from './useAuth';
import { DropDownListComponent } from '@syncfusion/ej2-react-dropdowns';
import { listAcademicPeriods, getActiveAcademicPeriod, setActiveAcademicPeriod, type AcademicPeriod } from './apiAcademicPeriods';
import { Link } from 'react-router-dom';

// Minimal event type for Syncfusion Tab 'selected' callback
type TabSelectedEvent = { selectedIndex?: number };

const Einstellungen: React.FC = () => {
  // Presentation settings state
  const [presentationInterval, setPresentationInterval] = React.useState(10);
  const [presentationPageProgress, setPresentationPageProgress] = React.useState(true);
  const [presentationAutoProgress, setPresentationAutoProgress] = React.useState(true);
  const [presentationBusy, setPresentationBusy] = React.useState(false);

  // Load settings from backend
  const loadPresentationSettings = React.useCallback(async () => {
    try {
      const keys = [
        'presentation_interval',
        'presentation_page_progress',
        'presentation_auto_progress',
      ];
      const results = await Promise.all(keys.map(k => import('./apiSystemSettings').then(m => m.getSetting(k))));
      setPresentationInterval(Number(results[0].value) || 10);
      setPresentationPageProgress(results[1].value === 'true');
      setPresentationAutoProgress(results[2].value === 'true');
    } catch {
      showToast('Fehler beim Laden der Präsentations-Einstellungen', 'e-toast-danger');
    }
  }, []);

  React.useEffect(() => {
    loadPresentationSettings();
  }, [loadPresentationSettings]);

  const onSavePresentationSettings = async () => {
    setPresentationBusy(true);
    try {
      const api = await import('./apiSystemSettings');
      await Promise.all([
        api.updateSetting('presentation_interval', String(presentationInterval)),
        api.updateSetting('presentation_page_progress', presentationPageProgress ? 'true' : 'false'),
        api.updateSetting('presentation_auto_progress', presentationAutoProgress ? 'true' : 'false'),
      ]);
      showToast('Präsentations-Einstellungen gespeichert', 'e-toast-success');
    } catch {
      showToast('Fehler beim Speichern der Präsentations-Einstellungen', 'e-toast-danger');
    } finally {
      setPresentationBusy(false);
    }
  };
  const { user } = useAuth();
  const toastRef = React.useRef<ToastComponent>(null);

  // Holidays state
  const [file, setFile] = React.useState<File | null>(null);
  const [busy, setBusy] = React.useState(false);
  const [message, setMessage] = React.useState<string | null>(null);
  const [holidays, setHolidays] = React.useState<Holiday[]>([]);
  const [periods, setPeriods] = React.useState<AcademicPeriod[]>([]);
  const [activePeriodId, setActivePeriodId] = React.useState<number | null>(null);
  const periodOptions = React.useMemo(() =>
    periods.map(p => ({ id: p.id, name: p.display_name || p.name })),
    [periods]
  );

  // Supplement table state
  const [supplementUrl, setSupplementUrl] = React.useState('');
  const [supplementEnabled, setSupplementEnabled] = React.useState(false);
  const [supplementBusy, setSupplementBusy] = React.useState(false);

  // Holiday banner state
  const [holidayBannerEnabled, setHolidayBannerEnabled] = React.useState(true);
  const [holidayBannerBusy, setHolidayBannerBusy] = React.useState(false);

  // Video defaults state (Admin+)
  const [videoAutoplay, setVideoAutoplay] = React.useState<boolean>(true);
  const [videoLoop, setVideoLoop] = React.useState<boolean>(true);
  const [videoVolume, setVideoVolume] = React.useState<number>(0.8);
  const [videoMuted, setVideoMuted] = React.useState<boolean>(false);
  const [videoBusy, setVideoBusy] = React.useState<boolean>(false);

  // Organization name state (Superadmin only)
  const [organizationName, setOrganizationName] = React.useState<string>('');
  const [organizationBusy, setOrganizationBusy] = React.useState<boolean>(false);

  // Advanced options: Scheduler refresh interval (Superadmin)
  const [refreshSeconds, setRefreshSeconds] = React.useState<number>(0);
  const [refreshBusy, setRefreshBusy] = React.useState<boolean>(false);

  const showToast = (content: string, cssClass: string = 'e-toast-success') => {
    if (toastRef.current) {
      toastRef.current.show({
        content,
        cssClass,
        timeOut: 3000,
      });
    }
  };

  const refresh = React.useCallback(async () => {
    try {
      const data = await listHolidays();
      setHolidays(data.holidays);
    } catch (e) {
      const msg = e instanceof Error ? e.message : 'Fehler beim Laden der Ferien';
      setMessage(msg);
      showToast(msg, 'e-toast-danger');
    }
  }, []);

  const loadSupplementSettings = React.useCallback(async () => {
    try {
      const data = await getSupplementTableSettings();
      setSupplementUrl(data.url || '');
      setSupplementEnabled(data.enabled || false);
    } catch (e) {
      const msg = e instanceof Error ? e.message : 'Fehler beim Laden der Vertretungsplan-Einstellungen';
      showToast(msg, 'e-toast-danger');
    }
  }, []);

  const loadHolidayBannerSetting = React.useCallback(async () => {
    try {
      const data = await getHolidayBannerSetting();
      setHolidayBannerEnabled(data.enabled);
    } catch (e) {
      console.error('Fehler beim Laden der Ferienbanner-Einstellung:', e);
    }
  }, []);

  // Load video default settings (with fallbacks)
  const loadVideoSettings = React.useCallback(async () => {
    try {
      const api = await import('./apiSystemSettings');
      const keys = ['video_autoplay', 'video_loop', 'video_volume', 'video_muted'] as const;
      const [autoplay, loop, volume, muted] = await Promise.all(
        keys.map(k => api.getSetting(k).catch(() => ({ value: null } as { value: string | null })))
      );
      setVideoAutoplay(autoplay.value == null ? true : autoplay.value === 'true');
      setVideoLoop(loop.value == null ? true : loop.value === 'true');
      const volParsed = volume.value == null ? 0.8 : parseFloat(String(volume.value));
      setVideoVolume(Number.isFinite(volParsed) ? volParsed : 0.8);
      setVideoMuted(muted.value == null ? false : muted.value === 'true');
    } catch (e) {
      // Fallback to defaults on any error
      setVideoAutoplay(true);
      setVideoLoop(true);
      setVideoVolume(0.8);
      setVideoMuted(false);
      const msg = e instanceof Error ? e.message : 'Fehler beim Laden der Video-Standards';
      showToast(msg, 'e-toast-warning');
    }
  }, []);

  // Load organization name (superadmin only, but endpoint is public)
  const loadOrganizationName = React.useCallback(async () => {
    try {
      const api = await import('./apiSystemSettings');
      const data = await api.getOrganizationName();
      setOrganizationName(data.name || '');
    } catch (e) {
      console.error('Fehler beim Laden des Organisationsnamens:', e);
    }
  }, []);

  // Load scheduler refresh seconds (superadmin)
  const loadRefreshSeconds = React.useCallback(async () => {
    try {
      const api = await import('./apiSystemSettings');
      const setting = await api.getSetting('refresh_seconds');
      const parsed = Number(setting.value ?? 0);
      setRefreshSeconds(Number.isFinite(parsed) ? parsed : 0);
    } catch {
      // Default to 0 if not present
      setRefreshSeconds(0);
    }
  }, []);

  const loadAcademicPeriods = React.useCallback(async () => {
    try {
      const [list, active] = await Promise.all([
        listAcademicPeriods(),
        getActiveAcademicPeriod(),
      ]);
      setPeriods(list);
      setActivePeriodId(active ? active.id : null);
    } catch (e) {
      const msg = e instanceof Error ? e.message : 'Fehler beim Laden der Schuljahre/Perioden';
      showToast(msg, 'e-toast-danger');
    }
  }, []);

  React.useEffect(() => {
    refresh();
    loadHolidayBannerSetting(); // Everyone can see this
    if (user) {
      // Academic periods for all users
      loadAcademicPeriods();
      // System settings only for admin/superadmin (will render only if allowed)
      if (['admin', 'superadmin'].includes(user.role)) {
        loadSupplementSettings();
        loadPresentationSettings();
        loadVideoSettings();
      }
      // Organization name only for superadmin
      if (user.role === 'superadmin') {
        loadOrganizationName();
        loadRefreshSeconds();
      }
    }
  }, [refresh, loadSupplementSettings, loadAcademicPeriods, loadPresentationSettings, loadVideoSettings, loadHolidayBannerSetting, loadOrganizationName, loadRefreshSeconds, user]);

  const onUpload = async () => {
    if (!file) return;
    setBusy(true);
    setMessage(null);
    try {
      const res = await uploadHolidaysCsv(file);
      const msg = `Import erfolgreich: ${res.inserted} neu, ${res.updated} aktualisiert.`;
      setMessage(msg);
      showToast(msg, 'e-toast-success');
      await refresh();
    } catch (e) {
      const msg = e instanceof Error ? e.message : 'Fehler beim Import.';
      setMessage(msg);
      showToast(msg, 'e-toast-danger');
    } finally {
      setBusy(false);
    }
  };

  const onSaveSupplementSettings = async () => {
    setSupplementBusy(true);
    try {
      await updateSupplementTableSettings(supplementUrl, supplementEnabled);
      showToast('Vertretungsplan-Einstellungen erfolgreich gespeichert', 'e-toast-success');
    } catch (e) {
      const msg = e instanceof Error ? e.message : 'Fehler beim Speichern der Einstellungen';
      showToast(msg, 'e-toast-danger');
    } finally {
      setSupplementBusy(false);
    }
  };

  const onTestSupplementUrl = () => {
    if (supplementUrl) {
      window.open(supplementUrl, '_blank');
    } else {
      showToast('Bitte geben Sie zuerst eine URL ein', 'e-toast-warning');
    }
  };

  const onSaveHolidayBannerSetting = async () => {
    setHolidayBannerBusy(true);
    try {
      await updateHolidayBannerSetting(holidayBannerEnabled);
      showToast('Ferienbanner-Einstellung gespeichert', 'e-toast-success');
    } catch (e) {
      const msg = e instanceof Error ? e.message : 'Fehler beim Speichern';
      showToast(msg, 'e-toast-danger');
    } finally {
      setHolidayBannerBusy(false);
    }
  };

  const onSaveVideoSettings = async () => {
    setVideoBusy(true);
    try {
      const api = await import('./apiSystemSettings');
      await Promise.all([
        api.updateSetting('video_autoplay', videoAutoplay ? 'true' : 'false'),
        api.updateSetting('video_loop', videoLoop ? 'true' : 'false'),
        api.updateSetting('video_volume', String(videoVolume)),
        api.updateSetting('video_muted', videoMuted ? 'true' : 'false'),
      ]);
      showToast('Video-Standardeinstellungen gespeichert', 'e-toast-success');
    } catch (e) {
      const msg = e instanceof Error ? e.message : 'Fehler beim Speichern der Video-Standards';
      showToast(msg, 'e-toast-danger');
    } finally {
      setVideoBusy(false);
    }
  };

  const onSaveOrganizationName = async () => {
    setOrganizationBusy(true);
    try {
      const api = await import('./apiSystemSettings');
      await api.updateOrganizationName(organizationName);
      showToast('Organisationsname gespeichert', 'e-toast-success');
      // Trigger a custom event to notify App.tsx to refresh the header
      window.dispatchEvent(new Event('organizationNameUpdated'));
    } catch (e) {
      const msg = e instanceof Error ? e.message : 'Fehler beim Speichern des Organisationsnamens';
      showToast(msg, 'e-toast-danger');
    } finally {
      setOrganizationBusy(false);
    }
  };

  const onSaveRefreshSeconds = async () => {
    setRefreshBusy(true);
    try {
      const api = await import('./apiSystemSettings');
      await api.updateSetting('refresh_seconds', String(refreshSeconds));
      showToast('Scheduler-Refresh-Intervall gespeichert', 'e-toast-success');
    } catch (e) {
      const msg = e instanceof Error ? e.message : 'Fehler beim Speichern des Intervalls';
      showToast(msg, 'e-toast-danger');
    } finally {
      setRefreshBusy(false);
    }
  };

  // Determine which tabs to show based on role
  const isAdmin = !!(user && ['admin', 'superadmin'].includes(user.role));
  const isSuperadmin = !!(user && user.role === 'superadmin');

  // Preserve selected nested-tab indices to avoid resets on parent re-render
  const [academicTabIndex, setAcademicTabIndex] = React.useState(0);
  const [displayTabIndex, setDisplayTabIndex] = React.useState(0);
  const [mediaTabIndex, setMediaTabIndex] = React.useState(0);
  const [eventsTabIndex, setEventsTabIndex] = React.useState(0);
  const [usersTabIndex, setUsersTabIndex] = React.useState(0);
  const [systemTabIndex, setSystemTabIndex] = React.useState(0);

  // ---------- Leaf content functions (second-level tabs) ----------
  // Academic Calendar
  // (Old separate Import/List tab contents removed in favor of combined tab)

  // Combined Import + List tab content
  const HolidaysImportAndListContent = () => (
    <div style={{ padding: 20 }}>
      {/* Import Card */}
      <div className="e-card" style={{ marginBottom: 20 }}>
        <div className="e-card-header">
          <div className="e-card-header-caption">
            <div className="e-card-header-title">Schulferien importieren</div>
          </div>
        </div>
        <div className="e-card-content">
          <p style={{ marginBottom: 12, fontSize: '14px', color: '#666' }}>
            Unterstützte Formate:
            <br />• CSV mit Kopfzeile: <code>name</code>, <code>start_date</code>, <code>end_date</code>, optional <code>region</code>
            <br />• TXT/CSV ohne Kopfzeile mit Spalten: interner Name, <strong>Name</strong>, <strong>Start (YYYYMMDD)</strong>, <strong>Ende (YYYYMMDD)</strong>, optional interne Info (ignoriert)
          </p>
          <div style={{ display: 'flex', alignItems: 'center', gap: 12, marginBottom: 12 }}>
            <input type="file" accept=".csv,text/csv,.txt,text/plain" onChange={e => setFile(e.target.files?.[0] ?? null)} />
            <ButtonComponent cssClass="e-primary" onClick={onUpload} disabled={!file || busy}>
              {busy ? 'Importiere…' : 'CSV/TXT importieren'}
            </ButtonComponent>
          </div>
          {message && <div style={{ marginTop: 8, fontSize: '14px' }}>{message}</div>}
        </div>
      </div>

      {/* List Card */}
      <div className="e-card">
        <div className="e-card-header">
          <div className="e-card-header-caption">
            <div className="e-card-header-title">Importierte Ferien</div>
          </div>
        </div>
        <div className="e-card-content">
          {holidays.length === 0 ? (
            <div style={{ fontSize: '14px', color: '#666' }}>Keine Einträge vorhanden.</div>
          ) : (
            <ul style={{ fontSize: '14px', listStyle: 'disc', paddingLeft: 24 }}>
              {holidays.slice(0, 20).map(h => (
                <li key={h.id}>
                  {h.name}: {h.start_date} – {h.end_date}
                  {h.region ? ` (${h.region})` : ''}
                </li>
              ))}
            </ul>
          )}
        </div>
      </div>

      {/* Dashboard Display Settings Card */}
      <div className="e-card" style={{ marginTop: 20 }}>
        <div className="e-card-header">
          <div className="e-card-header-caption">
            <div className="e-card-header-title">Dashboard-Anzeige</div>
          </div>
        </div>
        <div className="e-card-content">
          <div style={{ marginBottom: 16 }}>
            <CheckBoxComponent
              label="Ferienstatus-Banner auf Dashboard anzeigen"
              checked={holidayBannerEnabled}
              change={(e) => setHolidayBannerEnabled(e.checked || false)}
            />
            <div style={{ fontSize: '12px', color: '#666', marginTop: 4, marginLeft: 24 }}>
              Zeigt eine Information an, ob ein Ferienplan für die aktive Periode importiert wurde.
            </div>
          </div>
          <ButtonComponent
            cssClass="e-primary"
            onClick={onSaveHolidayBannerSetting}
            disabled={holidayBannerBusy}
          >
            {holidayBannerBusy ? 'Speichere…' : 'Einstellung speichern'}
          </ButtonComponent>
        </div>
      </div>
    </div>
  );

  const AcademicPeriodsContent = () => (
    <div style={{ padding: 20 }}>
      <div className="e-card">
        <div className="e-card-header">
          <div className="e-card-header-caption">
            <div className="e-card-header-title">Akademische Perioden</div>
          </div>
        </div>
        <div className="e-card-content">
          {periods.length === 0 ? (
            <div style={{ fontSize: '14px', color: '#666' }}>Keine Perioden gefunden.</div>
          ) : (
            <div style={{ display: 'flex', gap: 12, alignItems: 'center', flexWrap: 'wrap' }}>
              <div style={{ minWidth: 260 }}>
                <DropDownListComponent
                  dataSource={periodOptions}
                  fields={{ text: 'name', value: 'id' }}
                  value={activePeriodId ?? undefined}
                  change={(e) => setActivePeriodId(Number(e.value))}
                  placeholder="Aktive Periode wählen"
                  popupHeight="250px"
                />
              </div>
              <ButtonComponent
                cssClass="e-primary"
                disabled={activePeriodId == null}
                onClick={async () => {
                  if (activePeriodId == null) return;
                  try {
                    const p = await setActiveAcademicPeriod(activePeriodId);
                    showToast(`Aktive Periode gesetzt: ${p.display_name || p.name}`, 'e-toast-success');
                    await loadAcademicPeriods();
                  } catch (e) {
                    const msg = e instanceof Error ? e.message : 'Fehler beim Setzen der aktiven Periode';
                    showToast(msg, 'e-toast-danger');
                  }
                }}
              >
                Als aktiv setzen
              </ButtonComponent>
            </div>
          )}
        </div>
      </div>
    </div>
  );

  // Display & Clients
  const DisplayDefaultsContent = () => (
    <div style={{ padding: 20 }}>
      <div className="e-card">
        <div className="e-card-header">
          <div className="e-card-header-caption">
            <div className="e-card-header-title">Standard-Einstellungen</div>
          </div>
        </div>
        <div className="e-card-content" style={{ color: '#666', fontSize: 14 }}>
          Platzhalter für Anzeige-Defaults (z. B. Heartbeat-Timeout, Screenshot-Aufbewahrung, Standard-Gruppe).
        </div>
      </div>
    </div>
  );

  const ClientsConfigContent = () => (
    <div style={{ padding: 20 }}>
      <div className="e-card">
        <div className="e-card-header">
          <div className="e-card-header-caption">
            <div className="e-card-header-title">Client-Konfiguration</div>
          </div>
        </div>
        <div className="e-card-content">
          <div style={{ display: 'flex', gap: 12, flexWrap: 'wrap' }}>
            <Link to="/clients"><ButtonComponent>Infoscreen-Clients öffnen</ButtonComponent></Link>
            <Link to="/infoscr_groups"><ButtonComponent>Raumgruppen öffnen</ButtonComponent></Link>
          </div>
        </div>
      </div>
    </div>
  );

  const UploadSettingsContent = () => (
    <div style={{ padding: 20 }}>
      <div className="e-card">
        <div className="e-card-header">
          <div className="e-card-header-caption">
            <div className="e-card-header-title">Upload-Einstellungen</div>
          </div>
        </div>
        <div className="e-card-content" style={{ color: '#666', fontSize: 14 }}>
          Platzhalter für Upload-Limits, erlaubte Dateitypen und Standardspeicherorte.
        </div>
      </div>
    </div>
  );

  const ConversionStatusContent = () => (
    <div style={{ padding: 20 }}>
      <div className="e-card">
        <div className="e-card-header">
          <div className="e-card-header-caption">
            <div className="e-card-header-title">Konvertierungsstatus</div>
          </div>
        </div>
        <div className="e-card-content" style={{ color: '#666', fontSize: 14 }}>
          Platzhalter – Konversionsstatus-Übersicht kann hier integriert werden (siehe API /api/conversions/...).
        </div>
      </div>
    </div>
  );

  // Events
  const WebUntisSettingsContent = () => (
    <div style={{ padding: 20 }}>
      <div className="e-card">
        <div className="e-card-header">
          <div className="e-card-header-caption">
            <div className="e-card-header-title">WebUntis / Vertretungsplan</div>
          </div>
        </div>
        <div className="e-card-content">
          <div style={{ marginBottom: 16 }}>
            <label style={{ display: 'block', marginBottom: 8, fontWeight: 500 }}>
              Vertretungsplan URL
            </label>
            <TextBoxComponent
              placeholder="https://example.com/vertretungsplan"
              value={supplementUrl}
              change={(e) => setSupplementUrl(e.value || '')}
              cssClass="e-outline"
              width="100%"
            />
            <div style={{ fontSize: '12px', color: '#666', marginTop: 4 }}>
              Diese URL wird verwendet, um die Stundenplan-Änderungstabelle (Vertretungsplan) anzuzeigen.
            </div>
          </div>

          <div style={{ marginBottom: 16 }}>
            <CheckBoxComponent
              label="Vertretungsplan aktiviert"
              checked={supplementEnabled}
              change={(e) => setSupplementEnabled(e.checked || false)}
            />
          </div>

          <div style={{ display: 'flex', gap: 12 }}>
            <ButtonComponent
              cssClass="e-primary"
              onClick={onSaveSupplementSettings}
              disabled={supplementBusy}
            >
              {supplementBusy ? 'Speichere…' : 'Einstellungen speichern'}
            </ButtonComponent>
            <ButtonComponent
              cssClass="e-outline"
              onClick={onTestSupplementUrl}
              disabled={!supplementUrl}
            >
              Vorschau öffnen
            </ButtonComponent>
          </div>
        </div>
      </div>
    </div>
  );

  const PresentationsDefaultsContent = () => (
    <div style={{ padding: 20 }}>
      <div className="e-card">
        <div className="e-card-header">
          <div className="e-card-header-caption">
            <div className="e-card-header-title">Präsentationen</div>
          </div>
        </div>
        <div className="e-card-content" style={{ fontSize: 14 }}>
          <div style={{ marginBottom: 16 }}>
            <label style={{ display: 'block', marginBottom: 8, fontWeight: 500 }}>
              Slide-Show Intervall (Sekunden)
            </label>
            <NumericTextBoxComponent
              min={1}
              max={600}
              step={1}
              value={presentationInterval}
              format="n0"
              change={(e: { value: number }) => setPresentationInterval(Number(e.value) || 10)}
              placeholder="Intervall in Sekunden"
              width="120px"
            />
          </div>
          <div style={{ marginBottom: 16 }}>
            <CheckBoxComponent
              label="Seitenfortschritt anzeigen (Fortschrittsbalken)"
              checked={presentationPageProgress}
              change={e => setPresentationPageProgress(e.checked || false)}
            />
          </div>
          <div style={{ marginBottom: 16 }}>
            <CheckBoxComponent
              label="Präsentationsfortschritt anzeigen (Fortschrittsbalken)"
              checked={presentationAutoProgress}
              change={e => setPresentationAutoProgress(e.checked || false)}
            />
          </div>
          <ButtonComponent
            cssClass="e-primary"
            onClick={onSavePresentationSettings}
            style={{ marginTop: 8 }}
            disabled={presentationBusy}
          >
            {presentationBusy ? 'Speichere…' : 'Einstellungen speichern'}
          </ButtonComponent>
        </div>
      </div>
    </div>
  );

  const WebsitesDefaultsContent = () => (
    <div style={{ padding: 20 }}>
      <div className="e-card">
        <div className="e-card-header">
          <div className="e-card-header-caption">
            <div className="e-card-header-title">Webseiten</div>
          </div>
        </div>
        <div className="e-card-content" style={{ color: '#666', fontSize: 14 }}>
          Platzhalter für Standardwerte (Lade-Timeout, Intervall, Sandbox/Embedding) für Webseiten.
        </div>
      </div>
    </div>
  );

  const VideosDefaultsContent = () => (
    <div style={{ padding: 20 }}>
      <div className="e-card">
        <div className="e-card-header">
          <div className="e-card-header-caption">
            <div className="e-card-header-title">Videos</div>
          </div>
        </div>
        <div className="e-card-content" style={{ fontSize: 14 }}>
          <div style={{ marginBottom: 16 }}>
            <CheckBoxComponent
              label="Automatisch abspielen"
              checked={videoAutoplay}
              change={e => setVideoAutoplay(!!e.checked)}
            />
          </div>

          <div style={{ marginBottom: 16 }}>
            <CheckBoxComponent
              label="In Schleife abspielen"
              checked={videoLoop}
              change={e => setVideoLoop(!!e.checked)}
            />
          </div>

          <div style={{ marginBottom: 16 }}>
            <label style={{ display: 'block', marginBottom: 8, fontWeight: 500 }}>Lautstärke</label>
            <div style={{ display: 'flex', alignItems: 'center', gap: 12 }}>
              <NumericTextBoxComponent
                min={0}
                max={1}
                step={0.05}
                value={videoVolume}
                format="n2"
                change={(e: { value: number }) => {
                  const v = typeof e.value === 'number' ? e.value : Number(e.value);
                  if (Number.isFinite(v)) setVideoVolume(Math.max(0, Math.min(1, v)));
                }}
                placeholder="0.0–1.0"
                width="140px"
              />
              <CheckBoxComponent
                label="Ton aus"
                checked={videoMuted}
                change={e => setVideoMuted(!!e.checked)}
              />
            </div>
          </div>

          <ButtonComponent
            cssClass="e-primary"
            onClick={onSaveVideoSettings}
            disabled={videoBusy}
          >
            {videoBusy ? 'Speichere…' : 'Einstellungen speichern'}
          </ButtonComponent>
        </div>
      </div>
    </div>
  );

  const MessagesDefaultsContent = () => (
    <div style={{ padding: 20 }}>
      <div className="e-card">
        <div className="e-card-header">
          <div className="e-card-header-caption">
            <div className="e-card-header-title">Mitteilungen</div>
          </div>
        </div>
        <div className="e-card-content" style={{ color: '#666', fontSize: 14 }}>
          Platzhalter für Standardwerte (Dauer, Layout/Styling) für Mitteilungen.
        </div>
      </div>
    </div>
  );

  const OtherDefaultsContent = () => (
    <div style={{ padding: 20 }}>
      <div className="e-card">
        <div className="e-card-header">
          <div className="e-card-header-caption">
            <div className="e-card-header-title">Sonstige</div>
          </div>
        </div>
        <div className="e-card-content" style={{ color: '#666', fontSize: 14 }}>
          Platzhalter für sonstige Eventtypen.
        </div>
      </div>
    </div>
  );

  // Users
  const UsersQuickActionsContent = () => (
    <div style={{ padding: 20 }}>
      <div className="e-card">
        <div className="e-card-header">
          <div className="e-card-header-caption">
            <div className="e-card-header-title">Schnellaktionen</div>
          </div>
        </div>
        <div className="e-card-content">
          <div style={{ display: 'flex', gap: 12, flexWrap: 'wrap' }}>
            <Link to="/benutzer"><ButtonComponent cssClass="e-primary">Benutzerverwaltung öffnen</ButtonComponent></Link>
            <ButtonComponent disabled title="Demnächst">Benutzer einladen</ButtonComponent>
          </div>
        </div>
      </div>
    </div>
  );

  // System
  const OrganizationInfoContent = () => (
    <div style={{ padding: 20 }}>
      <div className="e-card">
        <div className="e-card-header">
          <div className="e-card-header-caption">
            <div className="e-card-header-title">Organisationsinformationen</div>
          </div>
        </div>
        <div className="e-card-content" style={{ padding: '16px' }}>
          <div style={{ marginBottom: 16 }}>
            <label style={{ display: 'block', marginBottom: 8, fontWeight: 600 }}>
              Organisationsname
            </label>
            <TextBoxComponent
              placeholder="z.B. Meine Schule GmbH"
              value={organizationName}
              change={(e) => setOrganizationName(e.value || '')}
              cssClass="e-outline"
              floatLabelType="Never"
              style={{ width: '100%', maxWidth: '500px' }}
            />
            <div style={{ marginTop: 8, color: '#666', fontSize: 13 }}>
|
||||
Dieser Name wird im Header des Dashboards angezeigt.
|
||||
</div>
|
||||
</div>
|
||||
<ButtonComponent
|
||||
cssClass="e-primary"
|
||||
onClick={onSaveOrganizationName}
|
||||
disabled={organizationBusy}
|
||||
>
|
||||
{organizationBusy ? 'Speichern...' : 'Speichern'}
|
||||
</ButtonComponent>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
);
|
||||
|
||||
const AdvancedConfigContent = () => (
|
||||
<div style={{ padding: 20 }}>
|
||||
<div className="e-card">
|
||||
<div className="e-card-header">
|
||||
<div className="e-card-header-caption">
|
||||
<div className="e-card-header-title">Erweiterte Konfiguration</div>
|
||||
</div>
|
||||
</div>
|
||||
<div className="e-card-content" style={{ padding: '16px' }}>
|
||||
<div style={{ marginBottom: 20 }}>
|
||||
<label style={{ display: 'block', marginBottom: 8, fontWeight: 600 }}>
|
||||
Scheduler: Refresh-Intervall (Sekunden)
|
||||
</label>
|
||||
<NumericTextBoxComponent
|
||||
value={refreshSeconds}
|
||||
min={0}
|
||||
step={1}
|
||||
format="n0"
|
||||
change={(e) => setRefreshSeconds(Number(e.value ?? 0))}
|
||||
cssClass="e-outline"
|
||||
width={200}
|
||||
/>
|
||||
<div style={{ marginTop: 8, color: '#666', fontSize: 13 }}>
|
||||
Legt fest, wie oft der Scheduler Ereignisse an Clients republiziert, auch wenn sich nichts geändert hat.
|
||||
<br />
|
||||
<strong>0:</strong> Deaktiviert (Scheduler sendet nur bei Änderungen).
|
||||
<br />
|
||||
<strong>>0:</strong> Sekunden zwischen Republizierungen (z.B. 600 = alle 10 Min).
|
||||
<br />
|
||||
Der Scheduler liest die Datenbank immer alle 30 Sekunden neu ab und sendet dann bei Bedarf die Änderungen an den Terminen.
|
||||
</div>
|
||||
<div style={{ marginTop: 12 }}>
|
||||
<ButtonComponent cssClass="e-primary" onClick={onSaveRefreshSeconds} disabled={refreshBusy}>
|
||||
{refreshBusy ? 'Speichern...' : 'Speichern'}
|
||||
</ButtonComponent>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
);
|
||||
|
||||
// ---------- Nested Tab wrappers (first-level tabs -> second-level content) ----------
|
||||
const AcademicCalendarTabs = () => (
|
||||
<TabComponent
|
||||
heightAdjustMode="Auto"
|
||||
selectedItem={academicTabIndex}
|
||||
selected={(e: TabSelectedEvent) => setAcademicTabIndex(e.selectedIndex ?? 0)}
|
||||
>
|
||||
<TabItemsDirective>
|
||||
<TabItemDirective header={{ text: '📥 Import & Liste' }} content={HolidaysImportAndListContent} />
|
||||
<TabItemDirective header={{ text: '🗂️ Perioden' }} content={AcademicPeriodsContent} />
|
||||
</TabItemsDirective>
|
||||
</TabComponent>
|
||||
);
|
||||
|
||||
const DisplayClientsTabs = () => (
|
||||
<TabComponent
|
||||
heightAdjustMode="Auto"
|
||||
selectedItem={displayTabIndex}
|
||||
selected={(e: TabSelectedEvent) => setDisplayTabIndex(e.selectedIndex ?? 0)}
|
||||
>
|
||||
<TabItemsDirective>
|
||||
<TabItemDirective header={{ text: '⚙️ Defaults' }} content={DisplayDefaultsContent} />
|
||||
<TabItemDirective header={{ text: '🖥️ Clients' }} content={ClientsConfigContent} />
|
||||
</TabItemsDirective>
|
||||
</TabComponent>
|
||||
);
|
||||
|
||||
const MediaFilesTabs = () => (
|
||||
<TabComponent
|
||||
heightAdjustMode="Auto"
|
||||
selectedItem={mediaTabIndex}
|
||||
selected={(e: TabSelectedEvent) => setMediaTabIndex(e.selectedIndex ?? 0)}
|
||||
>
|
||||
<TabItemsDirective>
|
||||
<TabItemDirective header={{ text: '⬆️ Upload' }} content={UploadSettingsContent} />
|
||||
<TabItemDirective header={{ text: '🔄 Konvertierung' }} content={ConversionStatusContent} />
|
||||
</TabItemsDirective>
|
||||
</TabComponent>
|
||||
);
|
||||
|
||||
const EventsTabs = () => (
|
||||
<TabComponent
|
||||
heightAdjustMode="Auto"
|
||||
selectedItem={eventsTabIndex}
|
||||
selected={(e: TabSelectedEvent) => setEventsTabIndex(e.selectedIndex ?? 0)}
|
||||
>
|
||||
<TabItemsDirective>
|
||||
<TabItemDirective header={{ text: '📘 WebUntis' }} content={WebUntisSettingsContent} />
|
||||
<TabItemDirective header={{ text: '🖼️ Präsentation' }} content={PresentationsDefaultsContent} />
|
||||
<TabItemDirective header={{ text: '🌐 Webseiten' }} content={WebsitesDefaultsContent} />
|
||||
<TabItemDirective header={{ text: '🎬 Videos' }} content={VideosDefaultsContent} />
|
||||
<TabItemDirective header={{ text: '💬 Mitteilungen' }} content={MessagesDefaultsContent} />
|
||||
<TabItemDirective header={{ text: '🔧 Sonstige' }} content={OtherDefaultsContent} />
|
||||
</TabItemsDirective>
|
||||
</TabComponent>
|
||||
);
|
||||
|
||||
const UsersTabs = () => (
|
||||
<TabComponent
|
||||
heightAdjustMode="Auto"
|
||||
selectedItem={usersTabIndex}
|
||||
selected={(e: TabSelectedEvent) => setUsersTabIndex(e.selectedIndex ?? 0)}
|
||||
>
|
||||
<TabItemsDirective>
|
||||
<TabItemDirective header={{ text: '⚡ Schnellaktionen' }} content={UsersQuickActionsContent} />
|
||||
</TabItemsDirective>
|
||||
</TabComponent>
|
||||
);
|
||||
|
||||
const SystemTabs = () => (
|
||||
<TabComponent
|
||||
heightAdjustMode="Auto"
|
||||
selectedItem={systemTabIndex}
|
||||
selected={(e: TabSelectedEvent) => setSystemTabIndex(e.selectedIndex ?? 0)}
|
||||
>
|
||||
<TabItemsDirective>
|
||||
<TabItemDirective header={{ text: '🏢 Organisation' }} content={OrganizationInfoContent} />
|
||||
<TabItemDirective header={{ text: '🧩 Erweiterte Optionen' }} content={AdvancedConfigContent} />
|
||||
</TabItemsDirective>
|
||||
</TabComponent>
|
||||
);
|
||||
|
||||
// ---------- Top-level (root) tabs ----------
|
||||
return (
|
||||
<div style={{ padding: 20 }}>
|
||||
<ToastComponent ref={toastRef} position={{ X: 'Right', Y: 'Top' }} />
|
||||
|
||||
<h2 style={{ marginBottom: 20, fontSize: '24px', fontWeight: 600 }}>Einstellungen</h2>
|
||||
|
||||
<TabComponent heightAdjustMode="Auto">
|
||||
<TabItemsDirective>
|
||||
<TabItemDirective header={{ text: '📅 Akademischer Kalender' }} content={AcademicCalendarTabs} />
|
||||
{isAdmin && (
|
||||
<TabItemDirective header={{ text: '🖥️ Anzeige & Clients' }} content={DisplayClientsTabs} />
|
||||
)}
|
||||
{isAdmin && (
|
||||
<TabItemDirective header={{ text: '🎬 Medien & Dateien' }} content={MediaFilesTabs} />
|
||||
)}
|
||||
{isAdmin && (
|
||||
<TabItemDirective header={{ text: '🗓️ Events' }} content={EventsTabs} />
|
||||
)}
|
||||
{isAdmin && (
|
||||
<TabItemDirective header={{ text: '👥 Benutzer' }} content={UsersTabs} />
|
||||
)}
|
||||
{isSuperadmin && (
|
||||
<TabItemDirective header={{ text: '⚙️ System' }} content={SystemTabs} />
|
||||
)}
|
||||
</TabItemsDirective>
|
||||
</TabComponent>
|
||||
</div>
|
||||
);
|
||||
};
|
||||
|
||||
export default Einstellungen;
|
||||
15	dashboard/src/theme-overrides.css	Normal file
@@ -0,0 +1,15 @@
/* FileManager icon size overrides (loaded after Syncfusion styles) */
.e-filemanager.media-icons-xl .e-large-icons .e-list-icon {
  font-size: 40px; /* default ~24px */
}

.e-filemanager.media-icons-xl .e-large-icons .e-fe-folder,
.e-filemanager.media-icons-xl .e-large-icons .e-fe-file {
  font-size: 40px;
}

/* Details (grid) view icons */
.e-filemanager.media-icons-xl .e-fe-grid-icon .e-fe-folder,
.e-filemanager.media-icons-xl .e-fe-grid-icon .e-fe-file {
  font-size: 24px;
}
145	dashboard/src/useAuth.tsx	Normal file
@@ -0,0 +1,145 @@
/**
 * Auth context and hook for managing current user state.
 *
 * Provides a React context and custom hook to access and manage
 * the current authenticated user throughout the application.
 */

import { createContext, useContext, useState, useEffect } from 'react';
import type { ReactNode } from 'react';
import { fetchCurrentUser, login as apiLogin, logout as apiLogout } from './apiAuth';
import type { User } from './apiAuth';

interface AuthContextType {
  user: User | null;
  loading: boolean;
  error: string | null;
  login: (username: string, password: string) => Promise<void>;
  logout: () => Promise<void>;
  refreshUser: () => Promise<void>;
  isAuthenticated: boolean;
}

const AuthContext = createContext<AuthContextType | undefined>(undefined);

interface AuthProviderProps {
  children: ReactNode;
}

/**
 * Auth provider component to wrap the application.
 *
 * Usage:
 *   <AuthProvider>
 *     <App />
 *   </AuthProvider>
 */
export function AuthProvider({ children }: AuthProviderProps) {
  const [user, setUser] = useState<User | null>(null);
  const [loading, setLoading] = useState<boolean>(true);
  const [error, setError] = useState<string | null>(null);

  // Fetch current user on mount
  useEffect(() => {
    refreshUser();
  }, []);

  const refreshUser = async () => {
    try {
      setLoading(true);
      setError(null);
      const currentUser = await fetchCurrentUser();
      setUser(currentUser);
    } catch (err) {
      // Not authenticated or error - this is okay
      setUser(null);
      // Only set error if it's not a 401 (not authenticated is expected)
      if (err instanceof Error && !err.message.includes('Not authenticated')) {
        setError(err.message);
      }
    } finally {
      setLoading(false);
    }
  };

  const login = async (username: string, password: string) => {
    try {
      setLoading(true);
      setError(null);
      const response = await apiLogin(username, password);
      setUser(response.user as User);
    } catch (err) {
      const errorMessage = err instanceof Error ? err.message : 'Login failed';
      setError(errorMessage);
      throw err; // Re-throw so the caller can handle it
    } finally {
      setLoading(false);
    }
  };

  const logout = async () => {
    try {
      setLoading(true);
      setError(null);
      await apiLogout();
      setUser(null);
    } catch (err) {
      const errorMessage = err instanceof Error ? err.message : 'Logout failed';
      setError(errorMessage);
      throw err;
    } finally {
      setLoading(false);
    }
  };

  const value: AuthContextType = {
    user,
    loading,
    error,
    login,
    logout,
    refreshUser,
    isAuthenticated: user !== null,
  };

  return <AuthContext.Provider value={value}>{children}</AuthContext.Provider>;
}

/**
 * Custom hook to access auth context.
 *
 * Usage:
 *   const { user, login, logout, isAuthenticated } = useAuth();
 *
 * @returns AuthContextType
 * @throws Error if used outside AuthProvider
 */
export function useAuth(): AuthContextType {
  const context = useContext(AuthContext);
  if (context === undefined) {
    throw new Error('useAuth must be used within an AuthProvider');
  }
  return context;
}

/**
 * Convenience hook to get just the current user.
 *
 * Usage:
 *   const user = useCurrentUser();
 */
export function useCurrentUser(): User | null {
  const { user } = useAuth();
  return user;
}

/**
 * Convenience hook to check if user is authenticated.
 *
 * Usage:
 *   const isAuthenticated = useIsAuthenticated();
 */
export function useIsAuthenticated(): boolean {
  const { isAuthenticated } = useAuth();
  return isAuthenticated;
}
820	dashboard/src/users.tsx	Normal file
@@ -0,0 +1,820 @@
import React from 'react';
import { useAuth } from './useAuth';
import {
  GridComponent,
  ColumnsDirective,
  ColumnDirective,
  Page,
  Inject,
  Toolbar,
  Edit,
  CommandColumn,
} from '@syncfusion/ej2-react-grids';
import { ButtonComponent } from '@syncfusion/ej2-react-buttons';
import { DialogComponent } from '@syncfusion/ej2-react-popups';
import { ToastComponent } from '@syncfusion/ej2-react-notifications';
import { TextBoxComponent } from '@syncfusion/ej2-react-inputs';
import { DropDownListComponent } from '@syncfusion/ej2-react-dropdowns';
import { CheckBoxComponent } from '@syncfusion/ej2-react-buttons';
import {
  listUsers,
  createUser,
  updateUser,
  resetUserPassword,
  deleteUser,
  type UserData,
} from './apiUsers';

const Benutzer: React.FC = () => {
  const { user: currentUser } = useAuth();
  const [users, setUsers] = React.useState<UserData[]>([]);
  const [loading, setLoading] = React.useState(true);

  // Dialog states
  const [showCreateDialog, setShowCreateDialog] = React.useState(false);
  const [showEditDialog, setShowEditDialog] = React.useState(false);
  const [showPasswordDialog, setShowPasswordDialog] = React.useState(false);
  const [showDeleteDialog, setShowDeleteDialog] = React.useState(false);
  const [showDetailsDialog, setShowDetailsDialog] = React.useState(false);
  const [selectedUser, setSelectedUser] = React.useState<UserData | null>(null);

  // Form states
  const [formUsername, setFormUsername] = React.useState('');
  const [formPassword, setFormPassword] = React.useState('');
  const [formRole, setFormRole] = React.useState<'user' | 'editor' | 'admin' | 'superadmin'>('user');
  const [formIsActive, setFormIsActive] = React.useState(true);
  const [formBusy, setFormBusy] = React.useState(false);

  const toastRef = React.useRef<ToastComponent>(null);

  const isSuperadmin = currentUser?.role === 'superadmin';

  // Available roles based on current user's role
  const availableRoles = React.useMemo(() => {
    if (isSuperadmin) {
      return [
        { value: 'user', text: 'Benutzer (Viewer)' },
        { value: 'editor', text: 'Editor (Content Manager)' },
        { value: 'admin', text: 'Administrator' },
        { value: 'superadmin', text: 'Superadministrator' },
      ];
    }
    return [
      { value: 'user', text: 'Benutzer (Viewer)' },
      { value: 'editor', text: 'Editor (Content Manager)' },
      { value: 'admin', text: 'Administrator' },
    ];
  }, [isSuperadmin]);

  const showToast = (content: string, cssClass: string = 'e-toast-success') => {
    if (toastRef.current) {
      toastRef.current.show({
        content,
        cssClass,
        timeOut: 4000,
      });
    }
  };

  const loadUsers = React.useCallback(async () => {
    try {
      setLoading(true);
      const data = await listUsers();
      setUsers(data);
    } catch (error) {
      const message = error instanceof Error ? error.message : 'Fehler beim Laden der Benutzer';
      showToast(message, 'e-toast-danger');
    } finally {
      setLoading(false);
    }
  }, []);

  React.useEffect(() => {
    loadUsers();
  }, [loadUsers]);

  // Create user
  const handleCreateClick = () => {
    setFormUsername('');
    setFormPassword('');
    setFormRole('user');
    setFormIsActive(true);
    setShowCreateDialog(true);
  };

  const handleCreateSubmit = async () => {
    if (!formUsername.trim()) {
      showToast('Benutzername ist erforderlich', 'e-toast-warning');
      return;
    }
    if (formUsername.trim().length < 3) {
      showToast('Benutzername muss mindestens 3 Zeichen lang sein', 'e-toast-warning');
      return;
    }
    if (!formPassword) {
      showToast('Passwort ist erforderlich', 'e-toast-warning');
      return;
    }
    if (formPassword.length < 6) {
      showToast('Passwort muss mindestens 6 Zeichen lang sein', 'e-toast-warning');
      return;
    }

    setFormBusy(true);
    try {
      await createUser({
        username: formUsername.trim(),
        password: formPassword,
        role: formRole,
        isActive: formIsActive,
      });
      showToast('Benutzer erfolgreich erstellt', 'e-toast-success');
      setShowCreateDialog(false);
      loadUsers();
    } catch (error) {
      const message = error instanceof Error ? error.message : 'Fehler beim Erstellen des Benutzers';
      showToast(message, 'e-toast-danger');
    } finally {
      setFormBusy(false);
    }
  };

  // Edit user
  const handleEditClick = (userData: UserData) => {
    setSelectedUser(userData);
    setFormUsername(userData.username);
    setFormRole(userData.role);
    setFormIsActive(userData.isActive);
    setShowEditDialog(true);
  };

  const handleEditSubmit = async () => {
    if (!selectedUser) return;

    if (!formUsername.trim()) {
      showToast('Benutzername ist erforderlich', 'e-toast-warning');
      return;
    }
    if (formUsername.trim().length < 3) {
      showToast('Benutzername muss mindestens 3 Zeichen lang sein', 'e-toast-warning');
      return;
    }

    setFormBusy(true);
    try {
      await updateUser(selectedUser.id, {
        username: formUsername.trim(),
        role: formRole,
        isActive: formIsActive,
      });
      showToast('Benutzer erfolgreich aktualisiert', 'e-toast-success');
      setShowEditDialog(false);
      loadUsers();
    } catch (error) {
      const message = error instanceof Error ? error.message : 'Fehler beim Aktualisieren des Benutzers';
      showToast(message, 'e-toast-danger');
    } finally {
      setFormBusy(false);
    }
  };

  // Reset password
  const handlePasswordClick = (userData: UserData) => {
    if (currentUser && userData.id === currentUser.id) {
      showToast('Bitte ändern Sie Ihr eigenes Passwort über das Benutzer-Menü (oben rechts).', 'e-toast-warning');
      return;
    }
    setSelectedUser(userData);
    setFormPassword('');
    setShowPasswordDialog(true);
  };

  const handlePasswordSubmit = async () => {
    if (!selectedUser) return;

    if (!formPassword) {
      showToast('Passwort ist erforderlich', 'e-toast-warning');
      return;
    }
    if (formPassword.length < 6) {
      showToast('Passwort muss mindestens 6 Zeichen lang sein', 'e-toast-warning');
      return;
    }

    setFormBusy(true);
    try {
      await resetUserPassword(selectedUser.id, formPassword);
      showToast('Passwort erfolgreich zurückgesetzt', 'e-toast-success');
      setShowPasswordDialog(false);
    } catch (error) {
      const message = error instanceof Error ? error.message : 'Fehler beim Zurücksetzen des Passworts';
      showToast(message, 'e-toast-danger');
    } finally {
      setFormBusy(false);
    }
  };

  // Delete user
  const handleDeleteClick = (userData: UserData) => {
    setSelectedUser(userData);
    setShowDeleteDialog(true);
  };

  const handleDeleteConfirm = async () => {
    if (!selectedUser) return;

    setFormBusy(true);
    try {
      await deleteUser(selectedUser.id);
      showToast('Benutzer erfolgreich gelöscht', 'e-toast-success');
      setShowDeleteDialog(false);
      loadUsers();
    } catch (error) {
      const message = error instanceof Error ? error.message : 'Fehler beim Löschen des Benutzers';
      showToast(message, 'e-toast-danger');
    } finally {
      setFormBusy(false);
    }
  };

  // View details
  const handleDetailsClick = (userData: UserData) => {
    setSelectedUser(userData);
    setShowDetailsDialog(true);
  };

  // Role badge
  const getRoleBadge = (role: string) => {
    const roleMap: Record<string, { text: string; color: string }> = {
      user: { text: 'Benutzer', color: '#6c757d' },
      editor: { text: 'Editor', color: '#0d6efd' },
      admin: { text: 'Admin', color: '#198754' },
      superadmin: { text: 'Superadmin', color: '#dc3545' },
    };
    const info = roleMap[role] || { text: role, color: '#6c757d' };
    return (
      <span
        style={{
          padding: '4px 12px',
          borderRadius: '12px',
          backgroundColor: info.color,
          color: 'white',
          fontSize: '12px',
          fontWeight: 500,
          display: 'inline-block',
        }}
      >
        {info.text}
      </span>
    );
  };

  // Status badge
  const getStatusBadge = (isActive: boolean) => {
    return (
      <span
        style={{
          padding: '4px 12px',
          borderRadius: '12px',
          backgroundColor: isActive ? '#28a745' : '#dc3545',
          color: 'white',
          fontSize: '12px',
          fontWeight: 500,
          display: 'inline-block',
        }}
      >
        {isActive ? 'Aktiv' : 'Inaktiv'}
      </span>
    );
  };

  // Grid commands - no longer needed with custom template
  // const commands: CommandModel[] = [...]

  // Command click handler removed - using custom button template instead

  // Format dates
  const formatDate = (dateStr?: string) => {
    if (!dateStr) return '-';
    try {
      const date = new Date(dateStr);
      return date.toLocaleDateString('de-DE', {
        year: 'numeric',
        month: '2-digit',
        day: '2-digit',
        hour: '2-digit',
        minute: '2-digit',
      });
    } catch {
      return '-';
    }
  };

  if (loading) {
    return (
      <div style={{ padding: 24 }}>
        <div style={{ textAlign: 'center', padding: 40 }}>Lade Benutzer...</div>
      </div>
    );
  }

  return (
    <div style={{ padding: 24 }}>
      <ToastComponent ref={toastRef} position={{ X: 'Right', Y: 'Top' }} />

      {/* Header */}
      <div style={{ marginBottom: 24, display: 'flex', justifyContent: 'space-between', alignItems: 'center' }}>
        <div>
          <h2 style={{ margin: 0, fontSize: 24, fontWeight: 600 }}>Benutzerverwaltung</h2>
          <p style={{ margin: '8px 0 0 0', color: '#6c757d' }}>
            Verwalten Sie Benutzer und deren Rollen
          </p>
        </div>
        <ButtonComponent
          cssClass="e-success"
          iconCss="e-icons e-plus"
          onClick={handleCreateClick}
        >
          Neuer Benutzer
        </ButtonComponent>
      </div>

      {/* Statistics */}
      <div style={{ marginBottom: 24, display: 'flex', gap: 16 }}>
        <div className="e-card" style={{ flex: 1, padding: 16 }}>
          <div style={{ fontSize: 14, color: '#6c757d', marginBottom: 4 }}>Gesamt</div>
          <div style={{ fontSize: 28, fontWeight: 600 }}>{users.length}</div>
        </div>
        <div className="e-card" style={{ flex: 1, padding: 16 }}>
          <div style={{ fontSize: 14, color: '#6c757d', marginBottom: 4 }}>Aktiv</div>
          <div style={{ fontSize: 28, fontWeight: 600, color: '#28a745' }}>
            {users.filter(u => u.isActive).length}
          </div>
        </div>
        <div className="e-card" style={{ flex: 1, padding: 16 }}>
          <div style={{ fontSize: 14, color: '#6c757d', marginBottom: 4 }}>Inaktiv</div>
          <div style={{ fontSize: 28, fontWeight: 600, color: '#dc3545' }}>
            {users.filter(u => !u.isActive).length}
          </div>
        </div>
      </div>

      {/* Users Grid */}
      <GridComponent
        dataSource={users}
        allowPaging={true}
        allowSorting={true}
        pageSettings={{ pageSize: 20, pageSizes: [10, 20, 50, 100] }}
        height="600"
      >
        <ColumnsDirective>
          <ColumnDirective field="id" headerText="ID" width="80" textAlign="Center" allowSorting={true} />
          <ColumnDirective
            field="username"
            headerText="Benutzername"
            width="200"
            allowSorting={true}
          />
          <ColumnDirective
            field="role"
            headerText="Rolle"
            width="150"
            allowSorting={true}
            template={(props: UserData) => getRoleBadge(props.role)}
          />
          <ColumnDirective
            field="isActive"
            headerText="Status"
            width="120"
            template={(props: UserData) => getStatusBadge(props.isActive)}
          />
          <ColumnDirective
            field="createdAt"
            headerText="Erstellt"
            width="180"
            template={(props: UserData) => formatDate(props.createdAt)}
          />
          <ColumnDirective
            headerText="Aktionen"
            width="280"
            template={(props: UserData) => (
              <div style={{ display: 'flex', gap: 4 }}>
                <ButtonComponent
                  cssClass="e-flat"
                  onClick={() => handleDetailsClick(props)}
                >
                  Details
                </ButtonComponent>
                <ButtonComponent
                  cssClass="e-flat e-primary"
                  onClick={() => handleEditClick(props)}
                >
                  Bearbeiten
                </ButtonComponent>
                <ButtonComponent
                  cssClass="e-flat e-info"
                  onClick={() => handlePasswordClick(props)}
                >
                  Passwort
                </ButtonComponent>
                {isSuperadmin && currentUser?.id !== props.id && (
                  <ButtonComponent
                    cssClass="e-flat e-danger"
                    onClick={() => handleDeleteClick(props)}
                  >
                    Löschen
                  </ButtonComponent>
                )}
              </div>
            )}
          />
        </ColumnsDirective>
        <Inject services={[Page, Toolbar, Edit, CommandColumn]} />
      </GridComponent>

      {/* Create User Dialog */}
      <DialogComponent
        isModal={true}
        visible={showCreateDialog}
        width="500px"
        header="Neuer Benutzer"
        showCloseIcon={true}
        close={() => setShowCreateDialog(false)}
        footerTemplate={() => (
          <div>
            <ButtonComponent
              cssClass="e-flat"
              onClick={() => setShowCreateDialog(false)}
              disabled={formBusy}
            >
              Abbrechen
            </ButtonComponent>
            <ButtonComponent
              cssClass="e-primary"
              onClick={handleCreateSubmit}
              disabled={formBusy}
            >
              {formBusy ? 'Erstelle...' : 'Erstellen'}
            </ButtonComponent>
          </div>
        )}
      >
        <div style={{ padding: 16 }}>
          <div style={{ marginBottom: 16 }}>
            <label style={{ display: 'block', marginBottom: 8, fontWeight: 500 }}>
              Benutzername *
            </label>
            <TextBoxComponent
              placeholder="Benutzername eingeben"
              value={formUsername}
              input={(e: any) => setFormUsername(e.value)}
              disabled={formBusy}
            />
          </div>

          <div style={{ marginBottom: 16 }}>
            <label style={{ display: 'block', marginBottom: 8, fontWeight: 500 }}>
              Passwort *
            </label>
            <TextBoxComponent
              type="password"
              placeholder="Mindestens 6 Zeichen"
              value={formPassword}
              input={(e: any) => setFormPassword(e.value)}
              disabled={formBusy}
            />
          </div>

          <div style={{ marginBottom: 16 }}>
            <label style={{ display: 'block', marginBottom: 8, fontWeight: 500 }}>
              Rolle *
            </label>
            <DropDownListComponent
              dataSource={availableRoles}
              fields={{ value: 'value', text: 'text' }}
              value={formRole}
              change={(e: any) => setFormRole(e.value)}
              disabled={formBusy}
            />
          </div>

          <div style={{ marginBottom: 8 }}>
            <CheckBoxComponent
              label="Benutzer ist aktiv"
              checked={formIsActive}
              change={(e: any) => setFormIsActive(e.checked)}
              disabled={formBusy}
            />
          </div>
        </div>
      </DialogComponent>

      {/* Edit User Dialog */}
      <DialogComponent
        isModal={true}
        visible={showEditDialog}
        width="500px"
        header={`Benutzer bearbeiten: ${selectedUser?.username}`}
        showCloseIcon={true}
        close={() => setShowEditDialog(false)}
        footerTemplate={() => (
          <div>
            <ButtonComponent
              cssClass="e-flat"
              onClick={() => setShowEditDialog(false)}
              disabled={formBusy}
            >
              Abbrechen
            </ButtonComponent>
            <ButtonComponent
              cssClass="e-primary"
              onClick={handleEditSubmit}
              disabled={formBusy}
            >
              {formBusy ? 'Speichere...' : 'Speichern'}
            </ButtonComponent>
          </div>
        )}
      >
        <div style={{ padding: 16 }}>
          {selectedUser?.id === currentUser?.id && (
            <div
              style={{
                padding: 12,
                backgroundColor: '#fff3cd',
                border: '1px solid #ffc107',
                borderRadius: 4,
                marginBottom: 16,
                fontSize: 14,
              }}
            >
              ⚠️ Sie bearbeiten Ihr eigenes Konto. Sie können Ihre eigene Rolle oder Ihren aktiven Status nicht ändern.
            </div>
          )}

          <div style={{ marginBottom: 16 }}>
            <label style={{ display: 'block', marginBottom: 8, fontWeight: 500 }}>
              Benutzername *
            </label>
            <TextBoxComponent
              placeholder="Benutzername eingeben"
              value={formUsername}
              input={(e: any) => setFormUsername(e.value)}
              disabled={formBusy}
            />
          </div>

          <div style={{ marginBottom: 16 }}>
            <label style={{ display: 'block', marginBottom: 8, fontWeight: 500 }}>
              Rolle *
            </label>
            <DropDownListComponent
              dataSource={availableRoles}
              fields={{ value: 'value', text: 'text' }}
              value={formRole}
              change={(e: any) => setFormRole(e.value)}
              disabled={formBusy || selectedUser?.id === currentUser?.id}
            />
            {selectedUser?.id === currentUser?.id && (
              <div style={{ fontSize: 12, color: '#6c757d', marginTop: 4 }}>
                Sie können Ihre eigene Rolle nicht ändern
              </div>
            )}
          </div>

          <div style={{ marginBottom: 8 }}>
            <CheckBoxComponent
              label="Benutzer ist aktiv"
              checked={formIsActive}
              change={(e: any) => setFormIsActive(e.checked)}
              disabled={formBusy || selectedUser?.id === currentUser?.id}
            />
            {selectedUser?.id === currentUser?.id && (
              <div style={{ fontSize: 12, color: '#6c757d', marginTop: 4 }}>
                Sie können Ihr eigenes Konto nicht deaktivieren
              </div>
            )}
          </div>
        </div>
      </DialogComponent>

      {/* Reset Password Dialog */}
      <DialogComponent
        isModal={true}
        visible={showPasswordDialog}
        width="500px"
        header={`Passwort zurücksetzen: ${selectedUser?.username}`}
        showCloseIcon={true}
        close={() => setShowPasswordDialog(false)}
        footerTemplate={() => (
          <div>
            <ButtonComponent
              cssClass="e-flat"
              onClick={() => setShowPasswordDialog(false)}
              disabled={formBusy}
            >
              Abbrechen
            </ButtonComponent>
            <ButtonComponent
              cssClass="e-warning"
              onClick={handlePasswordSubmit}
              disabled={formBusy}
            >
              {formBusy ? 'Setze zurück...' : 'Zurücksetzen'}
            </ButtonComponent>
          </div>
        )}
      >
        <div style={{ padding: 16 }}>
          <div style={{ marginBottom: 16 }}>
            <label style={{ display: 'block', marginBottom: 8, fontWeight: 500 }}>
              Neues Passwort *
            </label>
            <TextBoxComponent
              type="password"
              placeholder="Mindestens 6 Zeichen"
              value={formPassword}
              input={(e: any) => setFormPassword(e.value)}
              disabled={formBusy}
            />
          </div>

          <div
            style={{
              padding: 12,
              backgroundColor: '#d1ecf1',
              border: '1px solid #bee5eb',
              borderRadius: 4,
fontSize: 14,
|
||||
}}
|
||||
>
|
||||
💡 Das neue Passwort wird sofort wirksam. Informieren Sie den Benutzer über das neue Passwort.
|
||||
</div>
|
||||
</div>
|
||||
</DialogComponent>
|
||||
|
||||
{/* Delete User Dialog */}
|
||||
<DialogComponent
|
||||
isModal={true}
|
||||
visible={showDeleteDialog}
|
||||
width="500px"
|
||||
header="Benutzer löschen"
|
||||
showCloseIcon={true}
|
||||
close={() => setShowDeleteDialog(false)}
|
||||
footerTemplate={() => (
|
||||
<div>
|
||||
<ButtonComponent
|
||||
cssClass="e-flat"
|
||||
onClick={() => setShowDeleteDialog(false)}
|
||||
disabled={formBusy}
|
||||
>
|
||||
Abbrechen
|
||||
</ButtonComponent>
|
||||
<ButtonComponent
|
||||
cssClass="e-danger"
|
||||
onClick={handleDeleteConfirm}
|
||||
disabled={formBusy}
|
||||
>
|
||||
{formBusy ? 'Lösche...' : 'Endgültig löschen'}
|
||||
</ButtonComponent>
|
||||
</div>
|
||||
)}
|
||||
>
|
||||
<div style={{ padding: 16 }}>
|
||||
<div
|
||||
style={{
|
||||
padding: 16,
|
||||
backgroundColor: '#f8d7da',
|
||||
border: '1px solid #f5c6cb',
|
||||
borderRadius: 4,
|
||||
marginBottom: 16,
|
||||
}}
|
||||
>
|
||||
<strong>⚠️ Warnung: Diese Aktion kann nicht rückgängig gemacht werden!</strong>
|
||||
</div>
|
||||
|
||||
<p style={{ marginBottom: 16 }}>
|
||||
Möchten Sie den Benutzer <strong>{selectedUser?.username}</strong> wirklich endgültig löschen?
|
||||
</p>
|
||||
|
||||
<p style={{ margin: 0, fontSize: 14, color: '#6c757d' }}>
|
||||
Tipp: Statt zu löschen, können Sie den Benutzer auch deaktivieren, um das Konto zu sperren und
|
||||
gleichzeitig die Daten zu bewahren.
|
||||
</p>
|
||||
</div>
|
||||
</DialogComponent>
|
||||
|
||||
{/* Details Dialog */}
|
||||
<DialogComponent
|
||||
isModal={true}
|
||||
visible={showDetailsDialog}
|
||||
width="600px"
|
||||
header={`Details: ${selectedUser?.username}`}
|
||||
showCloseIcon={true}
|
||||
close={() => setShowDetailsDialog(false)}
|
||||
footerTemplate={() => (
|
||||
<div>
|
||||
<ButtonComponent cssClass="e-flat" onClick={() => setShowDetailsDialog(false)}>
|
||||
Schließen
|
||||
</ButtonComponent>
|
||||
</div>
|
||||
)}
|
||||
>
|
||||
<div style={{ padding: 16, display: 'flex', flexDirection: 'column', gap: 20 }}>
|
||||
{/* Account Info */}
|
||||
<div>
|
||||
<h4 style={{ margin: '0 0 12px 0', fontSize: 14, fontWeight: 600, color: '#6c757d' }}>
|
||||
Kontoinformation
|
||||
</h4>
|
||||
<div style={{ display: 'grid', gridTemplateColumns: '1fr 1fr', gap: 12 }}>
|
||||
<div>
|
||||
<div style={{ fontSize: 12, color: '#6c757d', marginBottom: 4 }}>Benutzer-ID</div>
|
||||
<div style={{ fontSize: 14, fontWeight: 500 }}>{selectedUser?.id}</div>
|
||||
</div>
|
||||
<div>
|
||||
<div style={{ fontSize: 12, color: '#6c757d', marginBottom: 4 }}>Benutzername</div>
|
||||
<div style={{ fontSize: 14, fontWeight: 500 }}>{selectedUser?.username}</div>
|
||||
</div>
|
||||
<div>
|
||||
<div style={{ fontSize: 12, color: '#6c757d', marginBottom: 4 }}>Rolle</div>
|
||||
<div>{selectedUser ? getRoleBadge(selectedUser.role) : '-'}</div>
|
||||
</div>
|
||||
<div>
|
||||
<div style={{ fontSize: 12, color: '#6c757d', marginBottom: 4 }}>Status</div>
|
||||
<div>{selectedUser ? getStatusBadge(selectedUser.isActive) : '-'}</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
{/* Security & Activity */}
|
||||
<div>
|
||||
<h4 style={{ margin: '0 0 12px 0', fontSize: 14, fontWeight: 600, color: '#6c757d' }}>
|
||||
Sicherheit & Aktivität
|
||||
</h4>
|
||||
<div style={{ display: 'flex', flexDirection: 'column', gap: 8 }}>
|
||||
<div style={{ display: 'grid', gridTemplateColumns: '200px 1fr', gap: 8 }}>
|
||||
<div style={{ fontSize: 13, fontWeight: 500, color: '#333' }}>Letzter Login:</div>
|
||||
<div style={{ fontSize: 13, color: '#666' }}>
|
||||
{selectedUser?.lastLoginAt ? formatDate(selectedUser.lastLoginAt) : 'Nie'}
|
||||
</div>
|
||||
</div>
|
||||
<div style={{ display: 'grid', gridTemplateColumns: '200px 1fr', gap: 8 }}>
|
||||
<div style={{ fontSize: 13, fontWeight: 500, color: '#333' }}>Passwort geändert:</div>
|
||||
<div style={{ fontSize: 13, color: '#666' }}>
|
||||
{selectedUser?.lastPasswordChangeAt ? formatDate(selectedUser.lastPasswordChangeAt) : 'Nie'}
|
||||
</div>
|
||||
</div>
|
||||
<div style={{ display: 'grid', gridTemplateColumns: '200px 1fr', gap: 8 }}>
|
||||
<div style={{ fontSize: 13, fontWeight: 500, color: '#333' }}>Fehlgeschlagene Logins:</div>
|
||||
<div style={{ fontSize: 13, color: '#666' }}>
|
||||
{selectedUser?.failedLoginAttempts || 0}
|
||||
</div>
|
||||
</div>
|
||||
{selectedUser?.lastFailedLoginAt && (
|
||||
<div style={{ display: 'grid', gridTemplateColumns: '200px 1fr', gap: 8 }}>
|
||||
<div style={{ fontSize: 13, fontWeight: 500, color: '#333' }}>Letzter Fehler:</div>
|
||||
<div style={{ fontSize: 13, color: '#666' }}>
|
||||
{formatDate(selectedUser.lastFailedLoginAt)}
|
||||
</div>
|
||||
</div>
|
||||
)}
|
||||
</div>
|
||||
</div>
|
||||
|
||||
{/* Deactivation Info (if applicable) */}
|
||||
{selectedUser && !selectedUser.isActive && selectedUser.deactivatedAt && (
|
||||
<div style={{ padding: 12, backgroundColor: '#fff3cd', border: '1px solid #ffc107', borderRadius: 4 }}>
|
||||
<div style={{ fontSize: 13, fontWeight: 500, marginBottom: 4 }}>Konto deaktiviert</div>
|
||||
<div style={{ fontSize: 12, color: '#856404' }}>
|
||||
am {formatDate(selectedUser.deactivatedAt)}
|
||||
</div>
|
||||
</div>
|
||||
)}
|
||||
|
||||
{/* Timestamps */}
|
||||
<div>
|
||||
<h4 style={{ margin: '0 0 12px 0', fontSize: 14, fontWeight: 600, color: '#6c757d' }}>
|
||||
Zeitleisten
|
||||
</h4>
|
||||
<div style={{ display: 'flex', flexDirection: 'column', gap: 8 }}>
|
||||
<div style={{ display: 'grid', gridTemplateColumns: '200px 1fr', gap: 8 }}>
|
||||
<div style={{ fontSize: 13, fontWeight: 500, color: '#333' }}>Erstellt:</div>
|
||||
<div style={{ fontSize: 13, color: '#666' }}>
|
||||
{selectedUser?.createdAt ? formatDate(selectedUser.createdAt) : '-'}
|
||||
</div>
|
||||
</div>
|
||||
<div style={{ display: 'grid', gridTemplateColumns: '200px 1fr', gap: 8 }}>
|
||||
<div style={{ fontSize: 13, fontWeight: 500, color: '#333' }}>Zuletzt geändert:</div>
|
||||
<div style={{ fontSize: 13, color: '#666' }}>
|
||||
{selectedUser?.updatedAt ? formatDate(selectedUser.updatedAt) : '-'}
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</DialogComponent>
|
||||
</div>
|
||||
);
|
||||
};
|
||||
|
||||
export default Benutzer;
|
||||
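A reviewer-style note on the component above: the guard `selectedUser?.id === currentUser?.id` is repeated across several `disabled` props and hint texts. A hypothetical helper (not part of this PR; the `User` shape is simplified for illustration) would compute it once per render and keep the role dropdown, the active checkbox, and the warning banner in sync:

```typescript
// Hypothetical extraction of the repeated self-edit guard; a user may never
// change their own role or deactivate their own account.
type User = { id: number; username: string };

const isSelfEdit = (selected: User | undefined, current: User | undefined): boolean =>
  selected !== undefined && current !== undefined && selected.id === current.id;

const current: User = { id: 1, username: "superadmin" };
console.log(isSelfEdit({ id: 1, username: "superadmin" }, current)); // true: controls disabled
console.log(isSelfEdit({ id: 2, username: "editor" }, current));     // false
console.log(isSelfEdit(undefined, current));                         // false: no user selected
```

Each `disabled={formBusy || selectedUser?.id === currentUser?.id}` could then read `disabled={formBusy || isSelfEdit(selectedUser, currentUser)}`.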
@@ -19,9 +19,14 @@ export default defineConfig({
    include: [
      '@syncfusion/ej2-react-navigations',
      '@syncfusion/ej2-react-buttons',
      '@syncfusion/ej2-react-splitbuttons',
      '@syncfusion/ej2-react-grids',
      '@syncfusion/ej2-react-schedule',
      '@syncfusion/ej2-react-filemanager',
      '@syncfusion/ej2-base',
      '@syncfusion/ej2-navigations',
      '@syncfusion/ej2-buttons',
      '@syncfusion/ej2-splitbuttons',
      '@syncfusion/ej2-react-base',
    ],
    // 🔧 NEW: force dependency re-optimization
@@ -13,6 +13,8 @@ services:
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./certs:/etc/nginx/certs:ro
      # Mount host media folder into nginx so it can serve uploaded media
      - ./server/media/:/opt/infoscreen/server/media/:ro
    depends_on:
      - server
      - dashboard

@@ -75,9 +77,12 @@ services:
      DB_ROOT_PASSWORD: ${DB_ROOT_PASSWORD}
      DB_HOST: db
      FLASK_ENV: production
      FLASK_SECRET_KEY: ${FLASK_SECRET_KEY}
      MQTT_BROKER_URL: mqtt://mqtt:1883
      MQTT_USER: ${MQTT_USER}
      MQTT_PASSWORD: ${MQTT_PASSWORD}
      DEFAULT_SUPERADMIN_USERNAME: ${DEFAULT_SUPERADMIN_USERNAME:-superadmin}
      DEFAULT_SUPERADMIN_PASSWORD: ${DEFAULT_SUPERADMIN_PASSWORD}
    networks:
      - infoscreen-net
    healthcheck:

@@ -18,6 +18,11 @@ services:
    environment:
      - DB_CONN=mysql+pymysql://${DB_USER}:${DB_PASSWORD}@db/${DB_NAME}
      - DB_URL=mysql+pymysql://${DB_USER}:${DB_PASSWORD}@db/${DB_NAME}
      - API_BASE_URL=http://server:8000
      - ENV=${ENV:-development}
      - FLASK_SECRET_KEY=${FLASK_SECRET_KEY:-dev-secret-key-change-in-production}
      - DEFAULT_SUPERADMIN_USERNAME=${DEFAULT_SUPERADMIN_USERNAME:-superadmin}
      - DEFAULT_SUPERADMIN_PASSWORD=${DEFAULT_SUPERADMIN_PASSWORD}
      # 🔧 REMOVED: volume mount is for development only
    networks:
      - infoscreen-net

@@ -31,6 +36,8 @@ services:
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro  # 🔧 CHANGED: relative path
      - ./certs:/etc/nginx/certs:ro  # 🔧 CHANGED: relative path
      # Mount media volume so nginx can serve uploaded files
      - media-data:/opt/infoscreen/server/media:ro
    depends_on:
      - server
      - dashboard

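The `${VAR:-default}` fallbacks in the compose hunks above follow POSIX parameter expansion: the default is used only when the variable is unset or empty. A quick sketch of the rule compose applies when interpolating `DEFAULT_SUPERADMIN_USERNAME` (the variable is reused here in plain shell, outside compose, purely for illustration):

```shell
# ${VAR:-default} expands to the default only when VAR is unset or empty.
unset DEFAULT_SUPERADMIN_USERNAME
echo "${DEFAULT_SUPERADMIN_USERNAME:-superadmin}"   # prints "superadmin"
DEFAULT_SUPERADMIN_USERNAME=alice
echo "${DEFAULT_SUPERADMIN_USERNAME:-superadmin}"   # prints "alice"
```

Note that `DEFAULT_SUPERADMIN_PASSWORD` deliberately has no fallback, so an unset password interpolates to an empty string rather than a usable default.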
80 exclude.txt Normal file
@@ -0,0 +1,80 @@
# OS/Editor
.DS_Store
Thumbs.db
desktop.ini
.vscode/
.idea/
*.swp
*.swo
*.bak
*.tmp

# Python
__pycache__/
*.py[cod]
*.pyc
*.pyo
*.pyd
*.pdb
*.egg-info/
*.eggs/
.pytest_cache/
*.mypy_cache/
*.hypothesis/
*.coverage
.coverage.*
*.cache
instance/

# Virtual environments
venv/
env/
.venv/
.env/

# Environment files
# .env
# .env.local

# Logs and databases
*.log
*.log.1
*.sqlite3
*.db

# Node.js
node_modules/
dashboard/node_modules/
dashboard/.vite/
npm-debug.log*
yarn-debug.log*
yarn-error.log*
.pnpm-store/

# Docker
*.pid
*.tar
docker-compose.override.yml
docker-compose.override.*.yml
docker-compose.override.*.yaml

# Devcontainer
.devcontainer/

# Project-specific
received_screenshots/
screenshots/
media/
mosquitto/
certs/
alte/
sync.ffs_db
dashboard/manitine_test.py
dashboard/pages/test.py
dashboard/sidebar_test.py
dashboard/assets/responsive-sidebar.css
dashboard/src/nested_tabs.js

# Git
.git/
.gitignore
@@ -2,47 +2,544 @@ import os
import json
import logging
import datetime
import base64
import re
import requests
import paho.mqtt.client as mqtt
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from models.models import Client
from models.models import Client, ClientLog, LogLevel, ProcessStatus, ScreenHealthStatus
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s [%(levelname)s] %(message)s')

if os.getenv("ENV", "development") == "development":
    from dotenv import load_dotenv
    load_dotenv(".env")
# Load .env only when not already configured by Docker (API_BASE_URL not set by compose means we're outside a container)
_api_already_set = bool(os.environ.get("API_BASE_URL"))
if not _api_already_set and os.getenv("ENV", "development") == "development":
    try:
        from dotenv import load_dotenv
        load_dotenv(".env")
    except Exception:
        pass

# ENV-abhängige Konfiguration
# ENV-dependent configuration
ENV = os.getenv("ENV", "development")
LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO" if ENV == "production" else "DEBUG")
DB_URL = os.environ.get(
    "DB_CONN", "mysql+pymysql://user:password@db/infoscreen")

# Logging
logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s [%(levelname)s] %(message)s')
DB_URL = os.environ.get("DB_CONN", "mysql+pymysql://user:password@db/infoscreen")

# DB configuration
engine = create_engine(DB_URL)
Session = sessionmaker(bind=engine)

# API configuration
API_BASE_URL = os.getenv("API_BASE_URL", "http://server:8000")

# Dashboard payload migration observability
DASHBOARD_METRICS_LOG_EVERY = int(os.getenv("DASHBOARD_METRICS_LOG_EVERY", "5"))
DASHBOARD_PARSE_METRICS = {
    "v2_success": 0,
    "parse_failures": 0,
}


def normalize_process_status(value):
    if value is None:
        return None
    if isinstance(value, ProcessStatus):
        return value

    normalized = str(value).strip().lower()
    if not normalized:
        return None

    try:
        return ProcessStatus(normalized)
    except ValueError:
        return None


def normalize_event_id(value):
    if value is None or isinstance(value, bool):
        return None
    if isinstance(value, int):
        return value
    if isinstance(value, float):
        return int(value)

    normalized = str(value).strip()
    if not normalized:
        return None
    if normalized.isdigit():
        return int(normalized)

    match = re.search(r"(\d+)$", normalized)
    if match:
        return int(match.group(1))

    return None


def parse_timestamp(value):
    if not value:
        return None
    if isinstance(value, (int, float)):
        try:
            ts_value = float(value)
            if ts_value > 1e12:
                ts_value = ts_value / 1000.0
            return datetime.datetime.fromtimestamp(ts_value, datetime.UTC)
        except (TypeError, ValueError, OverflowError):
            return None
    try:
        value_str = str(value).strip()
        if value_str.isdigit():
            ts_value = float(value_str)
            if ts_value > 1e12:
                ts_value = ts_value / 1000.0
            return datetime.datetime.fromtimestamp(ts_value, datetime.UTC)

        parsed = datetime.datetime.fromisoformat(value_str.replace('Z', '+00:00'))
        if parsed.tzinfo is None:
            return parsed.replace(tzinfo=datetime.UTC)
        return parsed.astimezone(datetime.UTC)
    except ValueError:
        return None


def infer_screen_health_status(payload_data):
    explicit = payload_data.get('screen_health_status')
    if explicit:
        try:
            return ScreenHealthStatus[str(explicit).strip().upper()]
        except KeyError:
            pass

    metrics = payload_data.get('health_metrics') or {}
    if metrics.get('screen_on') is False:
        return ScreenHealthStatus.BLACK

    last_frame_update = parse_timestamp(metrics.get('last_frame_update'))
    if last_frame_update:
        age_seconds = (datetime.datetime.now(datetime.UTC) - last_frame_update).total_seconds()
        if age_seconds > 30:
            return ScreenHealthStatus.FROZEN
        return ScreenHealthStatus.OK

    return None


def apply_monitoring_update(client_obj, *, event_id=None, process_name=None, process_pid=None,
                            process_status=None, last_seen=None, screen_health_status=None,
                            last_screenshot_analyzed=None):
    if last_seen:
        client_obj.last_alive = last_seen

    normalized_event_id = normalize_event_id(event_id)
    if normalized_event_id is not None:
        client_obj.current_event_id = normalized_event_id

    if process_name is not None:
        client_obj.current_process = process_name

    if process_pid is not None:
        client_obj.process_pid = process_pid

    normalized_status = normalize_process_status(process_status)
    if normalized_status is not None:
        client_obj.process_status = normalized_status

    if screen_health_status is not None:
        client_obj.screen_health_status = screen_health_status

    if last_screenshot_analyzed is not None:
        existing = client_obj.last_screenshot_analyzed
        if existing is not None and existing.tzinfo is None:
            existing = existing.replace(tzinfo=datetime.UTC)

        candidate = last_screenshot_analyzed
        if candidate.tzinfo is None:
            candidate = candidate.replace(tzinfo=datetime.UTC)

        if existing is None or candidate >= existing:
            client_obj.last_screenshot_analyzed = candidate


def _normalize_screenshot_type(raw_type):
    if raw_type is None:
        return None

    normalized = str(raw_type).strip().lower()
    if normalized in ("periodic", "event_start", "event_stop"):
        return normalized
    return None


def _classify_dashboard_payload(data):
    """
    Classify dashboard payload into migration categories for observability.
    """
    if not isinstance(data, dict):
        return "parse_failures", None

    message_obj = data.get("message") if isinstance(data.get("message"), dict) else None
    content_obj = data.get("content") if isinstance(data.get("content"), dict) else None
    metadata_obj = data.get("metadata") if isinstance(data.get("metadata"), dict) else None
    schema_version = metadata_obj.get("schema_version") if metadata_obj else None

    # v2 detection: grouped blocks available with metadata.
    if message_obj is not None and content_obj is not None and metadata_obj is not None:
        return "v2_success", schema_version

    return "parse_failures", schema_version


def _record_dashboard_parse_metric(mode, uuid, schema_version=None, reason=None):
    if mode not in DASHBOARD_PARSE_METRICS:
        mode = "parse_failures"

    DASHBOARD_PARSE_METRICS[mode] += 1
    total = sum(DASHBOARD_PARSE_METRICS.values())

    if mode == "v2_success":
        if schema_version is None:
            logging.warning(f"Dashboard payload from {uuid}: missing metadata.schema_version for grouped payload")
        else:
            version_text = str(schema_version).strip()
            if not version_text.startswith("2"):
                logging.warning(f"Dashboard payload from {uuid}: unknown schema_version={version_text}")

    if mode == "parse_failures":
        if reason:
            logging.warning(f"Dashboard payload parse failure for {uuid}: {reason}")
        else:
            logging.warning(f"Dashboard payload parse failure for {uuid}")

    if DASHBOARD_METRICS_LOG_EVERY > 0 and total % DASHBOARD_METRICS_LOG_EVERY == 0:
        logging.info(
            "Dashboard payload metrics: "
            f"total={total}, "
            f"v2_success={DASHBOARD_PARSE_METRICS['v2_success']}, "
            f"parse_failures={DASHBOARD_PARSE_METRICS['parse_failures']}"
        )


def _validate_v2_required_fields(data, uuid):
    """
    Soft validation of required v2 fields for grouped dashboard payloads.
    Logs a WARNING for each missing field. Never drops the message.
    """
    message_obj = data.get("message") if isinstance(data.get("message"), dict) else {}
    metadata_obj = data.get("metadata") if isinstance(data.get("metadata"), dict) else {}
    capture_obj = metadata_obj.get("capture") if isinstance(metadata_obj.get("capture"), dict) else {}

    missing = []
    if not message_obj.get("client_id"):
        missing.append("message.client_id")
    if not message_obj.get("status"):
        missing.append("message.status")
    if not metadata_obj.get("schema_version"):
        missing.append("metadata.schema_version")
    if not capture_obj.get("type"):
        missing.append("metadata.capture.type")

    if missing:
        logging.warning(
            f"Dashboard v2 payload from {uuid} missing required fields: {', '.join(missing)}"
        )


def _extract_dashboard_payload_fields(data):
    """
    Parse dashboard payload fields from the grouped v2 schema only.
    """
    if not isinstance(data, dict):
        return {
            "image": None,
            "timestamp": None,
            "screenshot_type": None,
            "status": None,
            "process_health": {},
        }

    # v2 grouped payload blocks
    message_obj = data.get("message") if isinstance(data.get("message"), dict) else None
    content_obj = data.get("content") if isinstance(data.get("content"), dict) else None
    runtime_obj = data.get("runtime") if isinstance(data.get("runtime"), dict) else None
    metadata_obj = data.get("metadata") if isinstance(data.get("metadata"), dict) else None

    screenshot_obj = None
    if isinstance(content_obj, dict) and isinstance(content_obj.get("screenshot"), dict):
        screenshot_obj = content_obj.get("screenshot")

    capture_obj = metadata_obj.get("capture") if metadata_obj and isinstance(metadata_obj.get("capture"), dict) else None

    # Screenshot type comes from v2 metadata.capture.type.
    screenshot_type = _normalize_screenshot_type(capture_obj.get("type") if capture_obj else None)

    # Image from v2 content.screenshot.
    image_value = None
    for container in (screenshot_obj,):
        if not isinstance(container, dict):
            continue
        for key in ("data", "image"):
            value = container.get(key)
            if isinstance(value, str) and value:
                image_value = value
                break
        if image_value is not None:
            break

    # Timestamp precedence: v2 screenshot.timestamp -> capture.captured_at -> metadata.published_at
    timestamp_value = None
    timestamp_candidates = [
        screenshot_obj.get("timestamp") if screenshot_obj else None,
        capture_obj.get("captured_at") if capture_obj else None,
        metadata_obj.get("published_at") if metadata_obj else None,
    ]

    for value in timestamp_candidates:
        if value is not None:
            timestamp_value = value
            break

    # Monitoring fields from v2 message/runtime.
    status_value = (message_obj or {}).get("status")
    process_health = (runtime_obj or {}).get("process_health")
    if not isinstance(process_health, dict):
        process_health = {}

    return {
        "image": image_value,
        "timestamp": timestamp_value,
        "screenshot_type": screenshot_type,
        "status": status_value,
        "process_health": process_health,
    }


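For reference, a minimal illustration of the grouped v2 payload shape the extractor above expects, and of the timestamp precedence it applies. The sample values are invented; only the key structure reflects the parser:

```python
# Invented sample of a grouped v2 dashboard payload (values are placeholders).
payload = {
    "message": {"client_id": "abc", "status": "alive"},
    "content": {"screenshot": {"data": "aGVsbG8=", "timestamp": "2024-01-01T12:00:00Z"}},
    "runtime": {"process_health": {"event_id": 7, "current_process": "mpv"}},
    "metadata": {"schema_version": "2.0", "capture": {"type": "periodic"}},
}

# Timestamp precedence in _extract_dashboard_payload_fields:
# screenshot.timestamp, then capture.captured_at, then metadata.published_at.
candidates = [
    payload["content"]["screenshot"].get("timestamp"),
    payload["metadata"]["capture"].get("captured_at"),
    payload["metadata"].get("published_at"),
]
timestamp = next((c for c in candidates if c is not None), None)
assert timestamp == "2024-01-01T12:00:00Z"  # screenshot timestamp wins when present
```

A payload missing `content.screenshot.timestamp` would fall back to `captured_at`, then to `published_at`, matching the candidate list in the extractor.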
def handle_screenshot(uuid, payload):
    """
    Handle screenshot data received via MQTT and forward to API.
    Payload can be either raw binary image data or JSON with base64-encoded image.
    """
    try:
        # Try to parse as JSON first
        try:
            data = json.loads(payload.decode())
            extracted = _extract_dashboard_payload_fields(data)
            image_b64 = extracted["image"]
            timestamp_value = extracted["timestamp"]
            screenshot_type = extracted["screenshot_type"]
            if image_b64:
                # Payload is JSON with base64 image
                api_payload = {"image": image_b64}
                if timestamp_value is not None:
                    api_payload["timestamp"] = timestamp_value
                if screenshot_type:
                    api_payload["screenshot_type"] = screenshot_type
                headers = {"Content-Type": "application/json"}
                logging.debug(f"Forwarding base64 screenshot from {uuid} to API")
            else:
                logging.warning(f"Screenshot JSON from {uuid} missing image/data field")
                return
        except (json.JSONDecodeError, UnicodeDecodeError):
            # Payload is raw binary image data - encode to base64 for API
            image_b64 = base64.b64encode(payload).decode('utf-8')
            api_payload = {"image": image_b64}
            headers = {"Content-Type": "application/json"}
            logging.debug(f"Forwarding binary screenshot from {uuid} to API (encoded as base64)")

        # Forward to API endpoint
        api_url = f"{API_BASE_URL}/api/clients/{uuid}/screenshot"
        response = requests.post(api_url, json=api_payload, headers=headers, timeout=10)

        if response.status_code == 200:
            logging.info(f"Screenshot von {uuid} erfolgreich an API weitergeleitet")
        else:
            logging.error(f"API returned status {response.status_code} for screenshot from {uuid}: {response.text}")

    except requests.exceptions.RequestException as e:
        logging.error(f"Failed to forward screenshot from {uuid} to API: {e}")
    except Exception as e:
        logging.error(f"Error handling screenshot from {uuid}: {e}")


def on_connect(client, userdata, flags, reasonCode, properties):
    """Callback for when client connects or reconnects (API v2)."""
    try:
        # Subscribe on every (re)connect so we don't miss heartbeats after broker restarts
        client.subscribe("infoscreen/discovery")
        client.subscribe("infoscreen/+/heartbeat")
        client.subscribe("infoscreen/+/screenshot")
        client.subscribe("infoscreen/+/dashboard")

        # Subscribe to monitoring topics
        client.subscribe("infoscreen/+/logs/error")
        client.subscribe("infoscreen/+/logs/warn")
        client.subscribe("infoscreen/+/logs/info")
        client.subscribe("infoscreen/+/health")

        logging.info(f"MQTT connected (reasonCode: {reasonCode}); (re)subscribed to discovery, heartbeats, screenshots, dashboards, logs, and health")
    except Exception as e:
        logging.error(f"Subscribe failed on connect: {e}")


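The subscriptions above rely on the MQTT single-level wildcard `+`, which matches exactly one topic level (the client UUID). A minimal sketch of standard MQTT filter matching, independent of the paho implementation:

```python
def mqtt_topic_matches(pattern: str, topic: str) -> bool:
    # '+' matches exactly one topic level; '#' (not used in the
    # subscriptions above) matches all remaining levels.
    p_parts, t_parts = pattern.split("/"), topic.split("/")
    for i, p in enumerate(p_parts):
        if p == "#":
            return True
        if i >= len(t_parts) or (p != "+" and p != t_parts[i]):
            return False
    return len(p_parts) == len(t_parts)


assert mqtt_topic_matches("infoscreen/+/heartbeat", "infoscreen/abc123/heartbeat")
assert not mqtt_topic_matches("infoscreen/+/heartbeat", "infoscreen/abc123/health")
```

Because `+` spans only one level, a topic such as `infoscreen/a/b/heartbeat` would not match, which is why `on_message` can safely take the UUID from `topic.split("/")[1]`.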
def on_message(client, userdata, msg):
|
||||
topic = msg.topic
|
||||
logging.debug(f"Empfangene Nachricht auf Topic: {topic}")
|
||||
|
||||
try:
|
||||
# Dashboard-Handling (nested screenshot payload)
|
||||
if topic.startswith("infoscreen/") and topic.endswith("/dashboard"):
|
||||
uuid = topic.split("/")[1]
|
||||
try:
|
||||
payload_text = msg.payload.decode()
|
||||
data = json.loads(payload_text)
|
||||
parse_mode, schema_version = _classify_dashboard_payload(data)
|
||||
_record_dashboard_parse_metric(parse_mode, uuid, schema_version=schema_version)
|
||||
if parse_mode == "v2_success":
|
||||
_validate_v2_required_fields(data, uuid)
|
||||
|
||||
extracted = _extract_dashboard_payload_fields(data)
|
||||
image_b64 = extracted["image"]
|
||||
ts_value = extracted["timestamp"]
|
||||
screenshot_type = extracted["screenshot_type"]
|
||||
if image_b64:
|
||||
logging.debug(f"Dashboard enthält Screenshot für {uuid}; Weiterleitung an API")
|
||||
# Forward original v2 payload so handle_screenshot can parse grouped fields.
|
||||
handle_screenshot(uuid, msg.payload)
|
||||
# Update last_alive if status present
|
||||
if extracted["status"] == "alive":
|
||||
session = Session()
|
||||
client_obj = session.query(Client).filter_by(uuid=uuid).first()
|
||||
if client_obj:
|
||||
process_health = extracted["process_health"]
|
||||
apply_monitoring_update(
|
||||
client_obj,
|
||||
last_seen=datetime.datetime.now(datetime.UTC),
|
||||
event_id=process_health.get('event_id'),
|
||||
process_name=process_health.get('current_process') or process_health.get('process'),
|
||||
process_pid=process_health.get('process_pid') or process_health.get('pid'),
|
||||
process_status=process_health.get('process_status') or process_health.get('status'),
|
||||
)
|
||||
session.commit()
|
||||
session.close()
|
||||
except Exception as e:
|
||||
_record_dashboard_parse_metric("parse_failures", uuid, reason=str(e))
|
||||
logging.error(f"Fehler beim Verarbeiten des Dashboard-Payloads von {uuid}: {e}")
|
||||
return
|
||||
|
||||
# Screenshot-Handling
|
||||
if topic.startswith("infoscreen/") and topic.endswith("/screenshot"):
|
||||
uuid = topic.split("/")[1]
|
||||
handle_screenshot(uuid, msg.payload)
|
||||
return
|
||||
|
||||
# Heartbeat-Handling
|
||||
if topic.startswith("infoscreen/") and topic.endswith("/heartbeat"):
|
||||
uuid = topic.split("/")[1]
|
||||
try:
|
||||
# Parse payload to get optional health data
|
||||
payload_data = json.loads(msg.payload.decode())
|
||||
except (json.JSONDecodeError, UnicodeDecodeError):
|
||||
payload_data = {}
|
||||
|
||||
session = Session()
|
||||
client_obj = session.query(Client).filter_by(uuid=uuid).first()
|
||||
if client_obj:
|
||||
client_obj.last_alive = datetime.datetime.now(datetime.UTC)
|
||||
apply_monitoring_update(
|
||||
client_obj,
|
||||
last_seen=datetime.datetime.now(datetime.UTC),
|
||||
event_id=payload_data.get('current_event_id'),
|
||||
process_name=payload_data.get('current_process'),
|
||||
process_pid=payload_data.get('process_pid'),
|
||||
process_status=payload_data.get('process_status'),
|
||||
)
|
||||
session.commit()
|
||||
logging.info(
|
||||
f"Heartbeat von {uuid} empfangen, last_alive (UTC) aktualisiert.")
|
||||
logging.info(f"Heartbeat von {uuid} empfangen, last_alive (UTC) aktualisiert.")
|
||||
session.close()
|
||||
return
|
||||
|
||||
        # Log handling (ERROR, WARN, INFO)
        if topic.startswith("infoscreen/") and "/logs/" in topic:
            parts = topic.split("/")
            if len(parts) >= 4:
                uuid = parts[1]
                level_str = parts[3].upper()  # 'error', 'warn', 'info' -> 'ERROR', 'WARN', 'INFO'

                try:
                    payload_data = json.loads(msg.payload.decode())
                    message = payload_data.get('message', '')
                    timestamp_str = payload_data.get('timestamp')
                    context = payload_data.get('context', {})

                    # Parse timestamp or use current time
                    if timestamp_str:
                        try:
                            log_timestamp = datetime.datetime.fromisoformat(timestamp_str.replace('Z', '+00:00'))
                            if log_timestamp.tzinfo is None:
                                log_timestamp = log_timestamp.replace(tzinfo=datetime.UTC)
                        except ValueError:
                            log_timestamp = datetime.datetime.now(datetime.UTC)
                    else:
                        log_timestamp = datetime.datetime.now(datetime.UTC)

                    # Store in database
                    session = Session()
                    try:
                        log_level = LogLevel[level_str]
                        log_entry = ClientLog(
                            client_uuid=uuid,
                            timestamp=log_timestamp,
                            level=log_level,
                            message=message,
                            context=json.dumps(context) if context else None
                        )
                        session.add(log_entry)
                        session.commit()
                        logging.info(f"[{level_str}] {uuid}: {message}")
                    except Exception as e:
                        logging.error(f"Error saving log from {uuid}: {e}")
                        session.rollback()
                    finally:
                        session.close()

                except (json.JSONDecodeError, UnicodeDecodeError) as e:
                    logging.error(f"Could not parse log payload from {uuid}: {e}")
            return

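A matching client-side sketch for the log topic: the severity travels in the topic's fourth segment and the body carries the message, an ISO-8601 timestamp, and optional context. The helper name is illustrative; only the topic layout and field names come from the handler above.

```python
import json
import datetime

def build_log_message(client_uuid, level, message, context=None):
    """Build the topic and payload for the listener's log handler.
    The level becomes the fourth topic segment ('error', 'warn', 'info')."""
    topic = f"infoscreen/{client_uuid}/logs/{level.lower()}"
    payload = json.dumps({
        "message": message,
        # timezone.utc keeps the sketch portable; the listener also accepts 'Z' suffixes
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "context": context or {},
    })
    return topic, payload

topic, payload = build_log_message("abc-123", "ERROR", "playback failed", {"file": "a.mp4"})
print(topic)  # infoscreen/abc-123/logs/error
```

Unknown severities would raise `KeyError` in `LogLevel[level_str]` on the server side, so clients should stick to the enum values.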
        # Health handling
        if topic.startswith("infoscreen/") and topic.endswith("/health"):
            uuid = topic.split("/")[1]
            try:
                payload_data = json.loads(msg.payload.decode())

                session = Session()
                client_obj = session.query(Client).filter_by(uuid=uuid).first()
                if client_obj:
                    # Update expected state
                    expected = payload_data.get('expected_state', {})

                    # Update actual state
                    actual = payload_data.get('actual_state', {})
                    screen_health_status = infer_screen_health_status(payload_data)
                    apply_monitoring_update(
                        client_obj,
                        last_seen=datetime.datetime.now(datetime.UTC),
                        event_id=expected.get('event_id'),
                        process_name=actual.get('process'),
                        process_pid=actual.get('pid'),
                        process_status=actual.get('status'),
                        screen_health_status=screen_health_status,
                        last_screenshot_analyzed=parse_timestamp(
                            (payload_data.get('health_metrics') or {}).get('last_frame_update')),
                    )
                    session.commit()
                    logging.debug(f"Health update from {uuid}: {actual.get('process')} ({actual.get('status')})")
                session.close()

            except (json.JSONDecodeError, UnicodeDecodeError) as e:
                logging.error(f"Could not parse health payload from {uuid}: {e}")
            except Exception as e:
                logging.error(f"Error processing health from {uuid}: {e}")
            return

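The shape of a health payload this handler accepts is easiest to see as data. The key names (`expected_state`, `actual_state`, `health_metrics`) come from the handler above; the concrete values are made up for illustration.

```python
import json

health_payload = {
    "expected_state": {"event_id": 42},                       # what the scheduler assigned
    "actual_state": {"process": "impressive",                 # what is really running
                     "pid": 4123,
                     "status": "running"},
    "health_metrics": {"last_frame_update": "2026-03-30T10:15:41+00:00"},
}
topic = "infoscreen/abc-123/health"  # the client uuid is the second topic segment
body = json.dumps(health_payload)

parsed = json.loads(body)
print(parsed["actual_state"]["status"])  # running
```

All three top-level keys are optional on the wire; the handler substitutes empty dicts, so a partial report still updates `last_seen`.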
        # Discovery handling
        if topic == "infoscreen/discovery":
@@ -87,14 +584,14 @@ def on_message(client, userdata, msg):


def main():
    mqtt_client = mqtt.Client(protocol=mqtt.MQTTv311, callback_api_version=mqtt.CallbackAPIVersion.VERSION2)
    mqtt_client.on_message = on_message
    mqtt_client.on_connect = on_connect
    # Set an exponential reconnect delay to survive broker restarts
    mqtt_client.reconnect_delay_set(min_delay=1, max_delay=60)
    mqtt_client.connect("mqtt", 1883)
    mqtt_client.subscribe("infoscreen/discovery")
    mqtt_client.subscribe("infoscreen/+/heartbeat")

    logging.info("Listener started; waiting for MQTT connection and messages")
    mqtt_client.loop_forever()

@@ -2,3 +2,4 @@ paho-mqtt>=2.0
SQLAlchemy>=2.0
pymysql
python-dotenv
requests>=2.31.0

378 listener/test_listener_parser.py (Normal file)
@@ -0,0 +1,378 @@
"""
|
||||
Mixed-format integration tests for the dashboard payload parser.
|
||||
|
||||
Tests cover:
|
||||
- Legacy top-level payload is rejected (v2-only mode)
|
||||
- v2 grouped payload: periodic capture
|
||||
- v2 grouped payload: event_start capture
|
||||
- v2 grouped payload: event_stop capture
|
||||
- Classification into v2_success / parse_failures
|
||||
- Soft required-field validation (v2 only, never drops message)
|
||||
- Edge cases: missing image, missing status, non-dict payload
|
||||
"""
|
||||
|
||||
import sys
|
||||
import os
|
||||
import logging
|
||||
import importlib.util
|
||||
|
||||
# listener/ has no __init__.py — load the module directly from its file path
|
||||
os.environ.setdefault("DB_CONN", "sqlite:///:memory:") # prevent DB engine errors on import
|
||||
_LISTENER_PATH = os.path.join(os.path.dirname(__file__), "listener.py")
|
||||
_spec = importlib.util.spec_from_file_location("listener_module", _LISTENER_PATH)
|
||||
_mod = importlib.util.module_from_spec(_spec)
|
||||
_spec.loader.exec_module(_mod)
|
||||
|
||||
_extract_dashboard_payload_fields = _mod._extract_dashboard_payload_fields
_classify_dashboard_payload = _mod._classify_dashboard_payload
_validate_v2_required_fields = _mod._validate_v2_required_fields
_normalize_screenshot_type = _mod._normalize_screenshot_type
DASHBOARD_PARSE_METRICS = _mod.DASHBOARD_PARSE_METRICS

# ---------------------------------------------------------------------------
# Fixtures
# ---------------------------------------------------------------------------

IMAGE_B64 = "aGVsbG8="  # base64("hello")

LEGACY_PAYLOAD = {
    "client_id": "uuid-legacy",
    "status": "alive",
    "screenshot": {
        "data": IMAGE_B64,
        "timestamp": "2026-03-30T10:00:00+00:00",
    },
    "screenshot_type": "periodic",
    "process_health": {
        "current_process": "impressive",
        "process_pid": 1234,
        "process_status": "running",
        "event_id": 42,
    },
}

def _make_v2(capture_type):
    return {
        "message": {
            "client_id": "uuid-v2",
            "status": "alive",
        },
        "content": {
            "screenshot": {
                "filename": "latest.jpg",
                "data": IMAGE_B64,
                "timestamp": "2026-03-30T10:15:41.123456+00:00",
                "size": 6,
            }
        },
        "runtime": {
            "system_info": {
                "hostname": "pi-display-01",
                "ip": "192.168.1.42",
                "uptime": 12345.0,
            },
            "process_health": {
                "event_id": "evt-7",
                "event_type": "presentation",
                "current_process": "impressive",
                "process_pid": 4123,
                "process_status": "running",
                "restart_count": 0,
            },
        },
        "metadata": {
            "schema_version": "2.0",
            "producer": "simclient",
            "published_at": "2026-03-30T10:15:42.004321+00:00",
            "capture": {
                "type": capture_type,
                "captured_at": "2026-03-30T10:15:41.123456+00:00",
                "age_s": 0.9,
                "triggered": capture_type != "periodic",
                "send_immediately": capture_type != "periodic",
            },
            "transport": {"qos": 0, "publisher": "simclient"},
        },
    }


V2_PERIODIC = _make_v2("periodic")
V2_EVT_START = _make_v2("event_start")
V2_EVT_STOP = _make_v2("event_stop")

# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------

def assert_eq(label, actual, expected):
    assert actual == expected, f"FAIL [{label}]: expected {expected!r}, got {actual!r}"

def assert_not_none(label, actual):
    assert actual is not None, f"FAIL [{label}]: expected non-None, got None"

def assert_none(label, actual):
    assert actual is None, f"FAIL [{label}]: expected None, got {actual!r}"

def assert_warns(label, fn, substring):
    """Assert that fn() emits a logging.WARNING containing substring."""
    records = []
    # `logging` has no handlers_collector; use the capturing handler defined below
    handler = _CapturingHandler(records)
    logger = logging.getLogger()
    logger.addHandler(handler)
    try:
        fn()
    finally:
        logger.removeHandler(handler)
    warnings = [r.getMessage() for r in records if r.levelno == logging.WARNING]
    assert any(substring in w for w in warnings), (
        f"FAIL [{label}]: no WARNING containing {substring!r} found in {warnings}"
    )


class _CapturingHandler(logging.Handler):
    def __init__(self, records):
        super().__init__()
        self._records = records

    def emit(self, record):
        self._records.append(record)


def capture_warnings(fn):
    """Run fn(), return list of WARNING message strings."""
    records = []
    handler = _CapturingHandler(records)
    logger = logging.getLogger()
    logger.addHandler(handler)
    try:
        fn()
    finally:
        logger.removeHandler(handler)
    return [r.getMessage() for r in records if r.levelno == logging.WARNING]

# ---------------------------------------------------------------------------
# Tests: _normalize_screenshot_type
# ---------------------------------------------------------------------------

def test_normalize_known_types():
    for t in ("periodic", "event_start", "event_stop"):
        assert_eq(f"normalize_{t}", _normalize_screenshot_type(t), t)
        assert_eq(f"normalize_{t}_upper", _normalize_screenshot_type(t.upper()), t)

def test_normalize_unknown_returns_none():
    assert_none("normalize_unknown", _normalize_screenshot_type("unknown"))
    assert_none("normalize_none", _normalize_screenshot_type(None))
    assert_none("normalize_empty", _normalize_screenshot_type(""))

# ---------------------------------------------------------------------------
# Tests: _classify_dashboard_payload
# ---------------------------------------------------------------------------

def test_classify_legacy():
    mode, ver = _classify_dashboard_payload(LEGACY_PAYLOAD)
    assert_eq("classify_legacy_mode", mode, "parse_failures")
    assert_none("classify_legacy_version", ver)

def test_classify_v2_periodic():
    mode, ver = _classify_dashboard_payload(V2_PERIODIC)
    assert_eq("classify_v2_periodic_mode", mode, "v2_success")
    assert_eq("classify_v2_periodic_version", ver, "2.0")

def test_classify_v2_event_start():
    mode, ver = _classify_dashboard_payload(V2_EVT_START)
    assert_eq("classify_v2_event_start_mode", mode, "v2_success")

def test_classify_v2_event_stop():
    mode, ver = _classify_dashboard_payload(V2_EVT_STOP)
    assert_eq("classify_v2_event_stop_mode", mode, "v2_success")

def test_classify_non_dict():
    mode, ver = _classify_dashboard_payload("not a dict")
    assert_eq("classify_non_dict", mode, "parse_failures")

def test_classify_empty_dict():
    mode, ver = _classify_dashboard_payload({})
    assert_eq("classify_empty_dict", mode, "parse_failures")

# ---------------------------------------------------------------------------
# Tests: _extract_dashboard_payload_fields — legacy payload rejected in v2-only mode
# ---------------------------------------------------------------------------

def test_legacy_image_not_extracted():
    r = _extract_dashboard_payload_fields(LEGACY_PAYLOAD)
    assert_none("legacy_image", r["image"])

def test_legacy_screenshot_type_missing():
    r = _extract_dashboard_payload_fields(LEGACY_PAYLOAD)
    assert_none("legacy_screenshot_type", r["screenshot_type"])

def test_legacy_status_missing():
    r = _extract_dashboard_payload_fields(LEGACY_PAYLOAD)
    assert_none("legacy_status", r["status"])

def test_legacy_process_health_empty():
    r = _extract_dashboard_payload_fields(LEGACY_PAYLOAD)
    assert_eq("legacy_process_health", r["process_health"], {})

def test_legacy_timestamp_missing():
    r = _extract_dashboard_payload_fields(LEGACY_PAYLOAD)
    assert_none("legacy_timestamp", r["timestamp"])

# ---------------------------------------------------------------------------
# Tests: _extract_dashboard_payload_fields — v2 periodic
# ---------------------------------------------------------------------------

def test_v2_periodic_image():
    r = _extract_dashboard_payload_fields(V2_PERIODIC)
    assert_eq("v2_periodic_image", r["image"], IMAGE_B64)

def test_v2_periodic_screenshot_type():
    r = _extract_dashboard_payload_fields(V2_PERIODIC)
    assert_eq("v2_periodic_type", r["screenshot_type"], "periodic")

def test_v2_periodic_status():
    r = _extract_dashboard_payload_fields(V2_PERIODIC)
    assert_eq("v2_periodic_status", r["status"], "alive")

def test_v2_periodic_process_health():
    r = _extract_dashboard_payload_fields(V2_PERIODIC)
    assert_eq("v2_periodic_pid", r["process_health"]["process_pid"], 4123)
    assert_eq("v2_periodic_process", r["process_health"]["current_process"], "impressive")

def test_v2_periodic_timestamp_prefers_screenshot():
    r = _extract_dashboard_payload_fields(V2_PERIODIC)
    # screenshot.timestamp must take precedence over capture.captured_at / published_at
    assert_eq("v2_periodic_ts", r["timestamp"], "2026-03-30T10:15:41.123456+00:00")

# ---------------------------------------------------------------------------
# Tests: _extract_dashboard_payload_fields — v2 event_start
# ---------------------------------------------------------------------------

def test_v2_event_start_type():
    r = _extract_dashboard_payload_fields(V2_EVT_START)
    assert_eq("v2_event_start_type", r["screenshot_type"], "event_start")

def test_v2_event_start_image():
    r = _extract_dashboard_payload_fields(V2_EVT_START)
    assert_eq("v2_event_start_image", r["image"], IMAGE_B64)

# ---------------------------------------------------------------------------
# Tests: _extract_dashboard_payload_fields — v2 event_stop
# ---------------------------------------------------------------------------

def test_v2_event_stop_type():
    r = _extract_dashboard_payload_fields(V2_EVT_STOP)
    assert_eq("v2_event_stop_type", r["screenshot_type"], "event_stop")

def test_v2_event_stop_image():
    r = _extract_dashboard_payload_fields(V2_EVT_STOP)
    assert_eq("v2_event_stop_image", r["image"], IMAGE_B64)

# ---------------------------------------------------------------------------
# Tests: _extract_dashboard_payload_fields — edge cases
# ---------------------------------------------------------------------------

def test_non_dict_returns_nulls():
    r = _extract_dashboard_payload_fields("bad")
    assert_none("non_dict_image", r["image"])
    assert_none("non_dict_type", r["screenshot_type"])
    assert_none("non_dict_status", r["status"])

def test_missing_image_returns_none():
    payload = {**V2_PERIODIC, "content": {"screenshot": {"timestamp": "2026-03-30T10:00:00+00:00"}}}
    r = _extract_dashboard_payload_fields(payload)
    assert_none("missing_image", r["image"])

def test_missing_process_health_returns_empty_dict():
    import copy
    payload = copy.deepcopy(V2_PERIODIC)
    del payload["runtime"]["process_health"]
    r = _extract_dashboard_payload_fields(payload)
    assert_eq("missing_ph", r["process_health"], {})

def test_timestamp_fallback_to_captured_at_when_no_screenshot_ts():
    import copy
    payload = copy.deepcopy(V2_PERIODIC)
    del payload["content"]["screenshot"]["timestamp"]
    r = _extract_dashboard_payload_fields(payload)
    assert_eq("ts_fallback_captured_at", r["timestamp"], "2026-03-30T10:15:41.123456+00:00")

def test_timestamp_fallback_to_published_at_when_no_capture_ts():
    import copy
    payload = copy.deepcopy(V2_PERIODIC)
    del payload["content"]["screenshot"]["timestamp"]
    del payload["metadata"]["capture"]["captured_at"]
    r = _extract_dashboard_payload_fields(payload)
    assert_eq("ts_fallback_published_at", r["timestamp"], "2026-03-30T10:15:42.004321+00:00")

# ---------------------------------------------------------------------------
# Tests: _validate_v2_required_fields (soft — never raises)
# ---------------------------------------------------------------------------

def test_v2_valid_payload_no_warnings():
    warnings = capture_warnings(lambda: _validate_v2_required_fields(V2_PERIODIC, "uuid-v2"))
    assert warnings == [], f"FAIL: unexpected warnings for valid payload: {warnings}"

def test_v2_missing_client_id_warns():
    import copy
    payload = copy.deepcopy(V2_PERIODIC)
    del payload["message"]["client_id"]
    warnings = capture_warnings(lambda: _validate_v2_required_fields(payload, "uuid-v2"))
    assert any("message.client_id" in w for w in warnings), f"FAIL: {warnings}"

def test_v2_missing_status_warns():
    import copy
    payload = copy.deepcopy(V2_PERIODIC)
    del payload["message"]["status"]
    warnings = capture_warnings(lambda: _validate_v2_required_fields(payload, "uuid-v2"))
    assert any("message.status" in w for w in warnings), f"FAIL: {warnings}"

def test_v2_missing_schema_version_warns():
    import copy
    payload = copy.deepcopy(V2_PERIODIC)
    del payload["metadata"]["schema_version"]
    warnings = capture_warnings(lambda: _validate_v2_required_fields(payload, "uuid-v2"))
    assert any("metadata.schema_version" in w for w in warnings), f"FAIL: {warnings}"

def test_v2_missing_capture_type_warns():
    import copy
    payload = copy.deepcopy(V2_PERIODIC)
    del payload["metadata"]["capture"]["type"]
    warnings = capture_warnings(lambda: _validate_v2_required_fields(payload, "uuid-v2"))
    assert any("metadata.capture.type" in w for w in warnings), f"FAIL: {warnings}"

def test_v2_multiple_missing_fields_all_reported():
    import copy
    payload = copy.deepcopy(V2_PERIODIC)
    del payload["message"]["client_id"]
    del payload["metadata"]["capture"]["type"]
    warnings = capture_warnings(lambda: _validate_v2_required_fields(payload, "uuid-v2"))
    assert len(warnings) == 1, f"FAIL: expected 1 combined warning, got {warnings}"
    assert "message.client_id" in warnings[0], f"FAIL: {warnings}"
    assert "metadata.capture.type" in warnings[0], f"FAIL: {warnings}"

# ---------------------------------------------------------------------------
# Runner
# ---------------------------------------------------------------------------

def run_all():
    tests = {k: v for k, v in globals().items() if k.startswith("test_") and callable(v)}
    passed = failed = 0
    for name, fn in sorted(tests.items()):
        try:
            fn()
            print(f"  PASS  {name}")
            passed += 1
        except AssertionError as e:
            print(f"  FAIL  {name}: {e}")
            failed += 1
        except Exception as e:
            print(f"  ERROR {name}: {type(e).__name__}: {e}")
            failed += 1
    print(f"\n{passed} passed, {failed} failed out of {passed + failed} tests")
    return failed == 0


if __name__ == "__main__":
    ok = run_all()
    sys.exit(0 if ok else 1)
@@ -10,6 +10,7 @@ Base = declarative_base()

class UserRole(enum.Enum):
    user = "user"
    editor = "editor"
    admin = "admin"
    superadmin = "superadmin"

@@ -20,6 +21,27 @@ class AcademicPeriodType(enum.Enum):
    trimester = "trimester"


class LogLevel(enum.Enum):
    ERROR = "ERROR"
    WARN = "WARN"
    INFO = "INFO"
    DEBUG = "DEBUG"


class ProcessStatus(enum.Enum):
    running = "running"
    crashed = "crashed"
    starting = "starting"
    stopped = "stopped"


class ScreenHealthStatus(enum.Enum):
    OK = "OK"
    BLACK = "BLACK"
    FROZEN = "FROZEN"
    UNKNOWN = "UNKNOWN"


class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True, autoincrement=True)
@@ -27,6 +49,13 @@ class User(Base):
    password_hash = Column(String(128), nullable=False)
    role = Column(Enum(UserRole), nullable=False, default=UserRole.user)
    is_active = Column(Boolean, default=True, nullable=False)
    last_login_at = Column(TIMESTAMP(timezone=True), nullable=True)
    last_password_change_at = Column(TIMESTAMP(timezone=True), nullable=True)
    last_failed_login_at = Column(TIMESTAMP(timezone=True), nullable=True)
    failed_login_attempts = Column(Integer, nullable=False, default=0, server_default="0")
    locked_until = Column(TIMESTAMP(timezone=True), nullable=True)
    deactivated_at = Column(TIMESTAMP(timezone=True), nullable=True)
    deactivated_by = Column(Integer, ForeignKey('users.id', ondelete='SET NULL'), nullable=True)
    created_at = Column(TIMESTAMP(timezone=True),
                        server_default=func.current_timestamp())
    updated_at = Column(TIMESTAMP(timezone=True), server_default=func.current_timestamp(
@@ -98,6 +127,31 @@ class Client(Base):
    is_active = Column(Boolean, default=True, nullable=False)
    group_id = Column(Integer, ForeignKey(
        'client_groups.id'), nullable=False, default=1)

    # Health monitoring fields
    current_event_id = Column(Integer, nullable=True)
    current_process = Column(String(50), nullable=True)  # 'vlc', 'chromium', 'pdf_viewer'
    process_status = Column(Enum(ProcessStatus), nullable=True)
    process_pid = Column(Integer, nullable=True)
    last_screenshot_analyzed = Column(TIMESTAMP(timezone=True), nullable=True)
    screen_health_status = Column(Enum(ScreenHealthStatus), nullable=True, server_default='UNKNOWN')
    last_screenshot_hash = Column(String(32), nullable=True)


class ClientLog(Base):
    __tablename__ = 'client_logs'
    id = Column(Integer, primary_key=True, autoincrement=True)
    client_uuid = Column(String(36), ForeignKey('clients.uuid', ondelete='CASCADE'), nullable=False, index=True)
    timestamp = Column(TIMESTAMP(timezone=True), nullable=False, index=True)
    level = Column(Enum(LogLevel), nullable=False, index=True)
    message = Column(Text, nullable=False)
    context = Column(Text, nullable=True)  # JSON stored as text
    created_at = Column(TIMESTAMP(timezone=True), server_default=func.current_timestamp(), nullable=False)

    __table_args__ = (
        Index('ix_client_logs_client_timestamp', 'client_uuid', 'timestamp'),
        Index('ix_client_logs_level_timestamp', 'level', 'timestamp'),
    )


class EventType(enum.Enum):
@@ -154,7 +208,10 @@ class Event(Base):
    autoplay = Column(Boolean, nullable=True)  # NEW
    loop = Column(Boolean, nullable=True)  # NEW
    volume = Column(Float, nullable=True)  # NEW
    muted = Column(Boolean, nullable=True)  # NEW: video mute
    slideshow_interval = Column(Integer, nullable=True)  # NEW
    page_progress = Column(Boolean, nullable=True)  # NEW: page progress
    auto_progress = Column(Boolean, nullable=True)  # NEW: presentation auto-progress
    # Recurrence fields
    recurrence_rule = Column(String(255), nullable=True, index=True)  # iCalendar RRULE string
    recurrence_end = Column(TIMESTAMP(timezone=True), nullable=True, index=True)  # When recurrence ends
@@ -284,3 +341,23 @@ class Conversion(Base):
        UniqueConstraint('source_event_media_id', 'target_format',
                         'file_hash', name='uq_conv_source_target_hash'),
    )


# --- SystemSetting: Flexible key-value store for system-wide configuration ---
class SystemSetting(Base):
    __tablename__ = 'system_settings'

    key = Column(String(100), primary_key=True, nullable=False)
    value = Column(Text, nullable=True)
    description = Column(String(255), nullable=True)
    updated_at = Column(TIMESTAMP(timezone=True),
                        server_default=func.current_timestamp(),
                        onupdate=func.current_timestamp())

    def to_dict(self):
        return {
            "key": self.key,
            "value": self.value,
            "description": self.description,
            "updated_at": self.updated_at.isoformat() if self.updated_at else None,
        }

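The two composite indexes on `client_logs` exist to serve the dashboard's typical queries: "logs for one client, ordered by time" and "all errors in a time range". The index column order makes the filter a prefix scan. A self-contained sqlite sketch of that query shape (table DDL simplified; only the index name and columns mirror the model above):

```python
import sqlite3

# Minimal mirror of client_logs plus the composite index from the model.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE client_logs (
    id INTEGER PRIMARY KEY,
    client_uuid TEXT NOT NULL,
    timestamp TEXT NOT NULL,
    level TEXT NOT NULL,
    message TEXT NOT NULL)""")
conn.execute("CREATE INDEX ix_client_logs_client_timestamp "
             "ON client_logs (client_uuid, timestamp)")
conn.executemany(
    "INSERT INTO client_logs (client_uuid, timestamp, level, message) VALUES (?, ?, ?, ?)",
    [("u1", "2026-03-30T10:00:00", "ERROR", "boom"),
     ("u1", "2026-03-30T11:00:00", "INFO", "ok"),
     ("u2", "2026-03-30T10:30:00", "WARN", "meh")])

# Equality on client_uuid + range on timestamp: exactly the index prefix.
rows = conn.execute(
    "SELECT level, message FROM client_logs "
    "WHERE client_uuid = ? AND timestamp >= ? ORDER BY timestamp",
    ("u1", "2026-03-30T09:00:00")).fetchall()
print(rows)  # [('ERROR', 'boom'), ('INFO', 'ok')]
```

The same shape applies to `ix_client_logs_level_timestamp` with `level = ?` as the leading equality filter.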
28 nginx.conf
@@ -9,6 +9,11 @@ http {
    server {
        listen 80;
        server_name _;
        # Allow larger uploads (match Flask MAX_CONTENT_LENGTH); adjust as needed
        client_max_body_size 1G;
        # Increase proxy timeouts for long uploads on slow connections
        proxy_read_timeout 3600s;
        proxy_send_timeout 3600s;

        # Forward /api/ and /screenshots/ to the API server
        location /api/ {
@@ -17,6 +22,29 @@ http {
        location /screenshots/ {
            proxy_pass http://infoscreen_api/screenshots/;
        }
        # Public direct serving (optional)
        location /files/ {
            alias /opt/infoscreen/server/media/;
            sendfile on;
            tcp_nopush on;
            types {
                video/mp4 mp4;
                video/webm webm;
                video/ogg ogg;
            }
            add_header Accept-Ranges bytes;
            add_header Cache-Control "public, max-age=3600";
        }

        # Internal location for X-Accel-Redirect (protected)
        location /internal_media/ {
            internal;
            alias /opt/infoscreen/server/media/;
            sendfile on;
            tcp_nopush on;
            add_header Accept-Ranges bytes;
            add_header Cache-Control "private, max-age=0, s-maxage=3600";
        }
        # Everything else goes to the frontend
        location / {
            proxy_pass http://dashboard;
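The `internal` location above only works if the API responds with an `X-Accel-Redirect` header: the app authorizes the request, then hands the actual file transfer back to nginx. A framework-agnostic sketch of that app-side half (the function name and authorization flag are illustrative; the `/internal_media/` prefix and the empty `Content-Type` convention come from how nginx resolves the redirect):

```python
def protected_media_response(relative_path, authorized):
    """Return (status, headers) for a protected media request.
    On success, nginx intercepts X-Accel-Redirect and serves the file
    from the internal /internal_media/ location itself."""
    if not authorized:
        return 403, {}
    headers = {
        "X-Accel-Redirect": f"/internal_media/{relative_path}",
        "Content-Type": "",  # empty so nginx picks the MIME type from the file
    }
    return 200, headers

status, headers = protected_media_response("videos/intro.mp4", authorized=True)
print(status, headers["X-Accel-Redirect"])  # 200 /internal_media/videos/intro.mp4
```

The response body stays empty; nginx streams the file with `sendfile`, so the Python worker is freed immediately after the auth check.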
46 rsync-to-samba.sh (Executable file)
@@ -0,0 +1,46 @@
#!/bin/bash
# Rsync to Samba share using permanent fstab mount
# Usage: ./rsync-to-samba.sh

set -euo pipefail

# Local source directory
SOURCE="./infoscreen_server_2025"

# Destination parent mount from fstab
DEST_PARENT="/mnt/nas_share"
DEST_SUBDIR="infoscreen_server_2025"
DEST_PATH="$DEST_PARENT/$DEST_SUBDIR"

# Exclude file (allows override via env)
EXCLUDE_FILE="${EXCLUDE_FILE:-exclude.txt}"

# Basic validations
if [ ! -d "$SOURCE" ]; then
    echo "Source directory not found: $SOURCE" >&2
    exit 1
fi

if [ ! -f "$EXCLUDE_FILE" ]; then
    echo "Exclude file not found: $EXCLUDE_FILE (expected in repo root)." >&2
    exit 1
fi

# Ensure the fstab-backed mount is active; don't unmount after sync
if ! mountpoint -q "$DEST_PARENT"; then
    echo "Mount point $DEST_PARENT is not mounted. Attempting to mount via fstab..."
    if ! sudo mount "$DEST_PARENT"; then
        echo "Failed to mount $DEST_PARENT. Check your /etc/fstab entry and /root/.nas-credentials." >&2
        exit 1
    fi
fi

# Ensure destination directory exists
mkdir -p "$DEST_PATH"

echo "Syncing files to $DEST_PATH ..."
rsync -avz --progress \
    --exclude-from="$EXCLUDE_FILE" \
    "$SOURCE/" "$DEST_PATH/"

echo "Sync completed successfully."
@@ -2,22 +2,46 @@
from dotenv import load_dotenv
import os
from datetime import datetime
import logging
from sqlalchemy.orm import sessionmaker, joinedload
from sqlalchemy import create_engine, or_, and_, text
from models.models import Event, EventMedia, EventException, SystemSetting
from dateutil.rrule import rrulestr
from urllib.request import Request, urlopen
from datetime import timezone

# Load .env only in development to mirror server/database.py behavior
if os.getenv("ENV", "development") == "development":
    # Expect .env at workspace root
    load_dotenv('/workspace/.env')

# DB URL from environment variable, with the same fallback as the server
DB_URL = os.environ.get("DB_CONN")
if not DB_URL:
    DB_USER = os.environ.get("DB_USER", "infoscreen_admin")
    DB_PASSWORD = os.environ.get("DB_PASSWORD", "KqtpM7wmNd&mFKs")
    DB_HOST = os.environ.get("DB_HOST", "db")
    DB_NAME = os.environ.get("DB_NAME", "infoscreen_by_taa")
    DB_URL = f"mysql+pymysql://{DB_USER}:{DB_PASSWORD}@{DB_HOST}/{DB_NAME}"

print(f"[Scheduler] Using DB_URL: {DB_URL}")
engine = create_engine(DB_URL)
# Proactive connectivity check to surface errors early
try:
    with engine.connect() as conn:
        conn.execute(text("SELECT 1"))
    print("[Scheduler] DB connectivity OK")
except Exception as db_exc:
    print(f"[Scheduler] DB connectivity FAILED: {db_exc}")
Session = sessionmaker(bind=engine)

# Base URL from .env for file URLs
API_BASE_URL = os.environ.get("API_BASE_URL", "http://server:8000")

# Cache conversion decisions per media to avoid repeated lookups/logs within the scheduler lifetime
_media_conversion_cache = {}  # media_id -> pdf_url or None
_media_decision_logged = set()  # media_id(s) already logged

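The recurrence expansion in get_active_events below widens `rrule.between()` backwards by the event's duration so that an occurrence which started before the query window, but is still running inside it, is not lost. The effect is easiest to see in isolation; the rule and dates here are made up for illustration:

```python
from datetime import datetime, timedelta, timezone
from dateutil.rrule import rrulestr

dtstart = datetime(2026, 3, 2, 9, 0, tzinfo=timezone.utc)  # Mondays at 09:00 UTC
duration = timedelta(hours=2)                              # each occurrence runs 09:00-11:00
rule = rrulestr("FREQ=WEEKLY;BYDAY=MO", dtstart=dtstart)

# Query window starts mid-occurrence: Monday 10:00-12:00
q_start = datetime(2026, 3, 9, 10, 0, tzinfo=timezone.utc)
q_end = datetime(2026, 3, 9, 12, 0, tzinfo=timezone.utc)

# Naive expansion misses the occurrence that began at 09:00
naive = rule.between(q_start, q_end, inc=True)

# Lookback expansion finds it, then keeps only still-running occurrences
with_lookback = [s for s in rule.between(q_start - duration, q_end, inc=True)
                 if s + duration > q_start]

print(len(naive), len(with_lookback))  # 0 1
```

Without the lookback, any query window that begins after an occurrence's start time silently drops that occurrence, which is exactly the bug class the duration-based `lookback_start` in the scheduler guards against.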
def get_active_events(start: datetime, end: datetime, group_id: int = None):
|
||||
session = Session()
|
||||
@@ -28,21 +52,83 @@ def get_active_events(start: datetime, end: datetime, group_id: int = None):
|
||||
).filter(Event.is_active == True)
|
||||
|
||||
if start and end:
|
||||
query = query.filter(Event.start < end, Event.end > start)
|
||||
# Include:
|
||||
# 1) Non-recurring events that overlap [start, end]
# 2) Recurring events whose recurrence window intersects [start, end]
# We consider dtstart (Event.start) <= end and (recurrence_end is NULL or >= start)
non_recurring_overlap = and_(
    Event.recurrence_rule == None,
    Event.start < end,
    Event.end > start,
)
recurring_window = and_(
    Event.recurrence_rule != None,
    Event.start <= end,
    or_(Event.recurrence_end == None, Event.recurrence_end >= start),
)
query = query.filter(or_(non_recurring_overlap, recurring_window))
if group_id:
    query = query.filter(Event.group_id == group_id)

# Log base event count before expansion
try:
    base_count = query.count()
    # Additional diagnostics: split counts
    non_rec_q = session.query(Event.id).filter(Event.is_active == True)
    rec_q = session.query(Event.id).filter(Event.is_active == True)
    if start and end:
        non_rec_q = non_rec_q.filter(non_recurring_overlap)
        rec_q = rec_q.filter(recurring_window)
    if group_id:
        non_rec_q = non_rec_q.filter(Event.group_id == group_id)
        rec_q = rec_q.filter(Event.group_id == group_id)
    non_rec_count = non_rec_q.count()
    rec_count = rec_q.count()
    logging.debug(f"[Scheduler] Base events total={base_count} non_recurring_overlap={non_rec_count} recurring_window={rec_count}")
except Exception:
    base_count = None
events = query.all()
logging.debug(f"[Scheduler] Base events fetched: {len(events)} (count={base_count})")
if len(events) == 0:
    # Quick probe: are there any active events at all?
    try:
        any_active = session.query(Event).filter(Event.is_active == True).count()
        logging.info(f"[Scheduler] Active events in DB (any group, any time): {any_active}")
    except Exception as e:
        logging.warning(f"[Scheduler] Could not count active events: {e}")

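The two predicates above can be checked in isolation. A minimal stdlib sketch (hypothetical datetimes, no SQLAlchemy) of the same half-open overlap test and recurrence-window test:

```python
from datetime import datetime, timedelta, timezone

def overlaps(ev_start, ev_end, win_start, win_end):
    """Half-open overlap: Event.start < window end AND Event.end > window start."""
    return ev_start < win_end and ev_end > win_start

def recurring_may_hit(dtstart, recurrence_end, win_start, win_end):
    """Recurring candidate: dtstart <= window end AND (no recurrence_end or it is >= window start)."""
    return dtstart <= win_end and (recurrence_end is None or recurrence_end >= win_start)

now = datetime(2025, 1, 1, tzinfo=timezone.utc)
win = (now, now + timedelta(days=7))
# An event that ended exactly at the window start does NOT overlap (strict inequality)
assert not overlaps(now - timedelta(hours=2), now, *win)
# An event straddling the window start does overlap
assert overlaps(now - timedelta(hours=1), now + timedelta(hours=1), *win)
# A series whose recurrence_end passed before the window is excluded
assert not recurring_may_hit(now - timedelta(days=30), now - timedelta(days=1), *win)
```

Using strict `<`/`>` for the overlap keeps back-to-back events from being reported as concurrent.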
formatted_events = []
for event in events:
    # If event has RRULE, expand into instances within [start, end]
    if event.recurrence_rule:
        try:
            r = rrulestr(event.recurrence_rule, dtstart=event.start)
            # Ensure dtstart is timezone-aware (UTC if naive)
            dtstart = event.start
            if dtstart.tzinfo is None:
                dtstart = dtstart.replace(tzinfo=timezone.utc)

            r = rrulestr(event.recurrence_rule, dtstart=dtstart)

            # Ensure query bounds are timezone-aware
            query_start = start if start.tzinfo else start.replace(tzinfo=timezone.utc)
            query_end = end if end.tzinfo else end.replace(tzinfo=timezone.utc)

            # Clamp by recurrence_end if present
            if getattr(event, "recurrence_end", None):
                rec_end = event.recurrence_end
                if rec_end and rec_end.tzinfo is None:
                    rec_end = rec_end.replace(tzinfo=timezone.utc)
                if rec_end and rec_end < query_end:
                    query_end = rec_end

            # iterate occurrences within range
            occ_starts = r.between(start, end, inc=True)
            # Use a lookback equal to the event's duration to catch occurrences that started
            # before query_start but are still running within the window.
            duration = (event.end - event.start) if (event.end and event.start) else None
            lookback_start = query_start
            if duration:
                lookback_start = query_start - duration
            occ_starts = r.between(lookback_start, query_end, inc=True)
            for occ_start in occ_starts:
                occ_end = (occ_start + duration) if duration else occ_start
                # Apply exceptions
@@ -57,13 +143,22 @@ def get_active_events(start: datetime, end: datetime, group_id: int = None):
                    occ_start = exc.override_start
                if exc.override_end:
                    occ_end = exc.override_end
                # Filter out instances that do not overlap [start, end]
                if not (occ_start < end and occ_end > start):
                    continue
                inst = format_event_with_media(event)
                # Apply overrides to title/description if provided
                if exc and exc.override_title:
                    inst["title"] = exc.override_title
                if exc and exc.override_description:
                    inst["description"] = exc.override_description
                inst["start"] = occ_start.isoformat()
                inst["end"] = occ_end.isoformat()
                inst["occurrence_of_id"] = event.id
                formatted_events.append(inst)
        except Exception:
        except Exception as e:
            # On parse error, fall back to single event formatting
            logging.warning(f"Failed to parse recurrence rule for event {event.id}: {e}")
            formatted_events.append(format_event_with_media(event))
    else:
        formatted_events.append(format_event_with_media(event))
@@ -73,6 +168,22 @@ def get_active_events(start: datetime, end: datetime, group_id: int = None):
    session.close()

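The duration lookback above can be illustrated without dateutil or a database. A hand-rolled daily recurrence (an assumption for the demo, not the repo's `rrulestr` path) shows why candidates must be generated from `query_start - duration`: an occurrence that began before the window but is still running must be kept.

```python
from datetime import datetime, timedelta, timezone

def expand_daily(dtstart, duration, query_start, query_end):
    """Expand a daily recurrence into (start, end) instances overlapping the window.

    Candidate starts are generated from query_start - duration (the lookback),
    so an occurrence still running at query_start is not missed.
    """
    instances = []
    lookback_start = query_start - duration
    # First candidate on or after lookback_start, aligned to dtstart's daily grid
    days = max(0, (lookback_start - dtstart).days)
    occ = dtstart + timedelta(days=days)
    while occ < lookback_start:
        occ += timedelta(days=1)
    while occ <= query_end:
        occ_end = occ + duration
        if occ < query_end and occ_end > query_start:  # same overlap test as above
            instances.append((occ, occ_end))
        occ += timedelta(days=1)
    return instances

base = datetime(2025, 1, 1, 23, 0, tzinfo=timezone.utc)   # daily at 23:00, 2h long
win_start = datetime(2025, 1, 10, 0, 0, tzinfo=timezone.utc)
win_end = datetime(2025, 1, 11, 0, 0, tzinfo=timezone.utc)
inst = expand_daily(base, timedelta(hours=2), win_start, win_end)
# The Jan 9 23:00-01:00 occurrence straddles the window start and is kept
assert inst[0][0] == datetime(2025, 1, 9, 23, 0, tzinfo=timezone.utc)
```

Without the lookback, the Jan 9 occurrence would be silently dropped even though it is on screen at the window start.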
def get_system_setting_value(key: str, default: str | None = None) -> str | None:
    """Fetch a system setting value by key from DB.

    Returns the setting's string value or the provided default if missing.
    """
    session = Session()
    try:
        setting = session.query(SystemSetting).filter_by(key=key).first()
        return setting.value if setting else default
    except Exception as e:
        logging.debug(f"[Scheduler] Failed to read system setting '{key}': {e}")
        return default
    finally:
        session.close()


def format_event_with_media(event):
    """Transform Event + EventMedia into client-expected format"""
    event_dict = {
@@ -81,13 +192,13 @@ def format_event_with_media(event):
        "start": str(event.start),
        "end": str(event.end),
        "group_id": event.group_id,
        "event_type": event.event_type.value if event.event_type else None,
        # Carry recurrence metadata for consumers if needed
        "recurrence_rule": getattr(event, "recurrence_rule", None),
        "recurrence_end": (event.recurrence_end.isoformat() if getattr(event, "recurrence_end", None) else None),
    }

    # Now you can directly access event.event_media
    import logging
    if event.event_media:
        media = event.event_media

@@ -96,31 +207,36 @@ def format_event_with_media(event):
            "type": "slideshow",
            "files": [],
            "slide_interval": event.slideshow_interval or 5000,
            "auto_advance": True
            "auto_advance": True,
            "page_progress": getattr(event, "page_progress", True),
            "auto_progress": getattr(event, "auto_progress", True)
        }

        # Debug: log media_type
        logging.debug(
            f"[Scheduler] EventMedia id={media.id} media_type={getattr(media.media_type, 'value', str(media.media_type))}")
        # Avoid per-call media-type debug to reduce log noise

        # Check for PDF conversion for ppt/pptx/odp
        from sqlalchemy.orm import scoped_session
        from models.models import Conversion, ConversionStatus
        session = scoped_session(Session)
        pdf_url = None
        if getattr(media.media_type, 'value', str(media.media_type)) in ("ppt", "pptx", "odp"):
            conversion = session.query(Conversion).filter_by(
                source_event_media_id=media.id,
                target_format="pdf",
                status=ConversionStatus.ready
            ).order_by(Conversion.completed_at.desc()).first()
            logging.debug(
                f"[Scheduler] Conversion lookup for media_id={media.id}: found={bool(conversion)}, path={getattr(conversion, 'target_path', None) if conversion else None}")
            if conversion and conversion.target_path:
                # Serve via /api/files/converted/<path>
                pdf_url = f"{API_BASE_URL}/api/files/converted/{conversion.target_path}"
        session.remove()
        # Decide file URL with caching to avoid repeated DB lookups/logs
        pdf_url = _media_conversion_cache.get(media.id, None)

        if pdf_url is None and getattr(media.media_type, 'value', str(media.media_type)) in ("ppt", "pptx", "odp"):
            from sqlalchemy.orm import scoped_session
            from models.models import Conversion, ConversionStatus
            session = scoped_session(Session)
            try:
                conversion = session.query(Conversion).filter_by(
                    source_event_media_id=media.id,
                    target_format="pdf",
                    status=ConversionStatus.ready
                ).order_by(Conversion.completed_at.desc()).first()
                logging.debug(
                    f"[Scheduler] Conversion lookup for media_id={media.id}: found={bool(conversion)}, path={getattr(conversion, 'target_path', None) if conversion else None}")
                if conversion and conversion.target_path:
                    pdf_url = f"{API_BASE_URL}/api/files/converted/{conversion.target_path}"
            finally:
                session.remove()
            # Cache the decision (even if None) to avoid repeated lookups in the same run
            _media_conversion_cache[media.id] = pdf_url

        # Build file entry and log decision only once per media
        if pdf_url:
            filename = os.path.basename(pdf_url)
            event_dict["presentation"]["files"].append({
@@ -129,8 +245,10 @@ def format_event_with_media(event):
                "checksum": None,
                "size": None
            })
            logging.info(
                f"[Scheduler] Using converted PDF for event_media_id={media.id}: {pdf_url}")
            if media.id not in _media_decision_logged:
                logging.debug(
                    f"[Scheduler] Using converted PDF for event_media_id={media.id}: {pdf_url}")
                _media_decision_logged.add(media.id)
        elif media.file_path:
            filename = os.path.basename(media.file_path)
            event_dict["presentation"]["files"].append({
@@ -139,9 +257,73 @@ def format_event_with_media(event):
                "checksum": None,
                "size": None
            })
            logging.info(
                f"[Scheduler] Using original file for event_media_id={media.id}: (unknown)")
            if media.id not in _media_decision_logged:
                logging.debug(
                    f"[Scheduler] Using original file for event_media_id={media.id}: (unknown)")
                _media_decision_logged.add(media.id)

    # Add other event types...
    # Handle website and webuntis events (both display a website)
    elif event.event_type.value in ("website", "webuntis"):
        event_dict["website"] = {
            "type": "browser",
            "url": media.url if media.url else None
        }
        if media.id not in _media_decision_logged:
            logging.debug(
                f"[Scheduler] Using website URL for event_media_id={media.id} (type={event.event_type.value}): {media.url}")
            _media_decision_logged.add(media.id)

    # Handle video events
    elif event.event_type.value == "video":
        filename = os.path.basename(media.file_path) if media.file_path else "video"
        # Use streaming endpoint for better video playback support
        stream_url = f"{API_BASE_URL}/api/eventmedia/stream/{media.id}/(unknown)"

        # Best-effort: probe the streaming endpoint for cheap metadata (HEAD request)
        mime_type = None
        size = None
        accept_ranges = False
        try:
            req = Request(stream_url, method='HEAD')
            with urlopen(req, timeout=2) as resp:
                # getheader returns None if missing
                mime_type = resp.getheader('Content-Type')
                length = resp.getheader('Content-Length')
                if length:
                    try:
                        size = int(length)
                    except Exception:
                        size = None
                accept_ranges = (resp.getheader('Accept-Ranges') or '').lower() == 'bytes'
        except Exception as e:
            # Don't fail the scheduler for probe errors; log once per media
            if media.id not in _media_decision_logged:
                logging.debug(f"[Scheduler] HEAD probe for media_id={media.id} failed: {e}")

        event_dict["video"] = {
            "type": "media",
            "url": stream_url,
            "autoplay": getattr(event, "autoplay", True),
            "loop": getattr(event, "loop", False),
            "volume": getattr(event, "volume", 0.8),
            "muted": getattr(event, "muted", False),
            # Best-effort metadata to help clients decide how to stream
            "mime_type": mime_type,
            "size": size,
            "accept_ranges": accept_ranges,
            # Optional richer info (may be null if not available): duration (seconds), resolution, bitrate
            "duration": None,
            "resolution": None,
            "bitrate": None,
            "qualities": [],
            "thumbnails": [],
            "checksum": None,
        }
        if media.id not in _media_decision_logged:
            logging.debug(
                f"[Scheduler] Using video streaming URL for event_media_id={media.id}: (unknown)")
            _media_decision_logged.add(media.id)

    # Add other event types (message, etc.) here as needed...

    return event_dict

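The HEAD-probe pattern above (cheap metadata, tight timeout, failure never propagates) can be exercised against a throwaway local server; the handler below is a stand-in for the streaming endpoint, not the repo's API:

```python
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import Request, urlopen

class _Handler(BaseHTTPRequestHandler):
    """Tiny stand-in for the /api/eventmedia/stream endpoint."""
    def do_HEAD(self):
        self.send_response(200)
        self.send_header('Content-Type', 'video/mp4')
        self.send_header('Content-Length', '1048576')
        self.send_header('Accept-Ranges', 'bytes')
        self.end_headers()
    def log_message(self, *args):  # silence per-request logging
        pass

server = ThreadingHTTPServer(('127.0.0.1', 0), _Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_address[1]}/probe"
req = Request(url, method='HEAD')
with urlopen(req, timeout=2) as resp:
    mime_type = resp.getheader('Content-Type')
    size = int(resp.getheader('Content-Length') or 0)
    accept_ranges = (resp.getheader('Accept-Ranges') or '').lower() == 'bytes'

server.shutdown()
```

A HEAD request transfers headers only, so the probe stays cheap even for large video files; `Accept-Ranges: bytes` tells the client that seek-by-range playback will work.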
@@ -2,25 +2,26 @@

import os
import logging
from .db_utils import get_active_events
from .db_utils import get_active_events, get_system_setting_value
import paho.mqtt.client as mqtt
import json
import datetime
import time

# Logging configuration
ENV = os.getenv("ENV", "development")
LOG_LEVEL = os.getenv("LOG_LEVEL", "DEBUG" if ENV == "development" else "INFO")
from logging.handlers import RotatingFileHandler
LOG_PATH = os.path.join(os.path.dirname(__file__), "scheduler.log")
os.makedirs(os.path.dirname(LOG_PATH), exist_ok=True)
log_handlers = []
if ENV == "production":
    from logging.handlers import RotatingFileHandler
    log_handlers.append(RotatingFileHandler(
        LOG_PATH, maxBytes=2*1024*1024, backupCount=5, encoding="utf-8"))
else:
    log_handlers.append(logging.FileHandler(LOG_PATH, encoding="utf-8"))
if os.getenv("DEBUG_MODE", "1" if ENV == "development" else "0") in ("1", "true", "True"):
LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO")
log_handlers = [
    RotatingFileHandler(
        LOG_PATH,
        maxBytes=10*1024*1024,  # 10 MB
        backupCount=2,  # 1 current + 2 backups = 3 files total
        encoding="utf-8"
    )
]
if os.getenv("DEBUG_MODE", "0") in ("1", "true", "True"):
    log_handlers.append(logging.StreamHandler())
logging.basicConfig(
    level=getattr(logging, LOG_LEVEL.upper(), logging.INFO),
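The rotation settings above cap scheduler.log at roughly 30 MB on disk (10 MB current file plus two backups). A self-contained sketch of the same handler setup, using a temp directory and a tiny `maxBytes` so the rollover is observable:

```python
import logging
import os
import tempfile
from logging.handlers import RotatingFileHandler

log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "scheduler.log")

handler = RotatingFileHandler(
    log_path,
    maxBytes=1024,   # tiny size so rotation triggers in the demo
    backupCount=2,   # the scheduler uses 10 MB and the same backupCount
    encoding="utf-8",
)
logger = logging.getLogger("rotation_demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

for i in range(200):
    logger.info("message %d with some padding to force a rollover", i)

files = sorted(f for f in os.listdir(log_dir) if f.startswith("scheduler.log"))
# backupCount=2 means at most scheduler.log, scheduler.log.1, scheduler.log.2
assert len(files) <= 3 and "scheduler.log" in files
```

Old data beyond `backupCount` files is discarded, so disk usage stays bounded no matter how chatty the scheduler gets.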
@@ -36,7 +37,14 @@ def main():

    POLL_INTERVAL = 30  # seconds; recommended for infrequent changes
    # 0 = off; e.g. 600 for every 10 minutes
    REFRESH_SECONDS = int(os.getenv("REFRESH_SECONDS", "0"))
    # initial value from DB or fallback to env
    try:
        db_val = get_system_setting_value("refresh_seconds", None)
        REFRESH_SECONDS = int(db_val) if db_val is not None else int(os.getenv("REFRESH_SECONDS", "0"))
    except Exception:
        REFRESH_SECONDS = int(os.getenv("REFRESH_SECONDS", "0"))
    # Configurable time window in days (default: 7)
    WINDOW_DAYS = int(os.getenv("EVENTS_WINDOW_DAYS", "7"))
    last_payloads = {}  # group_id -> payload
    last_published_at = {}  # group_id -> epoch seconds

@@ -55,18 +63,55 @@ def main():

    while True:
        now = datetime.datetime.now(datetime.timezone.utc)
        # refresh interval can change at runtime (superadmin settings)
        try:
            db_val = get_system_setting_value("refresh_seconds", None)
            REFRESH_SECONDS = int(db_val) if db_val is not None else REFRESH_SECONDS
        except Exception:
            pass
        # Query window: next N days to capture upcoming events and recurring instances
        # Clients need to know what's coming, not just what's active right now
        end_window = now + datetime.timedelta(days=WINDOW_DAYS)
        logging.debug(f"Fetching events window start={now.isoformat()} end={end_window.isoformat()} (days={WINDOW_DAYS})")
        # Fetch all active events (already formatted dictionaries)
        events = get_active_events(now, now)
        try:
            events = get_active_events(now, end_window)
            logging.debug(f"Fetched {len(events)} events for publishing window")
        except Exception as e:
            logging.exception(f"Error while fetching events: {e}")
            events = []

        # Group events by group_id
        groups = {}

        # Filter: Only include events active at 'now'
        active_events = []
        for event in events:
            start = event.get("start")
            end = event.get("end")
            # Parse ISO strings to datetime
            try:
                start_dt = datetime.datetime.fromisoformat(start)
                end_dt = datetime.datetime.fromisoformat(end)
                # Make both tz-aware (UTC) if naive
                if start_dt.tzinfo is None:
                    start_dt = start_dt.replace(tzinfo=datetime.timezone.utc)
                if end_dt.tzinfo is None:
                    end_dt = end_dt.replace(tzinfo=datetime.timezone.utc)
            except Exception:
                continue
            if start_dt <= now < end_dt:
                active_events.append(event)

        # Group only the active events by group_id
        groups = {}
        for event in active_events:
            gid = event.get("group_id")
            if gid not in groups:
                groups[gid] = []
            # Event is already a dictionary in the desired format
            groups[gid].append(event)

        if not groups:
            logging.debug("No events grouped for any client group in current window")

        # Publish each group's event list as a retained message, only on change
        for gid, event_list in groups.items():
            # stable ordering to avoid unnecessary publishes
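The activity filter above boils down to a half-open check `start <= now < end` with naive timestamps coerced to UTC. A standalone sketch of the same logic:

```python
import datetime

def is_active_at(event, now):
    """Half-open check start <= now < end; naive timestamps are assumed UTC."""
    try:
        start_dt = datetime.datetime.fromisoformat(event["start"])
        end_dt = datetime.datetime.fromisoformat(event["end"])
    except (KeyError, ValueError):
        return False  # unparseable events are skipped, as in the loop above
    if start_dt.tzinfo is None:
        start_dt = start_dt.replace(tzinfo=datetime.timezone.utc)
    if end_dt.tzinfo is None:
        end_dt = end_dt.replace(tzinfo=datetime.timezone.utc)
    return start_dt <= now < end_dt

now = datetime.datetime(2025, 6, 1, 12, 0, tzinfo=datetime.timezone.utc)
running = {"start": "2025-06-01T11:00:00", "end": "2025-06-01T13:00:00"}  # naive -> UTC
upcoming = {"start": "2025-06-02T09:00:00+00:00", "end": "2025-06-02T10:00:00+00:00"}
assert is_active_at(running, now)
assert not is_active_at(upcoming, now)
```

The half-open interval means an event ending exactly at `now` is no longer active, so back-to-back events never overlap on screen.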
@@ -87,23 +132,23 @@ def main():
                logging.error(
                    f"Fehler beim Publish für Gruppe {gid}: {mqtt.error_string(result.rc)}")
            else:
                logging.info(f"Events für Gruppe {gid} gesendet")
                logging.info(f"Events für Gruppe {gid} gesendet (count={len(event_list)})")
                last_payloads[gid] = payload
                last_published_at[gid] = time.time()

        # Remove groups that no longer exist (send an empty retained message)
        for gid in list(last_payloads.keys()):
            if gid not in groups:
                topic = f"infoscreen/events/{gid}"
                result = client.publish(topic, payload="[]", retain=True)
                if result.rc != mqtt.MQTT_ERR_SUCCESS:
                    logging.error(
                        f"Fehler beim Entfernen für Gruppe {gid}: {mqtt.error_string(result.rc)}")
                else:
                    logging.info(
                        f"Events für Gruppe {gid} entfernt (leere retained Message gesendet)")
                    del last_payloads[gid]
                    last_published_at.pop(gid, None)
        inactive_gids = set(last_payloads.keys()) - set(groups.keys())
        for gid in inactive_gids:
            topic = f"infoscreen/events/{gid}"
            result = client.publish(topic, payload="[]", retain=True)
            if result.rc != mqtt.MQTT_ERR_SUCCESS:
                logging.error(
                    f"Fehler beim Entfernen für Gruppe {gid}: {mqtt.error_string(result.rc)}")
            else:
                logging.info(
                    f"Events für Gruppe {gid} entfernt (leere retained Message gesendet)")
                del last_payloads[gid]
                last_published_at.pop(gid, None)

        time.sleep(POLL_INTERVAL)

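The publish/cleanup cycle can be sketched end to end with a stub client; `StubMQTTClient` and `publish_groups` below are illustrative stand-ins, not the repo's API or paho's:

```python
import json

class StubMQTTClient:
    """Minimal stand-in for a paho client; records the last retained publish per topic."""
    MQTT_ERR_SUCCESS = 0

    def __init__(self):
        self.published = {}

    def publish(self, topic, payload, retain=False):
        self.published[topic] = (payload, retain)
        class Result:
            rc = 0
        return Result()

def publish_groups(client, groups, last_payloads):
    """Publish changed groups, then overwrite retained topics for vanished groups."""
    for gid, events in groups.items():
        payload = json.dumps(events, sort_keys=True)  # stable ordering avoids spurious publishes
        if last_payloads.get(gid) != payload:
            client.publish(f"infoscreen/events/{gid}", payload, retain=True)
            last_payloads[gid] = payload
    for gid in set(last_payloads) - set(groups):
        # a retained empty list replaces the stale event list for late subscribers
        client.publish(f"infoscreen/events/{gid}", "[]", retain=True)
        del last_payloads[gid]

client = StubMQTTClient()
state = {}
publish_groups(client, {1: [{"id": 10}]}, state)
publish_groups(client, {}, state)  # group 1 disappeared -> retained "[]" sent
assert client.published["infoscreen/events/1"][0] == "[]"
assert state == {}
```

Retained messages mean a client that boots mid-cycle immediately receives the last known event list for its group without waiting for the next publish.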
@@ -9,7 +9,7 @@ FROM python:3.13-slim
# connects (per devcontainer.json). They do no harm, though.
ARG USER_ID=1000
ARG GROUP_ID=1000
RUN apt-get update && apt-get install -y --no-install-recommends locales curl git \
RUN apt-get update && apt-get install -y --no-install-recommends locales curl git docker.io \
    && groupadd -g ${GROUP_ID} infoscreen_taa \
    && useradd -u ${USER_ID} -g ${GROUP_ID} --shell /bin/bash --create-home infoscreen_taa \
    && sed -i 's/# de_DE.UTF-8 UTF-8/de_DE.UTF-8 UTF-8/' /etc/locale.gen \

@@ -0,0 +1,37 @@
"""add_system_settings_table

Revision ID: 045626c9719a
Revises: 488ce87c28ae
Create Date: 2025-10-16 18:38:47.415244

"""
from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa


# revision identifiers, used by Alembic.
revision: str = '045626c9719a'
down_revision: Union[str, None] = '488ce87c28ae'
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    """Upgrade schema."""
    op.create_table(
        'system_settings',
        sa.Column('key', sa.String(100), nullable=False),
        sa.Column('value', sa.Text(), nullable=True),
        sa.Column('description', sa.String(255), nullable=True),
        sa.Column('updated_at', sa.TIMESTAMP(timezone=True),
                  server_default=sa.func.current_timestamp(),
                  nullable=True),
        sa.PrimaryKeyConstraint('key')
    )


def downgrade() -> None:
    """Downgrade schema."""
    op.drop_table('system_settings')
server/alembic/versions/21226a449037_add_muted_to_events.py (new file, 30 lines)
@@ -0,0 +1,30 @@
"""add_muted_to_events

Revision ID: 21226a449037
Revises: 910951fd300a
Create Date: 2025-11-05 17:24:29.168692

"""
from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa


# revision identifiers, used by Alembic.
revision: str = '21226a449037'
down_revision: Union[str, None] = '910951fd300a'
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    """Upgrade schema."""
    # Add muted column to events table for video mute control
    op.add_column('events', sa.Column('muted', sa.Boolean(), nullable=True))


def downgrade() -> None:
    """Downgrade schema."""
    # Remove muted column
    op.drop_column('events', 'muted')
@@ -0,0 +1,28 @@
"""Merge all heads before user role migration

Revision ID: 488ce87c28ae
Revises: 12ab34cd56ef, 15c357c0cf31, add_userrole_editor_and_column
Create Date: 2025-10-15 05:46:17.984934

"""
from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa


# revision identifiers, used by Alembic.
revision: str = '488ce87c28ae'
down_revision: Union[str, Sequence[str], None] = ('12ab34cd56ef', '15c357c0cf31', 'add_userrole_editor_and_column')
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    """Upgrade schema."""
    pass


def downgrade() -> None:
    """Downgrade schema."""
    pass
@@ -0,0 +1,52 @@
"""add user audit fields

Revision ID: 4f0b8a3e5c20
Revises: 21226a449037
Create Date: 2025-12-29 00:00:00.000000

"""
from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision: str = '4f0b8a3e5c20'
down_revision: Union[str, None] = '21226a449037'
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    """Upgrade schema."""
    op.add_column('users', sa.Column('last_login_at', sa.TIMESTAMP(timezone=True), nullable=True))
    op.add_column('users', sa.Column('last_password_change_at', sa.TIMESTAMP(timezone=True), nullable=True))
    op.add_column('users', sa.Column('last_failed_login_at', sa.TIMESTAMP(timezone=True), nullable=True))
    op.add_column(
        'users',
        sa.Column('failed_login_attempts', sa.Integer(), nullable=False, server_default='0')
    )
    op.add_column('users', sa.Column('locked_until', sa.TIMESTAMP(timezone=True), nullable=True))
    op.add_column('users', sa.Column('deactivated_at', sa.TIMESTAMP(timezone=True), nullable=True))
    op.add_column('users', sa.Column('deactivated_by', sa.Integer(), nullable=True))
    op.create_foreign_key(
        'fk_users_deactivated_by_users',
        'users',
        'users',
        ['deactivated_by'],
        ['id'],
        ondelete='SET NULL',
    )
    # Optional: keep server_default for failed_login_attempts; remove if you prefer no default after backfill


def downgrade() -> None:
    """Downgrade schema."""
    op.drop_constraint('fk_users_deactivated_by_users', 'users', type_='foreignkey')
    op.drop_column('users', 'deactivated_by')
    op.drop_column('users', 'deactivated_at')
    op.drop_column('users', 'locked_until')
    op.drop_column('users', 'failed_login_attempts')
    op.drop_column('users', 'last_failed_login_at')
    op.drop_column('users', 'last_password_change_at')
    op.drop_column('users', 'last_login_at')
@@ -0,0 +1,34 @@
"""Add page_progress and auto_progress to Event

Revision ID: 910951fd300a
Revises: 045626c9719a
Create Date: 2025-10-18 11:59:25.224813

"""
from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa


# revision identifiers, used by Alembic.
revision: str = '910951fd300a'
down_revision: Union[str, None] = '045626c9719a'
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    """Upgrade schema."""
    # ### commands auto generated by Alembic - please adjust! ###
    op.add_column('events', sa.Column('page_progress', sa.Boolean(), nullable=True))
    op.add_column('events', sa.Column('auto_progress', sa.Boolean(), nullable=True))
    # ### end Alembic commands ###


def downgrade() -> None:
    """Downgrade schema."""
    # ### commands auto generated by Alembic - please adjust! ###
    op.drop_column('events', 'auto_progress')
    op.drop_column('events', 'page_progress')
    # ### end Alembic commands ###
server/alembic/versions/add_userrole_editor_and_column.py (new file, 40 lines)
@@ -0,0 +1,40 @@
"""
Add editor role to UserRole enum and ensure role column exists on users table
"""
from alembic import op
import sqlalchemy as sa
import enum

# revision identifiers, used by Alembic.
revision = 'add_userrole_editor_and_column'
down_revision = None  # Set this to the latest revision in your repo
branch_labels = None
depends_on = None

# Define the new enum including 'editor'
class userrole_enum(enum.Enum):
    user = "user"
    editor = "editor"
    admin = "admin"
    superadmin = "superadmin"

def upgrade():
    # MySQL: check if 'role' column exists
    conn = op.get_bind()
    insp = sa.inspect(conn)
    columns = [col['name'] for col in insp.get_columns('users')]
    if 'role' not in columns:
        with op.batch_alter_table('users') as batch_op:
            batch_op.add_column(sa.Column('role', sa.Enum('user', 'editor', 'admin', 'superadmin', name='userrole'), nullable=False, server_default='user'))
    else:
        # If the column exists, alter the ENUM to add 'editor' if not present
        # MySQL: ALTER TABLE users MODIFY COLUMN role ENUM(...)
        conn.execute(sa.text(
            "ALTER TABLE users MODIFY COLUMN role ENUM('user','editor','admin','superadmin') NOT NULL DEFAULT 'user'"
        ))

def downgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    with op.batch_alter_table('users') as batch_op:
        batch_op.drop_column('role')
    # ### end Alembic commands ###
@@ -0,0 +1,84 @@
"""add client monitoring tables and columns

Revision ID: c1d2e3f4g5h6
Revises: 4f0b8a3e5c20
Create Date: 2026-03-09 21:08:38.000000

"""
from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision = 'c1d2e3f4g5h6'
down_revision = '4f0b8a3e5c20'
branch_labels = None
depends_on = None


def upgrade():
    bind = op.get_bind()
    inspector = sa.inspect(bind)

    # 1. Add health monitoring columns to clients table (safe on rerun)
    existing_client_columns = {c['name'] for c in inspector.get_columns('clients')}
    if 'current_event_id' not in existing_client_columns:
        op.add_column('clients', sa.Column('current_event_id', sa.Integer(), nullable=True))
    if 'current_process' not in existing_client_columns:
        op.add_column('clients', sa.Column('current_process', sa.String(50), nullable=True))
    if 'process_status' not in existing_client_columns:
        op.add_column('clients', sa.Column('process_status', sa.Enum('running', 'crashed', 'starting', 'stopped', name='processstatus'), nullable=True))
    if 'process_pid' not in existing_client_columns:
        op.add_column('clients', sa.Column('process_pid', sa.Integer(), nullable=True))
    if 'last_screenshot_analyzed' not in existing_client_columns:
        op.add_column('clients', sa.Column('last_screenshot_analyzed', sa.TIMESTAMP(timezone=True), nullable=True))
    if 'screen_health_status' not in existing_client_columns:
        op.add_column('clients', sa.Column('screen_health_status', sa.Enum('OK', 'BLACK', 'FROZEN', 'UNKNOWN', name='screenhealthstatus'), nullable=True, server_default='UNKNOWN'))
    if 'last_screenshot_hash' not in existing_client_columns:
        op.add_column('clients', sa.Column('last_screenshot_hash', sa.String(32), nullable=True))

    # 2. Create client_logs table (safe on rerun)
    if not inspector.has_table('client_logs'):
        op.create_table('client_logs',
            sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
            sa.Column('client_uuid', sa.String(36), nullable=False),
            sa.Column('timestamp', sa.TIMESTAMP(timezone=True), nullable=False),
            sa.Column('level', sa.Enum('ERROR', 'WARN', 'INFO', 'DEBUG', name='loglevel'), nullable=False),
            sa.Column('message', sa.Text(), nullable=False),
            sa.Column('context', sa.JSON(), nullable=True),
            sa.Column('created_at', sa.TIMESTAMP(timezone=True), server_default=sa.func.current_timestamp(), nullable=False),
            sa.PrimaryKeyConstraint('id'),
            sa.ForeignKeyConstraint(['client_uuid'], ['clients.uuid'], ondelete='CASCADE'),
            mysql_charset='utf8mb4',
            mysql_collate='utf8mb4_unicode_ci',
            mysql_engine='InnoDB'
        )

    # 3. Create indexes for efficient querying (safe on rerun)
    client_log_indexes = {idx['name'] for idx in inspector.get_indexes('client_logs')} if inspector.has_table('client_logs') else set()
    client_indexes = {idx['name'] for idx in inspector.get_indexes('clients')}

    if 'ix_client_logs_client_timestamp' not in client_log_indexes:
        op.create_index('ix_client_logs_client_timestamp', 'client_logs', ['client_uuid', 'timestamp'])
    if 'ix_client_logs_level_timestamp' not in client_log_indexes:
        op.create_index('ix_client_logs_level_timestamp', 'client_logs', ['level', 'timestamp'])
    if 'ix_clients_process_status' not in client_indexes:
        op.create_index('ix_clients_process_status', 'clients', ['process_status'])


def downgrade():
    # Drop indexes
    op.drop_index('ix_clients_process_status', table_name='clients')
    op.drop_index('ix_client_logs_level_timestamp', table_name='client_logs')
    op.drop_index('ix_client_logs_client_timestamp', table_name='client_logs')

    # Drop table
    op.drop_table('client_logs')

    # Drop columns from clients
    op.drop_column('clients', 'last_screenshot_hash')
    op.drop_column('clients', 'screen_health_status')
    op.drop_column('clients', 'last_screenshot_analyzed')
    op.drop_column('clients', 'process_pid')
    op.drop_column('clients', 'process_status')
    op.drop_column('clients', 'current_process')
    op.drop_column('clients', 'current_event_id')

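The inspector-guarded checks above are what make this migration rerunnable: an `ALTER TABLE ... ADD COLUMN` that already ran would otherwise abort the upgrade. The same idea in a stdlib-only sketch (SQLite `PRAGMA table_info` standing in for SQLAlchemy's inspector):

```python
import sqlite3

def add_column_if_missing(conn, table, column, decl):
    """Idempotent column add: skip ALTER TABLE when the column already exists,
    mirroring the inspector guards in the migration above."""
    existing = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
    if column not in existing:
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {decl}")
        return True
    return False

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clients (uuid TEXT PRIMARY KEY)")
assert add_column_if_missing(conn, "clients", "process_pid", "INTEGER") is True
# Rerunning the migration is a no-op instead of an error
assert add_column_if_missing(conn, "clients", "process_pid", "INTEGER") is False
```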
@@ -14,7 +14,9 @@ if not DB_URL:
    # Dev: build the DB URL from individual values
    DB_USER = os.getenv("DB_USER", "infoscreen_admin")
    DB_PASSWORD = os.getenv("DB_PASSWORD", "KqtpM7wmNd&mFKs")
    DB_HOST = os.getenv("DB_HOST", "db")  # ALWAYS 'db' as host inside the container!
    # Dev container: use host.docker.internal or localhost if db container isn't on same network
    # Docker Compose: use 'db' service name
    DB_HOST = os.getenv("DB_HOST", "db")  # Default to db for Docker Compose
    DB_NAME = os.getenv("DB_NAME", "infoscreen_by_taa")
    DB_URL = f"mysql+pymysql://{DB_USER}:{DB_PASSWORD}@{DB_HOST}/{DB_NAME}"

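Both modules now share the same resolution order: a full `DB_CONN` URL wins, otherwise the URL is assembled from the individual variables. A sketch as a pure function over any mapping (`resolve_db_url` is illustrative; the real modules read `os.environ` inline):

```python
def resolve_db_url(env):
    """Prefer a complete DB_CONN URL; otherwise assemble one from parts.

    Defaults follow the snippets above; env is any mapping (os.environ in the
    real modules).
    """
    url = env.get("DB_CONN")
    if url:
        return url
    user = env.get("DB_USER", "infoscreen_admin")
    password = env.get("DB_PASSWORD", "")
    host = env.get("DB_HOST", "db")       # 'db' service name under Docker Compose
    name = env.get("DB_NAME", "infoscreen_by_taa")
    return f"mysql+pymysql://{user}:{password}@{host}:3306/{name}"

assert resolve_db_url({"DB_CONN": "mysql+pymysql://u:p@h/x"}) == "mysql+pymysql://u:p@h/x"
assert resolve_db_url({"DB_PASSWORD": "s3cret"}) == "mysql+pymysql://infoscreen_admin:s3cret@db:3306/infoscreen_by_taa"
```

Keeping the logic identical in `database.py` and `init_defaults.py` avoids the two scripts silently connecting to different databases.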
@@ -3,10 +3,22 @@ import os
from dotenv import load_dotenv
import bcrypt

# Load .env
load_dotenv()
# Load .env (dev only)
if os.getenv("ENV", "development") == "development":
    load_dotenv()

DB_URL = f"mysql+pymysql://{os.getenv('DB_USER')}:{os.getenv('DB_PASSWORD')}@{os.getenv('DB_HOST')}:3306/{os.getenv('DB_NAME')}"
# Use same logic as database.py: prefer DB_CONN, fallback to individual vars
DB_URL = os.getenv("DB_CONN")
if not DB_URL:
    DB_USER = os.getenv("DB_USER", "infoscreen_admin")
    DB_PASSWORD = os.getenv("DB_PASSWORD")
    # In Docker Compose: DB_HOST will be 'db' from env
    # In dev container: will be 'localhost' from .env
    DB_HOST = os.getenv("DB_HOST", "db")  # Default to 'db' for Docker Compose
    DB_NAME = os.getenv("DB_NAME", "infoscreen_by_taa")
    DB_URL = f"mysql+pymysql://{DB_USER}:{DB_PASSWORD}@{DB_HOST}:3306/{DB_NAME}"

print(f"init_defaults.py connecting to: {DB_URL.split('@')[1] if '@' in DB_URL else DB_URL}")
engine = create_engine(DB_URL, isolation_level="AUTOCOMMIT")

with engine.connect() as conn:
@@ -20,19 +32,57 @@ with engine.connect() as conn:
    )
    print("✅ Default-Gruppe mit id=1 angelegt.")

    # Create the admin user if it does not exist
    admin_user = os.getenv("DEFAULT_ADMIN_USERNAME", "infoscreen_admin")
    admin_pw = os.getenv("DEFAULT_ADMIN_PASSWORD", "Info_screen_admin25!")
    # Create the superadmin user if it does not exist
    admin_user = os.getenv("DEFAULT_SUPERADMIN_USERNAME", "superadmin")
    admin_pw = os.getenv("DEFAULT_SUPERADMIN_PASSWORD")

    if not admin_pw:
        print("⚠️ DEFAULT_SUPERADMIN_PASSWORD nicht gesetzt. Superadmin wird nicht erstellt.")
    else:
        # Hash the password with bcrypt
        hashed_pw = bcrypt.hashpw(admin_pw.encode(
            'utf-8'), bcrypt.gensalt()).decode('utf-8')
        # Check whether the user exists
        result = conn.execute(text(
            "SELECT COUNT(*) FROM users WHERE username=:username"), {"username": admin_user})
        if result.scalar() == 0:
            # Role: 1 = admin (adjust to your model if needed)
            conn.execute(
                text("INSERT INTO users (username, password_hash, role, is_active) VALUES (:username, :password_hash, 1, 1)"),
                {"username": admin_user, "password_hash": hashed_pw}
            )
            print(f"✅ Admin-Benutzer '{admin_user}' angelegt.")
        hashed_pw = bcrypt.hashpw(admin_pw.encode(
            'utf-8'), bcrypt.gensalt()).decode('utf-8')
        # Check whether the user exists
        result = conn.execute(text(
            "SELECT COUNT(*) FROM users WHERE username=:username"), {"username": admin_user})
        if result.scalar() == 0:
            # Role: 'superadmin' per the UserRole enum
|
||||
conn.execute(
|
||||
text("INSERT INTO users (username, password_hash, role, is_active) VALUES (:username, :password_hash, 'superadmin', 1)"),
|
||||
{"username": admin_user, "password_hash": hashed_pw}
|
||||
)
|
||||
print(f"✅ Superadmin-Benutzer '{admin_user}' angelegt.")
|
||||
else:
|
||||
print(f"ℹ️ Superadmin-Benutzer '{admin_user}' existiert bereits.")
|
||||
|
||||
# Default System Settings anlegen
|
||||
default_settings = [
|
||||
('supplement_table_url', '', 'URL für Vertretungsplan / WebUntis (Stundenplan-Änderungstabelle)'),
|
||||
('supplement_table_enabled', 'false', 'Ob Vertretungsplan aktiviert ist'),
|
||||
('presentation_interval', '10', 'Standard Intervall für Präsentationen (Sekunden)'),
|
||||
('presentation_page_progress', 'true', 'Seitenfortschrift anzeigen (Page-Progress) für Präsentationen'),
|
||||
('presentation_auto_progress', 'true', 'Automatischer Fortschritt (Auto-Progress) für Präsentationen'),
|
||||
('video_autoplay', 'true', 'Autoplay (automatisches Abspielen) für Videos'),
|
||||
('video_loop', 'true', 'Loop (Wiederholung) für Videos'),
|
||||
('video_volume', '0.8', 'Standard Lautstärke für Videos (0.0 - 1.0)'),
|
||||
('holiday_banner_enabled', 'true', 'Ferienstatus-Banner auf Dashboard anzeigen'),
|
||||
('organization_name', '', 'Name der Organisation (wird im Header angezeigt)'),
|
||||
('refresh_seconds', '0', 'Scheduler Republish-Intervall (Sekunden; 0 deaktiviert)'),
|
||||
('group_order', '[]', 'Benutzerdefinierte Reihenfolge der Raumgruppen (JSON-Array mit Group-IDs)'),
|
||||
]
|
||||
|
||||
for key, value, description in default_settings:
|
||||
result = conn.execute(
|
||||
text("SELECT COUNT(*) FROM system_settings WHERE `key`=:key"),
|
||||
{"key": key}
|
||||
)
|
||||
if result.scalar() == 0:
|
||||
conn.execute(
|
||||
text("INSERT INTO system_settings (`key`, value, description) VALUES (:key, :value, :description)"),
|
||||
{"key": key, "value": value, "description": description}
|
||||
)
|
||||
print(f"✅ System-Einstellung '{key}' angelegt.")
|
||||
else:
|
||||
print(f"ℹ️ System-Einstellung '{key}' existiert bereits.")
|
||||
|
||||
print("✅ Initialisierung abgeschlossen.")
|
||||
|
||||
176
server/permissions.py
Normal file
176
server/permissions.py
Normal file
@@ -0,0 +1,176 @@
|
||||
"""
|
||||
Permission decorators for role-based access control.
|
||||
|
||||
This module provides decorators to protect Flask routes based on user roles.
|
||||
"""
|
||||
|
||||
from functools import wraps
|
||||
from flask import session, jsonify
|
||||
import os
|
||||
from models.models import UserRole
|
||||
|
||||
|
||||
def require_auth(f):
|
||||
"""
|
||||
Require user to be authenticated.
|
||||
|
||||
Usage:
|
||||
@app.route('/protected')
|
||||
@require_auth
|
||||
def protected_route():
|
||||
return "You are logged in"
|
||||
"""
|
||||
@wraps(f)
|
||||
def decorated_function(*args, **kwargs):
|
||||
user_id = session.get('user_id')
|
||||
if not user_id:
|
||||
return jsonify({"error": "Authentication required"}), 401
|
||||
return f(*args, **kwargs)
|
||||
return decorated_function
|
||||
|
||||
|
||||
def require_role(*allowed_roles):
|
||||
"""
|
||||
Require user to have one of the specified roles.
|
||||
|
||||
Args:
|
||||
*allowed_roles: Variable number of role strings or UserRole enum values
|
||||
|
||||
Usage:
|
||||
@app.route('/admin-only')
|
||||
@require_role('admin', 'superadmin')
|
||||
def admin_route():
|
||||
return "Admin access"
|
||||
|
||||
# Or using enum:
|
||||
@require_role(UserRole.admin, UserRole.superadmin)
|
||||
def admin_route():
|
||||
return "Admin access"
|
||||
"""
|
||||
# Convert all roles to strings for comparison
|
||||
allowed_role_strings = set()
|
||||
for role in allowed_roles:
|
||||
if isinstance(role, UserRole):
|
||||
allowed_role_strings.add(role.value)
|
||||
elif isinstance(role, str):
|
||||
allowed_role_strings.add(role)
|
||||
else:
|
||||
raise ValueError(f"Invalid role type: {type(role)}")
|
||||
|
||||
def decorator(f):
|
||||
@wraps(f)
|
||||
def decorated_function(*args, **kwargs):
|
||||
user_id = session.get('user_id')
|
||||
user_role = session.get('role')
|
||||
|
||||
if not user_id or not user_role:
|
||||
return jsonify({"error": "Authentication required"}), 401
|
||||
|
||||
# In development, allow superadmin to bypass all checks to prevent blocking
|
||||
env = os.environ.get('ENV', 'production').lower()
|
||||
if env in ('development', 'dev') and user_role == UserRole.superadmin.value:
|
||||
return f(*args, **kwargs)
|
||||
|
||||
if user_role not in allowed_role_strings:
|
||||
return jsonify({
|
||||
"error": "Insufficient permissions",
|
||||
"required_roles": list(allowed_role_strings),
|
||||
"your_role": user_role
|
||||
}), 403
|
||||
|
||||
return f(*args, **kwargs)
|
||||
return decorated_function
|
||||
return decorator
|
||||
|
||||
|
||||
def require_any_role(*allowed_roles):
|
||||
"""
|
||||
Alias for require_role for better readability.
|
||||
Require user to have ANY of the specified roles.
|
||||
|
||||
Usage:
|
||||
@require_any_role('editor', 'admin', 'superadmin')
|
||||
def edit_route():
|
||||
return "Can edit"
|
||||
"""
|
||||
return require_role(*allowed_roles)
|
||||
|
||||
|
||||
def require_all_roles(*required_roles):
|
||||
"""
|
||||
Require user to have ALL of the specified roles.
|
||||
Note: This is typically not needed since users only have one role,
|
||||
but included for completeness.
|
||||
|
||||
Usage:
|
||||
@require_all_roles('admin')
|
||||
def strict_route():
|
||||
return "Must have all roles"
|
||||
"""
|
||||
# Convert all roles to strings
|
||||
required_role_strings = set()
|
||||
for role in required_roles:
|
||||
if isinstance(role, UserRole):
|
||||
required_role_strings.add(role.value)
|
||||
elif isinstance(role, str):
|
||||
required_role_strings.add(role)
|
||||
|
||||
def decorator(f):
|
||||
@wraps(f)
|
||||
def decorated_function(*args, **kwargs):
|
||||
user_id = session.get('user_id')
|
||||
user_role = session.get('role')
|
||||
|
||||
if not user_id or not user_role:
|
||||
return jsonify({"error": "Authentication required"}), 401
|
||||
|
||||
# For single-role systems, check if user role is in required set
|
||||
if user_role not in required_role_strings:
|
||||
return jsonify({
|
||||
"error": "Insufficient permissions",
|
||||
"required_roles": list(required_role_strings),
|
||||
"your_role": user_role
|
||||
}), 403
|
||||
|
||||
return f(*args, **kwargs)
|
||||
return decorated_function
|
||||
return decorator
|
||||
|
||||
|
||||
def superadmin_only(f):
|
||||
"""
|
||||
Convenience decorator for superadmin-only routes.
|
||||
|
||||
Usage:
|
||||
@app.route('/critical-settings')
|
||||
@superadmin_only
|
||||
def critical_settings():
|
||||
return "Superadmin only"
|
||||
"""
|
||||
return require_role(UserRole.superadmin)(f)
|
||||
|
||||
|
||||
def admin_or_higher(f):
|
||||
"""
|
||||
Convenience decorator for admin and superadmin routes.
|
||||
|
||||
Usage:
|
||||
@app.route('/settings')
|
||||
@admin_or_higher
|
||||
def settings():
|
||||
return "Admin or superadmin"
|
||||
"""
|
||||
return require_role(UserRole.admin, UserRole.superadmin)(f)
|
||||
|
||||
|
||||
def editor_or_higher(f):
|
||||
"""
|
||||
Convenience decorator for editor, admin, and superadmin routes.
|
||||
|
||||
Usage:
|
||||
@app.route('/events', methods=['POST'])
|
||||
@editor_or_higher
|
||||
def create_event():
|
||||
return "Can create events"
|
||||
"""
|
||||
return require_role(UserRole.editor, UserRole.admin, UserRole.superadmin)(f)
|
||||
@@ -1,4 +1,5 @@
|
||||
from flask import Blueprint, jsonify, request
|
||||
from server.permissions import admin_or_higher
|
||||
from server.database import Session
|
||||
from models.models import AcademicPeriod
|
||||
from datetime import datetime
|
||||
@@ -61,6 +62,7 @@ def get_period_for_date():
|
||||
|
||||
|
||||
@academic_periods_bp.route('/active', methods=['POST'])
|
||||
@admin_or_higher
|
||||
def set_active_academic_period():
|
||||
data = request.get_json(silent=True) or {}
|
||||
period_id = data.get('id')
|
||||
|
||||
280
server/routes/auth.py
Normal file
280
server/routes/auth.py
Normal file
@@ -0,0 +1,280 @@
|
||||
"""
|
||||
Authentication and user management routes.
|
||||
|
||||
This module provides endpoints for user authentication and role information.
|
||||
Currently implements a basic session-based auth that can be extended with
|
||||
JWT or Flask-Login later.
|
||||
"""
|
||||
|
||||
from flask import Blueprint, request, jsonify, session
|
||||
import os
|
||||
from server.database import Session
|
||||
from models.models import User, UserRole
|
||||
from server.permissions import require_auth
|
||||
import bcrypt
|
||||
import sys
|
||||
from datetime import datetime, timezone
|
||||
|
||||
sys.path.append('/workspace')
|
||||
|
||||
auth_bp = Blueprint("auth", __name__, url_prefix="/api/auth")
|
||||
|
||||
|
||||
@auth_bp.route("/login", methods=["POST"])
|
||||
def login():
|
||||
"""
|
||||
Authenticate a user and create a session.
|
||||
|
||||
Request body:
|
||||
{
|
||||
"username": "string",
|
||||
"password": "string"
|
||||
}
|
||||
|
||||
Returns:
|
||||
200: {
|
||||
"message": "Login successful",
|
||||
"user": {
|
||||
"id": int,
|
||||
"username": "string",
|
||||
"role": "string"
|
||||
}
|
||||
}
|
||||
401: {"error": "Invalid credentials"}
|
||||
400: {"error": "Username and password required"}
|
||||
"""
|
||||
data = request.get_json()
|
||||
|
||||
if not data:
|
||||
return jsonify({"error": "Request body required"}), 400
|
||||
|
||||
username = data.get("username")
|
||||
password = data.get("password")
|
||||
|
||||
if not username or not password:
|
||||
return jsonify({"error": "Username and password required"}), 400
|
||||
|
||||
db_session = Session()
|
||||
try:
|
||||
# Find user by username
|
||||
user = db_session.query(User).filter_by(username=username).first()
|
||||
|
||||
if not user:
|
||||
return jsonify({"error": "Invalid credentials"}), 401
|
||||
|
||||
# Check if user is active
|
||||
if not user.is_active:
|
||||
return jsonify({"error": "Account is disabled"}), 401
|
||||
|
||||
# Verify password
|
||||
if not bcrypt.checkpw(password.encode('utf-8'), user.password_hash.encode('utf-8')):
|
||||
# Track failed login attempt
|
||||
user.last_failed_login_at = datetime.now(timezone.utc)
|
||||
user.failed_login_attempts = (user.failed_login_attempts or 0) + 1
|
||||
db_session.commit()
|
||||
return jsonify({"error": "Invalid credentials"}), 401
|
||||
|
||||
# Successful login: update last_login_at and reset failed attempts
|
||||
user.last_login_at = datetime.now(timezone.utc)
|
||||
user.failed_login_attempts = 0
|
||||
db_session.commit()
|
||||
|
||||
# Create session
|
||||
session['user_id'] = user.id
|
||||
session['username'] = user.username
|
||||
session['role'] = user.role.value
|
||||
# Persist session across browser restarts (uses PERMANENT_SESSION_LIFETIME)
|
||||
session.permanent = True
|
||||
|
||||
return jsonify({
|
||||
"message": "Login successful",
|
||||
"user": {
|
||||
"id": user.id,
|
||||
"username": user.username,
|
||||
"role": user.role.value
|
||||
}
|
||||
}), 200
|
||||
|
||||
finally:
|
||||
db_session.close()
|
||||
|
||||
|
||||
@auth_bp.route("/logout", methods=["POST"])
|
||||
def logout():
|
||||
"""
|
||||
End the current user session.
|
||||
|
||||
Returns:
|
||||
200: {"message": "Logout successful"}
|
||||
"""
|
||||
session.clear()
|
||||
return jsonify({"message": "Logout successful"}), 200
|
||||
|
||||
|
||||
@auth_bp.route("/me", methods=["GET"])
|
||||
def get_current_user():
|
||||
"""
|
||||
Get the current authenticated user's information.
|
||||
|
||||
Returns:
|
||||
200: {
|
||||
"id": int,
|
||||
"username": "string",
|
||||
"role": "string",
|
||||
"is_active": bool
|
||||
}
|
||||
401: {"error": "Not authenticated"}
|
||||
"""
|
||||
user_id = session.get('user_id')
|
||||
|
||||
if not user_id:
|
||||
return jsonify({"error": "Not authenticated"}), 401
|
||||
|
||||
db_session = Session()
|
||||
try:
|
||||
user = db_session.query(User).filter_by(id=user_id).first()
|
||||
|
||||
if not user:
|
||||
# Session is stale, user was deleted
|
||||
session.clear()
|
||||
return jsonify({"error": "Not authenticated"}), 401
|
||||
|
||||
if not user.is_active:
|
||||
# User was deactivated
|
||||
session.clear()
|
||||
return jsonify({"error": "Account is disabled"}), 401
|
||||
|
||||
# For SQLAlchemy Enum(UserRole), ensure we return the string value
|
||||
role_value = user.role.value if isinstance(user.role, UserRole) else str(user.role)
|
||||
|
||||
return jsonify({
|
||||
"id": user.id,
|
||||
"username": user.username,
|
||||
"role": role_value,
|
||||
"is_active": user.is_active
|
||||
}), 200
|
||||
|
||||
except Exception as e:
|
||||
# Avoid naked 500s; return a JSON error with minimal info (safe in dev)
|
||||
env = os.environ.get("ENV", "production").lower()
|
||||
msg = str(e) if env in ("development", "dev") else "Internal server error"
|
||||
return jsonify({"error": msg}), 500
|
||||
finally:
|
||||
db_session.close()
|
||||
|
||||
|
||||
@auth_bp.route("/check", methods=["GET"])
|
||||
def check_auth():
|
||||
"""
|
||||
Quick check if user is authenticated (lighter than /me).
|
||||
|
||||
Returns:
|
||||
200: {"authenticated": true, "role": "string"}
|
||||
200: {"authenticated": false}
|
||||
"""
|
||||
user_id = session.get('user_id')
|
||||
role = session.get('role')
|
||||
|
||||
if user_id and role:
|
||||
return jsonify({
|
||||
"authenticated": True,
|
||||
"role": role
|
||||
}), 200
|
||||
|
||||
return jsonify({"authenticated": False}), 200
|
||||
|
||||
|
||||
@auth_bp.route("/change-password", methods=["PUT"])
|
||||
@require_auth
|
||||
def change_password():
|
||||
"""
|
||||
Allow the authenticated user to change their own password.
|
||||
|
||||
Request body:
|
||||
{
|
||||
"current_password": "string",
|
||||
"new_password": "string"
|
||||
}
|
||||
|
||||
Returns:
|
||||
200: {"message": "Password changed successfully"}
|
||||
400: {"error": "Validation error"}
|
||||
401: {"error": "Invalid current password"}
|
||||
404: {"error": "User not found"}
|
||||
"""
|
||||
data = request.get_json() or {}
|
||||
current_password = data.get("current_password", "")
|
||||
new_password = data.get("new_password", "")
|
||||
|
||||
if not current_password or not new_password:
|
||||
return jsonify({"error": "Current password and new password are required"}), 400
|
||||
|
||||
if len(new_password) < 6:
|
||||
return jsonify({"error": "New password must be at least 6 characters"}), 400
|
||||
|
||||
user_id = session.get('user_id')
|
||||
db_session = Session()
|
||||
try:
|
||||
user = db_session.query(User).filter_by(id=user_id).first()
|
||||
if not user:
|
||||
session.clear()
|
||||
return jsonify({"error": "User not found"}), 404
|
||||
|
||||
# Verify current password
|
||||
if not bcrypt.checkpw(current_password.encode('utf-8'), user.password_hash.encode('utf-8')):
|
||||
return jsonify({"error": "Current password is incorrect"}), 401
|
||||
|
||||
# Update password hash and timestamp
|
||||
new_hash = bcrypt.hashpw(new_password.encode('utf-8'), bcrypt.gensalt()).decode('utf-8')
|
||||
user.password_hash = new_hash
|
||||
user.last_password_change_at = datetime.now(timezone.utc)
|
||||
db_session.commit()
|
||||
|
||||
return jsonify({"message": "Password changed successfully"}), 200
|
||||
finally:
|
||||
db_session.close()
|
||||
|
||||
|
||||
@auth_bp.route("/dev-login-superadmin", methods=["POST"])
|
||||
def dev_login_superadmin():
|
||||
"""
|
||||
Development-only endpoint to quickly establish a superadmin session without a password.
|
||||
|
||||
Enabled only when ENV is 'development' or 'dev'. Returns 404 otherwise.
|
||||
"""
|
||||
env = os.environ.get("ENV", "production").lower()
|
||||
if env not in ("development", "dev"):
|
||||
# Pretend the route does not exist in non-dev environments
|
||||
return jsonify({"error": "Not found"}), 404
|
||||
|
||||
db_session = Session()
|
||||
try:
|
||||
# Prefer explicit username from env, else pick any superadmin
|
||||
preferred_username = os.environ.get("DEFAULT_SUPERADMIN_USERNAME", "superadmin")
|
||||
user = (
|
||||
db_session.query(User)
|
||||
.filter((User.username == preferred_username) | (User.role == UserRole.superadmin))
|
||||
.order_by(User.id.asc())
|
||||
.first()
|
||||
)
|
||||
if not user:
|
||||
return jsonify({
|
||||
"error": "No superadmin user found. Seed a superadmin first (DEFAULT_SUPERADMIN_PASSWORD)."
|
||||
}), 404
|
||||
|
||||
# Establish session
|
||||
session['user_id'] = user.id
|
||||
session['username'] = user.username
|
||||
session['role'] = user.role.value
|
||||
session.permanent = True
|
||||
|
||||
return jsonify({
|
||||
"message": "Dev login successful (superadmin)",
|
||||
"user": {
|
||||
"id": user.id,
|
||||
"username": user.username,
|
||||
"role": user.role.value
|
||||
}
|
||||
}), 200
|
||||
finally:
|
||||
db_session.close()
|
||||
491
server/routes/client_logs.py
Normal file
491
server/routes/client_logs.py
Normal file
@@ -0,0 +1,491 @@
|
||||
from flask import Blueprint, jsonify, request
|
||||
from server.database import Session
|
||||
from server.permissions import admin_or_higher, superadmin_only
|
||||
from models.models import ClientLog, Client, ClientGroup, LogLevel
|
||||
from sqlalchemy import desc, func
|
||||
from datetime import datetime, timedelta, timezone
|
||||
import json
|
||||
import os
|
||||
import glob
|
||||
|
||||
from server.serializers import dict_to_camel_case
|
||||
|
||||
client_logs_bp = Blueprint("client_logs", __name__, url_prefix="/api/client-logs")
|
||||
PRIORITY_SCREENSHOT_TTL_SECONDS = int(os.environ.get("PRIORITY_SCREENSHOT_TTL_SECONDS", "120"))
|
||||
|
||||
|
||||
def _grace_period_seconds():
|
||||
env = os.environ.get("ENV", "production").lower()
|
||||
if env in ("development", "dev"):
|
||||
return int(os.environ.get("HEARTBEAT_GRACE_PERIOD_DEV", "180"))
|
||||
return int(os.environ.get("HEARTBEAT_GRACE_PERIOD_PROD", "170"))
|
||||
|
||||
|
||||
def _to_utc(dt):
|
||||
if dt is None:
|
||||
return None
|
||||
if dt.tzinfo is None:
|
||||
return dt.replace(tzinfo=timezone.utc)
|
||||
return dt.astimezone(timezone.utc)
|
||||
|
||||
|
||||
def _is_client_alive(last_alive, is_active):
|
||||
if not last_alive or not is_active:
|
||||
return False
|
||||
return (datetime.now(timezone.utc) - _to_utc(last_alive)) <= timedelta(seconds=_grace_period_seconds())
|
||||
|
||||
|
||||
def _safe_context(raw_context):
|
||||
if not raw_context:
|
||||
return {}
|
||||
try:
|
||||
return json.loads(raw_context)
|
||||
except (TypeError, json.JSONDecodeError):
|
||||
return {"raw": raw_context}
|
||||
|
||||
|
||||
def _serialize_log_entry(log, include_client_uuid=False):
|
||||
if not log:
|
||||
return None
|
||||
|
||||
entry = {
|
||||
"id": log.id,
|
||||
"timestamp": log.timestamp.isoformat() if log.timestamp else None,
|
||||
"level": log.level.value if log.level else None,
|
||||
"message": log.message,
|
||||
"context": _safe_context(log.context),
|
||||
}
|
||||
if include_client_uuid:
|
||||
entry["client_uuid"] = log.client_uuid
|
||||
return entry
|
||||
|
||||
|
||||
def _determine_client_status(is_alive, process_status, screen_health_status, log_counts):
|
||||
if not is_alive:
|
||||
return "offline"
|
||||
if process_status == "crashed" or screen_health_status in ("BLACK", "FROZEN"):
|
||||
return "critical"
|
||||
if log_counts.get("ERROR", 0) > 0:
|
||||
return "critical"
|
||||
if process_status in ("starting", "stopped") or log_counts.get("WARN", 0) > 0:
|
||||
return "warning"
|
||||
return "healthy"
|
||||
|
||||
|
||||
def _infer_last_screenshot_ts(client_uuid):
|
||||
screenshots_dir = os.path.join(os.path.dirname(__file__), "..", "screenshots")
|
||||
|
||||
candidate_files = []
|
||||
latest_file = os.path.join(screenshots_dir, f"{client_uuid}.jpg")
|
||||
if os.path.exists(latest_file):
|
||||
candidate_files.append(latest_file)
|
||||
|
||||
candidate_files.extend(glob.glob(os.path.join(screenshots_dir, f"{client_uuid}_*.jpg")))
|
||||
if not candidate_files:
|
||||
return None
|
||||
|
||||
try:
|
||||
newest_path = max(candidate_files, key=os.path.getmtime)
|
||||
return datetime.fromtimestamp(os.path.getmtime(newest_path), timezone.utc)
|
||||
except Exception:
|
||||
return None
|
||||
|
||||
|
||||
def _load_screenshot_metadata(client_uuid):
|
||||
screenshots_dir = os.path.join(os.path.dirname(__file__), "..", "screenshots")
|
||||
metadata_path = os.path.join(screenshots_dir, f"{client_uuid}_meta.json")
|
||||
if not os.path.exists(metadata_path):
|
||||
return {}
|
||||
|
||||
try:
|
||||
with open(metadata_path, "r", encoding="utf-8") as metadata_file:
|
||||
data = json.load(metadata_file)
|
||||
return data if isinstance(data, dict) else {}
|
||||
except Exception:
|
||||
return {}
|
||||
|
||||
|
||||
def _is_priority_screenshot_active(priority_received_at):
|
||||
if not priority_received_at:
|
||||
return False
|
||||
|
||||
try:
|
||||
normalized = str(priority_received_at).replace("Z", "+00:00")
|
||||
parsed = datetime.fromisoformat(normalized)
|
||||
parsed_utc = _to_utc(parsed)
|
||||
except Exception:
|
||||
return False
|
||||
|
||||
return (datetime.now(timezone.utc) - parsed_utc) <= timedelta(seconds=PRIORITY_SCREENSHOT_TTL_SECONDS)
|
||||
|
||||
|
||||
@client_logs_bp.route("/test", methods=["GET"])
|
||||
def test_client_logs():
|
||||
"""Test endpoint to verify logging infrastructure (no auth required)"""
|
||||
session = Session()
|
||||
try:
|
||||
# Count total logs
|
||||
total_logs = session.query(func.count(ClientLog.id)).scalar()
|
||||
|
||||
# Count by level
|
||||
error_count = session.query(func.count(ClientLog.id)).filter_by(level=LogLevel.ERROR).scalar()
|
||||
warn_count = session.query(func.count(ClientLog.id)).filter_by(level=LogLevel.WARN).scalar()
|
||||
info_count = session.query(func.count(ClientLog.id)).filter_by(level=LogLevel.INFO).scalar()
|
||||
|
||||
# Get last 5 logs
|
||||
recent_logs = session.query(ClientLog).order_by(desc(ClientLog.timestamp)).limit(5).all()
|
||||
|
||||
recent = []
|
||||
for log in recent_logs:
|
||||
recent.append({
|
||||
"client_uuid": log.client_uuid,
|
||||
"level": log.level.value if log.level else None,
|
||||
"message": log.message,
|
||||
"timestamp": log.timestamp.isoformat() if log.timestamp else None
|
||||
})
|
||||
|
||||
session.close()
|
||||
return jsonify({
|
||||
"status": "ok",
|
||||
"infrastructure": "working",
|
||||
"total_logs": total_logs,
|
||||
"counts": {
|
||||
"ERROR": error_count,
|
||||
"WARN": warn_count,
|
||||
"INFO": info_count
|
||||
},
|
||||
"recent_5": recent
|
||||
})
|
||||
except Exception as e:
|
||||
session.close()
|
||||
return jsonify({"status": "error", "message": str(e)}), 500
|
||||
|
||||
|
||||
@client_logs_bp.route("/<uuid>/logs", methods=["GET"])
|
||||
@admin_or_higher
|
||||
def get_client_logs(uuid):
|
||||
"""
|
||||
Get logs for a specific client
|
||||
Query params:
|
||||
- level: ERROR, WARN, INFO, DEBUG (optional)
|
||||
- limit: number of entries (default 50, max 500)
|
||||
- since: ISO timestamp (optional)
|
||||
|
||||
Example: /api/client-logs/abc-123/logs?level=ERROR&limit=100
|
||||
"""
|
||||
session = Session()
|
||||
try:
|
||||
# Verify client exists
|
||||
client = session.query(Client).filter_by(uuid=uuid).first()
|
||||
if not client:
|
||||
session.close()
|
||||
return jsonify({"error": "Client not found"}), 404
|
||||
|
||||
# Parse query parameters
|
||||
level_param = request.args.get('level')
|
||||
limit = min(int(request.args.get('limit', 50)), 500)
|
||||
since_param = request.args.get('since')
|
||||
|
||||
# Build query
|
||||
query = session.query(ClientLog).filter_by(client_uuid=uuid)
|
||||
|
||||
# Filter by log level
|
||||
if level_param:
|
||||
try:
|
||||
level_enum = LogLevel[level_param.upper()]
|
||||
query = query.filter_by(level=level_enum)
|
||||
except KeyError:
|
||||
session.close()
|
||||
return jsonify({"error": f"Invalid level: {level_param}. Must be ERROR, WARN, INFO, or DEBUG"}), 400
|
||||
|
||||
# Filter by timestamp
|
||||
if since_param:
|
||||
try:
|
||||
# Handle both with and without 'Z' suffix
|
||||
since_str = since_param.replace('Z', '+00:00')
|
||||
since_dt = datetime.fromisoformat(since_str)
|
||||
if since_dt.tzinfo is None:
|
||||
since_dt = since_dt.replace(tzinfo=timezone.utc)
|
||||
query = query.filter(ClientLog.timestamp >= since_dt)
|
||||
except ValueError:
|
||||
session.close()
|
||||
return jsonify({"error": "Invalid timestamp format. Use ISO 8601"}), 400
|
||||
|
||||
# Execute query
|
||||
logs = query.order_by(desc(ClientLog.timestamp)).limit(limit).all()
|
||||
|
||||
# Format results
|
||||
result = []
|
||||
for log in logs:
|
||||
result.append(_serialize_log_entry(log))
|
||||
|
||||
session.close()
|
||||
return jsonify({
|
||||
"client_uuid": uuid,
|
||||
"logs": result,
|
||||
"count": len(result),
|
||||
"limit": limit
|
||||
})
|
||||
|
||||
except Exception as e:
|
||||
session.close()
|
||||
return jsonify({"error": f"Server error: {str(e)}"}), 500
|
||||
|
||||
|
||||
@client_logs_bp.route("/summary", methods=["GET"])
|
||||
@admin_or_higher
|
||||
def get_logs_summary():
|
||||
"""
|
||||
Get summary of errors/warnings across all clients in last 24 hours
|
||||
Returns count of ERROR, WARN, INFO logs per client
|
||||
|
||||
Example response:
|
||||
{
|
||||
"summary": {
|
||||
"client-uuid-1": {"ERROR": 5, "WARN": 12, "INFO": 45},
|
||||
"client-uuid-2": {"ERROR": 0, "WARN": 3, "INFO": 20}
|
||||
},
|
||||
"period_hours": 24,
|
||||
"timestamp": "2026-03-09T21:00:00Z"
|
||||
}
|
||||
"""
|
||||
session = Session()
|
||||
try:
|
||||
# Get hours parameter (default 24, max 168 = 1 week)
|
||||
hours = min(int(request.args.get('hours', 24)), 168)
|
||||
since = datetime.now(timezone.utc) - timedelta(hours=hours)
|
||||
|
||||
# Query log counts grouped by client and level
|
||||
stats = session.query(
|
||||
ClientLog.client_uuid,
|
||||
ClientLog.level,
|
||||
func.count(ClientLog.id).label('count')
|
||||
).filter(
|
||||
ClientLog.timestamp >= since
|
||||
).group_by(
|
||||
ClientLog.client_uuid,
|
||||
ClientLog.level
|
||||
).all()
|
||||
|
||||
# Build summary dictionary
|
||||
summary = {}
|
||||
for stat in stats:
|
||||
uuid = stat.client_uuid
|
||||
if uuid not in summary:
|
||||
# Initialize all levels to 0
|
||||
summary[uuid] = {
|
||||
"ERROR": 0,
|
||||
"WARN": 0,
|
||||
"INFO": 0,
|
||||
"DEBUG": 0
|
||||
}
|
||||
|
||||
summary[uuid][stat.level.value] = stat.count
|
||||
|
||||
# Get client info for enrichment
|
||||
clients = session.query(Client.uuid, Client.hostname, Client.description).all()
|
||||
client_info = {c.uuid: {"hostname": c.hostname, "description": c.description} for c in clients}
|
||||
|
||||
# Enrich summary with client info
|
||||
enriched_summary = {}
|
||||
for uuid, counts in summary.items():
|
||||
enriched_summary[uuid] = {
|
||||
"counts": counts,
|
||||
"info": client_info.get(uuid, {})
|
||||
}
|
||||
|
||||
session.close()
|
||||
return jsonify({
|
||||
"summary": enriched_summary,
|
||||
"period_hours": hours,
|
||||
"since": since.isoformat(),
|
||||
"timestamp": datetime.now(timezone.utc).isoformat()
|
||||
})
|
||||
|
||||
except Exception as e:
|
||||
session.close()
|
||||
return jsonify({"error": f"Server error: {str(e)}"}), 500
|
||||
|
||||
|
||||
@client_logs_bp.route("/monitoring-overview", methods=["GET"])
|
||||
@superadmin_only
|
||||
def get_monitoring_overview():
|
||||
"""Return a dashboard-friendly monitoring overview for all clients."""
|
||||
session = Session()
|
||||
try:
|
||||
hours = min(int(request.args.get("hours", 24)), 168)
|
||||
since = datetime.now(timezone.utc) - timedelta(hours=hours)
|
||||
|
||||
clients = (
|
||||
session.query(Client, ClientGroup.name.label("group_name"))
|
||||
            .outerjoin(ClientGroup, Client.group_id == ClientGroup.id)
            .order_by(ClientGroup.name.asc(), Client.description.asc(), Client.hostname.asc(), Client.uuid.asc())
            .all()
        )

        log_stats = (
            session.query(
                ClientLog.client_uuid,
                ClientLog.level,
                func.count(ClientLog.id).label("count"),
            )
            .filter(ClientLog.timestamp >= since)
            .group_by(ClientLog.client_uuid, ClientLog.level)
            .all()
        )

        counts_by_client = {}
        for stat in log_stats:
            if stat.client_uuid not in counts_by_client:
                counts_by_client[stat.client_uuid] = {
                    "ERROR": 0,
                    "WARN": 0,
                    "INFO": 0,
                    "DEBUG": 0,
                }
            counts_by_client[stat.client_uuid][stat.level.value] = stat.count

        clients_payload = []
        summary_counts = {
            "total_clients": 0,
            "online_clients": 0,
            "offline_clients": 0,
            "healthy_clients": 0,
            "warning_clients": 0,
            "critical_clients": 0,
            "error_logs": 0,
            "warn_logs": 0,
            "active_priority_screenshots": 0,
        }

        for client, group_name in clients:
            log_counts = counts_by_client.get(
                client.uuid,
                {"ERROR": 0, "WARN": 0, "INFO": 0, "DEBUG": 0},
            )
            is_alive = _is_client_alive(client.last_alive, client.is_active)
            process_status = client.process_status.value if client.process_status else None
            screen_health_status = client.screen_health_status.value if client.screen_health_status else None
            status = _determine_client_status(is_alive, process_status, screen_health_status, log_counts)

            latest_log = (
                session.query(ClientLog)
                .filter_by(client_uuid=client.uuid)
                .order_by(desc(ClientLog.timestamp))
                .first()
            )
            latest_error = (
                session.query(ClientLog)
                .filter_by(client_uuid=client.uuid, level=LogLevel.ERROR)
                .order_by(desc(ClientLog.timestamp))
                .first()
            )

            screenshot_ts = client.last_screenshot_analyzed or _infer_last_screenshot_ts(client.uuid)
            screenshot_meta = _load_screenshot_metadata(client.uuid)
            latest_screenshot_type = screenshot_meta.get("latest_screenshot_type") or "periodic"
            priority_screenshot_type = screenshot_meta.get("last_priority_screenshot_type")
            priority_screenshot_received_at = screenshot_meta.get("last_priority_received_at")
            has_active_priority = _is_priority_screenshot_active(priority_screenshot_received_at)
            screenshot_url = f"/screenshots/{client.uuid}/priority" if has_active_priority else f"/screenshots/{client.uuid}"

            clients_payload.append({
                "uuid": client.uuid,
                "hostname": client.hostname,
                "description": client.description,
                "ip": client.ip,
                "model": client.model,
                "group_id": client.group_id,
                "group_name": group_name,
                "registration_time": client.registration_time.isoformat() if client.registration_time else None,
                "last_alive": client.last_alive.isoformat() if client.last_alive else None,
                "is_alive": is_alive,
                "status": status,
                "current_event_id": client.current_event_id,
                "current_process": client.current_process,
                "process_status": process_status,
                "process_pid": client.process_pid,
                "screen_health_status": screen_health_status,
                "last_screenshot_analyzed": screenshot_ts.isoformat() if screenshot_ts else None,
                "last_screenshot_hash": client.last_screenshot_hash,
                "latest_screenshot_type": latest_screenshot_type,
                "priority_screenshot_type": priority_screenshot_type,
                "priority_screenshot_received_at": priority_screenshot_received_at,
                "has_active_priority_screenshot": has_active_priority,
                "screenshot_url": screenshot_url,
                "log_counts_24h": {
                    "error": log_counts["ERROR"],
                    "warn": log_counts["WARN"],
                    "info": log_counts["INFO"],
                    "debug": log_counts["DEBUG"],
                },
                "latest_log": _serialize_log_entry(latest_log),
                "latest_error": _serialize_log_entry(latest_error),
            })

            summary_counts["total_clients"] += 1
            summary_counts["error_logs"] += log_counts["ERROR"]
            summary_counts["warn_logs"] += log_counts["WARN"]
            if has_active_priority:
                summary_counts["active_priority_screenshots"] += 1
            if is_alive:
                summary_counts["online_clients"] += 1
            else:
                summary_counts["offline_clients"] += 1
            if status == "healthy":
                summary_counts["healthy_clients"] += 1
            elif status == "warning":
                summary_counts["warning_clients"] += 1
            elif status == "critical":
                summary_counts["critical_clients"] += 1

        payload = {
            "summary": summary_counts,
            "period_hours": hours,
            "grace_period_seconds": _grace_period_seconds(),
            "since": since.isoformat(),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "clients": clients_payload,
        }
        session.close()
        return jsonify(dict_to_camel_case(payload))

    except Exception as e:
        session.close()
        return jsonify({"error": f"Server error: {str(e)}"}), 500


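The dashboard payload above collapses liveness, process state, screen health, and 24-hour log counts into a single `status` string via `_determine_client_status`, a helper defined outside this hunk. A minimal sketch of how such a helper might weigh those signals — the function name, thresholds, and status strings below are assumptions for illustration, not the project's actual logic:

```python
# Hypothetical sketch; the real _determine_client_status lives elsewhere
# in client_logs.py, so all thresholds here are illustrative assumptions.
def determine_client_status_sketch(is_alive, process_status, screen_health_status, log_counts):
    """Collapse the monitoring signals into healthy / warning / critical / offline."""
    if not is_alive:
        return "offline"
    # Any recent ERROR log or a bad screen is treated as critical.
    if log_counts.get("ERROR", 0) > 0 or screen_health_status == "black":
        return "critical"
    # Warnings or a stopped playback process degrade the status.
    if log_counts.get("WARN", 0) > 0 or process_status == "stopped":
        return "warning"
    return "healthy"
```

The summary counters in the route then only have to bucket these four strings.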
@client_logs_bp.route("/recent-errors", methods=["GET"])
@admin_or_higher
def get_recent_errors():
    """
    Get recent ERROR logs across all clients
    Query params:
    - limit: number of entries (default 20, max 100)

    Useful for system-wide error monitoring
    """
    session = Session()
    try:
        limit = min(int(request.args.get('limit', 20)), 100)

        # Get recent errors from all clients
        logs = session.query(ClientLog).filter_by(
            level=LogLevel.ERROR
        ).order_by(
            desc(ClientLog.timestamp)
        ).limit(limit).all()

        result = []
        for log in logs:
            result.append(_serialize_log_entry(log, include_client_uuid=True))

        session.close()
        return jsonify({
            "errors": result,
            "count": len(result)
        })

    except Exception as e:
        session.close()
        return jsonify({"error": f"Server error: {str(e)}"}), 500
@@ -1,14 +1,64 @@
from server.database import Session
from models.models import Client, ClientGroup
from flask import Blueprint, request, jsonify
from server.permissions import admin_or_higher
from server.mqtt_helper import publish_client_group, delete_client_group_message, publish_multiple_client_groups
import sys
import os
import glob
import base64
import hashlib
import json
from datetime import datetime, timezone
sys.path.append('/workspace')

clients_bp = Blueprint("clients", __name__, url_prefix="/api/clients")

VALID_SCREENSHOT_TYPES = {"periodic", "event_start", "event_stop"}


def _normalize_screenshot_type(raw_type):
    if raw_type is None:
        return "periodic"
    normalized = str(raw_type).strip().lower()
    if normalized in VALID_SCREENSHOT_TYPES:
        return normalized
    return "periodic"


def _parse_screenshot_timestamp(raw_timestamp):
    if raw_timestamp is None:
        return None
    try:
        if isinstance(raw_timestamp, (int, float)):
            ts_value = float(raw_timestamp)
            if ts_value > 1e12:
                ts_value = ts_value / 1000.0
            return datetime.fromtimestamp(ts_value, timezone.utc)

        if isinstance(raw_timestamp, str):
            ts = raw_timestamp.strip()
            if not ts:
                return None
            if ts.isdigit():
                ts_value = float(ts)
                if ts_value > 1e12:
                    ts_value = ts_value / 1000.0
                return datetime.fromtimestamp(ts_value, timezone.utc)

            ts_normalized = ts.replace("Z", "+00:00") if ts.endswith("Z") else ts
            parsed = datetime.fromisoformat(ts_normalized)
            if parsed.tzinfo is None:
                return parsed.replace(tzinfo=timezone.utc)
            return parsed.astimezone(timezone.utc)
    except Exception:
        return None

    return None


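`_parse_screenshot_timestamp` treats numeric values above `1e12` as millisecond epochs and divides by 1000. That heuristic can be exercised standalone (the numeric branch is re-implemented here so the snippet is self-contained):

```python
from datetime import datetime, timezone

def parse_epoch(raw):
    # Values above 1e12 are assumed to be millisecond epochs, mirroring
    # the heuristic in _parse_screenshot_timestamp above.
    ts_value = float(raw)
    if ts_value > 1e12:
        ts_value = ts_value / 1000.0
    return datetime.fromtimestamp(ts_value, timezone.utc)

# Seconds and milliseconds for the same instant normalize identically:
assert parse_epoch(1700000000) == parse_epoch(1700000000000)
```

The cutoff works because epoch seconds stay below 1e12 until roughly the year 33658, while any recent millisecond epoch is well above it.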
@clients_bp.route("/sync-all-groups", methods=["POST"])
@admin_or_higher
def sync_all_client_groups():
    """
    Administrative route: synchronizes all existing client group assignments with MQTT
@@ -73,6 +123,7 @@ def get_clients_without_description():


@clients_bp.route("/<uuid>/description", methods=["PUT"])
@admin_or_higher
def set_client_description(uuid):
    data = request.get_json()
    description = data.get("description", "").strip()
@@ -127,6 +178,7 @@ def get_clients():


@clients_bp.route("/group", methods=["PUT"])
@admin_or_higher
def update_clients_group():
    data = request.get_json()
    client_ids = data.get("client_ids", [])
@@ -178,6 +230,7 @@ def update_clients_group():


@clients_bp.route("/<uuid>", methods=["PATCH"])
@admin_or_higher
def update_client(uuid):
    data = request.get_json()
    session = Session()
@@ -234,6 +287,7 @@ def get_clients_with_alive_status():


@clients_bp.route("/<uuid>/restart", methods=["POST"])
@admin_or_higher
def restart_client(uuid):
    """
    Route to restart a specific client by UUID.
@@ -267,7 +321,125 @@ def restart_client(uuid):
        return jsonify({"error": f"Failed to send MQTT message: {str(e)}"}), 500


@clients_bp.route("/<uuid>/screenshot", methods=["POST"])
def upload_screenshot(uuid):
    """
    Route to receive and store a screenshot from a client.
    Expected payload: base64-encoded image data in JSON or binary image data.
    Screenshots are stored as {uuid}.jpg in the screenshots folder.
    Keeps last 20 screenshots per client (auto-cleanup).
    """
    session = Session()
    client = session.query(Client).filter_by(uuid=uuid).first()
    if not client:
        session.close()
        return jsonify({"error": "Client not found"}), 404

    try:
        screenshot_timestamp = None
        screenshot_type = "periodic"

        # Handle JSON payload with base64-encoded image
        if request.is_json:
            data = request.get_json()
            if "image" not in data:
                return jsonify({"error": "Missing 'image' field in JSON payload"}), 400

            screenshot_timestamp = _parse_screenshot_timestamp(data.get("timestamp"))
            screenshot_type = _normalize_screenshot_type(data.get("screenshot_type") or data.get("screenshotType"))

            # Decode base64 image
            image_data = base64.b64decode(data["image"])
        else:
            # Handle raw binary image data
            image_data = request.get_data()

        if not image_data:
            return jsonify({"error": "No image data received"}), 400

        # Ensure screenshots directory exists
        screenshots_dir = os.path.join(os.path.dirname(__file__), "..", "screenshots")
        os.makedirs(screenshots_dir, exist_ok=True)

        # Store screenshot with timestamp to track latest
        now_utc = screenshot_timestamp or datetime.now(timezone.utc)
        timestamp = now_utc.strftime("%Y%m%d_%H%M%S_%f")
        filename = f"{uuid}_{timestamp}_{screenshot_type}.jpg"
        filepath = os.path.join(screenshots_dir, filename)

        with open(filepath, "wb") as f:
            f.write(image_data)

        # Also create/update a symlink or copy to {uuid}.jpg for easy retrieval
        latest_filepath = os.path.join(screenshots_dir, f"{uuid}.jpg")
        with open(latest_filepath, "wb") as f:
            f.write(image_data)

        # Keep a dedicated copy for high-priority event screenshots.
        if screenshot_type in ("event_start", "event_stop"):
            priority_filepath = os.path.join(screenshots_dir, f"{uuid}_priority.jpg")
            with open(priority_filepath, "wb") as f:
                f.write(image_data)

        metadata_path = os.path.join(screenshots_dir, f"{uuid}_meta.json")
        metadata = {}
        if os.path.exists(metadata_path):
            try:
                with open(metadata_path, "r", encoding="utf-8") as meta_file:
                    metadata = json.load(meta_file)
            except Exception:
                metadata = {}

        metadata.update({
            "latest_screenshot_type": screenshot_type,
            "latest_received_at": now_utc.isoformat(),
        })
        if screenshot_type in ("event_start", "event_stop"):
            metadata["last_priority_screenshot_type"] = screenshot_type
            metadata["last_priority_received_at"] = now_utc.isoformat()

        with open(metadata_path, "w", encoding="utf-8") as meta_file:
            json.dump(metadata, meta_file)

        # Update screenshot receive timestamp for monitoring dashboard
        client.last_screenshot_analyzed = now_utc
        client.last_screenshot_hash = hashlib.md5(image_data).hexdigest()
        session.commit()

        # Cleanup: keep only last 20 timestamped screenshots per client
        pattern = os.path.join(screenshots_dir, f"{uuid}_*.jpg")
        existing_screenshots = sorted(
            [path for path in glob.glob(pattern) if not path.endswith("_priority.jpg")]
        )

        # Keep last 20, delete older ones
        max_screenshots = 20
        if len(existing_screenshots) > max_screenshots:
            for old_file in existing_screenshots[:-max_screenshots]:
                try:
                    os.remove(old_file)
                except Exception as cleanup_error:
                    # Log but don't fail the request if cleanup fails
                    import logging
                    logging.warning(f"Failed to cleanup old screenshot {old_file}: {cleanup_error}")

        return jsonify({
            "success": True,
            "message": f"Screenshot received for client {uuid}",
            "filename": filename,
            "size": len(image_data),
            "screenshot_type": screenshot_type,
        }), 200

    except Exception as e:
        session.rollback()
        return jsonify({"error": f"Failed to process screenshot: {str(e)}"}), 500
    finally:
        session.close()


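A client can feed the upload route above with a JSON body whose field names (`image`, `timestamp`, `screenshot_type`) come straight from the handler. Building such a payload might look like this; the helper name and the sample bytes are illustrative, and the actual HTTP POST is only sketched in a comment:

```python
import base64

def build_screenshot_payload(jpeg_bytes, screenshot_type="periodic", timestamp=None):
    # Field names match what upload_screenshot() reads from the JSON body.
    payload = {"image": base64.b64encode(jpeg_bytes).decode("ascii")}
    if timestamp is not None:
        payload["timestamp"] = timestamp  # epoch seconds or ms, or ISO string
    payload["screenshot_type"] = screenshot_type
    return payload

payload = build_screenshot_payload(b"\xff\xd8\xff\xe0fake-jpeg", "event_start", 1700000000)
# The server decodes the exact same bytes back out:
assert base64.b64decode(payload["image"]) == b"\xff\xd8\xff\xe0fake-jpeg"
# POST target would be /api/clients/<uuid>/screenshot, e.g. with requests:
# requests.post(f"{base_url}/api/clients/{uuid}/screenshot", json=payload)
```

Raw binary uploads skip the JSON entirely; the handler falls back to `request.get_data()` and defaults the type to "periodic".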
@clients_bp.route("/<uuid>", methods=["DELETE"])
@admin_or_higher
def delete_client(uuid):
    session = Session()
    client = session.query(Client).filter_by(uuid=uuid).first()

@@ -1,4 +1,5 @@
from flask import Blueprint, jsonify, request
from server.permissions import editor_or_higher
from server.database import Session
from models.models import Conversion, ConversionStatus, EventMedia, MediaType
from server.task_queue import get_queue
@@ -19,6 +20,7 @@ def sha256_file(abs_path: str) -> str:


@conversions_bp.route("/<int:media_id>/pdf", methods=["POST"])
@editor_or_higher
def ensure_conversion(media_id: int):
    session = Session()
    try:

@@ -1,4 +1,5 @@
from flask import Blueprint, request, jsonify
from server.permissions import editor_or_higher
from server.database import Session
from models.models import EventException, Event
from datetime import datetime, date
@@ -7,6 +8,7 @@ event_exceptions_bp = Blueprint("event_exceptions", __name__, url_prefix="/api/e


@event_exceptions_bp.route("", methods=["POST"])
@editor_or_higher
def create_exception():
    data = request.json
    session = Session()
@@ -50,6 +52,7 @@ def create_exception():


@event_exceptions_bp.route("/<exc_id>", methods=["PUT"])
@editor_or_higher
def update_exception(exc_id):
    data = request.json
    session = Session()
@@ -77,6 +80,7 @@ def update_exception(exc_id):


@event_exceptions_bp.route("/<exc_id>", methods=["DELETE"])
@editor_or_higher
def delete_exception(exc_id):
    session = Session()
    exc = session.query(EventException).filter_by(id=exc_id).first()

@@ -1,11 +1,13 @@
from re import A
from flask import Blueprint, request, jsonify, send_from_directory
from flask import Blueprint, request, jsonify, send_from_directory, Response, send_file
from server.permissions import editor_or_higher
from server.database import Session
from models.models import EventMedia, MediaType, Conversion, ConversionStatus
from server.task_queue import get_queue
from server.worker import convert_event_media_to_pdf
import hashlib
import os
import re

eventmedia_bp = Blueprint('eventmedia', __name__, url_prefix='/api/eventmedia')

@@ -25,6 +27,7 @@ def get_param(key, default=None):


@eventmedia_bp.route('/filemanager/operations', methods=['GET', 'POST'])
@editor_or_higher
def filemanager_operations():
    action = get_param('action')
    path = get_param('path', '/')
@@ -36,7 +39,18 @@

    print(action, path, name, new_name, target_path, full_path)  # debug output

    # Superadmin-only protection for the converted folder
    from flask import session as flask_session
    user_role = flask_session.get('role')
    is_superadmin = user_role == 'superadmin'
    # Normalize path for checks
    norm_path = os.path.normpath('/' + path.lstrip('/'))
    under_converted = norm_path == '/converted' or norm_path.startswith('/converted/')

    if action == 'read':
        # Block listing inside converted for non-superadmins
        if under_converted and not is_superadmin:
            return jsonify({'files': [], 'cwd': {'name': os.path.basename(full_path), 'path': path}})
        # List files and folders
        items = []
        session = Session()
@@ -59,7 +73,9 @@ def filemanager_operations():
                item['dateModified'] = entry.stat().st_mtime
            else:
                item['dateModified'] = entry.stat().st_mtime
            items.append(item)
            # Hide the converted folder at root for non-superadmins
            if not (not is_superadmin and not entry.is_file() and entry.name == 'converted' and (norm_path == '/' or norm_path == '')):
                items.append(item)
        session.close()
        return jsonify({'files': items, 'cwd': {'name': os.path.basename(full_path), 'path': path}})

@@ -88,6 +104,8 @@ def filemanager_operations():
        session.close()
        return jsonify({'details': details})
    elif action == 'delete':
        if under_converted and not is_superadmin:
            return jsonify({'error': 'Insufficient permissions'}), 403
        for item in request.form.getlist('names[]'):
            item_path = os.path.join(full_path, item)
            if os.path.isdir(item_path):
@@ -96,16 +114,23 @@
            os.remove(item_path)
        return jsonify({'success': True})
    elif action == 'rename':
        if under_converted and not is_superadmin:
            return jsonify({'error': 'Insufficient permissions'}), 403
        src = os.path.join(full_path, name)
        dst = os.path.join(full_path, new_name)
        os.rename(src, dst)
        return jsonify({'success': True})
    elif action == 'move':
        # Prevent moving into converted if not superadmin
        if (target_path and target_path.strip('/').split('/')[0] == 'converted') and not is_superadmin:
            return jsonify({'error': 'Insufficient permissions'}), 403
        src = os.path.join(full_path, name)
        dst = os.path.join(MEDIA_ROOT, target_path.lstrip('/'), name)
        os.rename(src, dst)
        return jsonify({'success': True})
    elif action == 'create':
        if under_converted and not is_superadmin:
            return jsonify({'error': 'Insufficient permissions'}), 403
        os.makedirs(os.path.join(full_path, name), exist_ok=True)
        return jsonify({'success': True})
    else:
@@ -115,10 +140,17 @@


@eventmedia_bp.route('/filemanager/upload', methods=['POST'])
@editor_or_higher
def filemanager_upload():
    session = Session()
    # Fixed: read from request.form first, then from request.args
    path = request.form.get('path') or request.args.get('path', '/')
    from flask import session as flask_session
    user_role = flask_session.get('role')
    is_superadmin = user_role == 'superadmin'
    norm_path = os.path.normpath('/' + path.lstrip('/'))
    if (norm_path == '/converted' or norm_path.startswith('/converted/')) and not is_superadmin:
        return jsonify({'error': 'Insufficient permissions'}), 403
    upload_path = os.path.join(MEDIA_ROOT, path.lstrip('/'))
    os.makedirs(upload_path, exist_ok=True)
    for file in request.files.getlist('uploadFiles'):
@@ -181,9 +213,16 @@ def filemanager_upload():
@eventmedia_bp.route('/filemanager/download', methods=['GET'])
def filemanager_download():
    path = request.args.get('path', '/')
    from flask import session as flask_session
    user_role = flask_session.get('role')
    is_superadmin = user_role == 'superadmin'
    norm_path = os.path.normpath('/' + path.lstrip('/'))
    names = request.args.getlist('names[]')
    # Single-file download only for this example
    if names:
        # Block access to converted for non-superadmins
        if (norm_path == '/converted' or norm_path.startswith('/converted/')) and not is_superadmin:
            return jsonify({'error': 'Insufficient permissions'}), 403
        file_path = os.path.join(MEDIA_ROOT, path.lstrip('/'), names[0])
        return send_from_directory(os.path.dirname(file_path), os.path.basename(file_path), as_attachment=True)
    return jsonify({'error': 'No file specified'}), 400
@@ -194,6 +233,12 @@ def filemanager_download():
@eventmedia_bp.route('/filemanager/get-image', methods=['GET'])
def filemanager_get_image():
    path = request.args.get('path', '/')
    from flask import session as flask_session
    user_role = flask_session.get('role')
    is_superadmin = user_role == 'superadmin'
    norm_path = os.path.normpath('/' + path.lstrip('/'))
    if (norm_path == '/converted' or norm_path.startswith('/converted/')) and not is_superadmin:
        return jsonify({'error': 'Insufficient permissions'}), 403
    file_path = os.path.join(MEDIA_ROOT, path.lstrip('/'))
    return send_from_directory(os.path.dirname(file_path), os.path.basename(file_path))

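The guard repeated across the file-manager routes normalizes the user-supplied path before checking whether it falls under `/converted`. Because `normpath` collapses `..` segments, traversal tricks are resolved before the prefix check runs. A standalone illustration (using `posixpath` for deterministic `/`-separated results; the diff itself uses `os.path`):

```python
import posixpath  # posixpath mirrors os.path on Linux servers

def is_under_converted(path):
    # Same normalization as the routes above: force a leading slash,
    # collapse '.' and '..', then prefix-match against /converted.
    norm_path = posixpath.normpath('/' + path.lstrip('/'))
    return norm_path == '/converted' or norm_path.startswith('/converted/')

assert is_under_converted('converted')
assert is_under_converted('/converted/sub/file.pdf')
assert not is_under_converted('/media/file.png')
# '..' segments are collapsed before the prefix check:
assert is_under_converted('/media/../converted/x')
```

Note the exact-match clause is needed because `'/converted'.startswith('/converted/')` is False, while a sibling like `/converted-old` must not match.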
@@ -210,6 +255,7 @@ def list_media():


@eventmedia_bp.route('/<int:media_id>', methods=['PUT'])
@editor_or_higher
def update_media(media_id):
    session = Session()
    media = session.query(EventMedia).get(media_id)
@@ -259,3 +305,63 @@ def get_media_by_id(media_id):
    }
    session.close()
    return jsonify(result)


# --- Video Streaming with Range Request Support ---
@eventmedia_bp.route('/stream/<int:media_id>/<path:filename>', methods=['GET'])
def stream_video(media_id, filename):
    """Stream video files with range request support for seeking"""
    session = Session()
    media = session.query(EventMedia).get(media_id)
    if not media or not media.file_path:
        session.close()
        return jsonify({'error': 'Video not found'}), 404

    file_path = os.path.join(MEDIA_ROOT, media.file_path)
    if not os.path.exists(file_path):
        session.close()
        return jsonify({'error': 'File not found'}), 404

    session.close()

    # Determine MIME type based on file extension
    ext = os.path.splitext(filename)[1].lower()
    mime_types = {
        '.mp4': 'video/mp4',
        '.webm': 'video/webm',
        '.ogv': 'video/ogg',
        '.avi': 'video/x-msvideo',
        '.mkv': 'video/x-matroska',
        '.mov': 'video/quicktime',
        '.wmv': 'video/x-ms-wmv',
        '.flv': 'video/x-flv',
        '.mpg': 'video/mpeg',
        '.mpeg': 'video/mpeg',
    }
    mime_type = mime_types.get(ext, 'video/mp4')

    # Support range requests for video seeking
    range_header = request.headers.get('Range', None)
    if not range_header:
        return send_file(file_path, mimetype=mime_type)

    size = os.path.getsize(file_path)
    byte_start, byte_end = 0, size - 1

    match = re.search(r'bytes=(\d+)-(\d*)', range_header)
    if match:
        byte_start = int(match.group(1))
        if match.group(2):
            byte_end = int(match.group(2))

    length = byte_end - byte_start + 1

    with open(file_path, 'rb') as f:
        f.seek(byte_start)
        data = f.read(length)

    response = Response(data, 206, mimetype=mime_type, direct_passthrough=True)
    response.headers.add('Content-Range', f'bytes {byte_start}-{byte_end}/{size}')
    response.headers.add('Accept-Ranges', 'bytes')
    response.headers.add('Content-Length', str(length))
    return response

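The streaming route honors a single `bytes=start-end` range: a missing end falls back to the last byte of the file, which is what browsers send when they seek (`Range: bytes=2048-`). The parsing step can be checked in isolation with the same regex the route uses:

```python
import re

def parse_range(range_header, size):
    # Mirrors the parsing in stream_video: open-ended ranges run to EOF.
    byte_start, byte_end = 0, size - 1
    match = re.search(r'bytes=(\d+)-(\d*)', range_header)
    if match:
        byte_start = int(match.group(1))
        if match.group(2):
            byte_end = int(match.group(2))
    return byte_start, byte_end

assert parse_range('bytes=0-1023', 4096) == (0, 1023)
assert parse_range('bytes=2048-', 4096) == (2048, 4095)  # open-ended -> EOF
```

The route then replies with `206 Partial Content` plus a `Content-Range: bytes start-end/size` header, which is what lets the HTML5 video element seek without downloading the whole file.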
@@ -1,6 +1,8 @@
from flask import Blueprint, request, jsonify
from server.permissions import editor_or_higher
from server.database import Session
from models.models import Event, EventMedia, MediaType, EventException
from server.serializers import dict_to_camel_case, dict_to_snake_case
from models.models import Event, EventMedia, MediaType, EventException, SystemSetting
from datetime import datetime, timezone, timedelta
from sqlalchemy import and_
from dateutil.rrule import rrulestr
@@ -47,10 +49,22 @@ def get_events():
        else:
            end_dt = e.end

        # Set is_active to False once the event has passed
        if end_dt and end_dt < now and e.is_active:
            e.is_active = False
            session.commit()
        # Auto-deactivate only non-recurring events past their end.
        # Recurring events remain active until their RecurrenceEnd (UNTIL) has passed.
        if e.is_active:
            if e.recurrence_rule:
                # For recurring, deactivate only when recurrence_end exists and is in the past
                rec_end = e.recurrence_end
                if rec_end and rec_end.tzinfo is None:
                    rec_end = rec_end.replace(tzinfo=timezone.utc)
                if rec_end and rec_end < now:
                    e.is_active = False
                    session.commit()
            else:
                # Non-recurring: deactivate when end is in the past
                if end_dt and end_dt < now:
                    e.is_active = False
                    session.commit()
        if not (show_inactive or e.is_active):
            continue

@@ -82,28 +96,32 @@ def get_events():
        recurrence_exception = ','.join(tokens)

        base_payload = {
            "Id": str(e.id),
            "GroupId": e.group_id,
            "Subject": e.title,
            "Description": getattr(e, 'description', None),
            "StartTime": e.start.isoformat() if e.start else None,
            "EndTime": e.end.isoformat() if e.end else None,
            "IsAllDay": False,
            "MediaId": e.event_media_id,
            "Type": e.event_type.value if e.event_type else None,  # <-- enum to string!
            "Icon": get_icon_for_type(e.event_type.value if e.event_type else None),
            "id": str(e.id),
            "group_id": e.group_id,
            "subject": e.title,
            "description": getattr(e, 'description', None),
            "start_time": e.start.isoformat() if e.start else None,
            "end_time": e.end.isoformat() if e.end else None,
            "is_all_day": False,
            "media_id": e.event_media_id,
            "slideshow_interval": e.slideshow_interval,
            "page_progress": e.page_progress,
            "auto_progress": e.auto_progress,
            "type": e.event_type.value if e.event_type else None,
            "icon": get_icon_for_type(e.event_type.value if e.event_type else None),
            # Recurrence metadata
            "RecurrenceRule": e.recurrence_rule,
            "RecurrenceEnd": e.recurrence_end.isoformat() if e.recurrence_end else None,
            "RecurrenceException": recurrence_exception,
            "SkipHolidays": bool(getattr(e, 'skip_holidays', False)),
            "recurrence_rule": e.recurrence_rule,
            "recurrence_end": e.recurrence_end.isoformat() if e.recurrence_end else None,
            "recurrence_exception": recurrence_exception,
            "skip_holidays": bool(getattr(e, 'skip_holidays', False)),
        }
        result.append(base_payload)

    # No need to emit synthetic override events anymore since detached occurrences
    # are now real Event rows that will be returned in the main query
    session.close()
    return jsonify(result)
    # Convert all keys to camelCase for frontend
    return jsonify(dict_to_camel_case(result))


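The events routes now build snake_case payloads and convert them at the boundary with `dict_to_camel_case` from `server.serializers`. That helper is not shown in this diff; a plausible recursive implementation, written here as an assumption rather than the project's actual code, looks like this:

```python
def to_camel(key):
    # "log_counts_24h" -> "logCounts24h"; first segment stays lowercase.
    head, *rest = key.split('_')
    return head + ''.join(part.capitalize() for part in rest)

def dict_to_camel_case(obj):
    # Recurse through dicts and lists, converting only the dict keys.
    if isinstance(obj, dict):
        return {to_camel(k): dict_to_camel_case(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [dict_to_camel_case(item) for item in obj]
    return obj

assert dict_to_camel_case({"recurrence_rule": None}) == {"recurrenceRule": None}
```

Converting once at the jsonify boundary keeps the Python side idiomatic while the Syncfusion/React frontend keeps receiving the camelCase keys it expects.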
@events_bp.route("/<event_id>", methods=["GET"]) # get single event
|
||||
@@ -113,25 +131,32 @@ def get_event(event_id):
|
||||
event = session.query(Event).filter_by(id=event_id).first()
|
||||
if not event:
|
||||
return jsonify({"error": "Termin nicht gefunden"}), 404
|
||||
|
||||
# Convert event to dictionary with all necessary fields
|
||||
event_dict = {
|
||||
"Id": str(event.id),
|
||||
"Subject": event.title,
|
||||
"StartTime": event.start.isoformat() if event.start else None,
|
||||
"EndTime": event.end.isoformat() if event.end else None,
|
||||
"Description": event.description,
|
||||
"Type": event.event_type.value if event.event_type else "presentation",
|
||||
"IsAllDay": False, # Assuming events are not all-day by default
|
||||
"MediaId": str(event.event_media_id) if event.event_media_id else None,
|
||||
"SlideshowInterval": event.slideshow_interval,
|
||||
"WebsiteUrl": event.event_media.url if event.event_media and hasattr(event.event_media, 'url') else None,
|
||||
"RecurrenceRule": event.recurrence_rule,
|
||||
"RecurrenceEnd": event.recurrence_end.isoformat() if event.recurrence_end else None,
|
||||
"SkipHolidays": event.skip_holidays,
|
||||
"Icon": get_icon_for_type(event.event_type.value if event.event_type else "presentation"),
|
||||
"id": str(event.id),
|
||||
"subject": event.title,
|
||||
"start_time": event.start.isoformat() if event.start else None,
|
||||
"end_time": event.end.isoformat() if event.end else None,
|
||||
"description": event.description,
|
||||
"type": event.event_type.value if event.event_type else "presentation",
|
||||
"is_all_day": False, # Assuming events are not all-day by default
|
||||
"media_id": str(event.event_media_id) if event.event_media_id else None,
|
||||
"slideshow_interval": event.slideshow_interval,
|
||||
"page_progress": event.page_progress,
|
||||
"auto_progress": event.auto_progress,
|
||||
"website_url": event.event_media.url if event.event_media and hasattr(event.event_media, 'url') else None,
|
||||
# Video-specific fields
|
||||
"autoplay": event.autoplay,
|
||||
"loop": event.loop,
|
||||
"volume": event.volume,
|
||||
"muted": event.muted,
|
||||
"recurrence_rule": event.recurrence_rule,
|
||||
"recurrence_end": event.recurrence_end.isoformat() if event.recurrence_end else None,
|
||||
"skip_holidays": event.skip_holidays,
|
||||
"icon": get_icon_for_type(event.event_type.value if event.event_type else "presentation"),
|
||||
}
|
||||
|
||||
return jsonify(dict_to_camel_case(event_dict))
|
||||
return jsonify(event_dict)
|
||||
except Exception as e:
|
||||
return jsonify({"error": f"Fehler beim Laden des Termins: {str(e)}"}), 500
|
||||
@@ -140,6 +165,7 @@ def get_event(event_id):
|
||||
|
||||
|
||||
@events_bp.route("/<event_id>", methods=["DELETE"]) # delete series or single event
|
||||
@editor_or_higher
|
||||
def delete_event(event_id):
|
||||
session = Session()
|
||||
event = session.query(Event).filter_by(id=event_id).first()
|
||||
@@ -162,7 +188,7 @@ def delete_event(event_id):
|
||||
|
||||
|
||||
@events_bp.route("/<event_id>/occurrences/<occurrence_date>", methods=["DELETE"]) # skip single occurrence
|
||||
|
||||
@editor_or_higher
|
||||
def delete_event_occurrence(event_id, occurrence_date):
|
||||
"""Delete a single occurrence of a recurring event by creating an EventException."""
|
||||
session = Session()
|
||||
@@ -217,6 +243,7 @@ def delete_event_occurrence(event_id, occurrence_date):
|
||||
|
||||
|
||||
@events_bp.route("/<event_id>/occurrences/<occurrence_date>/detach", methods=["POST"]) # detach single occurrence into standalone event
|
||||
@editor_or_higher
|
||||
def detach_event_occurrence(event_id, occurrence_date):
|
||||
"""BULLETPROOF: Detach single occurrence without touching master event."""
|
||||
session = Session()
|
||||
@@ -243,6 +270,8 @@ def detach_event_occurrence(event_id, occurrence_date):
|
||||
'event_type': master.event_type,
|
||||
'event_media_id': master.event_media_id,
|
||||
'slideshow_interval': getattr(master, 'slideshow_interval', None),
|
||||
'page_progress': getattr(master, 'page_progress', None),
|
||||
'auto_progress': getattr(master, 'auto_progress', None),
|
||||
'created_by': master.created_by,
|
||||
}
|
||||
|
||||
@@ -294,6 +323,8 @@ def detach_event_occurrence(event_id, occurrence_date):
|
||||
event_type=master_data['event_type'],
|
||||
event_media_id=master_data['event_media_id'],
|
||||
slideshow_interval=master_data['slideshow_interval'],
|
||||
page_progress=data.get("page_progress", master_data['page_progress']),
|
||||
auto_progress=data.get("auto_progress", master_data['auto_progress']),
|
||||
recurrence_rule=None,
|
||||
recurrence_end=None,
|
||||
skip_holidays=False,
|
||||
@@ -322,6 +353,7 @@ def detach_event_occurrence(event_id, occurrence_date):


@events_bp.route("", methods=["POST"])
@editor_or_higher
def create_event():
    data = request.json
    session = Session()
@@ -336,11 +368,15 @@ def create_event():
    event_type = data["event_type"]
    event_media_id = None
    slideshow_interval = None
    page_progress = None
    auto_progress = None

    # Presentation: carry over event_media_id and slideshow_interval
    if event_type == "presentation":
        event_media_id = data.get("event_media_id")
        slideshow_interval = data.get("slideshow_interval")
        page_progress = data.get("page_progress")
        auto_progress = data.get("auto_progress")
        if not event_media_id:
            return jsonify({"error": "event_media_id required for presentation"}), 400

@@ -359,6 +395,40 @@ def create_event():
        session.commit()
        event_media_id = media.id

    # WebUntis: fetch the URL from system settings and create an EventMedia entry
    if event_type == "webuntis":
        # Fetch the WebUntis URL from system settings (uses supplement_table_url)
        webuntis_setting = session.query(SystemSetting).filter_by(key='supplement_table_url').first()
        webuntis_url = webuntis_setting.value if webuntis_setting else ''

        if not webuntis_url:
            return jsonify({"error": "WebUntis / Supplement table URL not configured in system settings"}), 400

        # Create the EventMedia entry for WebUntis
        media = EventMedia(
            media_type=MediaType.website,
            url=webuntis_url,
            file_path=webuntis_url
        )
        session.add(media)
        session.commit()
        event_media_id = media.id

    # Video: carry over event_media_id and video settings
    autoplay = None
    loop = None
    volume = None
    muted = None
    if event_type == "video":
        event_media_id = data.get("event_media_id")
        if not event_media_id:
            return jsonify({"error": "event_media_id required for video"}), 400
        # Get video-specific settings with defaults
        autoplay = data.get("autoplay", True)
        loop = data.get("loop", False)
        volume = data.get("volume", 0.8)
        muted = data.get("muted", False)

    # Get created_by from the request data (default: None)
    created_by = data.get("created_by")

@@ -384,6 +454,12 @@ def create_event():
        is_active=True,
        event_media_id=event_media_id,
        slideshow_interval=slideshow_interval,
        page_progress=page_progress,
        auto_progress=auto_progress,
        autoplay=autoplay,
        loop=loop,
        volume=volume,
        muted=muted,
        created_by=created_by,
        # Recurrence
        recurrence_rule=data.get("recurrence_rule"),
@@ -438,6 +514,7 @@ def create_event():


@events_bp.route("/<event_id>", methods=["PUT"])  # update series or single event
@editor_or_higher
def update_event(event_id):
    data = request.json
    session = Session()
@@ -455,6 +532,19 @@ def update_event(event_id):
    event.event_type = data.get("event_type", event.event_type)
    event.event_media_id = data.get("event_media_id", event.event_media_id)
    event.slideshow_interval = data.get("slideshow_interval", event.slideshow_interval)
    if "page_progress" in data:
        event.page_progress = data.get("page_progress")
    if "auto_progress" in data:
        event.auto_progress = data.get("auto_progress")
    # Video-specific fields
    if "autoplay" in data:
        event.autoplay = data.get("autoplay")
    if "loop" in data:
        event.loop = data.get("loop")
    if "volume" in data:
        event.volume = data.get("volume")
    if "muted" in data:
        event.muted = data.get("muted")
    event.created_by = data.get("created_by", event.created_by)
    # Track previous values to decide on exception regeneration
    prev_rule = event.recurrence_rule

@@ -3,6 +3,8 @@ from server.database import Session
from models.models import EventMedia
import os

from flask import Response, abort, session as flask_session

# Blueprint for direct file downloads by media ID
files_bp = Blueprint("files", __name__, url_prefix="/api/files")

@@ -66,3 +68,29 @@ def download_converted(relpath: str):
    if not os.path.isfile(abs_path):
        return jsonify({"error": "File not found"}), 404
    return send_from_directory(os.path.dirname(abs_path), os.path.basename(abs_path), as_attachment=True)


@files_bp.route('/stream/<path:filename>')
def stream_file(filename: str):
    """Stream a media file via nginx X-Accel-Redirect after basic auth checks.

    The nginx config must define an internal alias for /internal_media/ that
    points to the media folder (for example: /opt/infoscreen/server/media/).
    """
    # Basic session-based auth: adapt to your project's auth logic if needed
    user_role = flask_session.get('role')
    if not user_role:
        return abort(403)

    # Normalize the path to avoid directory traversal
    safe_path = os.path.normpath('/' + filename).lstrip('/')
    abs_path = os.path.join(MEDIA_ROOT, safe_path)
    if not os.path.isfile(abs_path):
        return abort(404)

    # Return an X-Accel-Redirect header so nginx serves the file efficiently
    internal_path = f'/internal_media/{safe_path}'
    resp = Response()
    resp.headers['X-Accel-Redirect'] = internal_path
    # Optional: set the content type explicitly (nginx can also detect it)
    return resp
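The traversal guard in stream_file can be exercised in isolation. This standalone sketch (stdlib only, function name my own) shows why prepending `'/'` before `os.path.normpath` matters: it forces `..` segments to resolve against the filesystem root, where they collapse instead of walking above the media folder:

```python
import os

def sanitize(filename: str) -> str:
    # Same expression as in stream_file: leading '/' pins '..' segments
    # to the root so they cannot escape upward; lstrip restores a
    # relative path for joining onto MEDIA_ROOT.
    return os.path.normpath('/' + filename).lstrip('/')

print(sanitize('videos/intro.mp4'))    # videos/intro.mp4
print(sanitize('../../etc/passwd'))    # etc/passwd  (traversal neutralized)
print(sanitize('a/../../secret.txt'))  # secret.txt
```

Note that on Windows `os.path.normpath` would also rewrite separators; on the Linux servers this diff targets, the behaviour is as shown.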


@@ -4,10 +4,11 @@ from models.models import Client
from server.database import Session
from models.models import ClientGroup
from flask import Blueprint, request, jsonify
from server.permissions import admin_or_higher, require_role
from sqlalchemy import func
import sys
import os
-from datetime import datetime, timedelta
+from datetime import datetime, timedelta, timezone

sys.path.append('/workspace')

@@ -15,11 +16,23 @@ groups_bp = Blueprint("groups", __name__, url_prefix="/api/groups")


def get_grace_period():
-    """Select the grace period depending on ENV."""
+    """Select the grace period depending on ENV.
+
+    Clients send heartbeats every ~65s. The grace period allows 2 missed
+    heartbeats plus a safety margin before marking a client offline.
+    """
    env = os.environ.get("ENV", "production").lower()
    if env == "development" or env == "dev":
-        return int(os.environ.get("HEARTBEAT_GRACE_PERIOD_DEV", "15"))
-    return int(os.environ.get("HEARTBEAT_GRACE_PERIOD_PROD", "180"))
+        return int(os.environ.get("HEARTBEAT_GRACE_PERIOD_DEV", "180"))
+    return int(os.environ.get("HEARTBEAT_GRACE_PERIOD_PROD", "170"))
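The selection logic above is easy to check without Flask. A minimal sketch (my own function name; it takes a mapping instead of reading `os.environ` directly so the branches are testable) reproduces the defaults, which match the `.env.example` arithmetic of 2 missed ~65s heartbeats plus a margin (2·65 + 50 = 180 for dev, 2·65 + 40 = 170 for prod):

```python
# Mirrors get_grace_period: env-dependent default with per-env override.
def grace_period(env_vars) -> int:
    env = env_vars.get("ENV", "production").lower()
    if env in ("development", "dev"):
        return int(env_vars.get("HEARTBEAT_GRACE_PERIOD_DEV", "180"))
    return int(env_vars.get("HEARTBEAT_GRACE_PERIOD_PROD", "170"))

print(grace_period({"ENV": "dev"}))  # 180
print(grace_period({}))              # 170 (production default)
print(grace_period({"ENV": "production",
                    "HEARTBEAT_GRACE_PERIOD_PROD": "300"}))  # 300
```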


def _to_utc(dt: datetime) -> datetime:
    if dt is None:
        return None
    if dt.tzinfo is None:
        return dt.replace(tzinfo=timezone.utc)
    return dt.astimezone(timezone.utc)


def is_client_alive(last_alive, is_active):
@@ -37,10 +50,14 @@ def is_client_alive(last_alive, is_active):
            return False
    else:
        last_alive_dt = last_alive
-    return datetime.utcnow() - last_alive_dt <= timedelta(seconds=grace_period)
+    # Always compare in UTC and with tz-aware datetimes
+    last_alive_utc = _to_utc(last_alive_dt)
+    now_utc = datetime.now(timezone.utc)
+    return (now_utc - last_alive_utc) <= timedelta(seconds=grace_period)
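The point of `_to_utc` is that naive and aware datetimes cannot be subtracted from each other. A self-contained sketch of the same normalization (stdlib only) shows a naive UTC timestamp and an aware +01:00 timestamp collapsing to the same instant:

```python
from datetime import datetime, timedelta, timezone

def to_utc(dt):
    # Same logic as _to_utc: naive values are assumed to already be UTC
    # (as datetime.utcnow() produces); aware values are converted, so
    # subsequent subtraction never raises a naive/aware TypeError.
    if dt is None:
        return None
    if dt.tzinfo is None:
        return dt.replace(tzinfo=timezone.utc)
    return dt.astimezone(timezone.utc)

naive = datetime(2024, 1, 1, 12, 0, 0)  # implicitly UTC
aware = datetime(2024, 1, 1, 13, 0, 0, tzinfo=timezone(timedelta(hours=1)))
assert to_utc(naive) == to_utc(aware)  # both represent 12:00 UTC
# Subtracting naive - aware directly would raise TypeError instead.
```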


@groups_bp.route("", methods=["POST"])
@admin_or_higher
def create_group():
    data = request.get_json()
    name = data.get("name")
@@ -83,6 +100,7 @@ def get_groups():


@groups_bp.route("/<int:group_id>", methods=["PUT"])
@admin_or_higher
def update_group(group_id):
    data = request.get_json()
    session = Session()
@@ -106,6 +124,7 @@ def update_group(group_id):


@groups_bp.route("/<int:group_id>", methods=["DELETE"])
@admin_or_higher
def delete_group(group_id):
    session = Session()
    group = session.query(ClientGroup).filter_by(id=group_id).first()
@@ -119,6 +138,7 @@ def delete_group(group_id):


@groups_bp.route("/byname/<string:group_name>", methods=["DELETE"])
@admin_or_higher
def delete_group_by_name(group_name):
    session = Session()
    group = session.query(ClientGroup).filter_by(name=group_name).first()
@@ -132,6 +152,7 @@ def delete_group_by_name(group_name):


@groups_bp.route("/byname/<string:old_name>", methods=["PUT"])
@admin_or_higher
def rename_group_by_name(old_name):
    data = request.get_json()
    new_name = data.get("newName")
@@ -187,3 +208,55 @@ def get_groups_with_clients():
        })
    session.close()
    return jsonify(result)


@groups_bp.route("/order", methods=["GET"])
def get_group_order():
    """Retrieve the saved group order from system settings."""
    from models.models import SystemSetting
    session = Session()
    try:
        setting = session.query(SystemSetting).filter_by(key='group_order').first()
        if setting and setting.value:
            import json
            order = json.loads(setting.value)
            return jsonify({"order": order})
        return jsonify({"order": None})
    except Exception as e:
        print(f"Error loading group order: {e}")
        return jsonify({"order": None})
    finally:
        session.close()


@groups_bp.route("/order", methods=["POST"])
@require_role('admin')
def save_group_order():
    """Save the custom group order to system settings."""
    from models.models import SystemSetting
    session = Session()
    try:
        data = request.get_json()
        order = data.get('order')

        if not order or not isinstance(order, list):
            return jsonify({"success": False, "error": "Invalid order data"}), 400

        import json
        order_json = json.dumps(order)

        setting = session.query(SystemSetting).filter_by(key='group_order').first()
        if setting:
            setting.value = order_json
        else:
            setting = SystemSetting(key='group_order', value=order_json)
            session.add(setting)

        session.commit()
        return jsonify({"success": True})
    except Exception as e:
        session.rollback()
        print(f"Error saving group order: {e}")
        return jsonify({"success": False, "error": str(e)}), 500
    finally:
        session.close()
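save_group_order uses the query-then-update-or-insert idiom that recurs throughout these settings routes. A minimal stdlib sketch of the same upsert shape, with `sqlite3` standing in for the SQLAlchemy session (table layout and function name are my own, chosen to mirror the route):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE system_settings (key TEXT PRIMARY KEY, value TEXT)")

def save_order(order):
    # Same shape as save_group_order: look the row up, update it if it
    # exists, insert it otherwise, then commit.
    payload = json.dumps(order)
    row = conn.execute(
        "SELECT key FROM system_settings WHERE key = 'group_order'"
    ).fetchone()
    if row:
        conn.execute(
            "UPDATE system_settings SET value = ? WHERE key = 'group_order'",
            (payload,),
        )
    else:
        conn.execute(
            "INSERT INTO system_settings (key, value) VALUES ('group_order', ?)",
            (payload,),
        )
    conn.commit()

save_order(["Aula", "Foyer"])         # first call takes the insert path
save_order(["Foyer", "Aula", "Lab"])  # second call takes the update path
value = conn.execute(
    "SELECT value FROM system_settings WHERE key = 'group_order'"
).fetchone()[0]
print(json.loads(value))  # ['Foyer', 'Aula', 'Lab']
```

Note this read-then-write pattern is not atomic under concurrent writers; with a single-process scheduler and admin-only writes, as here, that is usually acceptable.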


@@ -1,4 +1,5 @@
from flask import Blueprint, request, jsonify
from server.permissions import admin_or_higher
from server.database import Session
from models.models import SchoolHoliday
from datetime import datetime
@@ -22,6 +23,7 @@ def list_holidays():


@holidays_bp.route("/upload", methods=["POST"])
@admin_or_higher
def upload_holidays():
    """
    Accepts a CSV/TXT file upload (multipart/form-data).
329  server/routes/system_settings.py  Normal file
@@ -0,0 +1,329 @@
"""
System Settings API endpoints.
Provides key-value storage for system-wide configuration.
"""
from flask import Blueprint, jsonify, request
from server.database import Session
from models.models import SystemSetting
from server.permissions import admin_or_higher, superadmin_only
from sqlalchemy.exc import SQLAlchemyError

system_settings_bp = Blueprint('system_settings', __name__, url_prefix='/api/system-settings')


@system_settings_bp.route('', methods=['GET'])
@admin_or_higher
def get_all_settings():
    """
    Get all system settings.
    Admin+ only.
    """
    session = Session()
    try:
        settings = session.query(SystemSetting).all()
        return jsonify({
            'settings': [s.to_dict() for s in settings]
        }), 200
    except SQLAlchemyError as e:
        return jsonify({'error': str(e)}), 500
    finally:
        session.close()


@system_settings_bp.route('/<key>', methods=['GET'])
def get_setting(key):
    """
    Get a specific system setting by key.
    Public endpoint - settings are read-only configuration.
    """
    session = Session()
    try:
        setting = session.query(SystemSetting).filter_by(key=key).first()
        if not setting:
            return jsonify({'error': 'Setting not found'}), 404
        return jsonify(setting.to_dict()), 200
    except SQLAlchemyError as e:
        return jsonify({'error': str(e)}), 500
    finally:
        session.close()


@system_settings_bp.route('/<key>', methods=['POST', 'PUT'])
@admin_or_higher
def update_setting(key):
    """
    Create or update a system setting.
    Admin+ only.

    Request body:
    {
        "value": "string",
        "description": "string" (optional)
    }
    """
    session = Session()
    try:
        data = request.get_json()
        if not data:
            return jsonify({'error': 'No data provided'}), 400

        value = data.get('value')
        description = data.get('description')

        # Try to find existing setting
        setting = session.query(SystemSetting).filter_by(key=key).first()

        if setting:
            # Update existing
            setting.value = value
            if description is not None:
                setting.description = description
        else:
            # Create new
            setting = SystemSetting(
                key=key,
                value=value,
                description=description
            )
            session.add(setting)

        session.commit()
        return jsonify(setting.to_dict()), 200
    except SQLAlchemyError as e:
        session.rollback()
        return jsonify({'error': str(e)}), 500
    finally:
        session.close()


@system_settings_bp.route('/<key>', methods=['DELETE'])
@admin_or_higher
def delete_setting(key):
    """
    Delete a system setting.
    Admin+ only.
    """
    session = Session()
    try:
        setting = session.query(SystemSetting).filter_by(key=key).first()
        if not setting:
            return jsonify({'error': 'Setting not found'}), 404

        session.delete(setting)
        session.commit()
        return jsonify({'message': 'Setting deleted successfully'}), 200
    except SQLAlchemyError as e:
        session.rollback()
        return jsonify({'error': str(e)}), 500
    finally:
        session.close()


# Convenience endpoints for specific settings
@system_settings_bp.route('/supplement-table', methods=['GET'])
@admin_or_higher
def get_supplement_table_settings():
    """
    Get supplement table URL and enabled status.
    Admin+ only.
    """
    session = Session()
    try:
        url_setting = session.query(SystemSetting).filter_by(key='supplement_table_url').first()
        enabled_setting = session.query(SystemSetting).filter_by(key='supplement_table_enabled').first()

        return jsonify({
            'url': url_setting.value if url_setting else '',
            'enabled': enabled_setting.value == 'true' if enabled_setting else False,
        }), 200
    except SQLAlchemyError as e:
        return jsonify({'error': str(e)}), 500
    finally:
        session.close()


@system_settings_bp.route('/supplement-table', methods=['POST'])
@admin_or_higher
def update_supplement_table_settings():
    """
    Update supplement table URL and enabled status.
    Admin+ only.

    Request body:
    {
        "url": "https://...",
        "enabled": true/false
    }
    """
    session = Session()
    try:
        data = request.get_json()
        if not data:
            return jsonify({'error': 'No data provided'}), 400

        url = data.get('url', '')
        enabled = data.get('enabled', False)

        # Update or create URL setting
        url_setting = session.query(SystemSetting).filter_by(key='supplement_table_url').first()
        if url_setting:
            url_setting.value = url
        else:
            url_setting = SystemSetting(
                key='supplement_table_url',
                value=url,
                description='URL für Vertretungsplan / WebUntis (Stundenplan-Änderungstabelle)'
            )
            session.add(url_setting)

        # Update or create enabled setting
        enabled_setting = session.query(SystemSetting).filter_by(key='supplement_table_enabled').first()
        if enabled_setting:
            enabled_setting.value = 'true' if enabled else 'false'
        else:
            enabled_setting = SystemSetting(
                key='supplement_table_enabled',
                value='true' if enabled else 'false',
                description='Ob Vertretungsplan aktiviert ist'
            )
            session.add(enabled_setting)

        session.commit()

        return jsonify({
            'url': url,
            'enabled': enabled,
            'message': 'Supplement table settings updated successfully'
        }), 200
    except SQLAlchemyError as e:
        session.rollback()
        return jsonify({'error': str(e)}), 500
    finally:
        session.close()


@system_settings_bp.route('/holiday-banner', methods=['GET'])
def get_holiday_banner_setting():
    """
    Get holiday banner enabled status.
    Public endpoint - dashboard needs this.
    """
    session = Session()
    try:
        setting = session.query(SystemSetting).filter_by(key='holiday_banner_enabled').first()
        enabled = setting.value == 'true' if setting else True

        return jsonify({'enabled': enabled}), 200
    except SQLAlchemyError as e:
        return jsonify({'error': str(e)}), 500
    finally:
        session.close()


@system_settings_bp.route('/holiday-banner', methods=['POST'])
@admin_or_higher
def update_holiday_banner_setting():
    """
    Update holiday banner enabled status.
    Admin+ only.

    Request body:
    {
        "enabled": true/false
    }
    """
    session = Session()
    try:
        data = request.get_json()
        if not data:
            return jsonify({'error': 'No data provided'}), 400

        enabled = data.get('enabled', True)

        # Update or create setting
        setting = session.query(SystemSetting).filter_by(key='holiday_banner_enabled').first()
        if setting:
            setting.value = 'true' if enabled else 'false'
        else:
            setting = SystemSetting(
                key='holiday_banner_enabled',
                value='true' if enabled else 'false',
                description='Ferienstatus-Banner auf Dashboard anzeigen'
            )
            session.add(setting)

        session.commit()

        return jsonify({
            'enabled': enabled,
            'message': 'Holiday banner setting updated successfully'
        }), 200
    except SQLAlchemyError as e:
        session.rollback()
        return jsonify({'error': str(e)}), 500
    finally:
        session.close()


@system_settings_bp.route('/organization-name', methods=['GET'])
def get_organization_name():
    """
    Get organization name.
    Public endpoint - header needs this.
    """
    session = Session()
    try:
        setting = session.query(SystemSetting).filter_by(key='organization_name').first()
        name = setting.value if setting and setting.value else ''

        return jsonify({'name': name}), 200
    except SQLAlchemyError as e:
        return jsonify({'error': str(e)}), 500
    finally:
        session.close()


@system_settings_bp.route('/organization-name', methods=['POST'])
@superadmin_only
def update_organization_name():
    """
    Update organization name.
    Superadmin only.

    Request body:
    {
        "name": "Meine Organisation"
    }
    """
    session = Session()
    try:
        data = request.get_json()
        if not data:
            return jsonify({'error': 'No data provided'}), 400

        name = data.get('name', '')

        # Update or create setting
        setting = session.query(SystemSetting).filter_by(key='organization_name').first()
        if setting:
            setting.value = name
        else:
            setting = SystemSetting(
                key='organization_name',
                value=name,
                description='Name der Organisation (wird im Header angezeigt)'
            )
            session.add(setting)

        session.commit()

        return jsonify({
            'name': name,
            'message': 'Organization name updated successfully'
        }), 200
    except SQLAlchemyError as e:
        session.rollback()
        return jsonify({'error': str(e)}), 500
    finally:
        session.close()


439  server/routes/users.py  Normal file
@@ -0,0 +1,439 @@
"""
User management routes.

This module provides endpoints for managing users (CRUD operations).
Access is role-based: admin can manage user/editor/admin, superadmin can manage all.
"""

from flask import Blueprint, request, jsonify, session
from server.database import Session
from models.models import User, UserRole
from server.permissions import require_role, superadmin_only
import bcrypt
import sys
from datetime import datetime, timezone

sys.path.append('/workspace')

users_bp = Blueprint("users", __name__, url_prefix="/api/users")


@users_bp.route("", methods=["GET"])
@require_role('admin', 'superadmin')
def list_users():
    """
    List all users (filtered by current user's role).

    Admin: sees user, editor, admin
    Superadmin: sees all including superadmin

    Returns:
        200: [
            {
                "id": int,
                "username": "string",
                "role": "string",
                "isActive": boolean,
                "createdAt": "ISO8601",
                "updatedAt": "ISO8601"
            }
        ]
    """
    db_session = Session()
    try:
        current_role = session.get('role')

        query = db_session.query(User)

        # Admin cannot see superadmin users
        if current_role == 'admin':
            query = query.filter(User.role.in_([UserRole.user, UserRole.editor, UserRole.admin]))

        users = query.order_by(User.username).all()

        result = []
        for user in users:
            result.append({
                "id": user.id,
                "username": user.username,
                "role": user.role.value,
                "isActive": user.is_active,
                "lastLoginAt": user.last_login_at.isoformat() if user.last_login_at else None,
                "lastPasswordChangeAt": user.last_password_change_at.isoformat() if user.last_password_change_at else None,
                "failedLoginAttempts": user.failed_login_attempts,
                "createdAt": user.created_at.isoformat() if user.created_at else None,
                "updatedAt": user.updated_at.isoformat() if user.updated_at else None
            })

        return jsonify(result), 200

    finally:
        db_session.close()


@users_bp.route("", methods=["POST"])
@require_role('admin', 'superadmin')
def create_user():
    """
    Create a new user.

    Admin: can create user, editor, admin
    Superadmin: can create any role including superadmin

    Request body:
    {
        "username": "string",
        "password": "string",
        "role": "user|editor|admin|superadmin",
        "isActive": boolean (optional, default true)
    }

    Returns:
        201: {
            "id": int,
            "username": "string",
            "role": "string",
            "isActive": boolean,
            "message": "User created successfully"
        }
        400: {"error": "Validation error"}
        403: {"error": "Permission denied"}
        409: {"error": "Username already exists"}
    """
    data = request.get_json()

    if not data:
        return jsonify({"error": "Request body required"}), 400

    username = data.get("username", "").strip()
    password = data.get("password", "")
    role_str = data.get("role", "user")
    is_active = data.get("isActive", True)

    # Validation
    if not username:
        return jsonify({"error": "Username is required"}), 400

    if len(username) < 3:
        return jsonify({"error": "Username must be at least 3 characters"}), 400

    if not password:
        return jsonify({"error": "Password is required"}), 400

    if len(password) < 6:
        return jsonify({"error": "Password must be at least 6 characters"}), 400

    # Check if role is valid
    try:
        new_role = UserRole[role_str]
    except KeyError:
        return jsonify({"error": f"Invalid role: {role_str}"}), 400

    # Check permissions: admin cannot create superadmin
    current_role = session.get('role')
    if current_role == 'admin' and new_role == UserRole.superadmin:
        return jsonify({"error": "Admin cannot create superadmin accounts"}), 403

    db_session = Session()
    try:
        # Check if username already exists
        existing = db_session.query(User).filter_by(username=username).first()
        if existing:
            return jsonify({"error": "Username already exists"}), 409

        # Hash password
        password_hash = bcrypt.hashpw(password.encode('utf-8'), bcrypt.gensalt()).decode('utf-8')

        # Create user
        new_user = User(
            username=username,
            password_hash=password_hash,
            role=new_role,
            is_active=is_active
        )

        db_session.add(new_user)
        db_session.commit()

        return jsonify({
            "id": new_user.id,
            "username": new_user.username,
            "role": new_user.role.value,
            "isActive": new_user.is_active,
            "message": "User created successfully"
        }), 201

    finally:
        db_session.close()
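create_user validates the role string with `UserRole[role_str]`, a lookup by member *name*. A sketch with a stand-in enum (the real `UserRole` lives in `models.models`; `Role` and `parse_role` here are hypothetical) shows why `KeyError`, not `ValueError`, is the failure mode to catch:

```python
from enum import Enum

class Role(Enum):  # stand-in for models.models.UserRole
    user = "user"
    editor = "editor"
    admin = "admin"
    superadmin = "superadmin"

def parse_role(role_str: str):
    # Enum[name] looks members up by name and raises KeyError on a miss;
    # Enum(value) would look up by value and raise ValueError instead.
    try:
        return Role[role_str]
    except KeyError:
        return None

print(parse_role("editor"))  # Role.editor
print(parse_role("root"))    # None
```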


@users_bp.route("/<int:user_id>", methods=["GET"])
@require_role('admin', 'superadmin')
def get_user(user_id):
    """
    Get a single user by ID.

    Admin: cannot get superadmin users
    Superadmin: can get any user

    Returns:
        200: {
            "id": int,
            "username": "string",
            "role": "string",
            "isActive": boolean,
            "createdAt": "ISO8601",
            "updatedAt": "ISO8601"
        }
        403: {"error": "Permission denied"}
        404: {"error": "User not found"}
    """
    db_session = Session()
    try:
        user = db_session.query(User).filter_by(id=user_id).first()

        if not user:
            return jsonify({"error": "User not found"}), 404

        # Admin cannot view superadmin users
        current_role = session.get('role')
        if current_role == 'admin' and user.role == UserRole.superadmin:
            return jsonify({"error": "Permission denied"}), 403

        return jsonify({
            "id": user.id,
            "username": user.username,
            "role": user.role.value,
            "isActive": user.is_active,
            "lastLoginAt": user.last_login_at.isoformat() if user.last_login_at else None,
            "lastPasswordChangeAt": user.last_password_change_at.isoformat() if user.last_password_change_at else None,
            "lastFailedLoginAt": user.last_failed_login_at.isoformat() if user.last_failed_login_at else None,
            "failedLoginAttempts": user.failed_login_attempts,
            "lockedUntil": user.locked_until.isoformat() if user.locked_until else None,
            "deactivatedAt": user.deactivated_at.isoformat() if user.deactivated_at else None,
            "createdAt": user.created_at.isoformat() if user.created_at else None,
            "updatedAt": user.updated_at.isoformat() if user.updated_at else None
        }), 200

    finally:
        db_session.close()


@users_bp.route("/<int:user_id>", methods=["PUT"])
@require_role('admin', 'superadmin')
def update_user(user_id):
    """
    Update a user's details.

    Admin: cannot edit superadmin users, cannot assign superadmin role
    Superadmin: can edit any user

    Restrictions:
    - Cannot change own role
    - Cannot change own active status

    Request body:
    {
        "username": "string" (optional),
        "role": "string" (optional),
        "isActive": boolean (optional)
    }

    Returns:
        200: {
            "id": int,
            "username": "string",
            "role": "string",
            "isActive": boolean,
            "message": "User updated successfully"
        }
        400: {"error": "Validation error"}
        403: {"error": "Permission denied"}
        404: {"error": "User not found"}
        409: {"error": "Username already exists"}
    """
    data = request.get_json()

    if not data:
        return jsonify({"error": "Request body required"}), 400

    current_user_id = session.get('user_id')
    current_role = session.get('role')

    db_session = Session()
    try:
        user = db_session.query(User).filter_by(id=user_id).first()

        if not user:
            return jsonify({"error": "User not found"}), 404

        # Admin cannot edit superadmin users
        if current_role == 'admin' and user.role == UserRole.superadmin:
            return jsonify({"error": "Cannot edit superadmin users"}), 403

        # Update username if provided
        if "username" in data:
            new_username = data["username"].strip()
            if new_username and new_username != user.username:
                if len(new_username) < 3:
                    return jsonify({"error": "Username must be at least 3 characters"}), 400

                # Check if username already exists
                existing = db_session.query(User).filter(
                    User.username == new_username,
                    User.id != user_id
                ).first()
                if existing:
                    return jsonify({"error": "Username already exists"}), 409

                user.username = new_username

        # Update role if provided
        if "role" in data:
            role_str = data["role"]

            # Cannot change own role
            if user_id == current_user_id:
                return jsonify({"error": "Cannot change your own role"}), 403

            try:
                new_role = UserRole[role_str]
            except KeyError:
                return jsonify({"error": f"Invalid role: {role_str}"}), 400

            # Admin cannot assign superadmin role
            if current_role == 'admin' and new_role == UserRole.superadmin:
                return jsonify({"error": "Cannot assign superadmin role"}), 403

            user.role = new_role

        # Update active status if provided
        if "isActive" in data:
            # Cannot deactivate own account
            if user_id == current_user_id:
                return jsonify({"error": "Cannot deactivate your own account"}), 403

            new_status = bool(data["isActive"])
            user.is_active = new_status

            # Track deactivation
            if not new_status and not user.deactivated_at:
                user.deactivated_at = datetime.now(timezone.utc)
                user.deactivated_by = current_user_id

        db_session.commit()

        return jsonify({
            "id": user.id,
            "username": user.username,
            "role": user.role.value,
            "isActive": user.is_active,
-           "lastLoginAt": None,
-           "lastPasswordChangeAt": None,
-           "failedLoginAttempts": 0,
+           "lastLoginAt": user.last_login_at.isoformat() if user.last_login_at else None,
+           "lastPasswordChangeAt": user.last_password_change_at.isoformat() if user.last_password_change_at else None,
+           "failedLoginAttempts": user.failed_login_attempts,
            "message": "User updated successfully"
        }), 200

    finally:
        db_session.close()
|
||||
|
||||
|
||||
@users_bp.route("/<int:user_id>/password", methods=["PUT"])
|
||||
@require_role('admin', 'superadmin')
|
||||
def reset_password(user_id):
|
||||
"""
|
||||
Reset a user's password.
|
||||
|
||||
Admin: cannot reset superadmin passwords
|
||||
Superadmin: can reset any password
|
||||
|
||||
Request body:
|
||||
{
|
||||
"password": "string"
|
||||
}
|
||||
|
||||
Returns:
|
||||
200: {"message": "Password reset successfully"}
|
||||
400: {"error": "Validation error"}
|
||||
403: {"error": "Permission denied"}
|
||||
404: {"error": "User not found"}
|
||||
"""
|
||||
data = request.get_json()
|
||||
|
||||
if not data:
|
||||
return jsonify({"error": "Request body required"}), 400
|
||||
|
||||
password = data.get("password", "")
|
||||
|
||||
if not password:
|
||||
return jsonify({"error": "Password is required"}), 400
|
||||
|
||||
if len(password) < 6:
|
||||
return jsonify({"error": "Password must be at least 6 characters"}), 400
|
||||
|
||||
current_role = session.get('role')
|
||||
current_user_id = session.get('user_id')
|
||||
|
||||
db_session = Session()
|
||||
try:
|
||||
user = db_session.query(User).filter_by(id=user_id).first()
|
||||
|
||||
if not user:
|
||||
return jsonify({"error": "User not found"}), 404
|
||||
|
||||
# Users must change their own password via /auth/change-password (requires current password)
|
||||
if user.id == current_user_id:
|
||||
return jsonify({"error": "Use /api/auth/change-password to change your own password"}), 403
|
||||
|
||||
# Admin cannot reset superadmin passwords
|
||||
if current_role == 'admin' and user.role == UserRole.superadmin:
|
||||
return jsonify({"error": "Cannot reset superadmin passwords"}), 403
|
||||
|
||||
# Hash new password and update timestamp
|
||||
password_hash = bcrypt.hashpw(password.encode('utf-8'), bcrypt.gensalt()).decode('utf-8')
|
||||
user.password_hash = password_hash
|
||||
user.last_password_change_at = datetime.now(timezone.utc)
|
||||
|
||||
db_session.commit()
|
||||
|
||||
return jsonify({"message": "Password reset successfully"}), 200
|
||||
|
||||
finally:
|
||||
db_session.close()
|
||||
|
||||
|
||||
@users_bp.route("/<int:user_id>", methods=["DELETE"])
|
||||
@superadmin_only
|
||||
def delete_user(user_id):
|
||||
"""
|
||||
Permanently delete a user (superadmin only).
|
||||
|
||||
Cannot delete own account.
|
||||
|
||||
Returns:
|
||||
200: {"message": "User deleted successfully"}
|
||||
403: {"error": "Cannot delete your own account"}
|
||||
404: {"error": "User not found"}
|
||||
"""
|
||||
current_user_id = session.get('user_id')
|
||||
|
||||
# Cannot delete own account
|
||||
if user_id == current_user_id:
|
||||
return jsonify({"error": "Cannot delete your own account"}), 403
|
||||
|
||||
db_session = Session()
|
||||
try:
|
||||
user = db_session.query(User).filter_by(id=user_id).first()
|
||||
|
||||
if not user:
|
||||
return jsonify({"error": "User not found"}), 404
|
||||
|
||||
username = user.username # Store for message
|
||||
db_session.delete(user)
|
||||
db_session.commit()
|
||||
|
||||
return jsonify({"message": f"User '{username}' deleted successfully"}), 200
|
||||
|
||||
finally:
|
||||
db_session.close()
|
||||
74
server/serializers.py
Normal file
74
server/serializers.py
Normal file
@@ -0,0 +1,74 @@
|
||||
"""
|
||||
Serialization helpers for converting between Python snake_case and JavaScript camelCase.
|
||||
"""
|
||||
import re
|
||||
from typing import Any, Dict, List, Union
|
||||
|
||||
|
||||
def to_camel_case(snake_str: str) -> str:
|
||||
"""
|
||||
Convert snake_case string to camelCase.
|
||||
|
||||
Examples:
|
||||
event_type -> eventType
|
||||
start_time -> startTime
|
||||
is_active -> isActive
|
||||
"""
|
||||
components = snake_str.split('_')
|
||||
# Keep the first component as-is, capitalize the rest
|
||||
return components[0] + ''.join(word.capitalize() for word in components[1:])
|
||||
|
||||
|
||||
def to_snake_case(camel_str: str) -> str:
|
||||
"""
|
||||
Convert camelCase string to snake_case.
|
||||
|
||||
Examples:
|
||||
eventType -> event_type
|
||||
startTime -> start_time
|
||||
isActive -> is_active
|
||||
"""
|
||||
# Insert underscore before uppercase letters and convert to lowercase
|
||||
snake = re.sub('([A-Z])', r'_\1', camel_str).lower()
|
||||
# Remove leading underscore if present
|
||||
return snake.lstrip('_')
|
||||
|
||||
|
||||
def dict_to_camel_case(data: Union[Dict, List, Any]) -> Union[Dict, List, Any]:
|
||||
"""
|
||||
Recursively convert dictionary keys from snake_case to camelCase.
|
||||
Also handles lists of dictionaries.
|
||||
|
||||
Args:
|
||||
data: Dictionary, list, or primitive value to convert
|
||||
|
||||
Returns:
|
||||
Converted data structure with camelCase keys
|
||||
"""
|
||||
if isinstance(data, dict):
|
||||
return {to_camel_case(key): dict_to_camel_case(value)
|
||||
for key, value in data.items()}
|
||||
elif isinstance(data, list):
|
||||
return [dict_to_camel_case(item) for item in data]
|
||||
else:
|
||||
return data
|
||||
|
||||
|
||||
def dict_to_snake_case(data: Union[Dict, List, Any]) -> Union[Dict, List, Any]:
|
||||
"""
|
||||
Recursively convert dictionary keys from camelCase to snake_case.
|
||||
Also handles lists of dictionaries.
|
||||
|
||||
Args:
|
||||
data: Dictionary, list, or primitive value to convert
|
||||
|
||||
Returns:
|
||||
Converted data structure with snake_case keys
|
||||
"""
|
||||
if isinstance(data, dict):
|
||||
return {to_snake_case(key): dict_to_snake_case(value)
|
||||
for key, value in data.items()}
|
||||
elif isinstance(data, list):
|
||||
return [dict_to_snake_case(item) for item in data]
|
||||
else:
|
||||
return data
|
||||
@@ -8,6 +8,10 @@ from server.routes.holidays import holidays_bp
|
||||
from server.routes.academic_periods import academic_periods_bp
|
||||
from server.routes.groups import groups_bp
|
||||
from server.routes.clients import clients_bp
|
||||
from server.routes.client_logs import client_logs_bp
|
||||
from server.routes.auth import auth_bp
|
||||
from server.routes.users import users_bp
|
||||
from server.routes.system_settings import system_settings_bp
|
||||
from server.database import Session, engine
|
||||
from flask import Flask, jsonify, send_from_directory, request
|
||||
import glob
|
||||
@@ -17,9 +21,33 @@ sys.path.append('/workspace')
|
||||
|
||||
app = Flask(__name__)
|
||||
|
||||
# Allow uploads up to 1 GiB at the Flask level (application hard limit)
|
||||
# See nginx.conf for proxy limit; keep both in sync.
|
||||
app.config['MAX_CONTENT_LENGTH'] = 1 * 1024 * 1024 * 1024 # 1 GiB
|
||||
|
||||
# Configure Flask session
|
||||
# In production, use a secure random key from environment variable
|
||||
app.config['SECRET_KEY'] = os.environ.get('FLASK_SECRET_KEY', 'dev-secret-key-change-in-production')
|
||||
app.config['SESSION_COOKIE_HTTPONLY'] = True
|
||||
app.config['SESSION_COOKIE_SAMESITE'] = 'Lax'
|
||||
app.register_blueprint(system_settings_bp)
|
||||
# In production, set to True if using HTTPS
|
||||
app.config['SESSION_COOKIE_SECURE'] = os.environ.get('ENV', 'development').lower() == 'production'
|
||||
# Session lifetime: longer in development for convenience
|
||||
from datetime import timedelta
|
||||
_env = os.environ.get('ENV', 'development').lower()
|
||||
if _env in ('development', 'dev'):
|
||||
app.config['PERMANENT_SESSION_LIFETIME'] = timedelta(days=30)
|
||||
else:
|
||||
# Keep modest in production; can be tuned via env later
|
||||
app.config['PERMANENT_SESSION_LIFETIME'] = timedelta(days=1)
|
||||
|
||||
# Blueprints importieren und registrieren
|
||||
|
||||
app.register_blueprint(auth_bp)
|
||||
app.register_blueprint(users_bp)
|
||||
app.register_blueprint(clients_bp)
|
||||
app.register_blueprint(client_logs_bp)
|
||||
app.register_blueprint(groups_bp)
|
||||
app.register_blueprint(events_bp)
|
||||
app.register_blueprint(event_exceptions_bp)
|
||||
@@ -40,13 +68,31 @@ def index():
|
||||
return "Hello from Infoscreen‐API!"
|
||||
|
||||
|
||||
@app.route("/screenshots/<uuid>/priority")
|
||||
def get_priority_screenshot(uuid):
|
||||
normalized_uuid = uuid[:-4] if uuid.lower().endswith('.jpg') else uuid
|
||||
priority_filename = f"{normalized_uuid}_priority.jpg"
|
||||
priority_path = os.path.join("screenshots", priority_filename)
|
||||
if os.path.exists(priority_path):
|
||||
return send_from_directory("screenshots", priority_filename)
|
||||
return get_screenshot(uuid)
|
||||
|
||||
|
||||
@app.route("/screenshots/<uuid>")
|
||||
@app.route("/screenshots/<uuid>.jpg")
|
||||
def get_screenshot(uuid):
|
||||
pattern = os.path.join("screenshots", f"{uuid}*.jpg")
|
||||
normalized_uuid = uuid[:-4] if uuid.lower().endswith('.jpg') else uuid
|
||||
latest_filename = f"{normalized_uuid}.jpg"
|
||||
latest_path = os.path.join("screenshots", latest_filename)
|
||||
if os.path.exists(latest_path):
|
||||
return send_from_directory("screenshots", latest_filename)
|
||||
|
||||
pattern = os.path.join("screenshots", f"{normalized_uuid}_*.jpg")
|
||||
files = glob.glob(pattern)
|
||||
if not files:
|
||||
# Dummy-Bild als Redirect oder direkt als Response
|
||||
return jsonify({"error": "Screenshot not found", "dummy": "https://placehold.co/400x300?text=No+Screenshot"}), 404
|
||||
files.sort(reverse=True)
|
||||
filename = os.path.basename(files[0])
|
||||
return send_from_directory("screenshots", filename)
|
||||
|
||||
|
||||
139
userrole-management.md
Normal file
139
userrole-management.md
Normal file
@@ -0,0 +1,139 @@
|
||||
# User Role Management Integration Guide
|
||||
|
||||
This document outlines a step-by-step workpath to introduce user management and role-based access control (RBAC) into the infoscreen_2025 project. It is designed to minimize friction and allow incremental rollout.
|
||||
|
||||
---
|
||||
|
||||
## 1. Define Roles and Permissions
|
||||
|
||||
- **Roles:**
|
||||
- `superadmin`: Developer, can edit all settings (including critical/system settings), manage all users.
|
||||
- `admin`: Organization employee, can edit all organization-relevant settings, manage users (except superadmin), manage groups/clients.
|
||||
- `editor`: Can create, read, update, delete (CRUD) events and media.
|
||||
- `user`: Can view events only.
|
||||
- **Permission Matrix:**
|
||||
- See summary in the main design notes above for CRUD rights per area.
|
||||
|
||||
---
|
||||
|
||||
## 2. Extend Database Schema
|
||||
|
||||
- Add a `role` column to the `users` table (Enum: superadmin, admin, editor, user; default: user).
|
||||
- Create an Alembic migration for this change.
|
||||
- Update SQLAlchemy models accordingly.
|
||||
|
||||
---
|
||||
|
||||
## 3. Seed Initial Superadmin
|
||||
|
||||
- Update `server/init_defaults.py` to create a default superadmin user (with secure password, ideally from env or prompt).
|
||||
|
||||
---
|
||||
|
||||
## 4. Backend: Auth Context and Role Exposure
|
||||
|
||||
- Ensure current user is determined per request (session or token-based).
|
||||
- Add `/api/me` endpoint to return current user's id, username, and role.
|
||||
|
||||
---
|
||||
|
||||
## 5. Backend: Permission Decorators
|
||||
|
||||
- Implement decorators (e.g., `@require_role('admin')`, `@require_any_role(['editor','admin','superadmin'])`).
|
||||
- Apply to sensitive routes:
|
||||
- User management (admin+)
|
||||
- Program settings (admin/superadmin)
|
||||
- Event settings (admin+)
|
||||
- Event CRUD (editor+)
|
||||
|
||||
---
|
||||
|
||||
## 6. Frontend: Role Awareness and Gating
|
||||
|
||||
- Add a `useCurrentUser()` hook or similar to fetch `/api/me` and store role in context.
|
||||
- Gate navigation and UI controls based on role:
|
||||
- Hide or disable settings/actions not permitted for current role.
|
||||
- Show/hide user management UI for admin+ only.
|
||||
|
||||
---
|
||||
|
||||
## 7. Frontend: User Management UI
|
||||
|
||||
- Add a Users page (admin+):
|
||||
- List users (GridComponent)
|
||||
- Create/invite user (Dialog)
|
||||
- Set/reset role (DropDownList, prevent superadmin assignment unless current is superadmin)
|
||||
- Reset password (Dialog)
|
||||
|
||||
---
|
||||
|
||||
## 8. Rollout Strategy
|
||||
|
||||
- **Stage 1:** Implement model, seed, and minimal enforcement (now)
|
||||
- **Stage 2:** Expand backend enforcement and frontend gating (before wider testing)
|
||||
- **Stage 3:** Polish UI, add audit logging if needed (before production)
|
||||
|
||||
---
|
||||
|
||||
## 9. Testing
|
||||
|
||||
- Test each role for correct access and UI visibility.
|
||||
- Ensure superadmin cannot be demoted or deleted by non-superadmin.
|
||||
- Validate that critical settings are only editable by superadmin.
|
||||
|
||||
---
|
||||
|
||||
## 10. Optional: Audit Logging
|
||||
|
||||
- For production, consider logging critical changes (role changes, user creation/deletion, settings changes) for traceability.
|
||||
|
||||
---
|
||||
|
||||
## References
|
||||
- See `models/models.py`, `server/routes/`, and `dashboard/src/` for integration points.
|
||||
- Syncfusion: Use GridComponent, Dialog, DropDownList for user management UI.
|
||||
|
||||
---
|
||||
|
||||
This guide is designed for incremental, low-friction integration. Adjust steps as needed for your workflow and deployment practices.
|
||||
|
||||
---
|
||||
|
||||
## Stage 1: Concrete Step-by-Step Checklist
|
||||
|
||||
1. **Extend the User Model**
|
||||
- Add a `role` column to the `users` table in your SQLAlchemy model (`models/models.py`).
|
||||
- Use an Enum for roles: `superadmin`, `admin`, `editor`, `user` (default: `user`).
|
||||
- Create an Alembic migration to add this column to the database.
|
||||
|
||||
2. **Seed a Superadmin User** ✅
|
||||
- Update `server/init_defaults.py` to create a default superadmin user with a secure password (from environment variable).
|
||||
- Ensure this user is only created if not already present.
|
||||
- **Status:** Completed. See `server/init_defaults.py` - requires `DEFAULT_SUPERADMIN_PASSWORD` environment variable.
|
||||
|
||||
3. **Expose Current User Role** ✅
|
||||
- Add a `/api/me` endpoint (e.g., in `server/routes/auth.py`) that returns the current user's id, username, and role.
|
||||
- Ensure the frontend can fetch and store this information on login or page load.
|
||||
- **Status:** Completed. See:
|
||||
- Backend: `server/routes/auth.py` (login, logout, /me, /check endpoints)
|
||||
- Frontend: `dashboard/src/apiAuth.ts` and `dashboard/src/useAuth.tsx`
|
||||
- Permissions: `server/permissions.py` (decorators for route protection)
|
||||
|
||||
4. **Implement Minimal Role Enforcement** ✅
|
||||
- Decorators added in `server/permissions.py`: `require_auth`, `require_role`, `editor_or_higher`, `admin_or_higher`, `superadmin_only`.
|
||||
- Applied to sensitive endpoints:
|
||||
- Groups (admin+): create, update, delete, rename (`server/routes/groups.py`)
|
||||
- Clients (admin+): sync-all-groups, set description, bulk group update, patch, restart, delete (`server/routes/clients.py`)
|
||||
- Academic periods (admin+): set active (`server/routes/academic_periods.py`)
|
||||
- Events (editor+): create, update, delete, delete occurrence, detach occurrence (`server/routes/events.py`)
|
||||
- Event media (editor+): filemanager operations/upload, metadata update (`server/routes/eventmedia.py`)
|
||||
- Event exceptions (editor+): create, update, delete (`server/routes/event_exceptions.py`)
|
||||
- Conversions (editor+): ensure conversion (`server/routes/conversions.py`)
|
||||
- Superadmin dev bypass: In development (`ENV=development`), `superadmin` bypasses checks for unblocking.
|
||||
|
||||
5. **Test the Flow**
|
||||
- Confirm that the superadmin can access all protected endpoints.
|
||||
- Confirm that users with other roles are denied access to protected endpoints.
|
||||
- Ensure the frontend can correctly gate UI elements based on the current user's role.
|
||||
|
||||
---
|
||||
Reference in New Issue
Block a user