Compare commits

...

4 Commits

Author SHA1 Message Date
6e9885404c fix: resolve 3 runtime errors in inference and UI
- result_handler: add _InferenceResultEncoder to handle dataclass objects
  (ObjectDetectionResult, ClassificationResult) in JSON serialization;
  fixes "Object of type ObjectDetectionResult is not JSON serializable"

- deployment: replace textCursor().movePosition() with toPlainText/setPlainText
  for log trimming; eliminates QTextCursor cross-thread Qt warning

- main: remove duplicate setAttribute(AA_EnableHighDpiScaling) call in
  setup_application() which ran after QApplication was already created;
  fixes "Attribute Qt::AA_EnableHighDpiScaling must be set before
  QCoreApplication is created"

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-06 19:32:30 +08:00
d2fdbf85ee feat: integrate Phase 1-4 components into main dashboard
- Performance tab: add PerformanceDashboard live stats widget
- Performance tab: add Benchmark button (opens BenchmarkDialog)
- Performance tab: add Export Report button (opens ExportReportDialog)
- Dongles tab: embed DeviceManagementPanel with 3s auto-refresh
- Fix pipeline_config type: analyze_pipeline_stages returns List not Dict

All integrations are guarded with try/except and AVAILABLE flags to
prevent import errors from breaking existing functionality.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-06 19:32:18 +08:00
55040733fe feat: implement Phase 1-4 performance visualization and device management
Phase 1 — Performance Benchmarking:
- PerformanceBenchmarker: sequential vs parallel benchmark with injectable runner
- PerformanceHistory: JSON-backed benchmark history with regression support
- PerformanceDashboard: real-time FPS/latency display widget
- BenchmarkDialog: one-click benchmark with 3-phase progress bar

Phase 2 — Device Management:
- DeviceManager: NPU dongle scan, assign/unassign, load balance recommendation
- DeviceManagementPanel: live device status cards with auto-refresh
- BottleneckAlert: dataclass for pipeline bottleneck detection

Phase 3 — Advanced Features:
- OptimizationEngine: 3 optimization rules (rebalance/adjust_queue/add_devices)
- TemplateManager: 3 built-in pipeline templates (YOLOv5, fire detection, dual-model)

Phase 4 — Report Export:
- ReportExporter: PDF (reportlab, optional) and CSV export
- ExportReportDialog: format selection + path picker UI

192 unit tests, all passing.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-06 19:32:05 +08:00
5aa374625f docs: add autoflow project docs and test infrastructure
- Add .autoflow/ with health check, PRD, Design Doc, TDD, progress tracking
- Add tests/conftest.py with PyQt5/KP SDK stubs for unit testing
- Add pytest config to pyproject.toml (pythonpath, import-mode, test naming)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-06 19:31:52 +08:00
39 changed files with 7372 additions and 19 deletions

.autoflow/00-onboarding/health-check.md

@@ -0,0 +1,141 @@
# Project Health Check Report
## Basic Information
- **Project name**: Cluster4NPU UI — Visual Pipeline Designer
- **Version**: v0.0.3
- **Code source**: local path `C:\Users\sungs\Documents\abin\temp\cluster4npu`
- **Git branch**: developer (main branch: main)
- **Last commit**: feat: Reorganize test scripts and improve YOLOv5 postprocessing
- **Health check date**: 2026-04-05
---
## Tech Stack
| Layer | Technology | Version |
|------|------|------|
| Language | Python | >=3.9, <3.12 |
| GUI framework | PyQt5 | >=5.15.11 |
| Visual node editor | NodeGraphQt | >=0.6.40 |
| Image processing | OpenCV | (runtime dependency) |
| Numerical computing | NumPy | (runtime dependency) |
| Hardware SDK | Kneron KP SDK | (runtime; NPU dongle driver) |
| Package management | uv | — |
| Packaging | PyInstaller (main.spec) | — |
**Supported hardware:** Kneron NPU dongles — KL520, KL720, KL1080
---
## Project Structure Overview
```
cluster4npu/
├── main.py                          # Application entry point
├── config/                          # Settings and theme (settings.py, theme.py)
├── core/
│   ├── pipeline.py                  # Pipeline analysis, stage detection, validation
│   ├── functions/
│   │   ├── InferencePipeline.py     # Multi-stage pipeline execution engine (multithreaded)
│   │   ├── Multidongle.py           # NPU dongle management and auto-detection
│   │   ├── camera_source.py         # Camera input source
│   │   ├── video_source.py          # Video input source
│   │   ├── result_handler.py        # Inference result handling
│   │   ├── workflow_orchestrator.py
│   │   ├── mflow_converter.py       # .mflow format conversion
│   │   └── yolo_v5_postprocess_reference.py
│   └── nodes/                       # Node definitions (5 types)
│       ├── base_node.py
│       ├── input_node.py
│       ├── model_node.py
│       ├── preprocess_node.py
│       ├── postprocess_node.py
│       ├── output_node.py
│       ├── simple_input_node.py
│       └── exact_nodes.py
├── ui/
│   ├── windows/                     # Main windows (login.py, dashboard.py, pipeline_editor.py)
│   ├── components/                  # Reusable components (node_palette, properties_widget, common_widgets)
│   └── dialogs/                     # Dialogs (deployment, performance, stage_config, etc.)
├── utils/                           # Utilities (file_utils, folder_dialog, ui_utils)
├── example_utils/                   # Example postprocessing utilities (ByteTrack, etc.)
├── tests/                           # Test scripts (42 of them, mostly ad hoc rather than a formal suite)
├── resources/                       # Resource files
└── output/                          # Inference output
```
---
## Documentation Completeness
| Document type | Status | Location | Notes |
|---------|------|------|------|
| README | ✅ Yes | `README.md` | Detailed; covers installation and architecture |
| Product requirements / PRD | ⚠️ Partial | `PROJECT_SUMMARY.md` | Has the vision and planned features, but not a formal PRD |
| Development roadmap | ✅ Yes | `DEVELOPMENT_ROADMAP.md` | Four phases with concrete goals |
| Architecture design doc | ❌ None | — | Brief overview in the README, but no formal design doc |
| API documentation | ❌ None | — | No formal API docs |
| Design mockups | ❌ None | only `Flowchart.jpg` | No wireframes or UI specs |
| Technical design doc (TDD) | ❌ None | — | None |
| Test plan | ❌ None | — | Test scripts exist, but no formal test plan |
| Deployment docs | ⚠️ Partial | in README | Basic steps only; no complete deployment guide |
| Release notes | ✅ Yes | `release_note.md` | Current through v0.0.2 |
---
## Code Health
- **Test coverage**: ⚠️ partial — `tests/` holds 42 scripts, but most are scenario scripts rather than pytest unit tests; coverage is not systematic
- **Code quality**: medium — modules are clearly separated, but several root-level scripts (debug_*.py, force_cleanup.py, etc.) are development leftovers, leaving the layout slightly messy
- **Security**: low risk (local desktop application, no network API)
- **Technical debt**:
  - multiple debug/cleanup scripts left unorganized in the root directory
  - scripts under tests/ are inconsistently named and categorized (some do not start with test_)
  - no formal pytest test infrastructure
---
## Existing Feature List
| Feature | Description | Status |
|------|------|------|
| Visual pipeline editor | Drag-and-drop node-based pipeline building (NodeGraphQt) | ✅ Done |
| 5 node types | Input / Preprocess / Model / Postprocess / Output | ✅ Done |
| Pipeline validation | Real-time stage detection and error highlighting | ✅ Done |
| .mflow file format | Pipeline save and load (JSON) | ✅ Done |
| Multi-NPU-dongle support | KL520 / KL720 / KL1080 auto-detection | ✅ Done |
| Multi-stage inference engine | Multithreaded pipeline execution | ✅ Done |
| Performance monitoring | Live FPS and latency display | ✅ Done (known bugs) |
| Camera / video / image input | Multiple input sources | ✅ Done |
| Project management | Login screen, recent projects, new/load pipeline | ✅ Done |
| YOLOv5 postprocessing | Detection result formatting | ✅ Done (recently improved) |
| ByteTrack tracking | Object-tracking postprocessing | ✅ Done (example_utils) |
---
## Missing Items Summary (To Be Developed)
Per `PROJECT_SUMMARY.md` and `DEVELOPMENT_ROADMAP.md`:
1. **Performance visualization**: parallel vs sequential comparison, speedup metric display (Phase 1)
2. **Benchmarking system**: automated performance testing, chart comparison (Phase 1)
3. **Device management UI**: visual device assignment, load balancing (Phase 2)
4. **Live monitoring dashboard**: FPS/latency charts, resource utilization (Phase 2)
5. **Optimization engine**: automated suggestions, performance prediction (Phase 3)
Known bugs:
- node property display issue
- output visualization (including postprocessing)
---
## CI/CD and Infrastructure
| Item | Status |
|------|------|
| Docker | ❌ None |
| CI/CD | ❌ None |
| Deployment config | ❌ None (local desktop app; a PyInstaller spec exists) |
| Environment variable management | ❌ None |
| Version control | ✅ Git (GitHub remote) |

.autoflow/02-prd/PRD.md

@@ -0,0 +1,344 @@
# PRD — Cluster4NPU UI
> This PRD was reverse-engineered from the existing code and documentation; it reflects the actual state as of 2026-04-05.
> Version: v0.0.3 (developer branch)
---
## 1. Product Overview
### 1.1 Product Vision
Cluster4NPU UI aims to let anyone, with no programming required, design and run parallel AI inference pipelines through an intuitive visual drag-and-drop interface, fully exploit the performance of Kneron NPU dongles, and clearly see the speedup that parallel processing delivers.
**One-line description**: "Design AI pipelines by drag and drop: multiple NPU dongles accelerate your AI inference in parallel, without a single line of code."
### 1.2 Target Users
**Primary users: AI application integration engineers / system integrators**
- Know how to use AI models, but are not necessarily familiar with low-level NPU programming
- Need to quickly validate the performance of multi-model chained pipelines
- Want to adjust pipeline settings and hardware assignment without modifying code
**Secondary users: AI researchers / technical evaluators**
- Need to compare performance across different pipeline configurations
- Want visual evidence of the benefits of parallel processing (for proposals or reports)
**Potential users: Kneron hardware sales teams**
- Need a demo tool to show prospective customers the performance advantages of Kneron NPUs
### 1.3 Core Value Propositions
1. **No-code pipeline design**: build complex multi-model AI pipelines with a drag-and-drop interface
2. **Parallel performance visualization**: clearly show the difference between parallel and sequential processing (2x, 3x, 4x speedups)
3. **Automatic hardware management**: auto-detect and optimize NPU dongle assignment, lowering the barrier to entry
4. **Professional monitoring tools**: real-time FPS, latency, and throughput monitoring for engineer-grade analysis
---
## 2. Market Context
### 2.1 Problem Statement
As edge AI applications spread, users face the following problems:
1. **Complex setup**: running parallel AI inference on multiple NPU dongles requires a lot of low-level code
2. **Opaque performance**: the gains from parallel processing are hard to quantify, which weakens the case for it
3. **Difficult pipeline design**: chaining multiple models (e.g., detection → tracking → classification) requires manual data-flow handling
4. **Hardware management burden**: there is no unified tool for assigning, monitoring, and debugging multiple NPU dongles
### 2.2 Target Market
- **Primary market**: system integrators and enterprise users of Kneron NPU hardware (KL520, KL720, KL1080)
- **Scope**: edge AI inference, particularly industrial vision, security surveillance, and smart retail
- **Geography**: currently Traditional Chinese and English environments (Taiwan, Asia-Pacific)
---
## 3. User Stories
The following user stories cover both existing and planned features:
**Implemented user stories:**
- As a system integrator, I want to design an AI inference pipeline by dragging and dropping nodes, so that I can build complex multi-model workflows without writing code.
- As a developer, I want to see real-time pipeline validation errors, so that I can fix configuration issues before deployment.
- As a user, I want to save my pipeline configuration to a file (.mflow), so that I can reuse and share it with teammates.
- As an engineer, I want to see live FPS and latency metrics during inference, so that I can monitor pipeline performance in real time.
- As a hardware manager, I want the application to automatically detect available NPU dongles, so that I don't need to manually configure device connections.
- As a user, I want to load video files, camera streams, or images as pipeline inputs, so that I can test my pipeline with different data sources.
**Planned user stories:**
- As a user, I want to compare parallel vs sequential inference performance side by side, so that I can clearly see the speedup benefit of using multiple NPU dongles.
- As an engineer, I want to run automated benchmarks with one click, so that I can measure performance without manual testing.
- As a hardware manager, I want to visually assign NPU dongles to specific pipeline stages, so that I have fine-grained control over device allocation.
- As a user, I want to see live performance graphs (FPS, latency over time), so that I can identify bottlenecks during pipeline execution.
- As an engineer, I want to receive automated optimization suggestions, so that I can improve pipeline performance without deep NPU expertise.
- As a sales engineer, I want to generate a performance report showing speedup metrics, so that I can present the ROI of parallel NPU processing to clients.
---
## 4. Functional Requirements
### 4.1 Completed Features (Existing)
The following features are implemented as of v0.0.3 (source: the health check report):
| Feature | Description | Status |
|------|------|------|
| Visual pipeline editor | Drag-and-drop node interface based on NodeGraphQt | Done |
| 5 node types | Input / Preprocess / Model / Postprocess / Output | Done |
| Real-time pipeline validation | Live stage detection and error highlighting | Done |
| .mflow file format | Pipeline save and load (JSON) | Done |
| Three-panel UI layout | Left: node palette; center: editor; right: settings and monitoring | Done |
| Multi-NPU-dongle support | KL520 / KL720 / KL1080 auto-detection | Done |
| Multi-stage inference engine | Multithreaded parallel pipeline execution | Done |
| Basic performance monitoring | Live FPS and latency display (known bugs) | Done (with defects) |
| Multiple input sources | Camera (USB), video (MP4/AVI/MOV), image (JPG/PNG/BMP), RTSP stream (basic) | Done |
| Project management | Login screen, recent project list, new/load pipeline | Done |
| YOLOv5 postprocessing | Detection result formatting and bounding-box handling | Done |
| ByteTrack tracking | Object-tracking postprocessing (example_utils) | Done |
| Firmware upload support | upload_fw option integrated into the inference flow | Done (v0.0.2) |
| PyInstaller packaging | Standalone executable packaging (main.spec) | Done |
**Known bugs (recorded in v0.0.2):**
- node property display issue
- broken output visualization (including postprocessed results)
### 4.2 Features To Be Developed (by priority)
#### Phase 1效能視覺化第 1-2 週優先級P0
**功能 1平行 vs 循序效能比較**
- **描述**:提供並行處理與循序處理的效能對照,視覺化顯示加速倍數(如 "3.2x FASTER"
- **驗收標準**
- 可選擇「單裝置 / 多裝置」模式執行同一 Pipeline
- 顯示兩種模式的 FPS 與延遲數值
- 以視覺指標(進度條、倍數文字)呈現加速結果
- 比較結果可在 UI 中保留供查閱
- **優先級**P0
- **所屬 Phase**Phase 1
**功能 2自動化效能 Benchmark 系統PerformanceBenchmarker**
- **描述**:一鍵啟動效能測試,自動執行單裝置與多裝置比較並記錄結果
- **驗收標準**
- 提供「執行 Benchmark」按鈕
- 自動完成測試並呈現結果圖表
- 結果可歷史保存(追蹤效能變化)
- 支援回歸測試(比較不同版本的效能)
- **優先級**P0
- **所屬 Phase**Phase 1
**功能 3即時效能儀表板PerformanceDashboard**
- **描述**:在推論執行期間顯示即時 FPS、延遲、吞吐量折線圖
- **驗收標準**
- 以圖表形式顯示 FPS 隨時間變化
- 以圖表形式顯示延遲分佈
- 更新頻率 >= 1 Hz
- 不影響推論效能CPU 使用率增加 < 5%
- **優先級**P0
- **所屬 Phase**Phase 1
#### Phase 2裝置管理第 3-4 週優先級P1
**功能 4視覺化裝置管理面板DeviceManagementPanel**
- **描述**:提供 NPU Dongle 狀態總覽,包含裝置健康度、型號、當前分配狀態
- **驗收標準**
- 列出所有已偵測的 NPU Dongle 及其狀態(線上/離線/繁忙)
- 顯示每個裝置的型號KL520/KL720/KL1080
- 顯示每個裝置當前分配至哪個 Pipeline Stage
- **優先級**P1
- **所屬 Phase**Phase 2
**功能 5手動裝置分配介面**
- **描述**:允許用戶手動將特定 NPU Dongle 指定給特定 Pipeline Stage
- **驗收標準**
- 可透過下拉選單或拖拽方式指定裝置
- 指定後立即反映在 Pipeline 執行設定中
- 無效的分配(如指定離線裝置)會有錯誤提示
- **優先級**P1
- **所屬 Phase**Phase 2
**功能 6裝置效能分析DeviceManager 強化)**
- **描述**:追蹤個別 NPU Dongle 的效能指標與歷史記錄
- **驗收標準**
- 顯示每個裝置的推論吞吐量Inference/sec
- 顯示裝置使用率百分比
- 提供自動負載平衡建議
- **優先級**P1
- **所屬 Phase**Phase 2
**功能 7瓶頸偵測與警告系統**
- **描述**:自動識別 Pipeline 中的效能瓶頸並發出警告
- **驗收標準**
- 當某 Stage 的佇列持續積壓時觸發警告
- 在 UI 中以視覺提示標示瓶頸 Stage
- 提供基本的改善建議(如增加裝置數量)
- **優先級**P1
- **所屬 Phase**Phase 2
#### Phase 3進階功能第 5-6 週優先級P2
**功能 8自動化優化引擎OptimizationEngine**
- **描述**:分析當前 Pipeline 配置,自動產生效能優化建議
- **驗收標準**
- 分析 Stage 效能差異,建議最佳裝置分配方式
- 識別不必要的前後處理步驟並提出建議
- 建議以卡片形式呈現,用戶可選擇採納或忽略
- **優先級**P2
- **所屬 Phase**Phase 3
**功能 9Pipeline 設定範本**
- **描述**:提供常見使用情境的預設 Pipeline 範本(如 YOLOv5 偵測、物件追蹤)
- **驗收標準**
- 提供至少 3 種常見範本
- 範本可直接載入並修改
- 現有 Pipeline 可儲存為自訂範本
- **優先級**P2
- **所屬 Phase**Phase 3
**功能 10效能預測執行前估算**
- **描述**:在執行 Pipeline 之前,根據硬體設定預估效能表現
- **驗收標準**
- 顯示預估 FPS 與延遲範圍
- 預估值與實際值誤差 <= 20%(基於歷史資料)
- **優先級**P2
- **所屬 Phase**Phase 3
#### Phase 4專業潤色第 7-8 週優先級P2
**功能 11效能報告匯出**
- **描述**:將 Benchmark 結果匯出為可分享的報告格式
- **驗收標準**
- 支援匯出為 PDF 或 CSV
- 報告包含Pipeline 設定、裝置配置、效能指標、加速倍數
- **優先級**P2
- **所屬 Phase**Phase 4
**功能 12進階分析與趨勢圖**
- **描述**:追蹤效能指標的歷史趨勢,識別長期的效能退化
- **驗收標準**
- 顯示多次執行的效能趨勢圖
- 支援篩選特定時間範圍
- **優先級**P2
- **所屬 Phase**Phase 4
---
## 5. Non-Functional Requirements
### 5.1 Performance
- UI interaction response time < 200 ms (node dragging, property switching)
- Real-time pipeline validation latency < 100 ms
- Dashboard updates must not reduce inference FPS by more than 5%
- Application startup time (including hardware detection) < 10 seconds
### 5.2 Compatibility
- **OS**: Windows 10/11 (primary), Linux (secondary)
- **Python**: 3.9 or later, below 3.12
- **Hardware**: Kneron NPU dongles (KL520, KL720, KL1080), USB 3.0 connection
- **PyQt5**: >= 5.15.11
### 5.3 Usability
- A first-time user should be able to complete a basic pipeline design (drag and connect 5 nodes) within 5 minutes
- The node settings panel must not show a horizontal scrollbar (fixed in v0.0.2)
- All error messages should be readable and avoid technical jargon
### 5.4 Reliability
- Repeated inference runs must not produce errors (fixed in v0.0.2)
- A saved pipeline (.mflow) must fully restore node settings and connections
- After an abnormal shutdown, the next launch should still show the recent project list
### 5.5 Maintainability
- New node types require corresponding unit tests
- Core modules (InferencePipeline, Multidongle) require pytest-style test coverage
- Debug/cleanup scripts in the root directory should be organized into `tools/` or `tests/`
---
## 6. Success Metrics
### 6.1 Core Usage Goals (by product phase)
**Phase 1 completion criteria (performance visualization):**
- Users can start a benchmark and see the speedup comparison within 3 steps
- The dashboard updates smoothly (no visible stutter)
**Phase 2 completion criteria (device management):**
- Users can adjust device assignments manually without modifying code
- Bottleneck detection accuracy > 80% (in test scenarios)
**Phase 3 completion criteria (advanced features):**
- Device assignments recommended by the OptimizationEngine deliver a real performance gain > 10%
- At least 3 ready-to-use pipeline templates are provided
**Overall product quality criteria:**
- All known bugs (node property display, output visualization) are fixed
- Core modules have full pytest coverage
### 6.2 User Experience Metrics
- Pipeline design completion time (target: < 5 minutes first time, < 2 minutes once familiar)
- One click to benchmark results (target: < 30 seconds)
---
## 7. Out of Scope
The following are explicitly out of scope from v0.0.3 through Phase 4:
1. **Cloud features**: no cloud storage, remote execution, or SaaS services
2. **Non-Kneron hardware**: no support for other vendors' NPUs (e.g., Hailo, Coral)
3. **Model training**: this tool handles inference only; no model training
4. **Mobile apps**: desktop only (Windows / Linux)
5. **Multi-user collaboration**: no concurrent editing of the same pipeline
6. **Payment / licensing**: no commercial licensing mechanism at present
7. **Automatic language switching / full i18n**: the UI is primarily English; no formal multi-language support
8. **Full RTSP streaming**: RTSP support is basic only; full stream management is out of scope for now
---
## Appendix
### A. Version History Summary
| Version | Date | Major changes |
|------|------|---------|
| v0.0.1 | — | Initial version (exact date unknown) |
| v0.0.2 | 2025-07-31 | Automatic data cleanup, firmware upload support, repeated-inference fix, FPS fix |
| v0.0.3 | In progress | YOLOv5 postprocessing improvements, test script reorganization (developer branch) |
### B. Related Documents
- Health check report: `C:\Users\sungs\Documents\abin\temp\cluster4npu\.autoflow\00-onboarding\health-check.md`
- Development roadmap: `C:\Users\sungs\Documents\abin\temp\cluster4npu\DEVELOPMENT_ROADMAP.md`
- Project summary: `C:\Users\sungs\Documents\abin\temp\cluster4npu\PROJECT_SUMMARY.md`
- README: `C:\Users\sungs\Documents\abin\temp\cluster4npu\README.md`
### C. Technical Constraints
- The tool depends heavily on the Kneron KP SDK; SDK version updates may affect hardware compatibility
- The NodeGraphQt visual editor (>= 0.6.40) limits some UI customization
- The Python version constraint (3.9-3.11) comes from PyQt5 and Kneron SDK compatibility requirements

File diff suppressed because it is too large.

.autoflow/04-architecture/design-doc.md

@@ -0,0 +1,581 @@
# Design Doc — Cluster4NPU UI
## Author: Architect Agent
## Status: Draft
## Last updated: 2026-04-05
## Applies to: v0.0.3 (developer branch)
---
## 1. Background and Goals
### 1.1 Background
Cluster4NPU UI is a desktop application that lets users design and run AI inference pipelines through a visual drag-and-drop interface, with no code required, and distribute the workload across multiple Kneron NPU dongles (KL520, KL720, KL1080) for parallel execution.
The existing system has the core pipeline designer and inference engine in place, but lacks:
- Performance visualization (no intuitive view of the parallel speedup)
- An advanced device management UI
- An automated benchmark system
- An optimization suggestion engine
### 1.2 Goals
1. **Core goal**: any AI application engineer can design a pipeline and see inference results within 5 minutes
2. **Differentiation goal**: clearly visualize the speedup (2x, 3x, 4x) that multi-dongle parallelism delivers
3. **Engineering goal**: provide an extensible architecture that supports the Phase 1-4 feature iterations
### 1.3 Scope
**This document covers:**
- A complete description of the current (v0.0.3) core architecture
- Architectural direction for the Phase 1-3 features to be developed
**Not covered:**
- Cloud features, non-Kneron hardware, model training, mobile
---
## 2. System Architecture Overview
### 2.1 Layered Architecture
```
┌─────────────────────────────────────────────────────────┐
│               User Interface Layer (UI Layer)           │
│                                                         │
│  ┌──────────────┐ ┌──────────────┐ ┌──────────────┐     │
│  │ Login Window │ │  Dashboard   │ │   Dialogs    │     │
│  │  (login.py)  │ │(dashboard.py)│ │ (deployment, │     │
│  └──────────────┘ └──────────────┘ │ performance) │     │
│                                    └──────────────┘     │
│  ┌──────────────────────────────────────────────────┐   │
│  │              Three-Panel Layout                  │   │
│  │  ┌──────────┐ ┌──────────────┐ ┌────────────┐    │   │
│  │  │  Left:   │ │   Center:    │ │   Right:   │    │   │
│  │  │   node   │ │   pipeline   │ │ settings / │    │   │
│  │  │ palette  │ │    editor    │ │ monitoring │    │   │
│  │  │(palette) │ │(NodeGraphQt) │ │(properties)│    │   │
│  │  └──────────┘ └──────────────┘ └────────────┘    │   │
│  └──────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────┐
│             Application Core Layer (Core Layer)         │
│                                                         │
│  ┌────────────────────┐ ┌──────────────────────────┐    │
│  │ Pipeline analysis  │ │   Node system (Nodes)    │    │
│  │   (pipeline.py)    │ │  (base/input/model/      │    │
│  │                    │ │  preprocess/postprocess/ │    │
│  │ - stage detection  │ │  output nodes)           │    │
│  │ - structure checks │ │                          │    │
│  │ - path analysis    │ │ - business properties    │    │
│  │ - config export    │ │ - config serialization   │    │
│  └────────────────────┘ └──────────────────────────┘    │
│                                                         │
│  ┌──────────────────────────────────────────────────┐   │
│  │           Inference Execution Layer              │   │
│  │                                                  │   │
│  │  ┌──────────────────────┐ ┌─────────────────┐    │   │
│  │  │  InferencePipeline   │ │   MultiDongle   │    │   │
│  │  │                      │ │                 │    │   │
│  │  │ - multi-stage coord. │ │ - NPU devices   │    │   │
│  │  │ - thread management  │ │ - async infer.  │    │   │
│  │  │ - queue management   │ │ - pre/postproc. │    │   │
│  │  │ - FPS calculation    │ │ - scheduling    │    │   │
│  │  └──────────────────────┘ └─────────────────┘    │   │
│  └──────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────┐
│            Hardware Abstraction Layer (HAL)             │
│                                                         │
│  ┌──────────────────────────────────────────────────┐   │
│  │                  Kneron KP SDK                   │   │
│  │                                                  │   │
│  │   KL520 Dongle │ KL720 Dongle │ KL1080 Dongle    │   │
│  └──────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────┘
```
### 2.2 Module Dependencies
```
main.py
└── ui/windows/login.py (DashboardLogin)
└── ui/windows/dashboard.py (DashboardWindow)
├── ui/windows/pipeline_editor.py
│ └── core/pipeline.py (PipelineAnalyzer)
│ └── core/nodes/*.py
├── ui/components/properties_widget.py
│ └── core/nodes/*.py
└── core/functions/InferencePipeline.py
└── core/functions/Multidongle.py
└── kp (Kneron KP SDK)
```
---
## 3. Core Components
### 3.1 Pipeline Analysis Engine (`core/pipeline.py`)
**Responsibility:** analyze the NodeGraphQt visual graph, identify the pipeline's stage structure, validate it, and generate the execution configuration.
**Key classes:**
| Class/function | Responsibility |
|---------|------|
| `PipelineStage` | Represents one inference stage: a ModelNode plus optional pre/postprocess nodes |
| `analyze_pipeline_stages(node_graph)` | Identifies all stages in the visual graph (sorted by distance) |
| `get_stage_count(node_graph)` | Counts the pipeline's stages (for UI display) |
| `validate_pipeline_structure(node_graph)` | Validates that the pipeline contains the required nodes (Input, Model, Output) |
| `get_pipeline_summary(node_graph)` | Returns pipeline statistics (node count, stage count, validation result) |
**Design decisions:**
- Multiple node-identification strategies (`__identifier__`, `type_`, `NODE_NAME`, class name, presence of specific methods) improve compatibility
- Stage ordering: shortest-path distance (BFS) from each ModelNode to the input node
- All graph-traversal methods include defensive exception handling, so an inconsistent NodeGraphQt object state cannot crash the app
**Interface:**
```python
# Main public interface
get_stage_count(node_graph: NodeGraph) -> int
analyze_pipeline_stages(node_graph: NodeGraph) -> List[PipelineStage]
validate_pipeline_structure(node_graph: NodeGraph) -> Tuple[bool, str]
get_pipeline_summary(node_graph: NodeGraph) -> Dict[str, Any]
```
### 3.2 Node System (`core/nodes/`)
**Responsibility:** define the pipeline's node types, providing business-property management and config serialization.
**Inheritance hierarchy:**
```
NodeGraphQt.BaseNode
└── BaseNodeWithProperties (base_node.py)
    ├── InputNode (input_node.py)
    ├── ModelNode (model_node.py)
    ├── PreprocessNode (preprocess_node.py)
    ├── PostprocessNode (postprocess_node.py)
    └── OutputNode (output_node.py)
```
**`BaseNodeWithProperties` core capabilities:**
- `create_business_property(name, default, options)` — create a business property with validation options
- `validate_property(name, value)` — numeric-range and option-list validation
- `get_node_config()` / `load_node_config(config)` — JSON serialization and restore
- `create_node_property_widget(node, prop_name, value, options)` — auto-generate a Qt widget for the property's type
**ModelNode properties (the primary node):**
| Property | Type | Description |
|------|------|------|
| `model_path` | file_path | Path to the .nef model file |
| `dongle_series` | choice | KL520 / KL720 / KL1080 |
| `num_dongles` | int (1-16) | Number of dongles assigned to this stage |
| `port_id` | string | USB port ID, or auto |
| `batch_size` | int (1-32) | Inference batch size |
| `max_queue_size` | int (1-100) | Maximum input queue length |
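To make the property API concrete, here is a minimal hypothetical sketch of a custom node built on `BaseNodeWithProperties`; the import path and the `options` keys are assumptions, since `base_node.py` itself is not reproduced in this diff.
```python
# Hypothetical sketch only: option keys and defaults are illustrative.
from core.nodes.base_node import BaseNodeWithProperties

class ExampleModelNode(BaseNodeWithProperties):
    __identifier__ = "cluster4npu.example"  # used by the multi-strategy node identification
    NODE_NAME = "Example Model"

    def __init__(self):
        super().__init__()
        # file_path property pointing at a .nef model
        self.create_business_property("model_path", "", {"type": "file_path"})
        # choice property constrained to the supported dongle series
        self.create_business_property("dongle_series", "KL720",
                                      {"choices": ["KL520", "KL720", "KL1080"]})
        # int property with a validated range
        self.create_business_property("num_dongles", 1, {"min": 1, "max": 16})
```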
### 3.3 Inference Execution Engine (`core/functions/InferencePipeline.py`)
**Responsibility:** manage the multi-stage pipeline lifecycle, coordinate data flow between threads, and compute performance metrics.
**Main data structures:**
```python
@dataclass
class StageConfig:
    stage_id: str
    port_ids: List[int]
    scpu_fw_path: str                    # SCPU firmware path
    ncpu_fw_path: str                    # NCPU firmware path
    model_path: str                      # .nef model path
    upload_fw: bool                      # whether to upload firmware
    max_queue_size: int                  # queue size (default 50)
    multi_series_config: Optional[Dict]  # multi-series mode settings
    input_preprocessor: Optional[PreProcessor]
    output_postprocessor: Optional[PostProcessor]

@dataclass
class PipelineData:
    data: Any                      # current payload (image, intermediate result)
    metadata: Dict[str, Any]       # timestamps, processing info
    stage_results: Dict[str, Any]  # per-stage inference results
    pipeline_id: str               # unique identifier
    timestamp: float
```
**Threading model:**
```
Main thread (UI)
├── InferencePipeline.coordinator_thread (coordinator)
│   │ takes data from pipeline_input_queue
│   │ dispatches it to each stage in order
│   └── collects results into pipeline_output_queue
├── PipelineStage[0].worker_thread (stage 0 worker)
│   └── take from input_queue → MultiDongle inference → put to output_queue
├── PipelineStage[1].worker_thread (stage 1 worker)
│   └── ...
└── stats_thread (performance statistics reporting)
```
**FPS calculation:** cumulative (`completed_counter / elapsed_time`), matching the calculation logic of the Kneron example programs; only real inference results are counted (async/processing states are excluded).
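A minimal sketch of this cumulative calculation (not the actual `InferencePipeline` code):
```python
# Sketch: count only completed results, measured from the first real result,
# so the warm-up period does not drag the FPS down.
import time

class CumulativeFps:
    def __init__(self):
        self.completed_counter = 0
        self.first_result_time = None  # set when the first real result arrives

    def on_result(self, result):
        if result in (None, "async", "processing"):  # skip non-final states
            return
        if self.first_result_time is None:
            self.first_result_time = time.time()
        self.completed_counter += 1

    @property
    def fps(self) -> float:
        if self.first_result_time is None:
            return 0.0
        elapsed = time.time() - self.first_result_time
        return self.completed_counter / elapsed if elapsed > 0 else 0.0
```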
**Queue management strategy** (sketched below):
- When the input queue is full: drop the oldest frame (to keep live streams real-time)
- Output queue capped at 50 entries: on overflow the oldest results are dropped, so memory cannot grow without bound
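A sketch of the drop-oldest policy on a standard `queue.Queue`, the same primitive the pipeline uses for inter-thread communication:
```python
import queue

def put_drop_oldest(q: queue.Queue, item) -> None:
    """Enqueue item; if the queue is full, discard the oldest entry first."""
    while True:
        try:
            q.put_nowait(item)
            return
        except queue.Full:
            try:
                q.get_nowait()  # drop the oldest frame/result
            except queue.Empty:
                pass  # a consumer raced us; just retry the put
```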
### 3.4 Hardware Abstraction Layer (`core/functions/Multidongle.py`)
**Responsibility:** wrap the Kneron KP SDK behind a unified NPU dongle management interface, supporting single-device and multi-device (multi-series) modes.
**Core abstract classes:**
```python
class DataProcessor(ABC):
    def process(self, data: Any, *args, **kwargs) -> Any: ...

class PreProcessor(DataProcessor):
    # image resize + format conversion (BGR → BGR565/RGB8888)

class PostProcessor(DataProcessor):
    # Supports 5 postprocessing types:
    # - FIRE_DETECTION (fire classification)
    # - CLASSIFICATION (generic classification)
    # - YOLO_V3 (object detection)
    # - YOLO_V5 (object detection, uses the reference implementation)
    # - RAW_OUTPUT (raw output)
```
**Device specs (DongleSeriesSpec):**
| Series | Product ID | Compute (GOPS) |
|------|-----------|---------|
| KL520 | 0x100 | 2 GOPS |
| KL720 | 0x720 | 28 GOPS |
| KL630 | 0x630 | 400 GOPS |
| KL730 | 0x730 | 1600 GOPS |
**Inference result data structures:**
```python
@dataclass
class ClassificationResult:
    probability: float
    class_name: str
    class_num: int
    confidence_threshold: float

@dataclass
class ObjectDetectionResult:
    class_count: int
    box_count: int
    box_list: List[BoundingBox]
    # Letterbox mapping info (for restoring original-image coordinates)
    model_input_width, model_input_height: int
    pad_left, pad_top, pad_right, pad_bottom: int
```
### 3.5 User Interface Layer (`ui/`)
**Responsibility:** present the visual pipeline design environment; manage node property editing and performance monitoring display.
**Main windows:**
- `DashboardLogin` (`ui/windows/login.py`): launch screen, recent project list, new/load pipeline
- `DashboardWindow` (`ui/windows/dashboard.py`): main workspace, three-panel layout
- `PipelineEditor` (`ui/windows/pipeline_editor.py`): embedded NodeGraphQt visual editor
**Three-panel layout:**
| Panel | Width | Main contents |
|------|---------|---------|
| Left | 25% | Node palette (drag source), pipeline action buttons |
| Center | 50% | NodeGraphQt visual editor, global status bar |
| Right | 25% | Properties tab (node settings), Performance tab (monitoring), Dongles tab (device management) |
### 3.6 Application Entry (`main.py`)
**Responsibility:** application initialization, single-instance protection, Qt environment setup.
**Single-instance mechanism:** the `SingleInstance` class layers several protections (sketched below):
1. Qt `QSharedMemory` (cross-platform)
2. A file lock (Unix: fcntl / Windows: O_CREAT|O_EXCL)
3. Automatic cleanup of stale lock files older than 5 minutes
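A minimal sketch of the `QSharedMemory` half of this check (the file lock and stale-lock cleanup are omitted, and the key name is illustrative):
```python
from PyQt5.QtCore import QSharedMemory

_instance_guard = None  # module-level reference keeps the segment alive

def already_running(key: str = "cluster4npu-ui") -> bool:
    global _instance_guard
    guard = QSharedMemory(key)
    if guard.attach():       # segment already exists: another instance owns it
        guard.detach()
        return True
    if not guard.create(1):  # claim the key with a 1-byte segment
        return True
    _instance_guard = guard  # hold the reference for the process lifetime
    return False
```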
---
## 4. Data Flow
### 4.1 Design-Time Data Flow
```
User drags nodes
    ▼
NodeGraphQt visual graph
    ▼
core/pipeline.py
    ▼ analyze_pipeline_stages()
List[PipelineStage] (logical stage list)
    ├──→ UI shows the stage count (status bar)
    └──→ validation error messages (Validation Errors)
```
### 4.2 Runtime Data Flow
```
Input source (camera / video / image)
camera_source.py / video_source.py
│ numpy.ndarray (BGR frame)
▼
InferencePipeline.put_data()
pipeline_input_queue (Queue, maxsize=100)
coordinator_thread (coordinator thread)
builds the PipelineData wrapper
▼ (passes through each stage in order)
PipelineStage[0].input_queue
worker_thread[0]
1. input_preprocessor (optional inter-stage preprocessing)
2. MultiDongle.preprocess_frame() (BGR → BGR565 conversion)
3. MultiDongle.put_input() (enqueue for inference)
4. MultiDongle.get_latest_inference_result() (non-blocking fetch)
5. update PipelineData.stage_results
PipelineStage[0].output_queue
▼ (next stage ...)
pipeline_output_queue (Queue, maxsize=50)
├──→ result_callback (UI update)
└──→ stats_callback (performance statistics)
```
### 4.3 The .mflow File Format
Pipelines are saved as JSON:
```json
{
"nodes": [
{
"type": "ModelNode",
"name": "Stage 1 Model",
"properties": {
"model_path": "/path/to/model.nef",
"dongle_series": "720",
"num_dongles": 2
},
"position": [100, 200]
}
],
"connections": [
{"from_node": "input_0", "from_port": "output", "to_node": "model_0", "to_port": "input"}
]
}
```
---
## 5. Architecture Decision Records (ADR)
### ADR-001: PyQt5 as the GUI framework
**Decision**: use PyQt5 (>= 5.15.11)
**Rationale:**
- NodeGraphQt depends on PyQt5; other frameworks are not an option
- PyQt5 has mature support on Windows
- Rich widget set and signal/slot mechanism
**Trade-offs:**
- Pins Python to 3.9-3.11 (PyQt5 + Kneron SDK compatibility)
- PyQt6 is not backward compatible; migration is not planned short term
### ADR-002: NodeGraphQt as the visual node editor
**Decision**: use NodeGraphQt (>= 0.6.40)
**Rationale:**
- Complete drag-and-drop node-graph editing out of the box; low development cost
- Supports node connections, property panels, and visual output
**Trade-offs:**
- Limited UI customization (e.g., node colors and shapes)
- Node identification needs multiple fallbacks (via `__identifier__`, `NODE_NAME`, etc.) because NodeGraphQt versions expose inconsistent APIs
### ADR-003: Multithreaded pipeline architecture
**Decision**: one worker thread per stage, plus one coordinator thread
**Rationale:**
- Inference is CPU/hardware intensive; threads keep the UI from blocking
- Independent per-stage threads allow pipelining, raising throughput
**Trade-offs:**
- The coordinator hands data off sequentially, so this is not true parallelism (that would need a DAG scheduler)
- Inter-thread communication uses `queue.Queue` with fixed memory bounds
### ADR-004: Non-blocking inference result retrieval
**Decision**: `MultiDongle.get_latest_inference_result()` is non-blocking
**Rationale:**
- Matches the design pattern of the Kneron example code (example.py)
- Prevents inference latency from blocking the whole pipeline thread
**Trade-offs:**
- The result may be None (not yet ready), so async/processing states must be filtered out
### ADR-005: Cumulative FPS calculation
**Decision**: `completed_counter / elapsed_time` (measured from the first result)
**Rationale:**
- Matches the official Kneron examples, keeping the numbers comparable
- Excludes the abnormally low FPS of the warm-up period
**Trade-offs:**
- Does not reflect momentary FPS fluctuations (fine for steady-state scenarios, not for latency-sensitive ones)
### ADR-006: PyInstaller packaging
**Decision**: use PyInstaller (`main.spec`) to produce a standalone executable
**Rationale:**
- Target users (system integrators) may not have a Python environment
- Simplifies deployment
**Trade-offs:**
- The packaged executable is large
- The Kneron KP SDK's dynamic libraries must be correctly included in the packaging spec
---
## 6. Known Limitations and Technical Debt
### 6.1 Known Bugs
| Bug | Status | Impact |
|-----|------|------|
| Node property display issue | Unfixed (recorded in v0.0.2) | The right-panel Properties tab may show incorrect values |
| Broken output visualization (with postprocessed results) | Unfixed (recorded in v0.0.2) | Output frames may render incorrectly |
### 6.2 Technical Debt
| Item | Severity | Description |
|------|--------|------|
| Unorganized root-directory debug scripts | Low | `debug_*.py`, `force_cleanup.py`, etc. should move to `tools/` |
| Messy tests/ naming | Medium | 42 scripts lack systematic categorization; some do not start with test_ |
| No pytest test framework | Medium | Core modules (InferencePipeline, MultiDongle) have no pytest coverage |
| Sequential coordinator | Medium | True stage parallelism requires refactoring the coordinator into a DAG model |
| Multi-fallback node identification | Low | Hard to read; should converge on a single identification strategy |
| RTSP streaming is basic only | Low | Full RTSP functionality is not on the current roadmap |
### 6.3 Performance Limitations
- **Sequential coordinator**: the coordinator currently passes data Stage 0 → Stage 1 in order, so there is no true parallel inference (that requires refactoring to a pipelined queue model)
- **Cumulative FPS hides fluctuations**: accurate over long runs, but short-term variation is invisible
- **Output queue cap of 50**: may become a bottleneck at high throughput
---
## 7. Future Architecture Evolution
### Phase 1: Performance Visualization (maps to DEVELOPMENT_ROADMAP Phase 1)
**New architecture components:**
```python
# New module: core/performance/
class PerformanceBenchmarker:
    """Automated performance tester."""
    def run_sequential_benchmark(self, pipeline_config) -> BenchmarkResult
    def run_parallel_benchmark(self, pipeline_config) -> BenchmarkResult
    def calculate_speedup(self, seq: BenchmarkResult, par: BenchmarkResult) -> float

class PerformanceHistory:
    """Performance history (local JSON storage)."""
    def record(self, result: BenchmarkResult)
    def get_history(self, limit: int) -> List[BenchmarkResult]
```
**New UI components:**
- `ui/components/performance_dashboard.py`: live FPS/latency line charts (pyqtgraph or matplotlib)
- `ui/dialogs/benchmark_dialog.py`: benchmark launch and result display
**Architectural considerations:**
- The benchmark must be able to run `InferencePipeline` in single-device or multi-device mode, which requires a mode-switching interface at the `StageConfig` level
- Chart data must be produced in a separate thread and delivered to the UI thread via Qt signals (see the sketch below)
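A sketch of that signal pattern with illustrative names (the real dashboard wiring may differ):
```python
import time
from PyQt5.QtCore import QThread, pyqtSignal

class StatsWorker(QThread):
    # cross-thread signal: delivered on the UI thread's event loop
    stats_ready = pyqtSignal(dict)

    def run(self):
        while not self.isInterruptionRequested():
            stats = {"fps": 30.0, "latency_ms": 22.0}  # placeholder sample
            self.stats_ready.emit(stats)
            time.sleep(1.0)  # matches the >= 1 Hz update-rate requirement

# usage:
#   worker = StatsWorker()
#   worker.stats_ready.connect(dashboard.update_stats)  # hypothetical slot
#   worker.start()
```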
### Phase 2裝置管理對應 DEVELOPMENT_ROADMAP Phase 2
**需要新增的架構元件:**
```python
# 強化 core/functions/Multidongle.py
class DeviceManager:
"""裝置管理器"""
def scan_devices() -> List[DeviceInfo]
def get_device_health(device_id: str) -> DeviceHealth
def assign_device(device_id: str, stage_id: str)
def get_load_balance_recommendation() -> Dict[str, str]
@dataclass
class DeviceInfo:
device_id: str
series: str # KL520/KL720/KL1080
status: str # online/offline/busy
gops: int # 算力(來自 DongleSeriesSpec
assigned_stage: Optional[str]
```
**UI 層新增:**
- `ui/components/device_management_panel.py`:裝置狀態儀表板
### Phase 3優化引擎對應 DEVELOPMENT_ROADMAP Phase 3
**需要新增的架構元件:**
```python
# 新增模組core/optimization/
class OptimizationEngine:
def analyze_pipeline(self, stats: PipelineStats) -> List[OptimizationSuggestion]
def predict_performance(self, config: PipelineConfig) -> PerformancePrediction
@dataclass
class OptimizationSuggestion:
type: str # "rebalance_devices" | "remove_redundant_node" | ...
description: str
estimated_improvement: float # 預估效能提升 %
action: Callable # 可執行的改善動作
```
### 架構演進的長期考量
1. **Coordinator 重構**:當前循序協調器在多 Stage Pipeline 中形成瓶頸。長期應重構為流水線pipeline模式讓 Stage N+1 在 Stage N 處理下一幀時就開始處理上一幀的結果。
2. **測試架構建立**:建立 pytest 測試框架,核心模組需達到 80% 以上覆蓋率(特別是 `InferencePipeline` 的佇列邏輯、`pipeline.py` 的 Stage 分析邏輯)。
3. **型別標註完善**:目前部分模組缺乏完整型別標註,建議逐步加入 mypy 靜態分析。

.autoflow/progress.md

@@ -0,0 +1,39 @@
# Project Progress — Cluster4NPU UI
## Purpose: onboard the existing project → complete the documentation → Phase 1 development
## Current stage: Phase 1 development complete; tests pending
## Current status: in progress
## Last updated: 2026-04-05
## Progress Table
| Stage | Status | Completed | Notes |
|------|------|----------|------|
| Project onboarding | ✅ Done | 2026-04-05 | Local path |
| Project health check | ✅ Done | 2026-04-05 | See 00-onboarding/health-check.md |
| PRD | ✅ Done | 2026-04-05 | 02-prd/PRD.md |
| Design doc | ✅ Done | 2026-04-05 | 04-architecture/design-doc.md |
| TDD | ✅ Done | 2026-04-05 | 04-architecture/TDD.md |
| Cross review | ✅ Done | 2026-04-05 | PM reviewed the TDD; gaps filled |
| TDD addendum (Phase 4 Feature 11) | ✅ Done | 2026-04-05 | reportlab PDF + stdlib csv |
| Phase 1 backend | ✅ Review passed | 2026-04-05 | PerformanceBenchmarker + PerformanceHistory (31 tests) |
| Phase 1 UI | ✅ Review passed | 2026-04-05 | PerformanceDashboard + BenchmarkDialog (58 tests total) |
| Phase 1 dashboard integration | ✅ Review passed | 2026-04-05 | dashboard.py integration complete |
| Phase 2 backend | ✅ Review passed | 2026-04-05 | DeviceManager + BottleneckAlert (94 tests) |
| Phase 2 UI | ✅ Review passed | 2026-04-05 | DeviceManagementPanel (integrated into the dashboard) |
| Phase 3 | ✅ Review passed | 2026-04-06 | OptimizationEngine + TemplateManager (154 tests) |
| Phase 4 | ✅ Review passed | 2026-04-06 | ReportExporter + ExportReportDialog (192 tests) |
## Current To-Dos
- [ ] Run the Phase 1 integration tests to confirm all components work together
- [ ] Decide whether to proceed with Phase 2
## Open Issues
- None
## Key Decisions
- Code source: local path (not GitHub)
- Documentation strategy: reverse-engineered from the code; no design mockups (no existing UI screenshots or wireframes)

core/device/__init__.py

@@ -0,0 +1 @@
"""core.device — device management subpackage."""

core/device/bottleneck.py

@@ -0,0 +1,32 @@
"""
core/device/bottleneck.py
BottleneckAlert dataclass describes a detected pipeline bottleneck.
Integration with InferencePipeline is deferred to a later phase.
This module only defines the data structure.
"""
from dataclasses import dataclass
@dataclass
class BottleneckAlert:
"""Describes a detected pipeline bottleneck in a single Stage.
Attributes
----------
stage_id:
The pipeline Stage that is experiencing the bottleneck.
queue_fill_rate:
Input queue utilisation as a fraction in [0.0, 1.0].
suggested_action:
Human-readable suggestion (e.g. "Add more Dongles to this stage").
severity:
Either ``"warning"`` (fill_rate > 0.8) or ``"critical"``
(fill_rate > 0.95).
"""
stage_id: str
queue_fill_rate: float
suggested_action: str
severity: str # "warning" | "critical"
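Detection itself is deferred, but the documented thresholds imply a derivation along these lines (a hypothetical helper, not part of the module):
```python
from typing import Optional

def make_alert(stage_id: str, fill_rate: float) -> Optional[BottleneckAlert]:
    """Derive an alert from a stage's queue fill rate, per the docstring thresholds."""
    if fill_rate > 0.95:
        severity = "critical"
    elif fill_rate > 0.8:
        severity = "warning"
    else:
        return None  # no bottleneck at this fill rate
    return BottleneckAlert(
        stage_id=stage_id,
        queue_fill_rate=fill_rate,
        suggested_action="Add more Dongles to this stage",
        severity=severity,
    )
```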

core/device/device_manager.py

@@ -0,0 +1,217 @@
"""
core/device/device_manager.py
DeviceManager manages NPU Dongle discovery, health, and assignment.
Design:
- scan_devices() calls the Kneron KP SDK but accepts an injectable kp_api
parameter so tests can supply a Mock without real hardware.
- DongleSeriesSpec constants are inlined here to avoid a circular import
from core.functions.Multidongle.
"""
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Dict, List, Optional
# ---------------------------------------------------------------------------
# GOPS table (mirrors DongleSeriesSpec in Multidongle.py)
# ---------------------------------------------------------------------------
_PRODUCT_ID_TO_SERIES: Dict[int, str] = {
0x100: "KL520",
0x720: "KL720",
0x630: "KL630",
0x730: "KL730",
}
_SERIES_GOPS: Dict[str, int] = {
"KL520": 2,
"KL720": 28,
"KL630": 400,
"KL730": 1600,
}
# ---------------------------------------------------------------------------
# Data classes
# ---------------------------------------------------------------------------
@dataclass
class DeviceInfo:
"""Snapshot of a single NPU Dongle's state."""
device_id: str # unique id, e.g. "usb-<port_id>"
series: str # "KL520" | "KL720" | ...
product_id: int # raw USB product ID
status: str # "online" | "offline" | "busy"
gops: int # compute capacity
assigned_stage: Optional[str] # currently assigned stage ID, or None
current_fps: float # live inference throughput
    utilization_pct: float         # 0.0 to 100.0
@dataclass
class DeviceHealth:
"""Health snapshot of a single NPU Dongle."""
device_id: str
temperature_celsius: Optional[float] # None if SDK does not support it
error_count: int
last_error: Optional[str]
uptime_seconds: float
# ---------------------------------------------------------------------------
# DeviceManager
# ---------------------------------------------------------------------------
class DeviceManager:
"""Manages NPU Dongle discovery, health queries, and stage assignment.
Parameters
----------
kp_api:
Kneron KP SDK module reference. Pass ``None`` to import the real
``kp`` module at runtime, or inject a Mock in tests.
"""
def __init__(self, kp_api=None) -> None:
if kp_api is None:
import kp as _kp # real SDK (requires hardware)
self._kp = _kp
else:
self._kp = kp_api
# Known devices, populated by scan_devices()
self._devices: Dict[str, DeviceInfo] = {}
# stage assignments: {device_id: stage_id}
self._assignments: Dict[str, str] = {}
# ------------------------------------------------------------------
# Public API
# ------------------------------------------------------------------
def scan_devices(self) -> List[DeviceInfo]:
"""Scan for connected Kneron Dongles and update internal state.
Returns
-------
List[DeviceInfo]
All currently connected devices, each with status "online".
"""
try:
descriptors = self._kp.core.scan_devices()
except Exception:
return []
if not descriptors or descriptors.device_descriptor_number == 0:
return []
found: Dict[str, DeviceInfo] = {}
for desc in descriptors.device_descriptor_list:
try:
port_id = desc.usb_port_id
product_id = desc.product_id
device_id = f"usb-{port_id}"
series = _PRODUCT_ID_TO_SERIES.get(product_id, "Unknown")
gops = _SERIES_GOPS.get(series, 0)
assigned = self._assignments.get(device_id)
info = DeviceInfo(
device_id=device_id,
series=series,
product_id=product_id,
status="online",
gops=gops,
assigned_stage=assigned,
current_fps=0.0,
utilization_pct=0.0,
)
found[device_id] = info
except Exception:
continue
self._devices = found
return list(self._devices.values())
def get_device_health(self, device_id: str) -> DeviceHealth:
"""Return a health snapshot for the given device.
Temperature is returned as ``None`` because the current KP SDK
version does not expose thermal sensors.
"""
return DeviceHealth(
device_id=device_id,
temperature_celsius=None,
error_count=0,
last_error=None,
uptime_seconds=0.0,
)
def assign_device(self, device_id: str, stage_id: str) -> bool:
"""Assign *device_id* to *stage_id*.
Returns
-------
bool
``False`` if the device is unknown or already assigned to a
different stage; ``True`` on success.
"""
device = self._devices.get(device_id)
if device is None or device.status == "offline":
return False
existing_stage = self._assignments.get(device_id)
if existing_stage is not None and existing_stage != stage_id:
return False # already assigned to a different stage
self._assignments[device_id] = stage_id
self._devices[device_id].assigned_stage = stage_id
return True
def unassign_device(self, device_id: str) -> bool:
"""Release *device_id* from its current stage assignment.
Returns
-------
bool
``False`` if the device is unknown; ``True`` on success.
"""
if device_id not in self._devices:
return False
self._assignments.pop(device_id, None)
self._devices[device_id].assigned_stage = None
return True
def get_load_balance_recommendation(
self, stages: List[str]
) -> Dict[str, str]:
"""Recommend device-to-stage assignment by GOPS (descending).
Higher-GOPS devices are assigned to earlier stages. Stages with
no available device are mapped to an empty string.
Parameters
----------
stages:
Ordered list of stage IDs (first stage has highest priority).
Returns
-------
Dict[str, str]
``{stage_id: device_id}``; device_id is "" if unavailable.
"""
available = sorted(
self._devices.values(),
key=lambda d: d.gops,
reverse=True,
)
recommendation: Dict[str, str] = {}
for i, stage_id in enumerate(stages):
if i < len(available):
recommendation[stage_id] = available[i].device_id
else:
recommendation[stage_id] = ""
return recommendation
def get_device_statistics(self) -> Dict[str, DeviceInfo]:
"""Return a snapshot of all known devices keyed by device_id."""
return dict(self._devices)
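A sketch of the injectable-SDK pattern the docstring describes, with `unittest.mock` standing in for real hardware:
```python
from types import SimpleNamespace
from unittest.mock import Mock

fake_kp = Mock()
fake_kp.core.scan_devices.return_value = SimpleNamespace(
    device_descriptor_number=1,
    device_descriptor_list=[SimpleNamespace(usb_port_id=3, product_id=0x720)],
)

manager = DeviceManager(kp_api=fake_kp)
devices = manager.scan_devices()
assert devices[0].device_id == "usb-3" and devices[0].series == "KL720"
assert manager.assign_device("usb-3", "stage_0")  # succeeds: device is online and unassigned
```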

core/functions/result_handler.py

@@ -3,8 +3,18 @@ import json
import csv
import os
import time
import dataclasses
from typing import Any, Dict, List
class _InferenceResultEncoder(json.JSONEncoder):
    """Convert dataclass inference-result objects into serializable dicts."""
def default(self, o):
if dataclasses.is_dataclass(o) and not isinstance(o, type):
return dataclasses.asdict(o)
return super().default(o)
class ResultSerializer:
"""
Serializes inference results into various formats.
@@ -12,8 +22,10 @@ class ResultSerializer:
def to_json(self, data: Dict[str, Any]) -> str:
"""
Serializes data to a JSON string.
Dataclass objects (ObjectDetectionResult, ClassificationResult, etc.)
are automatically converted to dicts via _InferenceResultEncoder.
"""
-        return json.dumps(data, indent=2)
+        return json.dumps(data, indent=2, cls=_InferenceResultEncoder)
def to_csv(self, data: List[Dict[str, Any]], fieldnames: List[str]) -> str:
"""

core/optimization/__init__.py

@@ -0,0 +1 @@
"""core/optimization — pipeline optimization suggestion module."""

core/optimization/engine.py

@@ -0,0 +1,248 @@
"""
core/optimization/engine.py
OptimizationEngine analyzes pipeline runtime statistics and produces
actionable optimization suggestions.

Design notes:
- analyze_pipeline accepts the stats dict from InferencePipeline.get_pipeline_statistics().
- The three optimization rules (rebalance_devices, adjust_queue, add_devices) are
  independent: each can fire on its own, and they are not mutually exclusive.
- apply_suggestion calls device_manager.assign_device for rebalance_devices; the other
  types (add_devices, adjust_queue) need manual action, so they only log and return True.
- predict_performance is a heuristic estimate with a conservative factor of 0.6.
"""
from __future__ import annotations
import logging
import uuid
from dataclasses import dataclass, field
from typing import Any, Dict, List, Tuple
logger = logging.getLogger(__name__)
# Optimization rule thresholds
_QUEUE_FILL_THRESHOLD = 0.70     # queue_fill_rate above this triggers rebalance_devices
_TIME_RATIO_THRESHOLD = 2.0      # max/min avg_processing_time above this triggers adjust_queue
_UTILIZATION_THRESHOLD = 85.0    # every device's utilization_pct above this triggers add_devices
_CONSERVATIVE_FACTOR = 0.6       # conservative factor for predict_performance
@dataclass
class OptimizationSuggestion:
    """A single optimization suggestion.

    Attributes:
        suggestion_id: unique identifier (a UUID string)
        type: suggestion type, "rebalance_devices" | "adjust_queue" | "add_devices"
        description: human-readable explanation, avoiding technical jargon
        estimated_improvement_pct: estimated improvement percentage (0.0-100.0)
        confidence: confidence level, "high" | "medium" | "low"
        action_params: parameter dict needed to apply the suggestion
    """
suggestion_id: str
type: str
description: str
estimated_improvement_pct: float
confidence: str
action_params: Dict[str, Any]
class OptimizationEngine:
    """Analyzes pipeline runtime statistics and produces optimization suggestions."""
# ------------------------------------------------------------------
    # Public interface
# ------------------------------------------------------------------
def analyze_pipeline(
self,
stats: Dict[str, Any],
    ) -> List[OptimizationSuggestion]:
        """Analyze pipeline runtime statistics and produce optimization suggestions.

        Args:
            stats: dict from InferencePipeline.get_pipeline_statistics();
                see the module docstring for the format.

        Returns:
            A possibly empty list of OptimizationSuggestion.
        """
stages: Dict[str, Any] = stats.get("stages", {})
devices: Dict[str, Any] = stats.get("devices", {})
suggestions: List[OptimizationSuggestion] = []
suggestions.extend(self._check_rebalance_devices(stages))
suggestions.extend(self._check_adjust_queue(stages))
suggestions.extend(self._check_add_devices(devices))
return suggestions
def predict_performance(
self,
config: List[Any],
available_devices: List[Any],
    ) -> Dict[str, float]:
        """Estimate pipeline performance heuristically.

        Formula:
            estimated_fps = sum(d.gops for d in available_devices) / num_stages * 0.6
            estimated_latency_ms = 1000 / estimated_fps
            confidence_range = (estimated_fps * 0.8, estimated_fps * 1.2)

        Args:
            config: list of stage configs, one element per stage.
            available_devices: list of DeviceInfo objects (each with a gops attribute).

        Returns:
            Dict with estimated_fps, estimated_latency_ms, and confidence_range.
        """
num_stages = len(config)
total_gops = sum(getattr(d, "gops", 0) for d in available_devices)
if num_stages == 0 or total_gops == 0:
return {
"estimated_fps": 0.0,
"estimated_latency_ms": 0.0,
"confidence_range": (0.0, 0.0),
}
estimated_fps = total_gops / num_stages * _CONSERVATIVE_FACTOR
estimated_latency_ms = 1000.0 / estimated_fps
confidence_range = (estimated_fps * 0.8, estimated_fps * 1.2)
return {
"estimated_fps": estimated_fps,
"estimated_latency_ms": estimated_latency_ms,
"confidence_range": confidence_range,
}
def apply_suggestion(
self,
suggestion: OptimizationSuggestion,
device_manager: Any,
    ) -> bool:
        """Apply an optimization suggestion.

        - rebalance_devices: calls device_manager.assign_device and returns its result.
        - add_devices / adjust_queue: logs the suggestion (manual action required)
          and returns True.

        Args:
            suggestion: the suggestion to apply.
            device_manager: a DeviceManager instance.

        Returns:
            Whether the suggestion was applied successfully.
        """
if suggestion.type == "rebalance_devices":
device_id = suggestion.action_params.get("device_id", "")
stage_id = suggestion.action_params.get("stage_id", "")
success = device_manager.assign_device(device_id, stage_id)
if success:
            logger.info(
                "Reassigned device %s to stage %s", device_id, stage_id
            )
else:
            logger.warning(
                "Could not assign device %s to stage %s", device_id, stage_id
            )
return success
if suggestion.type in ("add_devices", "adjust_queue"):
        logger.info(
            "Optimization suggestion [%s]: %s (manual action required)",
suggestion.type,
suggestion.description,
)
return True
        logger.warning("Unknown suggestion type: %s", suggestion.type)
return False
# ------------------------------------------------------------------
    # Internal rule implementations
# ------------------------------------------------------------------
def _check_rebalance_devices(
self, stages: Dict[str, Any]
    ) -> List[OptimizationSuggestion]:
        """Rule 1: queue_fill_rate > 0.70 → suggest reassigning devices."""
suggestions = []
for stage_id, stage_data in stages.items():
fill_rate: float = stage_data.get("queue_fill_rate", 0.0)
if fill_rate > _QUEUE_FILL_THRESHOLD:
pct = round((fill_rate - _QUEUE_FILL_THRESHOLD) / _QUEUE_FILL_THRESHOLD * 100, 1)
suggestions.append(
OptimizationSuggestion(
suggestion_id=str(uuid.uuid4()),
type="rebalance_devices",
                        description=(
                            f"Stage {stage_id} has high queue utilization ({fill_rate:.0%}); "
                            "assigning a higher-capacity device to this stage is recommended to reduce the backlog."
                        ),
estimated_improvement_pct=min(pct, 40.0),
confidence="medium",
action_params={"stage_id": stage_id, "device_id": ""},
)
)
return suggestions
def _check_adjust_queue(
self, stages: Dict[str, Any]
    ) -> List[OptimizationSuggestion]:
        """Rule 2: max/min avg_processing_time ratio > 2.0 → suggest adjusting queue sizes."""
if len(stages) < 2:
return []
times = {
sid: data.get("avg_processing_time", 0.0)
for sid, data in stages.items()
}
max_time = max(times.values())
min_time = min(times.values())
if min_time <= 0 or max_time / min_time <= _TIME_RATIO_THRESHOLD:
return []
ratio = max_time / min_time
return [
OptimizationSuggestion(
suggestion_id=str(uuid.uuid4()),
type="adjust_queue",
                description=(
                    f"Stage processing times differ by a factor of {ratio:.1f}; "
                    "adjusting queue sizes is recommended to balance throughput across stages."
                ),
estimated_improvement_pct=min((ratio - 2.0) * 10.0, 30.0),
confidence="low",
action_params={"max_stage": max(times, key=times.get), "ratio": ratio},
)
]
def _check_add_devices(
self, devices: Dict[str, Any]
    ) -> List[OptimizationSuggestion]:
        """Rule 3: every dongle above 85% utilization → suggest adding more dongles."""
if not devices:
return []
utilizations = [
data.get("utilization_pct", 0.0) for data in devices.values()
]
if not all(u > _UTILIZATION_THRESHOLD for u in utilizations):
return []
avg_util = sum(utilizations) / len(utilizations)
return [
OptimizationSuggestion(
suggestion_id=str(uuid.uuid4()),
type="add_devices",
                description=(
                    f"Average utilization across all devices has reached {avg_util:.1f}%; "
                    "the system is near saturation, so adding more NPU devices is recommended."
                ),
estimated_improvement_pct=min((avg_util - 85.0) * 2.0, 50.0),
confidence="high",
action_params={"current_avg_utilization": avg_util},
)
]
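A sketch of driving the engine with a hand-built stats dict; the exact `get_pipeline_statistics()` schema is assumed from the field names the rules read:
```python
engine = OptimizationEngine()
stats = {
    "stages": {
        "stage_0": {"queue_fill_rate": 0.85, "avg_processing_time": 40.0},
        "stage_1": {"queue_fill_rate": 0.10, "avg_processing_time": 8.0},
    },
    "devices": {
        "usb-3": {"utilization_pct": 92.0},
        "usb-5": {"utilization_pct": 88.0},
    },
}
for s in engine.analyze_pipeline(stats):
    print(s.type, f"{s.estimated_improvement_pct:.1f}%", s.description)
# All three rules fire here: fill rate 0.85 > 0.70, time ratio 5.0 > 2.0,
# and every device is above the 85% utilization threshold.
```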

core/performance/__init__.py

@@ -0,0 +1,23 @@
"""
core/performance - performance benchmarking and history module.

Provides benchmark execution, result storage, and regression analysis.

Usage:
    from core.performance import (
        PerformanceBenchmarker,
        BenchmarkConfig,
        BenchmarkResult,
        PerformanceHistory,
    )
"""
from .benchmarker import BenchmarkConfig, BenchmarkResult, PerformanceBenchmarker
from .history import PerformanceHistory
__all__ = [
"BenchmarkConfig",
"BenchmarkResult",
"PerformanceBenchmarker",
"PerformanceHistory",
]

core/performance/benchmarker.py

@@ -0,0 +1,247 @@
"""
core/performance/benchmarker.py - performance benchmarking module.

Provides the BenchmarkConfig and BenchmarkResult data structures, plus
PerformanceBenchmarker, which runs single-device vs multi-device benchmarks
and computes the speedup.

Design notes:
- The actual inference call is injected as an inference_runner callable, so
  unit tests can supply a Mock and run without hardware.
- Pure computation (calculate_speedup) is directly testable with no mocks.

Usage (test environment):
    config = BenchmarkConfig(pipeline_config=[], test_input_source="test.mp4")
    benchmarker = PerformanceBenchmarker()
    def mock_runner(frame_data):
        return {"result": "ok"}
    seq = benchmarker.run_sequential_benchmark(config, inference_runner=mock_runner)
    par = benchmarker.run_parallel_benchmark(config, inference_runner=mock_runner)
    speedup = benchmarker.calculate_speedup(seq, par)
"""
import time
import statistics
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List, Optional, Tuple
@dataclass
class BenchmarkConfig:
    """Benchmark configuration.

    Attributes:
        pipeline_config: list of pipeline stage configs (from the UI)
        test_input_source: test input source (video file path or camera index)
        test_duration_seconds: test duration, excluding the warm-up phase
        warmup_frames: warm-up frame count, excluded from statistics
    """
pipeline_config: List[Any]
test_input_source: str
test_duration_seconds: float = 30.0
warmup_frames: int = 50
@dataclass
class BenchmarkResult:
    """Result of a single benchmark run.

    Attributes:
        mode: 'sequential' (single device) or 'parallel' (multi-device)
        fps: frames per second
        avg_latency_ms: average inference latency in milliseconds
        p95_latency_ms: 95th-percentile latency in milliseconds
        total_frames: frames processed during the test (excluding warm-up)
        timestamp: Unix timestamp of the test start
        device_config: device assignment, e.g. {"KL520": 1}
        id: unique identifier, filled in by PerformanceHistory.record()
    """
mode: str
fps: float
avg_latency_ms: float
p95_latency_ms: float
total_frames: int
timestamp: float
device_config: Dict[str, Any]
id: Optional[str] = field(default=None)
class PerformanceBenchmarker:
    """Runs single-device vs multi-device benchmarks and computes the speedup.

    Designed testability-first:
    - run_sequential_benchmark / run_parallel_benchmark accept an inference_runner
      parameter, so tests can inject a Mock instead of real hardware.
    - calculate_speedup is a pure function over two BenchmarkResult values.

    Attributes:
        device_config: device configuration, copied into BenchmarkResult.device_config
    """
    def __init__(self, device_config: Optional[Dict[str, Any]] = None):
        """Initialize the PerformanceBenchmarker.

        Args:
            device_config: device configuration, e.g. {"KL520": 1}; defaults to an empty dict.
        """
self.device_config: Dict[str, Any] = device_config or {}
# ------------------------------------------------------------------
    # Public interface
# ------------------------------------------------------------------
def run_sequential_benchmark(
self,
config: BenchmarkConfig,
inference_runner: Optional[Callable[[Any], Any]] = None,
    ) -> BenchmarkResult:
        """Run the benchmark in single-device (sequential) mode.

        Args:
            config: benchmark configuration.
            inference_runner: inference function with signature ``(frame_data: Any) -> Any``;
                if None, a no-op runner is used (architecture validation only).

        Returns:
            A BenchmarkResult with mode='sequential'.
        """
runner = inference_runner or self._default_runner
return self._run_benchmark(config, runner, mode="sequential")
def run_parallel_benchmark(
self,
config: BenchmarkConfig,
inference_runner: Optional[Callable[[Any], Any]] = None,
    ) -> BenchmarkResult:
        """Run the benchmark in multi-device (parallel) mode.

        Args:
            config: benchmark configuration.
            inference_runner: inference function with signature ``(frame_data: Any) -> Any``;
                if None, a no-op runner is used (architecture validation only).

        Returns:
            A BenchmarkResult with mode='parallel'.
        """
runner = inference_runner or self._default_runner
return self._run_benchmark(config, runner, mode="parallel")
def calculate_speedup(
self,
seq: BenchmarkResult,
par: BenchmarkResult,
    ) -> float:
        """Compute the parallel-over-sequential speedup.

        Formula: par.fps / seq.fps

        Args:
            seq: the sequential-mode BenchmarkResult.
            par: the parallel-mode BenchmarkResult.

        Returns:
            The speedup factor (float).

        Raises:
            ValueError: if seq.fps <= 0 (to avoid division by zero).
        """
if seq.fps <= 0:
            raise ValueError(
                f"Sequential-mode FPS must be greater than 0; got {seq.fps}"
            )
return par.fps / seq.fps
def run_full_benchmark(
self,
config: BenchmarkConfig,
inference_runner: Optional[Callable[[Any], Any]] = None,
    ) -> Tuple[BenchmarkResult, BenchmarkResult, float]:
        """Run the full benchmark: sequential, then parallel, then the speedup.

        Sequence:
            1. run the sequential benchmark
            2. run the parallel benchmark
            3. compute the speedup

        Args:
            config: benchmark configuration.
            inference_runner: inference function (a Mock can be injected).

        Returns:
            Tuple[BenchmarkResult, BenchmarkResult, float]:
            (sequential_result, parallel_result, speedup)
        """
seq_result = self.run_sequential_benchmark(config, inference_runner)
par_result = self.run_parallel_benchmark(config, inference_runner)
speedup = self.calculate_speedup(seq_result, par_result)
return seq_result, par_result, speedup
# ------------------------------------------------------------------
    # Internal implementation
# ------------------------------------------------------------------
def _run_benchmark(
self,
config: BenchmarkConfig,
runner: Callable[[Any], Any],
mode: str,
    ) -> BenchmarkResult:
        """Shared benchmark logic.

        Steps:
            1. warm-up (warmup_frames, excluded from statistics)
            2. timed test (test_duration_seconds)
            3. compute FPS, average latency, and p95 latency

        Args:
            config: benchmark configuration.
            runner: inference function.
            mode: 'sequential' or 'parallel'.

        Returns:
            BenchmarkResult
        """
        # Warm-up phase (not counted in statistics)
for _ in range(config.warmup_frames):
runner(None)
        # Timed test
latencies: List[float] = []
test_start = time.time()
while time.time() - test_start < config.test_duration_seconds:
frame_start = time.time()
runner(None)
frame_end = time.time()
            latencies.append((frame_end - frame_start) * 1000.0)  # convert to milliseconds
total_frames = len(latencies)
elapsed = time.time() - test_start
        # Compute statistics
if total_frames == 0:
fps = 0.0
avg_latency_ms = 0.0
p95_latency_ms = 0.0
else:
fps = total_frames / elapsed if elapsed > 0 else 0.0
avg_latency_ms = statistics.mean(latencies)
sorted_latencies = sorted(latencies)
p95_index = int(len(sorted_latencies) * 0.95)
p95_latency_ms = sorted_latencies[min(p95_index, len(sorted_latencies) - 1)]
return BenchmarkResult(
mode=mode,
fps=fps,
avg_latency_ms=avg_latency_ms,
p95_latency_ms=p95_latency_ms,
total_frames=total_frames,
timestamp=test_start,
device_config=dict(self.device_config),
)
@staticmethod
    def _default_runner(frame_data: Any) -> Any:
        """Default no-op inference runner, for architecture validation only."""
return None

core/performance/history.py

@@ -0,0 +1,233 @@
"""
core/performance/history.py - benchmark history module.

Provides the PerformanceHistory class, which:
- persists BenchmarkResult records to local disk as JSON
- queries history by criteria (limit / mode)
- produces a regression report comparing two runs

Storage format example:
{
  "records": [
    {
      "id": "benchmark_20260405_143022",
      "mode": "parallel",
      "fps": 45.2,
      "avg_latency_ms": 22.1,
      "p95_latency_ms": 35.0,
      "total_frames": 1356,
      "timestamp": 1743856222.0,
      "device_config": {"KL720": 2}
    }
  ]
}
"""
import json
import logging
import os
import time
from datetime import datetime
from typing import Any, Dict, List, Optional
from .benchmarker import BenchmarkResult

logger = logging.getLogger(__name__)
class PerformanceHistory:
    """Local benchmark history manager.

    Attributes:
        storage_path: full path of the JSON storage file;
            defaults to ``~/.cluster4npu/benchmark_history.json``.
    """
DEFAULT_STORAGE_PATH = os.path.join(
os.path.expanduser("~"), ".cluster4npu", "benchmark_history.json"
)
    def __init__(self, storage_path: str = DEFAULT_STORAGE_PATH):
        """Initialize PerformanceHistory.

        The storage directory is created automatically if it does not exist.

        Args:
            storage_path: path of the JSON storage file.
        """
self.storage_path = storage_path
self._ensure_storage_directory()
# ------------------------------------------------------------------
    # Public interface
# ------------------------------------------------------------------
    def record(self, result: BenchmarkResult) -> None:
        """Record one BenchmarkResult and persist it to JSON.

        This method:
            1. generates a unique id for the result (if it has none yet)
            2. writes the id back to result.id
            3. appends the record to the JSON store

        Args:
            result: the BenchmarkResult to record.
        """
data = self._load_raw()
        # Generate a unique id
record_id = self._generate_id(result)
result.id = record_id
record_dict = self._result_to_dict(result)
data["records"].append(record_dict)
self._save_raw(data)
def get_history(
self,
limit: int = 50,
mode: Optional[str] = None,
    ) -> List[BenchmarkResult]:
        """Query history records.

        Returns records newest-first (reverse chronological).

        Args:
            limit: maximum number of records to return (default 50).
            mode: if given, return only records matching this mode ('sequential' or 'parallel').

        Returns:
            List[BenchmarkResult], newest records first.
        """
data = self._load_raw()
records = data.get("records", [])
        # Filter by mode
if mode is not None:
records = [r for r in records if r.get("mode") == mode]
        # Newest first (descending timestamp)
records = sorted(records, key=lambda r: r.get("timestamp", 0), reverse=True)
        # Apply the limit
records = records[:limit]
return [self._dict_to_result(r) for r in records]
def get_regression_report(
self,
baseline_id: str,
compare_id: str,
    ) -> Dict[str, Any]:
        """Compare two runs and produce a regression report.

        Args:
            baseline_id: id of the baseline run.
            compare_id: id of the run to compare against it.

        Returns:
            Dict with the following keys:
            - baseline: BenchmarkResult (the baseline)
            - compare: BenchmarkResult (the comparison)
            - fps_change_pct: FPS change in percent (positive is an improvement)
            - avg_latency_change_pct: average latency change in percent (negative is an improvement)
            - p95_latency_change_pct: P95 latency change in percent (negative is an improvement)

        Raises:
            ValueError: if either id is not found in the history.
        """
data = self._load_raw()
all_records = {r["id"]: r for r in data.get("records", [])}
if baseline_id not in all_records:
            raise ValueError(f"Baseline benchmark id not found: {baseline_id}")
if compare_id not in all_records:
            raise ValueError(f"Comparison benchmark id not found: {compare_id}")
baseline = self._dict_to_result(all_records[baseline_id])
compare = self._dict_to_result(all_records[compare_id])
        def pct_change(old: float, new: float) -> float:
            """Compute the relative change in percent."""
if old == 0:
return 0.0
return (new - old) / old * 100.0
return {
"baseline": baseline,
"compare": compare,
"fps_change_pct": pct_change(baseline.fps, compare.fps),
"avg_latency_change_pct": pct_change(
baseline.avg_latency_ms, compare.avg_latency_ms
),
"p95_latency_change_pct": pct_change(
baseline.p95_latency_ms, compare.p95_latency_ms
),
}
# ------------------------------------------------------------------
    # Internal implementation
# ------------------------------------------------------------------
    def _ensure_storage_directory(self) -> None:
        """Create the storage directory if it does not exist."""
parent_dir = os.path.dirname(self.storage_path)
if parent_dir:
os.makedirs(parent_dir, exist_ok=True)
    def _load_raw(self) -> Dict[str, Any]:
        """Read raw data from the JSON file; return an empty structure if the file is missing or corrupt."""
if not os.path.exists(self.storage_path):
return {"records": []}
try:
with open(self.storage_path, "r", encoding="utf-8") as f:
return json.load(f)
except json.JSONDecodeError as e:
            logger.warning("History JSON file is corrupt; falling back to an empty structure: %s", e)
return {"records": []}
except (IOError, OSError) as e:
            logger.warning("Could not read the history file; falling back to an empty structure: %s", e)
return {"records": []}
    def _save_raw(self, data: Dict[str, Any]) -> None:
        """Write the data to the JSON file."""
with open(self.storage_path, "w", encoding="utf-8") as f:
json.dump(data, f, ensure_ascii=False, indent=2)
@staticmethod
    def _generate_id(result: BenchmarkResult) -> str:
        """Generate a unique identifier from the result's timestamp.

        Format: ``benchmark_YYYYMMDD_HHMMSSffffff``
        """
dt = datetime.fromtimestamp(result.timestamp)
return dt.strftime("benchmark_%Y%m%d_%H%M%S%f")
@staticmethod
    def _result_to_dict(result: BenchmarkResult) -> Dict[str, Any]:
        """Convert a BenchmarkResult into a serializable dict."""
return {
"id": result.id,
"mode": result.mode,
"fps": result.fps,
"avg_latency_ms": result.avg_latency_ms,
"p95_latency_ms": result.p95_latency_ms,
"total_frames": result.total_frames,
"timestamp": result.timestamp,
"device_config": result.device_config,
}
@staticmethod
    def _dict_to_result(data: Dict[str, Any]) -> BenchmarkResult:
        """Convert a dict back into a BenchmarkResult."""
return BenchmarkResult(
id=data.get("id"),
mode=data["mode"],
fps=data["fps"],
avg_latency_ms=data["avg_latency_ms"],
p95_latency_ms=data["p95_latency_ms"],
total_frames=data["total_frames"],
timestamp=data["timestamp"],
device_config=data.get("device_config", {}),
)
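A sketch of a record-then-compare round trip (a temp path keeps the default store untouched):
```python
import time
from core.performance import PerformanceHistory
from core.performance.benchmarker import BenchmarkResult

history = PerformanceHistory(storage_path="/tmp/benchmark_history.json")
seq = BenchmarkResult(mode="sequential", fps=14.0, avg_latency_ms=70.0,
                      p95_latency_ms=90.0, total_frames=420,
                      timestamp=time.time(), device_config={"KL720": 1})
par = BenchmarkResult(mode="parallel", fps=45.2, avg_latency_ms=22.1,
                      p95_latency_ms=35.0, total_frames=1356,
                      timestamp=time.time() + 1, device_config={"KL720": 2})
history.record(seq)  # fills in seq.id
history.record(par)  # fills in par.id
report = history.get_regression_report(seq.id, par.id)
print(f"{report['fps_change_pct']:+.1f}% FPS")  # positive: the parallel run is faster
```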

core/performance/report_exporter.py

@@ -0,0 +1,428 @@
"""
core/performance/report_exporter.py - performance report export module.

Provides the DeviceSummary and ReportData data structures plus the
ReportExporter main class, which exports benchmark results to PDF
(requires reportlab) and CSV (standard library).

Design notes:
- ReportExporter does not depend on PyQt5, only on reportlab and the stdlib.
- reportlab is guarded with try/except ImportError; if it is not installed,
  export_pdf() raises ImportError.
- export_csv() uses only the stdlib csv module, so it is always available.
- Stateless design: create a new instance per export, or call the static
  methods directly.
"""
from __future__ import annotations
import csv
import io
import time
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any, List, Optional
# ---------------------------------------------------------------------------
# reportlab availability flag
# ---------------------------------------------------------------------------
try:
from reportlab.platypus import SimpleDocTemplate # noqa: F401
_REPORTLAB_AVAILABLE = True
except ImportError:
_REPORTLAB_AVAILABLE = False
# ---------------------------------------------------------------------------
# Data structures
# ---------------------------------------------------------------------------
@dataclass
class DeviceSummary:
    """Summary of a single device, from DeviceManager."""
device_id: str
    product_name: str  # e.g. "KL720"
firmware_version: str
is_active: bool
@dataclass
class ReportData:
    """
    All data a report needs; collected by the caller (the UI) from the
    individual modules and passed into ReportExporter.
    A pure data container, decoupled from the UI / SDK for easy unit testing.
    """
    # Report metadata
    report_title: str = "Performance Test Report"
    generated_at: float = field(default_factory=time.time)  # UNIX timestamp
    pipeline_name: str = ""  # from the .mflow filename or a user-given name
    # Benchmark results (from PerformanceBenchmarker.run_full_benchmark())
    sequential_result: Optional[Any] = None  # BenchmarkResult
    parallel_result: Optional[Any] = None  # BenchmarkResult
    speedup: Optional[float] = None  # par.fps / seq.fps
    # History records (from PerformanceHistory.get_history())
    history_records: List[Any] = field(default_factory=list)  # List[BenchmarkResult]
    # Device info (from DeviceManager.get_all_devices())
    devices: List[DeviceSummary] = field(default_factory=list)
    # Chart screenshot (captured by the UI layer before export)
    chart_image_bytes: Optional[bytes] = None  # PNG bytes, from PerformanceDashboard
# ---------------------------------------------------------------------------
# ReportExporter
# ---------------------------------------------------------------------------
class ReportExporter:
    """
    Serializes ReportData to PDF and CSV files.
    Stateless design: create a new instance per export, or call the static
    methods directly.
    """
# ------------------------------------------------------------------
# PDF export
# ------------------------------------------------------------------
def export_pdf(
self,
data: ReportData,
output_path: "str | Path",
) -> Path:
"""
Export the full performance report as a PDF.
Returns the path actually written.
If the parent directory of output_path does not exist, it is created automatically.
Raises:
ImportError: reportlab is not installed (the message includes the install command).
"""
if not _REPORTLAB_AVAILABLE:
raise ImportError(
"reportlab is required for PDF export. Install with: pip install reportlab>=4.0.0"
)
try:
from reportlab.platypus import (
SimpleDocTemplate,
Table,
TableStyle,
Paragraph,
Spacer,
Image,
)
from reportlab.lib.pagesizes import A4
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.lib import colors
from reportlab.lib.units import mm
import reportlab  # noqa: F401 — confirm the package is importable
except ImportError as e:
raise ImportError(
f"reportlab 未安裝請執行pip install reportlab>=4.0.0\n原始錯誤:{e}"
) from e
output_path = Path(output_path)
output_path.parent.mkdir(parents=True, exist_ok=True)
doc = SimpleDocTemplate(
str(output_path),
pagesize=A4,
rightMargin=20 * mm,
leftMargin=20 * mm,
topMargin=20 * mm,
bottomMargin=20 * mm,
)
story: list = []
styles = getSampleStyleSheet()
# Cover (built with Paragraph flowables; a canvas-callback cover is hard to test reliably without a GUI)
self._build_cover_paragraphs(story, data, styles, Paragraph, Spacer)
# Benchmark results table
self._build_benchmark_table(story, data, styles, Table, TableStyle, Paragraph, Spacer, colors)
# Trend chart
self._build_trend_chart(story, data, styles, Paragraph, Spacer, Image)
# History table
self._build_history_table(story, data, styles, Table, TableStyle, Paragraph, Spacer, colors)
# Device info
self._build_device_info(story, data, styles, Paragraph, Spacer)
doc.build(story)
return output_path
def _build_cover_page(self, canvas, data: ReportData) -> None:
"""繪製封面報告標題、生成時間、Pipeline 名稱、裝置清單canvas callback 版本)"""
canvas.saveState()
canvas.setFont("Helvetica-Bold", 24)
canvas.drawCentredString(
canvas._pagesize[0] / 2,
canvas._pagesize[1] * 0.65,
data.report_title,
)
canvas.setFont("Helvetica", 12)
canvas.drawCentredString(
canvas._pagesize[0] / 2,
canvas._pagesize[1] * 0.58,
f"生成時間:{self._get_timestamp_str(data.generated_at)}",
)
if data.pipeline_name:
canvas.drawCentredString(
canvas._pagesize[0] / 2,
canvas._pagesize[1] * 0.53,
f"Pipeline{data.pipeline_name}",
)
canvas.drawCentredString(
canvas._pagesize[0] / 2,
canvas._pagesize[1] * 0.48,
f"裝置數量:{len(data.devices)}",
)
canvas.restoreState()
def _build_cover_paragraphs(self, story, data, styles, Paragraph, Spacer) -> None:
"""以 Paragraph flowable 形式建立封面內容(嵌入 story 流)。"""
story.append(Spacer(1, 60))
story.append(Paragraph(data.report_title, styles["Title"]))
story.append(Spacer(1, 12))
story.append(Paragraph(
f"生成時間:{self._get_timestamp_str(data.generated_at)}",
styles["Normal"],
))
if data.pipeline_name:
story.append(Paragraph(f"Pipeline{data.pipeline_name}", styles["Normal"]))
story.append(Paragraph(f"裝置數量:{len(data.devices)}", styles["Normal"]))
story.append(Spacer(1, 30))
def _build_benchmark_table(
self, story, data, styles=None,
Table=None, TableStyle=None, Paragraph=None, Spacer=None, colors=None,
) -> None:
"""
Build the benchmark comparison table (a reportlab Table).
Columns: Metric / Sequential / Parallel / Diff (%)
Metrics: FPS, avg latency (ms), P95 latency (ms), total frames
"""
if Paragraph is None:
return
story.append(Paragraph("Benchmark 結果", styles["Heading1"]))
story.append(Spacer(1, 8))
seq = data.sequential_result
par = data.parallel_result
if seq is None or par is None:
story.append(Paragraph("無 Benchmark 資料", styles["Normal"]))
story.append(Spacer(1, 12))
return
def diff_pct(a, b):
if a:  # non-zero, non-None baseline
return f"{(b - a) / a * 100:+.1f}%"
return ""
table_data = [
["Metric", "Sequential", "Parallel", "Diff (%)"],
["FPS", f"{seq.fps:.1f}", f"{par.fps:.1f}", diff_pct(seq.fps, par.fps)],
["Avg latency (ms)", f"{seq.avg_latency_ms:.1f}", f"{par.avg_latency_ms:.1f}", diff_pct(seq.avg_latency_ms, par.avg_latency_ms)],
["P95 latency (ms)", f"{seq.p95_latency_ms:.1f}", f"{par.p95_latency_ms:.1f}", diff_pct(seq.p95_latency_ms, par.p95_latency_ms)],
["Total frames", str(seq.total_frames), str(par.total_frames), ""],
]
if data.speedup is not None:
table_data.append(["Speedup", "", f"{data.speedup:.2f}x", ""])
t = Table(table_data)
t.setStyle(TableStyle([
("BACKGROUND", (0, 0), (-1, 0), colors.grey),
("TEXTCOLOR", (0, 0), (-1, 0), colors.whitesmoke),
("ALIGN", (0, 0), (-1, -1), "CENTER"),
("FONTNAME", (0, 0), (-1, 0), "Helvetica-Bold"),
("GRID", (0, 0), (-1, -1), 0.5, colors.black),
]))
story.append(t)
story.append(Spacer(1, 20))
def _build_trend_chart(
self, story, data, styles=None,
Paragraph=None, Spacer=None, Image=None,
) -> None:
"""
If data.chart_image_bytes is not None, embed the chart PNG in the PDF;
if it is None, insert a "no chart data" placeholder line instead.
"""
if Paragraph is None:
return
story.append(Paragraph("效能趨勢圖", styles["Heading1"]))
story.append(Spacer(1, 8))
if data.chart_image_bytes is not None:
img_buf = io.BytesIO(data.chart_image_bytes)
img = Image(img_buf, width=400, height=200)
story.append(img)
else:
story.append(Paragraph("(無圖表資料)", styles["Normal"]))
story.append(Spacer(1, 20))
def _build_history_table(
self, story, data, styles=None,
Table=None, TableStyle=None, Paragraph=None, Spacer=None, colors=None,
) -> None:
"""
Build the history table: show at most 20 records; truncate beyond that and add a note.
Columns: Test time / Mode / FPS / Avg latency (ms) / P95 latency (ms)
"""
if Paragraph is None:
return
story.append(Paragraph("歷史記錄", styles["Heading1"]))
story.append(Spacer(1, 8))
records = data.history_records[:20]
truncated = len(data.history_records) > 20
table_data = [["測試時間", "模式", "FPS", "平均延遲(ms)", "P95 延遲(ms)"]]
for r in records:
table_data.append([
self._get_timestamp_str(r.timestamp),
r.mode,
f"{r.fps:.1f}",
f"{r.avg_latency_ms:.1f}",
f"{r.p95_latency_ms:.1f}",
])
if not records:
table_data.append(["(無記錄)", "", "", "", ""])
t = Table(table_data)
t.setStyle(TableStyle([
("BACKGROUND", (0, 0), (-1, 0), colors.grey),
("TEXTCOLOR", (0, 0), (-1, 0), colors.whitesmoke),
("ALIGN", (0, 0), (-1, -1), "CENTER"),
("FONTNAME", (0, 0), (-1, 0), "Helvetica-Bold"),
("GRID", (0, 0), (-1, -1), 0.5, colors.black),
]))
story.append(t)
if truncated:
story.append(Spacer(1, 6))
story.append(Paragraph(
f"(僅顯示最新 20 筆,共 {len(data.history_records)} 筆)",
styles["Normal"],
))
story.append(Spacer(1, 20))
def _build_device_info(
self, story, data, styles=None,
Paragraph=None, Spacer=None,
) -> None:
"""列出測試時連接的裝置清單:裝置 ID、型號、韌體版本、是否啟用。"""
if Paragraph is None:
return
story.append(Paragraph("裝置資訊", styles["Heading1"]))
story.append(Spacer(1, 8))
if not data.devices:
story.append(Paragraph("(無裝置資訊)", styles["Normal"]))
else:
for dev in data.devices:
status = "啟用" if dev.is_active else "停用"
story.append(Paragraph(
f"裝置 {dev.device_id}{dev.product_name},韌體 {dev.firmware_version}{status}",
styles["Normal"],
))
story.append(Spacer(1, 12))
# ------------------------------------------------------------------
# CSV export
# ------------------------------------------------------------------
def export_csv(
self,
data: ReportData,
output_path: "str | Path",
) -> Path:
"""
Export the benchmark results and history records as CSV.
The CSV contains two logical sections separated by a blank row:
1. Benchmark summary (sequential vs parallel comparison)
2. History records (one row per BenchmarkResult)
Returns the path actually written.
Raises:
ValueError: sequential_result or parallel_result is None.
"""
if data.sequential_result is None or data.parallel_result is None:
raise ValueError(
"export_csv() 需要 sequential_result 與 parallel_result但其中一個為 None。"
)
output_path = Path(output_path)
output_path.parent.mkdir(parents=True, exist_ok=True)
seq = data.sequential_result
par = data.parallel_result
def diff_pct(a, b):
if a:  # non-zero, non-None baseline
return f"{(b - a) / a * 100:+.1f}%"
return ""
with output_path.open("w", newline="", encoding="utf-8") as f:
writer = csv.writer(f)
# Section 1: benchmark summary
writer.writerow(["section", "metric", "sequential", "parallel", "diff_pct"])
writer.writerow([
"benchmark_summary", "fps",
f"{seq.fps:.1f}", f"{par.fps:.1f}",
diff_pct(seq.fps, par.fps),
])
writer.writerow([
"benchmark_summary", "avg_latency_ms",
f"{seq.avg_latency_ms:.1f}", f"{par.avg_latency_ms:.1f}",
diff_pct(seq.avg_latency_ms, par.avg_latency_ms),
])
writer.writerow([
"benchmark_summary", "p95_latency_ms",
f"{seq.p95_latency_ms:.1f}", f"{par.p95_latency_ms:.1f}",
diff_pct(seq.p95_latency_ms, par.p95_latency_ms),
])
writer.writerow([
"benchmark_summary", "total_frames",
str(seq.total_frames), str(par.total_frames),
"",
])
speedup_val = f"{data.speedup:.2f}x" if data.speedup is not None else ""
writer.writerow([
"benchmark_summary", "speedup",
"", speedup_val,
"",
])
# blank row separating the sections
writer.writerow([])
# Section 2: history records
writer.writerow(["id", "timestamp", "mode", "fps", "avg_latency_ms", "p95_latency_ms", "total_frames"])
for r in data.history_records:
writer.writerow([
r.id or "",
self._get_timestamp_str(r.timestamp),
r.mode,
f"{r.fps:.1f}",
f"{r.avg_latency_ms:.1f}",
f"{r.p95_latency_ms:.1f}",
str(r.total_frames),
])
return output_path
# ------------------------------------------------------------------
# Helpers
# ------------------------------------------------------------------
@staticmethod
def _get_timestamp_str(ts: float) -> str:
"""將 UNIX timestamp 格式化為 'YYYY-MM-DD HH:MM:SS'(本地時間)。"""
local = time.localtime(ts)
return time.strftime("%Y-%m-%d %H:%M:%S", local)
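
A usage sketch for ReportExporter (metric values and output paths are illustrative; BenchmarkResult is imported from core.performance.benchmarker, as in the unit tests below):

import time
from core.performance.benchmarker import BenchmarkResult
from core.performance.report_exporter import ReportData, ReportExporter

seq = BenchmarkResult(mode="sequential", fps=14.2, avg_latency_ms=70.4,
                      p95_latency_ms=95.0, total_frames=426,
                      timestamp=time.time(), device_config={"KL720": 1})
par = BenchmarkResult(mode="parallel", fps=45.6, avg_latency_ms=21.9,
                      p95_latency_ms=33.0, total_frames=1368,
                      timestamp=time.time(), device_config={"KL720": 3})
data = ReportData(
    pipeline_name="demo.mflow",
    sequential_result=seq,
    parallel_result=par,
    speedup=par.fps / seq.fps,
)
exporter = ReportExporter()
exporter.export_csv(data, "reports/benchmark.csv")      # stdlib only, always available
try:
    exporter.export_pdf(data, "reports/benchmark.pdf")  # requires reportlab
except ImportError:
    pass  # reportlab missing; the CSV export above has already succeeded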


@ -0,0 +1 @@
"""core/templates — Pipeline 設定範本模組。"""

182
core/templates/manager.py Normal file

@ -0,0 +1,182 @@
"""
core/templates/manager.py
TemplateManager: preset pipeline templates for common use cases.
Design notes:
- Three built-in templates (yolov5_detection, fire_detection, dual_model_cascade), defined as constants.
- save_as_template keeps custom templates in memory only; nothing is persisted to disk.
- load_template checks built-in templates first, then custom ones, and raises ValueError when not found.
- The nodes/connections format matches .mflow JSON; id and type are required fields.
"""
from __future__ import annotations
import time
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional
@dataclass
class PipelineTemplate:
"""單一 Pipeline 範本。
屬性
template_id: 唯一識別碼內建範本使用語意名稱自訂範本以 custom_ 開頭
name: 顯示名稱 "YOLOv5 物件偵測"
description: 範本說明
nodes: 節點定義列表格式與 .mflow 相同每個節點至少含 id type
connections: 連線定義列表每條連線含 from to
"""
template_id: str
name: str
description: str
nodes: List[Dict[str, Any]]
connections: List[Dict[str, Any]]
# ---------------------------------------------------------------------------
# Built-in template definitions
# ---------------------------------------------------------------------------
_BUILTIN_TEMPLATES: List[PipelineTemplate] = [
PipelineTemplate(
template_id="yolov5_detection",
name="YOLOv5 物件偵測",
description="標準 YOLOv5 物件偵測流程:輸入影像經前處理後送入模型,後處理輸出邊界框結果。",
nodes=[
{"id": "input_0", "type": "Input", "label": "Input"},
{"id": "preprocess_0", "type": "Preprocess", "label": "Preprocess"},
{"id": "model_0", "type": "Model", "label": "Model"},
{"id": "postprocess_0","type": "Postprocess", "label": "Postprocess"},
{"id": "output_0", "type": "Output", "label": "Output"},
],
connections=[
{"from": "input_0", "to": "preprocess_0"},
{"from": "preprocess_0", "to": "model_0"},
{"from": "model_0", "to": "postprocess_0"},
{"from": "postprocess_0", "to": "output_0"},
],
),
PipelineTemplate(
template_id="fire_detection",
name="火焰偵測分類",
description="火焰偵測流程:影像直接送入模型推論,後處理輸出火焰偵測結果(無前處理節點)。",
nodes=[
{"id": "input_0", "type": "Input", "label": "Input"},
{"id": "model_0", "type": "Model", "label": "Model"},
{"id": "postprocess_0","type": "Postprocess", "label": "Postprocess"},
{"id": "output_0", "type": "Output", "label": "Output"},
],
connections=[
{"from": "input_0", "to": "model_0"},
{"from": "model_0", "to": "postprocess_0"},
{"from": "postprocess_0", "to": "output_0"},
],
),
PipelineTemplate(
template_id="dual_model_cascade",
name="雙模型串接",
description=(
"兩個模型串接的複合推論流程:第一個模型的輸出結果經後處理後,"
"作為第二個模型的輸入,適合先偵測後分類的使用情境。"
),
nodes=[
{"id": "input_0", "type": "Input", "label": "Input"},
{"id": "model_0", "type": "Model", "label": "Model 1"},
{"id": "postprocess_0", "type": "Postprocess", "label": "Postprocess 1"},
{"id": "model_1", "type": "Model", "label": "Model 2"},
{"id": "postprocess_1", "type": "Postprocess", "label": "Postprocess 2"},
{"id": "output_0", "type": "Output", "label": "Output"},
],
connections=[
{"from": "input_0", "to": "model_0"},
{"from": "model_0", "to": "postprocess_0"},
{"from": "postprocess_0", "to": "model_1"},
{"from": "model_1", "to": "postprocess_1"},
{"from": "postprocess_1", "to": "output_0"},
],
),
]
# Fast lookup dict keyed by template_id
_BUILTIN_BY_ID: Dict[str, PipelineTemplate] = {
t.template_id: t for t in _BUILTIN_TEMPLATES
}
# ---------------------------------------------------------------------------
# TemplateManager
# ---------------------------------------------------------------------------
class TemplateManager:
"""管理內建與自訂 Pipeline 範本。
自訂範本儲存於記憶體每個 TemplateManager 實例各自獨立
"""
def __init__(self) -> None:
# Custom templates: {template_id: PipelineTemplate}
self._custom: Dict[str, PipelineTemplate] = {}
# ------------------------------------------------------------------
# Public interface
# ------------------------------------------------------------------
def get_builtin_templates(self) -> List[PipelineTemplate]:
"""回傳所有內建範本的清單(共 3 個)。
回傳
PipelineTemplate 列表不含自訂範本
"""
return list(_BUILTIN_TEMPLATES)
def load_template(self, template_id: str) -> PipelineTemplate:
"""依 template_id 載入範本。
查找順序內建範本 自訂範本
參數
template_id: 範本唯一識別碼
回傳
對應的 PipelineTemplate
引發
ValueError: template_id 不存在於任何範本時
"""
if template_id in _BUILTIN_BY_ID:
return _BUILTIN_BY_ID[template_id]
if template_id in self._custom:
return self._custom[template_id]
raise ValueError(f"Template {template_id} not found")
def save_as_template(
self,
pipeline_config: Dict[str, Any],
name: str,
description: str,
) -> PipelineTemplate:
"""將 Pipeline 設定儲存為新的自訂範本。
參數
pipeline_config: 包含 nodes connections 列表的字典
name: 範本顯示名稱
description: 範本說明
回傳
新建立的 PipelineTemplatetemplate_id custom_ 開頭
"""
safe_name = name.lower().replace(" ", "_")
template_id = f"custom_{safe_name}_{int(time.time() * 1000)}"
template = PipelineTemplate(
template_id=template_id,
name=name,
description=description,
nodes=list(pipeline_config.get("nodes", [])),
connections=list(pipeline_config.get("connections", [])),
)
self._custom[template_id] = template
return template
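
A short usage sketch for TemplateManager (illustrative only; the template id is one of the built-ins defined above):

from core.templates.manager import TemplateManager

manager = TemplateManager()
for tpl in manager.get_builtin_templates():
    print(tpl.template_id, tpl.name)        # the 3 built-in templates
base = manager.load_template("yolov5_detection")
# Derive an in-memory custom template from the built-in one.
custom = manager.save_as_template(
    {"nodes": base.nodes, "connections": base.connections},
    name="My Detection",
    description="Copy of the YOLOv5 template",
)
assert custom.template_id.startswith("custom_")
assert manager.load_template(custom.template_id) is custom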


@ -233,10 +233,10 @@ class SingleInstance:
def setup_application():
"""Initialize and configure the QApplication."""
- # Enable high DPI support BEFORE creating QApplication
- QApplication.setAttribute(Qt.AA_EnableHighDpiScaling, True)
- QApplication.setAttribute(Qt.AA_UseHighDpiPixmaps, True)
+ # High DPI attributes must be set before QApplication is created.
+ # They are set in main() before the first QApplication instantiation.
+ # Do NOT set them here — QApplication already exists at this point.
# Create QApplication if it doesn't exist
if not QApplication.instance():
app = QApplication(sys.argv)


@ -8,3 +8,11 @@ dependencies = [
"nodegraphqt>=0.6.40",
"pyqt5>=5.15.11",
]
[tool.pytest.ini_options]
testpaths = ["tests/unit"]
pythonpath = ["."]
addopts = "--import-mode=importlib"
python_files = ["test_*.py"]
python_classes = ["Test*"]
python_functions = ["test_*", "should_*"]
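
The python_functions entry widens collection beyond the pytest default: BDD-style should_* functions are picked up alongside test_* ones, which the unit tests below use heavily. A hypothetical file illustrating what gets collected under this config:

# tests/unit/test_naming_demo.py (hypothetical illustration)
class TestNamingDemo:                      # collected: matches python_classes "Test*"
    def test_conventional_name(self):      # collected via "test_*"
        assert True
    def should_use_bdd_style_name(self):   # collected via "should_*"
        assert True
    def helper_not_collected(self):        # ignored: matches neither pattern
        assert True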

46
tests/conftest.py Normal file

@ -0,0 +1,46 @@
"""
tests/conftest.py: unit-test environment setup.
Because this conftest.py sits in the tests/ directory (a Python package),
mock injection completes before the root __init__.py can be triggered.
This allows the pure-Python logic in core/performance/ to be tested in an
environment without Kneron NPU hardware, PyQt5, or NodeGraphQt.
"""
import sys
from unittest.mock import MagicMock
def _install_mock(name: str) -> None:
"""若模組尚未存在,安裝空 MagicMock 作為替代。"""
if name not in sys.modules:
sys.modules[name] = MagicMock()
# Kneron KP SDK (needs a hardware driver)
_install_mock("kp")
# NumPy (may not be installed)
try:
import numpy # noqa: F401
except ImportError:
_install_mock("numpy")
# PyQt5 modules (need a GUI environment)
for _mod in [
"PyQt5",
"PyQt5.QtWidgets",
"PyQt5.QtCore",
"PyQt5.QtGui",
"PyQt5.QtChart",
]:
_install_mock(_mod)
# NodeGraphQt (depends on PyQt5)
_install_mock("NodeGraphQt")
_install_mock("NodeGraphQt.constants")
_install_mock("NodeGraphQt.base")
_install_mock("NodeGraphQt.base.node")
# OpenCV (may not be installed)
_install_mock("cv2")

295
tests/unit/conftest.py Normal file

@ -0,0 +1,295 @@
"""
pytest conftest.py: unit-test environment setup.
This test environment has neither Kneron NPU hardware nor the PyQt5 GUI libraries.
To make the pure-Python core/ and ui/ modules testable, mock modules are
injected before test collection, so imports never trigger hardware/GUI initialization.
UI component tests need base classes such as QWidget to be subclassable and
instantiated repeatedly, so lightweight stubs replace MagicMock as the PyQt5 widget bases.
"""
import sys
from unittest.mock import MagicMock
def _install_mock(name: str) -> None:
"""若模組尚未存在,安裝空 MagicMock 作為替代。"""
if name not in sys.modules:
sys.modules[name] = MagicMock()
# Kneron KP SDK (needs a hardware driver)
_install_mock("kp")
# NumPy (may not be installed)
try:
import numpy # noqa: F401
except ImportError:
_install_mock("numpy")
# OpenCV (may not be installed)
_install_mock("cv2")
# NodeGraphQt (depends on PyQt5)
_install_mock("NodeGraphQt")
_install_mock("NodeGraphQt.constants")
_install_mock("NodeGraphQt.base")
_install_mock("NodeGraphQt.base.node")
# ---------------------------------------------------------------------------
# PyQt5 stubs — let QWidget/QDialog subclasses be inherited and instantiated repeatedly.
# Plain Python classes replace MagicMock to avoid side_effect exhaustion when mocks are subclassed.
# ---------------------------------------------------------------------------
class _StubQObject:
"""所有 Qt 物件的基底 Stub。"""
def __init__(self, *args, **kwargs):
pass
class _StubQWidget(_StubQObject):
"""QWidget Stub可被繼承支援多次實例化。提供常用 QWidget 方法的空實作。"""
def setLayout(self, layout):
pass
def setParent(self, parent):
pass
def show(self):
pass
def hide(self):
pass
def setVisible(self, visible: bool):
pass
def setEnabled(self, enabled: bool):
pass
def isEnabled(self) -> bool:
return True
def setObjectName(self, name: str):
pass
def setStyleSheet(self, style: str):
pass
def setMinimumWidth(self, w: int):
pass
def setMinimumHeight(self, h: int):
pass
def setMaximumWidth(self, w: int):
pass
def setMaximumHeight(self, h: int):
pass
def resize(self, *args):
pass
def setWindowTitle(self, title: str):
pass
def setSizePolicy(self, *args):
pass
def update(self):
pass
def repaint(self):
pass
def close(self):
pass
def font(self):
return MagicMock()
def setFont(self, font):
pass
class _StubQDialog(_StubQWidget):
"""QDialog Stub。"""
Accepted = 1
Rejected = 0
def exec_(self):
return self.Accepted
def accept(self):
pass
def reject(self):
pass
class _StubQLabel(_StubQWidget):
"""QLabel Stub追蹤 setText 呼叫,可在測試中驗證顯示文字。"""
def __init__(self, text: str = "", parent=None):
super().__init__(parent)
self._text = text
self.setText = MagicMock(side_effect=self._set_text)
def _set_text(self, text: str) -> None:
self._text = text
def text(self) -> str:
return self._text
class _StubLayout(_StubQObject):
"""QLayout Stub忽略所有 add* 呼叫。"""
def addWidget(self, *args, **kwargs):
pass
def addLayout(self, *args, **kwargs):
pass
def addStretch(self, *args, **kwargs):
pass
def setSpacing(self, *args, **kwargs):
pass
def setContentsMargins(self, *args, **kwargs):
pass
class _StubQVBoxLayout(_StubLayout):
pass
class _StubQHBoxLayout(_StubLayout):
pass
class _StubQProgressBar(_StubQWidget):
def __init__(self, parent=None):
super().__init__(parent)
self._value = 0
self._maximum = 100
self._minimum = 0
self.setValue = MagicMock(side_effect=self._set_value)
def _set_value(self, v: int) -> None:
self._value = v
def value(self) -> int:
return self._value
def setMaximum(self, v: int) -> None:
self._maximum = v
def setMinimum(self, v: int) -> None:
self._minimum = v
class _StubQTableWidget(_StubQWidget):
def __init__(self, *args, **kwargs):
super().__init__()
self.setItem = MagicMock()
self.setHorizontalHeaderLabels = MagicMock()
class _StubQPushButton(_StubQWidget):
def __init__(self, text: str = "", parent=None):
super().__init__(parent)
self._text = text
self._enabled = True
self.clicked = MagicMock()
self.setEnabled = MagicMock(side_effect=self._set_enabled)
def _set_enabled(self, enabled: bool) -> None:
self._enabled = enabled
def isEnabled(self) -> bool:
return self._enabled
def _make_pyqt_signal(*args, **kwargs):
"""pyqtSignal Stub回傳可 connect/emit 的 MagicMock。"""
signal = MagicMock()
signal.connect = MagicMock()
signal.emit = MagicMock()
return signal
def _make_qthread():
"""QThread Stub。"""
class _StubQThread(_StubQObject):
started = MagicMock()
finished = MagicMock()
def start(self):
pass
def isRunning(self):
return False
def wait(self):
pass
def run(self):
pass
def deleteLater(self):
pass
return _StubQThread
# Build the PyQt5.QtWidgets mock module (MagicMock base with key classes overridden)
_qtwidgets_mock = MagicMock()
_qtwidgets_mock.QWidget = _StubQWidget
_qtwidgets_mock.QDialog = _StubQDialog
_qtwidgets_mock.QLabel = _StubQLabel
_qtwidgets_mock.QVBoxLayout = _StubQVBoxLayout
_qtwidgets_mock.QHBoxLayout = _StubQHBoxLayout
_qtwidgets_mock.QProgressBar = _StubQProgressBar
_qtwidgets_mock.QTableWidget = _StubQTableWidget
_qtwidgets_mock.QPushButton = _StubQPushButton
_qtwidgets_mock.QSizePolicy = MagicMock()
_qtwidgets_mock.QTableWidgetItem = MagicMock()
_qtwidgets_mock.QHeaderView = MagicMock()
_qtwidgets_mock.QMessageBox = MagicMock()
_qtwidgets_mock.QApplication = MagicMock()
_qtwidgets_mock.QGroupBox = _StubQWidget
_qtwidgets_mock.QFrame = _StubQWidget
_qtwidgets_mock.QScrollArea = _StubQWidget
_qtwidgets_mock.QSpinBox = _StubQWidget
_qtwidgets_mock.QComboBox = _StubQWidget
_qtwidgets_mock.QCheckBox = _StubQWidget
# Build the PyQt5.QtCore mock module
_qtcore_mock = MagicMock()
_qtcore_mock.pyqtSignal = _make_pyqt_signal
_qtcore_mock.QThread = _make_qthread()
_qtcore_mock.Qt = MagicMock()
_qtcore_mock.QTimer = MagicMock()
_qtcore_mock.QObject = _StubQObject
# Build the PyQt5.QtGui mock module
_qtgui_mock = MagicMock()
# Build the top-level PyQt5 mock
_pyqt5_mock = MagicMock()
_pyqt5_mock.QtWidgets = _qtwidgets_mock
_pyqt5_mock.QtCore = _qtcore_mock
_pyqt5_mock.QtGui = _qtgui_mock
sys.modules["PyQt5"] = _pyqt5_mock
sys.modules["PyQt5.QtWidgets"] = _qtwidgets_mock
sys.modules["PyQt5.QtCore"] = _qtcore_mock
sys.modules["PyQt5.QtGui"] = _qtgui_mock
sys.modules["PyQt5.QtChart"] = MagicMock()
# pyqtgraph (optional)
_install_mock("pyqtgraph")


@ -0,0 +1,134 @@
"""
Unit tests for BenchmarkDialog.
Test strategy:
- PyQt5 is unavailable in CI; the stubs injected by conftest.py bypass the import.
- The tests verify BenchmarkDialog behaviour:
- the dialog can be constructed normally
- the start button is disabled when pipeline_config is empty
- show_result displays the speedup text correctly
- update_progress updates the progress-bar value
"""
import pytest
from unittest.mock import MagicMock
# ---------------------------------------------------------------------------
# Test: BenchmarkDialog can be constructed
# ---------------------------------------------------------------------------
class TestBenchmarkDialogInit:
def should_be_importable(self):
"""BenchmarkDialog 模組應可匯入(即使 PyQt5 被 Stub"""
from ui.dialogs.benchmark_dialog import BenchmarkDialog
assert BenchmarkDialog is not None
def should_instantiate_with_valid_config(self):
"""提供非空 pipeline_config 時BenchmarkDialog 應可正常建立。"""
from ui.dialogs.benchmark_dialog import BenchmarkDialog
stage_config = MagicMock()
dialog = BenchmarkDialog(parent=None, pipeline_config=[stage_config])
assert dialog is not None
def should_instantiate_with_empty_config(self):
"""pipeline_config 為空時BenchmarkDialog 應可建立(不應拋出例外)。"""
from ui.dialogs.benchmark_dialog import BenchmarkDialog
dialog = BenchmarkDialog(parent=None, pipeline_config=[])
assert dialog is not None
# ---------------------------------------------------------------------------
# Test: the start button is disabled when pipeline_config is empty
# ---------------------------------------------------------------------------
class TestStartButtonDisabledWhenEmptyConfig:
def should_disable_start_button_when_pipeline_config_is_empty(self):
"""pipeline_config 為空時start_button 應被禁用。"""
from ui.dialogs.benchmark_dialog import BenchmarkDialog
dialog = BenchmarkDialog(parent=None, pipeline_config=[])
assert dialog.start_button.isEnabled() is False
def should_enable_start_button_when_pipeline_config_has_stages(self):
"""pipeline_config 有 Stage 時start_button 應為啟用狀態。"""
from ui.dialogs.benchmark_dialog import BenchmarkDialog
stage_config = MagicMock()
dialog = BenchmarkDialog(parent=None, pipeline_config=[stage_config])
assert dialog.start_button.isEnabled() is True
def should_show_info_label_when_pipeline_config_is_empty(self):
"""pipeline_config 為空時,應有提示訊息 label 顯示。"""
from ui.dialogs.benchmark_dialog import BenchmarkDialog
dialog = BenchmarkDialog(parent=None, pipeline_config=[])
assert hasattr(dialog, "info_label")
# ---------------------------------------------------------------------------
# Test: show_result displays the speedup
# ---------------------------------------------------------------------------
class TestShowResult:
def should_display_speedup_text_with_x_suffix(self):
"""show_result 後speedup_label 的文字應包含倍數數值與 'x'"""
from ui.dialogs.benchmark_dialog import BenchmarkDialog
stage = MagicMock()
dialog = BenchmarkDialog(parent=None, pipeline_config=[stage])
seq_result = MagicMock()
par_result = MagicMock()
dialog.show_result(seq_result, par_result, speedup=3.2)
call_arg = dialog.speedup_label.setText.call_args[0][0]
assert "3.2" in call_arg
assert "x" in call_arg.lower() or "X" in call_arg
def should_display_faster_in_speedup_text(self):
"""show_result 後speedup_label 文字應包含 'FASTER''faster'"""
from ui.dialogs.benchmark_dialog import BenchmarkDialog
stage = MagicMock()
dialog = BenchmarkDialog(parent=None, pipeline_config=[stage])
seq_result = MagicMock()
par_result = MagicMock()
dialog.show_result(seq_result, par_result, speedup=2.5)
call_arg = dialog.speedup_label.setText.call_args[0][0]
assert "FASTER" in call_arg or "faster" in call_arg
def should_store_seq_result(self):
"""show_result 後seq_result 應儲存在 dialog 上。"""
from ui.dialogs.benchmark_dialog import BenchmarkDialog
stage = MagicMock()
dialog = BenchmarkDialog(parent=None, pipeline_config=[stage])
seq_result = MagicMock()
par_result = MagicMock()
dialog.show_result(seq_result, par_result, speedup=1.8)
assert dialog.seq_result is seq_result
def should_store_par_result(self):
"""show_result 後par_result 應儲存在 dialog 上。"""
from ui.dialogs.benchmark_dialog import BenchmarkDialog
stage = MagicMock()
dialog = BenchmarkDialog(parent=None, pipeline_config=[stage])
seq_result = MagicMock()
par_result = MagicMock()
dialog.show_result(seq_result, par_result, speedup=1.8)
assert dialog.par_result is par_result
# ---------------------------------------------------------------------------
# Test: update_progress updates the progress bar
# ---------------------------------------------------------------------------
class TestUpdateProgress:
def should_update_progress_bar_value(self):
"""update_progress 應將進度條值更新為傳入的 value。"""
from ui.dialogs.benchmark_dialog import BenchmarkDialog
stage = MagicMock()
dialog = BenchmarkDialog(parent=None, pipeline_config=[stage])
dialog.progress_bar.setValue.reset_mock()
dialog.update_progress("warmup", 42)
dialog.progress_bar.setValue.assert_called_once_with(42)
def should_store_current_phase(self):
"""update_progress 應儲存當前 phase 名稱。"""
from ui.dialogs.benchmark_dialog import BenchmarkDialog
stage = MagicMock()
dialog = BenchmarkDialog(parent=None, pipeline_config=[stage])
dialog.update_progress("sequential", 70)
assert dialog.current_phase == "sequential"


@ -0,0 +1,282 @@
"""
Unit tests for PerformanceBenchmarker.
Test strategy:
- validate the BenchmarkConfig / BenchmarkResult data structures
- calculate_speedup(): pure computation logic
- run_sequential_benchmark() / run_parallel_benchmark() are exercised through an
injected inference_runner callable (mocked), so no real hardware is needed
- run_full_benchmark(): the integrated flow
"""
import time
import pytest
from unittest.mock import MagicMock, patch
from core.performance.benchmarker import (
BenchmarkConfig,
BenchmarkResult,
PerformanceBenchmarker,
)
# ---------------------------------------------------------------------------
# Helpers: build test data structures
# ---------------------------------------------------------------------------
def make_config(**kwargs) -> BenchmarkConfig:
"""建立測試用 BenchmarkConfig提供合理的預設值。"""
defaults = dict(
pipeline_config=[],
test_duration_seconds=1.0,
warmup_frames=2,
test_input_source="test_video.mp4",
)
defaults.update(kwargs)
return BenchmarkConfig(**defaults)
def make_result(mode: str = "sequential", fps: float = 30.0) -> BenchmarkResult:
"""建立測試用 BenchmarkResult。"""
avg_latency_ms = (1000.0 / fps) if fps > 0 else 0.0
return BenchmarkResult(
mode=mode,
fps=fps,
avg_latency_ms=avg_latency_ms,
p95_latency_ms=avg_latency_ms * 1.5,
total_frames=int(fps * 30),
timestamp=time.time(),
device_config={"KL520": 1},
)
# ---------------------------------------------------------------------------
# Test: the BenchmarkConfig data structure
# ---------------------------------------------------------------------------
class TestBenchmarkConfig:
def should_have_default_duration_30_seconds(self):
"""test_duration_seconds 預設值應為 30.0。"""
config = BenchmarkConfig(
pipeline_config=[],
test_input_source="video.mp4",
)
assert config.test_duration_seconds == 30.0
def should_have_default_warmup_50_frames(self):
"""warmup_frames 預設值應為 50。"""
config = BenchmarkConfig(
pipeline_config=[],
test_input_source="video.mp4",
)
assert config.warmup_frames == 50
def should_allow_custom_duration(self):
"""應可自訂 test_duration_seconds。"""
config = BenchmarkConfig(
pipeline_config=[],
test_input_source="video.mp4",
test_duration_seconds=10.0,
)
assert config.test_duration_seconds == 10.0
# ---------------------------------------------------------------------------
# Test: the BenchmarkResult data structure
# ---------------------------------------------------------------------------
class TestBenchmarkResult:
def should_store_all_required_fields(self):
"""BenchmarkResult 應儲存所有規格要求的欄位。"""
ts = time.time()
result = BenchmarkResult(
mode="parallel",
fps=45.2,
avg_latency_ms=22.1,
p95_latency_ms=35.0,
total_frames=1356,
timestamp=ts,
device_config={"KL720": 2},
)
assert result.mode == "parallel"
assert result.fps == pytest.approx(45.2)
assert result.avg_latency_ms == pytest.approx(22.1)
assert result.p95_latency_ms == pytest.approx(35.0)
assert result.total_frames == 1356
assert result.timestamp == pytest.approx(ts)
assert result.device_config == {"KL720": 2}
def should_accept_sequential_mode(self):
"""mode 欄位應接受 'sequential'"""
result = make_result(mode="sequential")
assert result.mode == "sequential"
def should_accept_parallel_mode(self):
"""mode 欄位應接受 'parallel'"""
result = make_result(mode="parallel")
assert result.mode == "parallel"
# ---------------------------------------------------------------------------
# Test: calculate_speedup (pure computation, no external dependencies)
# ---------------------------------------------------------------------------
class TestCalculateSpeedup:
def should_return_ratio_of_parallel_to_sequential_fps(self):
"""calculate_speedup 應回傳 par.fps / seq.fps。"""
benchmarker = PerformanceBenchmarker()
seq = make_result(mode="sequential", fps=20.0)
par = make_result(mode="parallel", fps=60.0)
speedup = benchmarker.calculate_speedup(seq, par)
assert speedup == pytest.approx(3.0)
def should_return_one_when_same_fps(self):
"""相同 FPS 時 speedup 應為 1.0。"""
benchmarker = PerformanceBenchmarker()
result = make_result(fps=30.0)
speedup = benchmarker.calculate_speedup(result, result)
assert speedup == pytest.approx(1.0)
def should_raise_when_sequential_fps_is_zero(self):
"""seq.fps 為 0 時應引發 ValueError避免除以零。"""
benchmarker = PerformanceBenchmarker()
seq = make_result(fps=0.0)
par = make_result(fps=30.0)
with pytest.raises(ValueError):
benchmarker.calculate_speedup(seq, par)
# ---------------------------------------------------------------------------
# Test: run_sequential_benchmark (mocked inference_runner)
# ---------------------------------------------------------------------------
class TestRunSequentialBenchmark:
def should_return_benchmark_result_with_sequential_mode(self):
"""run_sequential_benchmark() 應回傳 mode='sequential' 的 BenchmarkResult。"""
benchmarker = PerformanceBenchmarker()
config = make_config(warmup_frames=1, test_duration_seconds=0.1)
# Mock inference_runner: each call simulates a 10 ms inference
def fake_runner(frame_data):
time.sleep(0.01)
return {"result": "ok"}
result = benchmarker.run_sequential_benchmark(config, inference_runner=fake_runner)
assert isinstance(result, BenchmarkResult)
assert result.mode == "sequential"
def should_report_positive_fps(self):
"""FPS 應大於 0。"""
benchmarker = PerformanceBenchmarker()
config = make_config(warmup_frames=1, test_duration_seconds=0.1)
def fake_runner(frame_data):
time.sleep(0.01)
return {}
result = benchmarker.run_sequential_benchmark(config, inference_runner=fake_runner)
assert result.fps > 0
def should_report_positive_latency(self):
"""avg_latency_ms 和 p95_latency_ms 應大於 0。"""
benchmarker = PerformanceBenchmarker()
config = make_config(warmup_frames=1, test_duration_seconds=0.1)
def fake_runner(frame_data):
time.sleep(0.01)
return {}
result = benchmarker.run_sequential_benchmark(config, inference_runner=fake_runner)
assert result.avg_latency_ms > 0
assert result.p95_latency_ms > 0
def should_count_frames_excluding_warmup(self):
"""total_frames 不應包含暖機幀數。"""
benchmarker = PerformanceBenchmarker()
call_times = []
def fake_runner(frame_data):
call_times.append(time.time())
time.sleep(0.005)
return {}
config = make_config(warmup_frames=3, test_duration_seconds=0.1)
result = benchmarker.run_sequential_benchmark(config, inference_runner=fake_runner)
# warmup frames are not counted in total_frames
assert result.total_frames < len(call_times)
assert result.total_frames > 0
def should_use_device_config_from_benchmarker(self):
"""BenchmarkResult.device_config 應由 PerformanceBenchmarker 填寫。"""
benchmarker = PerformanceBenchmarker(device_config={"KL520": 1})
config = make_config(warmup_frames=1, test_duration_seconds=0.05)
def fake_runner(frame_data):
return {}
result = benchmarker.run_sequential_benchmark(config, inference_runner=fake_runner)
assert result.device_config == {"KL520": 1}
# ---------------------------------------------------------------------------
# Test: run_parallel_benchmark (mocked inference_runner)
# ---------------------------------------------------------------------------
class TestRunParallelBenchmark:
def should_return_benchmark_result_with_parallel_mode(self):
"""run_parallel_benchmark() 應回傳 mode='parallel' 的 BenchmarkResult。"""
benchmarker = PerformanceBenchmarker()
config = make_config(warmup_frames=1, test_duration_seconds=0.1)
def fake_runner(frame_data):
time.sleep(0.01)
return {}
result = benchmarker.run_parallel_benchmark(config, inference_runner=fake_runner)
assert isinstance(result, BenchmarkResult)
assert result.mode == "parallel"
# ---------------------------------------------------------------------------
# Test: run_full_benchmark
# ---------------------------------------------------------------------------
class TestRunFullBenchmark:
def should_return_tuple_of_seq_par_speedup(self):
"""run_full_benchmark() 應回傳 (BenchmarkResult, BenchmarkResult, float)。"""
benchmarker = PerformanceBenchmarker()
config = make_config(warmup_frames=1, test_duration_seconds=0.05)
def fast_runner(frame_data):
time.sleep(0.005)
return {}
seq_result, par_result, speedup = benchmarker.run_full_benchmark(
config, inference_runner=fast_runner
)
assert isinstance(seq_result, BenchmarkResult)
assert isinstance(par_result, BenchmarkResult)
assert isinstance(speedup, float)
assert seq_result.mode == "sequential"
assert par_result.mode == "parallel"
def should_calculate_speedup_consistently(self):
"""speedup 應與 calculate_speedup(seq, par) 的結果一致。"""
benchmarker = PerformanceBenchmarker()
config = make_config(warmup_frames=1, test_duration_seconds=0.05)
def fake_runner(frame_data):
time.sleep(0.005)
return {}
seq_result, par_result, speedup = benchmarker.run_full_benchmark(
config, inference_runner=fake_runner
)
expected_speedup = benchmarker.calculate_speedup(seq_result, par_result)
assert speedup == pytest.approx(expected_speedup)
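
The benchmarker implementation itself is not part of this diff; for orientation, here is a minimal measurement loop consistent with the contract these tests pin down (warmup frames run but are not counted, measurement is duration-bounded, and percentiles come from the recorded latencies). A sketch only, not the project's code:

import time
from statistics import quantiles

def timed_loop(config, inference_runner):
    for _ in range(config.warmup_frames):      # warmup: run but do not count
        inference_runner(None)
    latencies_ms = []
    deadline = time.time() + config.test_duration_seconds
    while time.time() < deadline:              # measure until the duration elapses
        start = time.time()
        inference_runner(None)
        latencies_ms.append((time.time() - start) * 1000.0)
    if not latencies_ms:
        return 0.0, 0.0, 0.0, 0
    fps = len(latencies_ms) / config.test_duration_seconds
    avg = sum(latencies_ms) / len(latencies_ms)
    p95 = quantiles(latencies_ms, n=20)[-1] if len(latencies_ms) > 1 else latencies_ms[0]
    return fps, avg, p95, len(latencies_ms)    # frame count excludes warmup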


@ -0,0 +1,43 @@
"""
tests/unit/test_bottleneck.py
Unit tests for the BottleneckAlert dataclass.
TDD: Red phase tests written before implementation.
"""
import pytest
from core.device.bottleneck import BottleneckAlert
class TestBottleneckAlert:
def test_fields_accessible(self):
alert = BottleneckAlert(
stage_id="stage-1",
queue_fill_rate=0.85,
suggested_action="Add more Dongles to this stage",
severity="warning",
)
assert alert.stage_id == "stage-1"
assert alert.queue_fill_rate == 0.85
assert alert.suggested_action == "Add more Dongles to this stage"
assert alert.severity == "warning"
def test_severity_critical(self):
alert = BottleneckAlert(
stage_id="stage-2",
queue_fill_rate=0.95,
suggested_action="Urgent: add Dongles",
severity="critical",
)
assert alert.severity == "critical"
def test_dataclass_equality(self):
a = BottleneckAlert("s1", 0.9, "action", "warning")
b = BottleneckAlert("s1", 0.9, "action", "warning")
assert a == b
def test_dataclass_inequality(self):
a = BottleneckAlert("s1", 0.9, "action", "warning")
b = BottleneckAlert("s1", 0.5, "action", "warning")
assert a != b
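
These tests fully determine the dataclass, and the positional construction in test_dataclass_equality also fixes the field order. A definition that satisfies them (a sketch consistent with the tests, not necessarily the file's exact contents):

from dataclasses import dataclass

@dataclass
class BottleneckAlert:
    stage_id: str
    queue_fill_rate: float   # fraction of queue capacity in use, 0.0 to 1.0
    suggested_action: str
    severity: str            # "warning" or "critical" in the tests above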


@ -0,0 +1,106 @@
"""
tests/unit/test_device_management_panel.py
Unit tests for DeviceManagementPanel QWidget.
TDD: Red phase tests written before implementation.
Uses conftest.py Stubs for PyQt5 so no display hardware is needed.
"""
from unittest.mock import MagicMock, patch
import pytest
from core.device.device_manager import DeviceInfo, DeviceManager
from ui.components.device_management_panel import DeviceManagementPanel
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
def _make_device_manager(devices=None):
"""Return a DeviceManager-like mock with controllable scan_devices()."""
mgr = MagicMock(spec=DeviceManager)
if devices is None:
devices = [
DeviceInfo(
device_id="usb-1",
series="KL520",
product_id=0x100,
status="online",
gops=2,
assigned_stage=None,
current_fps=15.0,
utilization_pct=50.0,
)
]
mgr.scan_devices.return_value = devices
mgr.get_device_statistics.return_value = {d.device_id: d for d in devices}
mgr.get_load_balance_recommendation.return_value = {}
return mgr
# ---------------------------------------------------------------------------
# Panel instantiation
# ---------------------------------------------------------------------------
class TestDeviceManagementPanelInit:
def test_panel_creates_without_error(self):
mgr = _make_device_manager()
panel = DeviceManagementPanel(device_manager=mgr)
assert panel is not None
def test_panel_has_auto_balance_button(self):
mgr = _make_device_manager()
panel = DeviceManagementPanel(device_manager=mgr)
# auto_balance_button must exist
assert hasattr(panel, "auto_balance_button")
def test_auto_balance_button_text(self):
mgr = _make_device_manager()
panel = DeviceManagementPanel(device_manager=mgr)
assert panel.auto_balance_button._text == "Auto Balance"
# ---------------------------------------------------------------------------
# refresh()
# ---------------------------------------------------------------------------
class TestDeviceManagementPanelRefresh:
def test_refresh_calls_scan_devices(self):
mgr = _make_device_manager()
panel = DeviceManagementPanel(device_manager=mgr)
mgr.scan_devices.reset_mock()
panel.refresh()
mgr.scan_devices.assert_called_once()
def test_refresh_updates_known_devices(self):
mgr = _make_device_manager()
panel = DeviceManagementPanel(device_manager=mgr)
panel.refresh()
# After refresh, panel should have device data accessible
assert len(panel._devices) == 1
assert panel._devices[0].device_id == "usb-1"
def test_refresh_with_no_devices_sets_empty_list(self):
mgr = _make_device_manager(devices=[])
panel = DeviceManagementPanel(device_manager=mgr)
panel.refresh()
assert panel._devices == []
# ---------------------------------------------------------------------------
# set_auto_refresh()
# ---------------------------------------------------------------------------
class TestSetAutoRefresh:
def test_set_auto_refresh_stores_interval(self):
mgr = _make_device_manager()
panel = DeviceManagementPanel(device_manager=mgr)
panel.set_auto_refresh(interval_ms=3000)
assert panel._auto_refresh_interval_ms == 3000
def test_set_auto_refresh_default_interval(self):
mgr = _make_device_manager()
panel = DeviceManagementPanel(device_manager=mgr)
panel.set_auto_refresh()
assert panel._auto_refresh_interval_ms == 2000


@ -0,0 +1,291 @@
"""
tests/unit/test_device_manager.py
Unit tests for DeviceManager, DeviceInfo, DeviceHealth.
TDD: Red phase tests written before implementation.
"""
from unittest.mock import MagicMock
import pytest
from core.device.device_manager import DeviceInfo, DeviceHealth, DeviceManager
# ---------------------------------------------------------------------------
# Fixtures
# ---------------------------------------------------------------------------
def _make_mock_kp_api(devices):
"""Build a minimal kp API mock whose scan_devices() returns a descriptor list."""
descriptor_list = MagicMock()
descriptor_list.device_descriptor_number = len(devices)
mock_descs = []
for d in devices:
desc = MagicMock()
desc.usb_port_id = d["port_id"]
desc.product_id = d["product_id"]
desc.kn_number = d.get("kn_number", 0)
mock_descs.append(desc)
descriptor_list.device_descriptor_list = mock_descs
kp_api = MagicMock()
kp_api.core.scan_devices.return_value = descriptor_list
return kp_api
@pytest.fixture
def two_device_kp():
"""Mock kp API returning one KL520 and one KL720."""
return _make_mock_kp_api([
{"port_id": 1, "product_id": 0x100}, # KL520
{"port_id": 2, "product_id": 0x720}, # KL720
])
@pytest.fixture
def empty_kp():
"""Mock kp API returning no devices."""
descriptor_list = MagicMock()
descriptor_list.device_descriptor_number = 0
descriptor_list.device_descriptor_list = []
kp_api = MagicMock()
kp_api.core.scan_devices.return_value = descriptor_list
return kp_api
# ---------------------------------------------------------------------------
# DeviceInfo dataclass
# ---------------------------------------------------------------------------
class TestDeviceInfo:
def test_fields_accessible(self):
info = DeviceInfo(
device_id="usb-1",
series="KL520",
product_id=0x100,
status="online",
gops=2,
assigned_stage=None,
current_fps=0.0,
utilization_pct=0.0,
)
assert info.device_id == "usb-1"
assert info.series == "KL520"
assert info.product_id == 0x100
assert info.status == "online"
assert info.gops == 2
assert info.assigned_stage is None
assert info.current_fps == 0.0
assert info.utilization_pct == 0.0
# ---------------------------------------------------------------------------
# DeviceHealth dataclass
# ---------------------------------------------------------------------------
class TestDeviceHealth:
def test_fields_accessible(self):
health = DeviceHealth(
device_id="usb-1",
temperature_celsius=None,
error_count=0,
last_error=None,
uptime_seconds=120.0,
)
assert health.device_id == "usb-1"
assert health.temperature_celsius is None
assert health.error_count == 0
assert health.last_error is None
assert health.uptime_seconds == 120.0
# ---------------------------------------------------------------------------
# DeviceManager.scan_devices
# ---------------------------------------------------------------------------
class TestScanDevices:
def test_returns_list_of_device_info(self, two_device_kp):
mgr = DeviceManager(kp_api=two_device_kp)
devices = mgr.scan_devices()
assert isinstance(devices, list)
assert len(devices) == 2
assert all(isinstance(d, DeviceInfo) for d in devices)
def test_kl520_properties(self, two_device_kp):
mgr = DeviceManager(kp_api=two_device_kp)
devices = mgr.scan_devices()
kl520 = next(d for d in devices if d.series == "KL520")
assert kl520.product_id == 0x100
assert kl520.gops == 2
assert kl520.status == "online"
def test_kl720_properties(self, two_device_kp):
mgr = DeviceManager(kp_api=two_device_kp)
devices = mgr.scan_devices()
kl720 = next(d for d in devices if d.series == "KL720")
assert kl720.product_id == 0x720
assert kl720.gops == 28
assert kl720.status == "online"
def test_empty_returns_empty_list(self, empty_kp):
mgr = DeviceManager(kp_api=empty_kp)
devices = mgr.scan_devices()
assert devices == []
def test_device_id_uses_port(self, two_device_kp):
mgr = DeviceManager(kp_api=two_device_kp)
devices = mgr.scan_devices()
ids = {d.device_id for d in devices}
assert "usb-1" in ids
assert "usb-2" in ids
# ---------------------------------------------------------------------------
# DeviceManager.assign_device / unassign_device
# ---------------------------------------------------------------------------
class TestAssignDevice:
def test_assign_online_device_returns_true(self, two_device_kp):
mgr = DeviceManager(kp_api=two_device_kp)
mgr.scan_devices()
result = mgr.assign_device("usb-1", "stage-A")
assert result is True
def test_assigned_device_shows_stage(self, two_device_kp):
mgr = DeviceManager(kp_api=two_device_kp)
mgr.scan_devices()
mgr.assign_device("usb-1", "stage-A")
devices = mgr.get_device_statistics()
assert devices["usb-1"].assigned_stage == "stage-A"
def test_assign_already_assigned_device_returns_false(self, two_device_kp):
mgr = DeviceManager(kp_api=two_device_kp)
mgr.scan_devices()
mgr.assign_device("usb-1", "stage-A")
result = mgr.assign_device("usb-1", "stage-B")
assert result is False
def test_assign_unknown_device_returns_false(self, two_device_kp):
mgr = DeviceManager(kp_api=two_device_kp)
mgr.scan_devices()
result = mgr.assign_device("usb-99", "stage-A")
assert result is False
def test_unassign_frees_device(self, two_device_kp):
mgr = DeviceManager(kp_api=two_device_kp)
mgr.scan_devices()
mgr.assign_device("usb-1", "stage-A")
result = mgr.unassign_device("usb-1")
assert result is True
devices = mgr.get_device_statistics()
assert devices["usb-1"].assigned_stage is None
def test_unassign_unknown_device_returns_false(self, two_device_kp):
mgr = DeviceManager(kp_api=two_device_kp)
mgr.scan_devices()
result = mgr.unassign_device("usb-99")
assert result is False
def test_reassign_after_unassign_succeeds(self, two_device_kp):
mgr = DeviceManager(kp_api=two_device_kp)
mgr.scan_devices()
mgr.assign_device("usb-1", "stage-A")
mgr.unassign_device("usb-1")
result = mgr.assign_device("usb-1", "stage-B")
assert result is True
def test_should_reject_assignment_for_offline_device(self):
"""assign_device returns False when the device status is offline."""
kp_api = _make_mock_kp_api([{"port_id": 5, "product_id": 0x100}])
mgr = DeviceManager(kp_api=kp_api)
mgr.scan_devices()
mgr._devices["usb-5"].status = "offline"
result = mgr.assign_device("usb-5", "stage-A")
assert result is False
def test_should_allow_reassignment_to_same_stage(self, two_device_kp):
"""Assigning a device to the same stage twice is idempotent and returns True."""
mgr = DeviceManager(kp_api=two_device_kp)
mgr.scan_devices()
mgr.assign_device("usb-1", "stage-A")
result = mgr.assign_device("usb-1", "stage-A")
assert result is True
def test_should_reject_reassignment_to_different_stage(self, two_device_kp):
"""Assigning a device already assigned to a different stage returns False."""
mgr = DeviceManager(kp_api=two_device_kp)
mgr.scan_devices()
mgr.assign_device("usb-1", "stage-A")
result = mgr.assign_device("usb-1", "stage-B")
assert result is False
# ---------------------------------------------------------------------------
# DeviceManager.get_load_balance_recommendation
# ---------------------------------------------------------------------------
class TestLoadBalanceRecommendation:
def test_returns_dict_mapping_stage_to_device(self, two_device_kp):
mgr = DeviceManager(kp_api=two_device_kp)
mgr.scan_devices()
rec = mgr.get_load_balance_recommendation(["stage-A", "stage-B"])
assert isinstance(rec, dict)
assert "stage-A" in rec
assert "stage-B" in rec
def test_high_gops_assigned_to_first_stage(self, two_device_kp):
"""KL720 (28 GOPS) should be recommended for the first stage."""
mgr = DeviceManager(kp_api=two_device_kp)
mgr.scan_devices()
rec = mgr.get_load_balance_recommendation(["stage-A", "stage-B"])
# The device recommended for stage-A should be the higher-gops one
stats = mgr.get_device_statistics()
first_device_id = rec["stage-A"]
assert stats[first_device_id].gops == 28 # KL720
def test_recommendation_with_more_stages_than_devices(self, two_device_kp):
"""Extra stages beyond available devices map to empty string."""
mgr = DeviceManager(kp_api=two_device_kp)
mgr.scan_devices()
rec = mgr.get_load_balance_recommendation(["s1", "s2", "s3"])
assert rec["s3"] == ""
def test_recommendation_with_no_devices(self, empty_kp):
mgr = DeviceManager(kp_api=empty_kp)
mgr.scan_devices()
rec = mgr.get_load_balance_recommendation(["stage-A"])
assert rec["stage-A"] == ""
# ---------------------------------------------------------------------------
# DeviceManager.get_device_health
# ---------------------------------------------------------------------------
class TestGetDeviceHealth:
def test_returns_device_health(self, two_device_kp):
mgr = DeviceManager(kp_api=two_device_kp)
mgr.scan_devices()
health = mgr.get_device_health("usb-1")
assert isinstance(health, DeviceHealth)
assert health.device_id == "usb-1"
assert health.temperature_celsius is None # SDK does not support it
assert health.error_count == 0
# ---------------------------------------------------------------------------
# DeviceManager.get_device_statistics
# ---------------------------------------------------------------------------
class TestGetDeviceStatistics:
def test_returns_all_known_devices(self, two_device_kp):
mgr = DeviceManager(kp_api=two_device_kp)
mgr.scan_devices()
stats = mgr.get_device_statistics()
assert isinstance(stats, dict)
assert "usb-1" in stats
assert "usb-2" in stats
def test_values_are_device_info(self, two_device_kp):
mgr = DeviceManager(kp_api=two_device_kp)
mgr.scan_devices()
stats = mgr.get_device_statistics()
assert all(isinstance(v, DeviceInfo) for v in stats.values())
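
The scan tests encode a small product table: product_id 0x100 maps to KL520 at 2 GOPS and 0x720 maps to KL720 at 28 GOPS. A sketch of the lookup a scan_devices() implementation would consult (names hypothetical):

# Hypothetical mapping, consistent with the expectations in TestScanDevices.
_PRODUCT_TABLE = {
    0x100: ("KL520", 2),
    0x720: ("KL720", 28),
}

def classify(product_id: int) -> tuple:
    """Map a USB product_id to (series, gops); unknown ids get a placeholder."""
    return _PRODUCT_TABLE.get(product_id, (f"UNKNOWN-0x{product_id:X}", 0))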


@ -0,0 +1,179 @@
"""
tests/unit/test_export_report_dialog.py: unit tests for ExportReportDialog.
Runs without PyQt5 by relying on the stubs from conftest.py.
"""
from unittest.mock import MagicMock, patch
import pytest
from core.performance.benchmarker import BenchmarkResult
from core.performance.report_exporter import DeviceSummary, ReportData
from ui.dialogs.export_report_dialog import ExportReportDialog
# ---------------------------------------------------------------------------
# Fixtures
# ---------------------------------------------------------------------------
def _make_benchmark_result(mode: str = "sequential", fps: float = 14.2) -> BenchmarkResult:
return BenchmarkResult(
mode=mode,
fps=fps,
avg_latency_ms=70.4,
p95_latency_ms=95.0,
total_frames=426,
timestamp=1743856222.0,
device_config={"KL720": 1},
id=f"benchmark_20260405_143022_{mode}",
)
def _make_dialog(
benchmarker=None,
history=None,
device_manager=None,
dashboard=None,
) -> ExportReportDialog:
"""建立 ExportReportDialog所有依賴預設為 MagicMock。"""
if benchmarker is None:
benchmarker = MagicMock()
benchmarker.history = []
if history is None:
history = MagicMock()
history.get_history.return_value = []
if device_manager is None:
device_manager = MagicMock()
device_manager.scan_devices.return_value = []
if dashboard is None:
dashboard = MagicMock()
return ExportReportDialog(
parent=None,
benchmarker=benchmarker,
history=history,
device_manager=device_manager,
dashboard=dashboard,
)
# ---------------------------------------------------------------------------
# Basic construction
# ---------------------------------------------------------------------------
class TestExportReportDialogCreation:
def test_dialog_can_be_created(self):
"""ExportReportDialog 應可正常建立"""
dialog = _make_dialog()
assert dialog is not None
def test_dialog_is_instance_of_qdialog(self):
"""ExportReportDialog 應繼承自 QDialog或其 Stub"""
from PyQt5.QtWidgets import QDialog
dialog = _make_dialog()
assert isinstance(dialog, QDialog)
def test_dialog_default_format_is_pdf(self):
"""格式選擇預設應為 PDF"""
dialog = _make_dialog()
assert dialog._selected_format == "pdf"
# ---------------------------------------------------------------------------
# _collect_report_data
# ---------------------------------------------------------------------------
class TestCollectReportData:
def test_returns_report_data_instance(self):
"""_collect_report_data() 應回傳 ReportData 型別"""
dialog = _make_dialog()
result = dialog._collect_report_data()
assert isinstance(result, ReportData)
def test_uses_history_records(self):
"""_collect_report_data() 應使用 history.get_history() 的結果"""
history = MagicMock()
records = [_make_benchmark_result("parallel")]
history.get_history.return_value = records
dialog = _make_dialog(history=history)
result = dialog._collect_report_data()
history.get_history.assert_called_once()
assert result.history_records == records
def test_uses_device_manager_scan(self):
"""_collect_report_data() 應呼叫 device_manager.scan_devices()"""
device_manager = MagicMock()
device_manager.scan_devices.return_value = []
dialog = _make_dialog(device_manager=device_manager)
dialog._collect_report_data()
device_manager.scan_devices.assert_called_once()
def test_handles_history_failure_gracefully(self):
"""history.get_history() 拋出例外時,應回傳空的 history_records"""
history = MagicMock()
history.get_history.side_effect = Exception("history error")
dialog = _make_dialog(history=history)
result = dialog._collect_report_data()
assert result.history_records == []
def test_handles_device_manager_failure_gracefully(self):
"""device_manager.scan_devices() 拋出例外時devices 應為空列表"""
device_manager = MagicMock()
device_manager.scan_devices.side_effect = Exception("device error")
dialog = _make_dialog(device_manager=device_manager)
result = dialog._collect_report_data()
assert result.devices == []
def test_uses_latest_benchmark_from_history_as_parallel_result(self):
"""benchmarker.history 有記錄時,應使用最新一筆作為 parallel_result"""
benchmarker = MagicMock()
latest = _make_benchmark_result("parallel", fps=45.6)
benchmarker.history = [_make_benchmark_result("sequential"), latest]
dialog = _make_dialog(benchmarker=benchmarker)
result = dialog._collect_report_data()
# parallel_result should be the latest record (index -1)
assert result.parallel_result == latest
def test_parallel_result_is_none_when_history_empty(self):
"""benchmarker.history 為空時parallel_result 應為 None"""
benchmarker = MagicMock()
benchmarker.history = []
dialog = _make_dialog(benchmarker=benchmarker)
result = dialog._collect_report_data()
assert result.parallel_result is None
def test_chart_image_bytes_is_none(self):
"""chart_image_bytes 應為 None截圖整合留未來"""
dialog = _make_dialog()
result = dialog._collect_report_data()
assert result.chart_image_bytes is None
# ---------------------------------------------------------------------------
# Format selection
# ---------------------------------------------------------------------------
class TestFormatSelection:
def test_set_format_to_csv(self):
"""可將格式設為 CSV"""
dialog = _make_dialog()
dialog._set_format("csv")
assert dialog._selected_format == "csv"
def test_set_format_to_pdf(self):
"""可將格式設回 PDF"""
dialog = _make_dialog()
dialog._set_format("csv")
dialog._set_format("pdf")
assert dialog._selected_format == "pdf"

224
tests/unit/test_history.py Normal file

@ -0,0 +1,224 @@
"""
Unit tests for PerformanceHistory.
Coverage:
- recording BenchmarkResult entries
- querying history with filters (limit / mode)
- regression comparison reports
- persistence (JSON read/write)
"""
import json
import os
import time
import tempfile
import pytest
from core.performance.benchmarker import BenchmarkResult
from core.performance.history import PerformanceHistory
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
def make_result(mode: str = "sequential", fps: float = 30.0, avg_latency_ms: float = 33.3,
p95_latency_ms: float = 50.0, total_frames: int = 900) -> BenchmarkResult:
"""建立測試用的 BenchmarkResult。"""
return BenchmarkResult(
mode=mode,
fps=fps,
avg_latency_ms=avg_latency_ms,
p95_latency_ms=p95_latency_ms,
total_frames=total_frames,
timestamp=time.time(),
device_config={"KL520": 1},
)
# ---------------------------------------------------------------------------
# Fixture
# ---------------------------------------------------------------------------
@pytest.fixture
def tmp_history(tmp_path):
"""回傳一個使用暫存路徑的 PerformanceHistory 實例。"""
storage_path = str(tmp_path / "benchmark_history.json")
return PerformanceHistory(storage_path=storage_path)
# ---------------------------------------------------------------------------
# Test: basic recording
# ---------------------------------------------------------------------------
class TestRecord:
def should_record_result_to_storage(self, tmp_history):
"""record() 應將結果寫入 JSON 儲存。"""
result = make_result()
tmp_history.record(result)
records = tmp_history.get_history()
assert len(records) == 1
def should_persist_across_instances(self, tmp_path):
"""record() 應將資料持久化,重新建立實例後仍可讀取。"""
storage_path = str(tmp_path / "benchmark_history.json")
history1 = PerformanceHistory(storage_path=storage_path)
result = make_result(fps=42.0)
history1.record(result)
history2 = PerformanceHistory(storage_path=storage_path)
records = history2.get_history()
assert len(records) == 1
assert records[0].fps == 42.0
def should_assign_unique_id_to_each_record(self, tmp_history):
"""每筆記錄應有唯一的 id。"""
tmp_history.record(make_result())
time.sleep(0.01)
tmp_history.record(make_result())
records = tmp_history.get_history()
ids = [r.id for r in records]
assert len(set(ids)) == 2
def should_store_all_benchmark_fields(self, tmp_history):
"""record() 應完整儲存所有欄位。"""
result = make_result(
mode="parallel",
fps=60.5,
avg_latency_ms=16.5,
p95_latency_ms=25.0,
total_frames=1815,
)
tmp_history.record(result)
saved = tmp_history.get_history()[0]
assert saved.mode == "parallel"
assert saved.fps == pytest.approx(60.5)
assert saved.avg_latency_ms == pytest.approx(16.5)
assert saved.p95_latency_ms == pytest.approx(25.0)
assert saved.total_frames == 1815
# ---------------------------------------------------------------------------
# Test: get_history queries
# ---------------------------------------------------------------------------
class TestGetHistory:
def should_return_records_in_reverse_chronological_order(self, tmp_history):
"""get_history() 應以最新優先的順序回傳記錄。"""
base_time = 1000000.0
for i, fps in enumerate([10.0, 20.0, 30.0]):
result = make_result(fps=fps)
result.timestamp = base_time + i  # ensure strictly increasing timestamps
tmp_history.record(result)
records = tmp_history.get_history()
fps_values = [r.fps for r in records]
# newest first: fps=30 (largest timestamp) comes first
assert fps_values == [30.0, 20.0, 10.0]
def should_respect_limit_parameter(self, tmp_history):
"""get_history(limit=N) 應只回傳最新的 N 筆記錄。"""
for i in range(5):
tmp_history.record(make_result(fps=float(i + 1)))
records = tmp_history.get_history(limit=3)
assert len(records) == 3
def should_filter_by_mode(self, tmp_history):
"""get_history(mode='parallel') 應只回傳 parallel 模式的記錄。"""
tmp_history.record(make_result(mode="sequential"))
tmp_history.record(make_result(mode="parallel"))
tmp_history.record(make_result(mode="sequential"))
records = tmp_history.get_history(mode="parallel")
assert len(records) == 1
assert records[0].mode == "parallel"
def should_return_empty_list_when_no_records(self, tmp_history):
"""空儲存應回傳空列表。"""
records = tmp_history.get_history()
assert records == []
def should_apply_limit_after_mode_filter(self, tmp_history):
"""limit 應在 mode 過濾之後套用。"""
for _ in range(4):
tmp_history.record(make_result(mode="sequential"))
for _ in range(4):
tmp_history.record(make_result(mode="parallel"))
records = tmp_history.get_history(limit=2, mode="parallel")
assert len(records) == 2
assert all(r.mode == "parallel" for r in records)
# ---------------------------------------------------------------------------
# Tests: regression report
# ---------------------------------------------------------------------------
class TestGetRegressionReport:
def should_report_fps_improvement(self, tmp_history):
"""get_regression_report() 應計算 FPS 改善百分比。"""
baseline = make_result(fps=30.0, avg_latency_ms=33.3, p95_latency_ms=50.0)
tmp_history.record(baseline)
baseline_id = tmp_history.get_history()[0].id
compare = make_result(fps=45.0, avg_latency_ms=22.2, p95_latency_ms=35.0)
tmp_history.record(compare)
compare_id = tmp_history.get_history()[0].id  # newest record
report = tmp_history.get_regression_report(baseline_id, compare_id)
assert "fps_change_pct" in report
assert report["fps_change_pct"] == pytest.approx(50.0, rel=1e-2)
def should_report_latency_change(self, tmp_history):
"""get_regression_report() 應計算延遲變化百分比。"""
baseline = make_result(avg_latency_ms=40.0, p95_latency_ms=60.0)
tmp_history.record(baseline)
baseline_id = tmp_history.get_history()[0].id
compare = make_result(avg_latency_ms=20.0, p95_latency_ms=30.0)
tmp_history.record(compare)
compare_id = tmp_history.get_history()[0].id
report = tmp_history.get_regression_report(baseline_id, compare_id)
assert "avg_latency_change_pct" in report
assert report["avg_latency_change_pct"] == pytest.approx(-50.0, rel=1e-2)
def should_raise_error_for_invalid_id(self, tmp_history):
"""無效的 id 應引發 ValueError。"""
with pytest.raises(ValueError):
tmp_history.get_regression_report("nonexistent_baseline", "nonexistent_compare")
# ---------------------------------------------------------------------------
# Tests: JSON file format
# ---------------------------------------------------------------------------
class TestStorageFormat:
def should_produce_valid_json_file(self, tmp_path):
"""儲存的檔案應為合法的 JSON 並符合規格格式。"""
storage_path = str(tmp_path / "benchmark_history.json")
history = PerformanceHistory(storage_path=storage_path)
history.record(make_result(mode="parallel", fps=45.2))
with open(storage_path, "r", encoding="utf-8") as f:
data = json.load(f)
assert "records" in data
assert len(data["records"]) == 1
record = data["records"][0]
for field in ("id", "mode", "fps", "avg_latency_ms", "p95_latency_ms",
"total_frames", "timestamp", "device_config"):
assert field in record, f"missing field: {field}"
def should_create_parent_directory_if_not_exists(self, tmp_path):
"""若父目錄不存在,應自動建立。"""
storage_path = str(tmp_path / "deep" / "nested" / "history.json")
history = PerformanceHistory(storage_path=storage_path)
history.record(make_result())
assert os.path.exists(storage_path)
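Taken together, TestStorageFormat pins down the on-disk layout; a stored file would contain roughly this structure, shown here as a Python dict (all values illustrative):

    example_store = {
        "records": [
            {
                "id": "benchmark_20260405_143022_parallel",  # made-up id
                "mode": "parallel",
                "fps": 45.2,
                "avg_latency_ms": 22.1,
                "p95_latency_ms": 35.0,
                "total_frames": 1356,
                "timestamp": 1743856222.0,
                "device_config": {"KL520": 1},
            }
        ]
    }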


@ -0,0 +1,364 @@
"""
tests/unit/test_optimization_engine.py
TDD Phase 3.3.1 OptimizationEngine 單元測試
覆蓋範圍
- analyze_pipeline 的三條優化規則含邊界值測試
- predict_performance 計算邏輯
- apply_suggestion rebalance_devices 呼叫 device_manager
"""
import pytest
from unittest.mock import MagicMock, call
from core.optimization.engine import OptimizationEngine, OptimizationSuggestion
from core.device.device_manager import DeviceInfo
# ---------------------------------------------------------------------------
# Fixtures
# ---------------------------------------------------------------------------
@pytest.fixture
def engine():
return OptimizationEngine()
def _make_stats(
stage_fill_rates=None,
stage_avg_times=None,
device_utilizations=None,
):
"""建立 analyze_pipeline 接受的 stats 字典。"""
stage_fill_rates = stage_fill_rates or {}
stage_avg_times = stage_avg_times or {}
device_utilizations = device_utilizations or {}
stages = {}
all_stage_ids = set(stage_fill_rates) | set(stage_avg_times)
for sid in all_stage_ids:
stages[sid] = {
"queue_fill_rate": stage_fill_rates.get(sid, 0.0),
"avg_processing_time": stage_avg_times.get(sid, 10.0),
"fps": 30.0,
}
devices = {}
for did, util in device_utilizations.items():
devices[did] = {
"utilization_pct": util,
"series": "KL720",
}
return {"stages": stages, "devices": devices}
def _make_device_info(device_id="usb-1", gops=28, series="KL720"):
return DeviceInfo(
device_id=device_id,
series=series,
product_id=0x720,
status="online",
gops=gops,
assigned_stage=None,
current_fps=0.0,
utilization_pct=0.0,
)
# ---------------------------------------------------------------------------
# analyze_pipeline — rule 1: rebalance_devices
# ---------------------------------------------------------------------------
class TestAnalyzePipelineRebalanceDevices:
"""queue_fill_rate > 0.70 應觸發 rebalance_devices 建議。"""
def test_should_suggest_rebalance_when_fill_rate_above_threshold(self, engine):
stats = _make_stats(stage_fill_rates={"stage_0": 0.71})
suggestions = engine.analyze_pipeline(stats)
types = [s.type for s in suggestions]
assert "rebalance_devices" in types
def test_should_not_suggest_rebalance_when_fill_rate_at_threshold(self, engine):
"""恰好等於 0.70 不觸發(需 > 0.70)。"""
stats = _make_stats(stage_fill_rates={"stage_0": 0.70})
suggestions = engine.analyze_pipeline(stats)
types = [s.type for s in suggestions]
assert "rebalance_devices" not in types
def test_should_not_suggest_rebalance_when_fill_rate_below_threshold(self, engine):
stats = _make_stats(stage_fill_rates={"stage_0": 0.50})
suggestions = engine.analyze_pipeline(stats)
types = [s.type for s in suggestions]
assert "rebalance_devices" not in types
def test_rebalance_suggestion_has_required_fields(self, engine):
stats = _make_stats(stage_fill_rates={"stage_0": 0.85})
suggestions = engine.analyze_pipeline(stats)
rebalance = next(s for s in suggestions if s.type == "rebalance_devices")
assert rebalance.suggestion_id
assert rebalance.description
assert 0.0 <= rebalance.estimated_improvement_pct
assert rebalance.confidence in ("high", "medium", "low")
assert isinstance(rebalance.action_params, dict)
def test_rebalance_action_params_includes_stage_id(self, engine):
stats = _make_stats(stage_fill_rates={"stage_0": 0.85})
suggestions = engine.analyze_pipeline(stats)
rebalance = next(s for s in suggestions if s.type == "rebalance_devices")
assert "stage_id" in rebalance.action_params
# ---------------------------------------------------------------------------
# analyze_pipeline — rule 2: adjust_queue
# ---------------------------------------------------------------------------
class TestAnalyzePipelineAdjustQueue:
"""avg_processing_time 最大/最小比值 > 2.0 應觸發 adjust_queue 建議。"""
def test_should_suggest_adjust_queue_when_ratio_above_threshold(self, engine):
stats = _make_stats(
stage_avg_times={"stage_0": 10.0, "stage_1": 25.0}
)
suggestions = engine.analyze_pipeline(stats)
types = [s.type for s in suggestions]
assert "adjust_queue" in types
def test_should_not_suggest_adjust_queue_when_ratio_at_threshold(self, engine):
"""恰好等於 2.0 不觸發(需 > 2.0)。"""
stats = _make_stats(
stage_avg_times={"stage_0": 10.0, "stage_1": 20.0}
)
suggestions = engine.analyze_pipeline(stats)
types = [s.type for s in suggestions]
assert "adjust_queue" not in types
def test_should_not_suggest_adjust_queue_when_ratio_below_threshold(self, engine):
stats = _make_stats(
stage_avg_times={"stage_0": 10.0, "stage_1": 15.0}
)
suggestions = engine.analyze_pipeline(stats)
types = [s.type for s in suggestions]
assert "adjust_queue" not in types
def test_should_not_suggest_adjust_queue_with_single_stage(self, engine):
"""只有一個 Stage 時無法計算比值,不觸發。"""
stats = _make_stats(stage_avg_times={"stage_0": 100.0})
suggestions = engine.analyze_pipeline(stats)
types = [s.type for s in suggestions]
assert "adjust_queue" not in types
def test_adjust_queue_suggestion_has_required_fields(self, engine):
stats = _make_stats(
stage_avg_times={"stage_0": 10.0, "stage_1": 25.0}
)
suggestions = engine.analyze_pipeline(stats)
adj = next(s for s in suggestions if s.type == "adjust_queue")
assert adj.suggestion_id
assert adj.description
assert adj.confidence in ("high", "medium", "low")
assert isinstance(adj.action_params, dict)
def should_not_suggest_adjust_queue_when_min_processing_time_is_zero(self, engine):
# a stage avg_processing_time of 0 makes the ratio meaningless; the rule must not fire
stats = _make_stats(stage_avg_times={"stage_0": 0.0, "stage_1": 50.0})
suggestions = engine.analyze_pipeline(stats)
adjust = [s for s in suggestions if s.type == "adjust_queue"]
assert len(adjust) == 0
# ---------------------------------------------------------------------------
# analyze_pipeline — rule 3: add_devices
# ---------------------------------------------------------------------------
class TestAnalyzePipelineAddDevices:
"""所有 Dongle 使用率 > 85% 應觸發 add_devices 建議。"""
def test_should_suggest_add_devices_when_all_above_threshold(self, engine):
stats = _make_stats(
device_utilizations={"usb-1": 86.0, "usb-2": 90.0}
)
suggestions = engine.analyze_pipeline(stats)
types = [s.type for s in suggestions]
assert "add_devices" in types
def test_should_not_suggest_add_devices_when_one_device_below_threshold(self, engine):
stats = _make_stats(
device_utilizations={"usb-1": 90.0, "usb-2": 80.0}
)
suggestions = engine.analyze_pipeline(stats)
types = [s.type for s in suggestions]
assert "add_devices" not in types
def test_should_not_suggest_add_devices_when_all_at_threshold(self, engine):
"""恰好等於 85% 不觸發(需 > 85%)。"""
stats = _make_stats(
device_utilizations={"usb-1": 85.0, "usb-2": 85.0}
)
suggestions = engine.analyze_pipeline(stats)
types = [s.type for s in suggestions]
assert "add_devices" not in types
def test_should_not_suggest_add_devices_when_no_devices(self, engine):
"""沒有裝置資訊時不觸發。"""
stats = _make_stats(device_utilizations={})
suggestions = engine.analyze_pipeline(stats)
types = [s.type for s in suggestions]
assert "add_devices" not in types
def test_add_devices_suggestion_has_required_fields(self, engine):
stats = _make_stats(
device_utilizations={"usb-1": 90.0, "usb-2": 92.0}
)
suggestions = engine.analyze_pipeline(stats)
add = next(s for s in suggestions if s.type == "add_devices")
assert add.suggestion_id
assert add.description
assert add.confidence in ("high", "medium", "low")
# ---------------------------------------------------------------------------
# analyze_pipeline — empty stats
# ---------------------------------------------------------------------------
class TestAnalyzePipelineEmptyStats:
def test_should_return_empty_list_when_stats_empty(self, engine):
suggestions = engine.analyze_pipeline({"stages": {}, "devices": {}})
assert suggestions == []
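Read as a specification, the cases above imply strict-inequality predicates like the following sketch (thresholds and stats keys taken from the tests; everything else is assumed and is not the engine's actual code):

    def rule_hits(stats: dict) -> list:
        hits = []
        stages = stats.get("stages", {})
        devices = stats.get("devices", {})
        # rule 1: any stage with queue_fill_rate strictly above 0.70
        if any(s["queue_fill_rate"] > 0.70 for s in stages.values()):
            hits.append("rebalance_devices")
        # rule 2: max/min avg_processing_time strictly above 2.0 (needs >= 2 stages, min > 0)
        times = [s["avg_processing_time"] for s in stages.values()]
        if len(times) >= 2 and min(times) > 0 and max(times) / min(times) > 2.0:
            hits.append("adjust_queue")
        # rule 3: every device strictly above 85% utilization (and at least one device)
        utils = [d["utilization_pct"] for d in devices.values()]
        if utils and all(u > 85.0 for u in utils):
            hits.append("add_devices")
        return hits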
# ---------------------------------------------------------------------------
# predict_performance
# ---------------------------------------------------------------------------
class TestPredictPerformance:
"""predict_performance 使用 sum(gops) / num_stages * 0.6 計算 FPS。"""
def test_should_return_expected_fps_with_single_device_single_stage(self, engine):
devices = [_make_device_info(gops=28)]
# estimated_fps = 28 / 1 * 0.6 = 16.8
config = [MagicMock()] # 1 stage
result = engine.predict_performance(config, devices)
assert result["estimated_fps"] == pytest.approx(16.8)
def test_should_return_expected_latency(self, engine):
devices = [_make_device_info(gops=28)]
config = [MagicMock()] # 1 stage
result = engine.predict_performance(config, devices)
# estimated_latency_ms = 1000 / 16.8
assert result["estimated_latency_ms"] == pytest.approx(1000.0 / 16.8, rel=1e-4)
def test_should_return_confidence_range_as_tuple(self, engine):
devices = [_make_device_info(gops=28)]
config = [MagicMock()] # 1 stage
result = engine.predict_performance(config, devices)
low, high = result["confidence_range"]
fps = result["estimated_fps"]
assert low == pytest.approx(fps * 0.8)
assert high == pytest.approx(fps * 1.2)
def test_should_scale_fps_with_multiple_devices(self, engine):
devices = [
_make_device_info("usb-1", gops=28),
_make_device_info("usb-2", gops=28),
]
config = [MagicMock(), MagicMock()] # 2 stages
result = engine.predict_performance(config, devices)
# estimated_fps = (28 + 28) / 2 * 0.6 = 16.8
assert result["estimated_fps"] == pytest.approx(16.8)
def test_should_decrease_fps_with_more_stages(self, engine):
devices = [_make_device_info(gops=28)]
config_1 = [MagicMock()] # 1 stage
config_4 = [MagicMock()] * 4 # 4 stages
result_1 = engine.predict_performance(config_1, devices)
result_4 = engine.predict_performance(config_4, devices)
assert result_4["estimated_fps"] < result_1["estimated_fps"]
def test_should_handle_zero_stages_without_crash(self, engine):
"""num_stages = 0 時回傳 0 FPS不拋錯"""
devices = [_make_device_info(gops=28)]
result = engine.predict_performance([], devices)
assert result["estimated_fps"] == 0.0
def test_should_return_zero_fps_with_no_devices(self, engine):
config = [MagicMock()]
result = engine.predict_performance(config, [])
assert result["estimated_fps"] == 0.0
# ---------------------------------------------------------------------------
# apply_suggestion
# ---------------------------------------------------------------------------
class TestApplySuggestion:
def _make_rebalance_suggestion(self, stage_id="stage_0", device_id="usb-1"):
return OptimizationSuggestion(
suggestion_id="test-001",
type="rebalance_devices",
description="Rebalance test",
estimated_improvement_pct=10.0,
confidence="medium",
action_params={"stage_id": stage_id, "device_id": device_id},
)
def test_should_call_assign_device_for_rebalance_suggestion(self, engine):
dm = MagicMock()
dm.assign_device.return_value = True
suggestion = self._make_rebalance_suggestion("stage_0", "usb-1")
result = engine.apply_suggestion(suggestion, dm)
dm.assign_device.assert_called_once_with("usb-1", "stage_0")
assert result is True
def test_should_return_false_when_assign_device_fails(self, engine):
dm = MagicMock()
dm.assign_device.return_value = False
suggestion = self._make_rebalance_suggestion()
result = engine.apply_suggestion(suggestion, dm)
assert result is False
def test_should_return_true_for_add_devices_without_calling_assign(self, engine):
dm = MagicMock()
suggestion = OptimizationSuggestion(
suggestion_id="test-002",
type="add_devices",
description="Add more dongles",
estimated_improvement_pct=20.0,
confidence="high",
action_params={},
)
result = engine.apply_suggestion(suggestion, dm)
dm.assign_device.assert_not_called()
assert result is True
def test_should_return_true_for_adjust_queue_without_calling_assign(self, engine):
dm = MagicMock()
suggestion = OptimizationSuggestion(
suggestion_id="test-003",
type="adjust_queue",
description="Adjust queue size",
estimated_improvement_pct=5.0,
confidence="low",
action_params={},
)
result = engine.apply_suggestion(suggestion, dm)
dm.assign_device.assert_not_called()
assert result is True
def should_call_assign_device_with_empty_device_id_when_not_populated(self, engine):
# rebalance suggestions produced by analyze_pipeline default device_id to an empty string;
# apply_suggestion should pass that empty string through to device_manager as-is (predictable behavior)
suggestion = OptimizationSuggestion(
suggestion_id="test",
type="rebalance_devices",
description="test",
estimated_improvement_pct=10.0,
confidence="medium",
action_params={"device_id": "", "stage_id": "stage_0"}
)
mock_dm = MagicMock()
mock_dm.assign_device.return_value = False  # an empty device_id typically returns False
result = engine.apply_suggestion(suggestion, mock_dm)
mock_dm.assign_device.assert_called_once_with("", "stage_0")
# result mirrors assign_device's return value
assert result is False


@ -0,0 +1,152 @@
"""
PerformanceDashboard 的單元測試
測試策略
- PyQt5 CI 環境中不可用透過 conftest.py Mock 注入繞過 import
- 測試驗證 PerformanceDashboard 的行為邏輯
update_stats 是否更新顯示值reset 是否歸零set_display_window 是否儲存設定
- 使用 MagicMock 取代真實 QLabel透過記錄 setText 呼叫來驗證
"""
import sys
import pytest
from unittest.mock import MagicMock, patch, call
# ---------------------------------------------------------------------------
# Tests: PerformanceDashboard can be constructed
# ---------------------------------------------------------------------------
class TestPerformanceDashboardInit:
def should_be_importable(self):
"""PerformanceDashboard 模組應可匯入(即使 PyQt5 被 Mock"""
from ui.components.performance_dashboard import PerformanceDashboard
assert PerformanceDashboard is not None
def should_instantiate_without_error(self):
"""PerformanceDashboard() 應可無錯誤地建立實例。"""
from ui.components.performance_dashboard import PerformanceDashboard
dashboard = PerformanceDashboard()
assert dashboard is not None
# ---------------------------------------------------------------------------
# Tests: update_stats updates displayed values
# ---------------------------------------------------------------------------
class TestUpdateStats:
def should_store_fps_after_update(self):
"""update_stats 後current_fps 屬性應更新為傳入的值。"""
from ui.components.performance_dashboard import PerformanceDashboard
dashboard = PerformanceDashboard()
dashboard.update_stats({"fps": 30.5, "avg_latency_ms": 10.0, "p95_latency_ms": 15.0})
assert dashboard.current_fps == pytest.approx(30.5)
def should_store_avg_latency_after_update(self):
"""update_stats 後current_avg_latency_ms 屬性應更新。"""
from ui.components.performance_dashboard import PerformanceDashboard
dashboard = PerformanceDashboard()
dashboard.update_stats({"fps": 30.0, "avg_latency_ms": 12.3, "p95_latency_ms": 20.0})
assert dashboard.current_avg_latency_ms == pytest.approx(12.3)
def should_store_p95_latency_after_update(self):
"""update_stats 後current_p95_latency_ms 屬性應更新。"""
from ui.components.performance_dashboard import PerformanceDashboard
dashboard = PerformanceDashboard()
dashboard.update_stats({"fps": 30.0, "avg_latency_ms": 12.0, "p95_latency_ms": 25.7})
assert dashboard.current_p95_latency_ms == pytest.approx(25.7)
def should_call_fps_label_setText(self):
"""update_stats 應對 fps_label 呼叫 setText包含 fps 數值。"""
from ui.components.performance_dashboard import PerformanceDashboard
dashboard = PerformanceDashboard()
dashboard.fps_label.setText.reset_mock()
dashboard.update_stats({"fps": 45.0, "avg_latency_ms": 10.0, "p95_latency_ms": 15.0})
dashboard.fps_label.setText.assert_called_once()
call_arg = dashboard.fps_label.setText.call_args[0][0]
assert "45" in call_arg
def should_call_avg_latency_label_setText(self):
"""update_stats 應對 avg_latency_label 呼叫 setText包含延遲數值。"""
from ui.components.performance_dashboard import PerformanceDashboard
dashboard = PerformanceDashboard()
dashboard.avg_latency_label.setText.reset_mock()
dashboard.update_stats({"fps": 30.0, "avg_latency_ms": 8.5, "p95_latency_ms": 12.0})
dashboard.avg_latency_label.setText.assert_called_once()
call_arg = dashboard.avg_latency_label.setText.call_args[0][0]
assert "8.5" in call_arg or "8" in call_arg
def should_call_p95_latency_label_setText(self):
"""update_stats 應對 p95_latency_label 呼叫 setText包含 p95 數值。"""
from ui.components.performance_dashboard import PerformanceDashboard
dashboard = PerformanceDashboard()
dashboard.p95_latency_label.setText.reset_mock()
dashboard.update_stats({"fps": 30.0, "avg_latency_ms": 8.0, "p95_latency_ms": 19.2})
dashboard.p95_latency_label.setText.assert_called_once()
call_arg = dashboard.p95_latency_label.setText.call_args[0][0]
assert "19" in call_arg
# ---------------------------------------------------------------------------
# Tests: reset zeroes values
# ---------------------------------------------------------------------------
class TestReset:
def should_reset_fps_to_zero(self):
"""reset() 後 current_fps 應歸零。"""
from ui.components.performance_dashboard import PerformanceDashboard
dashboard = PerformanceDashboard()
dashboard.update_stats({"fps": 55.0, "avg_latency_ms": 5.0, "p95_latency_ms": 8.0})
dashboard.reset()
assert dashboard.current_fps == 0.0
def should_reset_avg_latency_to_zero(self):
"""reset() 後 current_avg_latency_ms 應歸零。"""
from ui.components.performance_dashboard import PerformanceDashboard
dashboard = PerformanceDashboard()
dashboard.update_stats({"fps": 30.0, "avg_latency_ms": 12.0, "p95_latency_ms": 18.0})
dashboard.reset()
assert dashboard.current_avg_latency_ms == 0.0
def should_reset_p95_latency_to_zero(self):
"""reset() 後 current_p95_latency_ms 應歸零。"""
from ui.components.performance_dashboard import PerformanceDashboard
dashboard = PerformanceDashboard()
dashboard.update_stats({"fps": 30.0, "avg_latency_ms": 12.0, "p95_latency_ms": 18.0})
dashboard.reset()
assert dashboard.current_p95_latency_ms == 0.0
def should_call_label_setText_with_zero_on_reset(self):
"""reset() 應對 fps_label 呼叫 setText更新為 0 值。"""
from ui.components.performance_dashboard import PerformanceDashboard
dashboard = PerformanceDashboard()
dashboard.fps_label.setText.reset_mock()
dashboard.reset()
dashboard.fps_label.setText.assert_called_once()
# ---------------------------------------------------------------------------
# Tests: set_display_window stores the setting
# ---------------------------------------------------------------------------
class TestSetDisplayWindow:
def should_store_display_window_seconds(self):
"""set_display_window(120) 後display_window_seconds 應為 120。"""
from ui.components.performance_dashboard import PerformanceDashboard
dashboard = PerformanceDashboard()
dashboard.set_display_window(120)
assert dashboard.display_window_seconds == 120
def should_default_to_60_seconds(self):
"""不傳參數時 display_window_seconds 預設應為 60。"""
from ui.components.performance_dashboard import PerformanceDashboard
dashboard = PerformanceDashboard()
dashboard.set_display_window()
assert dashboard.display_window_seconds == 60
def should_update_display_window_on_second_call(self):
"""連續呼叫 set_display_window 應覆蓋舊值。"""
from ui.components.performance_dashboard import PerformanceDashboard
dashboard = PerformanceDashboard()
dashboard.set_display_window(30)
dashboard.set_display_window(90)
assert dashboard.display_window_seconds == 90
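The conftest stub these tests depend on can be quite small; one workable shape is sketched below (assumptions throughout; the real tests/conftest.py stub is richer and also covers the KP SDK):

    import sys
    import types
    from unittest.mock import MagicMock

    class _QWidget:  # subclassable stand-in so ui classes can be defined
        def __init__(self, *args, **kwargs): pass
        def setLayout(self, layout): pass

    widgets = types.ModuleType("PyQt5.QtWidgets")
    widgets.QWidget = _QWidget
    # a fresh MagicMock per construction, so each QLabel records its own setText calls
    for name in ("QLabel", "QHBoxLayout", "QVBoxLayout"):
        setattr(widgets, name, MagicMock(side_effect=lambda *a, **k: MagicMock()))

    core = types.ModuleType("PyQt5.QtCore")
    core.pyqtSignal = lambda *a, **k: None

    sys.modules.update({
        "PyQt5": types.ModuleType("PyQt5"),
        "PyQt5.QtCore": core,
        "PyQt5.QtWidgets": widgets,
    })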


@ -0,0 +1,250 @@
"""
tests/unit/test_report_exporter.py ReportExporter 單元測試
按照 TDD 3.4.9 的測試清單實作
"""
import csv
import io
import time
from pathlib import Path
from unittest.mock import patch, MagicMock
import pytest
from core.performance.benchmarker import BenchmarkResult
from core.performance.report_exporter import DeviceSummary, ReportData, ReportExporter
# ---------------------------------------------------------------------------
# Fixtures
# ---------------------------------------------------------------------------
def _make_benchmark_result(mode: str = "sequential", fps: float = 14.2) -> BenchmarkResult:
return BenchmarkResult(
mode=mode,
fps=fps,
avg_latency_ms=70.4,
p95_latency_ms=95.0,
total_frames=426,
timestamp=1743856222.0,
device_config={"KL720": 1},
id=f"benchmark_20260405_143022_{mode}",
)
def _make_report_data_with_benchmark() -> ReportData:
seq = _make_benchmark_result("sequential", fps=14.2)
par = _make_benchmark_result("parallel", fps=45.6)
return ReportData(
report_title="Test Report",
pipeline_name="test_pipeline",
sequential_result=seq,
parallel_result=par,
speedup=45.6 / 14.2,
history_records=[seq, par],
)
# ---------------------------------------------------------------------------
# _get_timestamp_str
# ---------------------------------------------------------------------------
class TestGetTimestampStr:
def test_format_is_yyyy_mm_dd_hh_mm_ss(self):
"""_get_timestamp_str 應回傳 'YYYY-MM-DD HH:MM:SS' 格式的字串"""
ts = 1743856222.0
result = ReportExporter._get_timestamp_str(ts)
# verify the shape: fixed length 19, with '-' and ':' separators
assert len(result) == 19
assert result[4] == "-"
assert result[7] == "-"
assert result[10] == " "
assert result[13] == ":"
assert result[16] == ":"
def test_all_parts_are_digits(self):
"""timestamp 各欄位均應為數字"""
ts = 1743856222.0
result = ReportExporter._get_timestamp_str(ts)
parts = result.replace("-", "").replace(":", "").replace(" ", "")
assert parts.isdigit()
# ---------------------------------------------------------------------------
# ReportData defaults
# ---------------------------------------------------------------------------
class TestReportDataDefaults:
def test_report_title_is_non_empty(self):
"""ReportData 預設 report_title 應非空"""
data = ReportData()
assert data.report_title
assert len(data.report_title) > 0
def test_generated_at_is_close_to_now(self):
"""ReportData 預設 generated_at 應接近當下時間(誤差 < 5 秒)"""
before = time.time()
data = ReportData()
after = time.time()
assert before <= data.generated_at <= after + 5
def test_history_records_defaults_to_empty_list(self):
"""ReportData 預設 history_records 應為空列表"""
data = ReportData()
assert data.history_records == []
def test_devices_defaults_to_empty_list(self):
"""ReportData 預設 devices 應為空列表"""
data = ReportData()
assert data.devices == []
def test_sequential_result_defaults_to_none(self):
data = ReportData()
assert data.sequential_result is None
def test_parallel_result_defaults_to_none(self):
data = ReportData()
assert data.parallel_result is None
# ---------------------------------------------------------------------------
# export_csv
# ---------------------------------------------------------------------------
class TestExportCsv:
def test_creates_file_at_given_path(self, tmp_path):
"""export_csv() 應在指定路徑建立 CSV 檔案"""
data = _make_report_data_with_benchmark()
output_path = tmp_path / "report.csv"
exporter = ReportExporter()
result = exporter.export_csv(data, output_path)
assert output_path.exists()
assert result == output_path
def test_contains_benchmark_summary_section(self, tmp_path):
"""CSV 應包含完整的 benchmark_summary header 行"""
data = _make_report_data_with_benchmark()
output_path = tmp_path / "report.csv"
exporter = ReportExporter()
exporter.export_csv(data, output_path)
content = output_path.read_text(encoding="utf-8")
assert "section,metric,sequential,parallel,diff_pct" in content
def test_contains_history_section(self, tmp_path):
"""CSV 應包含完整的歷史記錄 header 行"""
data = _make_report_data_with_benchmark()
output_path = tmp_path / "report.csv"
exporter = ReportExporter()
exporter.export_csv(data, output_path)
content = output_path.read_text(encoding="utf-8")
assert "id,timestamp,mode,fps,avg_latency_ms,p95_latency_ms,total_frames" in content
# two history records; verify the number of data rows
lines = [l for l in content.splitlines() if l.strip()]
history_data_lines = [l for l in lines if l.startswith("benchmark_2")]
assert len(history_data_lines) == len(data.history_records)
def test_two_sections_separated_by_blank_line(self, tmp_path):
"""CSV 的兩個 header 行之間恰有一行空行"""
data = _make_report_data_with_benchmark()
output_path = tmp_path / "report.csv"
exporter = ReportExporter()
exporter.export_csv(data, output_path)
content = output_path.read_text(encoding="utf-8")
lines = content.splitlines()
summary_header = "section,metric,sequential,parallel,diff_pct"
history_header = "id,timestamp,mode,fps,avg_latency_ms,p95_latency_ms,total_frames"
idx_summary = next(i for i, l in enumerate(lines) if l == summary_header)
idx_history = next(i for i, l in enumerate(lines) if l == history_header)
# between the two header rows, the line immediately before the history header must be blank
assert idx_history > idx_summary + 1
assert lines[idx_history - 1] == ""
def test_no_benchmark_result_raises_value_error(self, tmp_path):
"""sequential_result 或 parallel_result 為 None 時,應拋出 ValueError"""
data = ReportData() # sequential_result=None, parallel_result=None
output_path = tmp_path / "report.csv"
exporter = ReportExporter()
with pytest.raises(ValueError):
exporter.export_csv(data, output_path)
def test_empty_history_produces_only_summary(self, tmp_path):
"""history_records 為空時CSV 只輸出 Benchmark 摘要區塊,歷史記錄表為空"""
seq = _make_benchmark_result("sequential", fps=14.2)
par = _make_benchmark_result("parallel", fps=45.6)
data = ReportData(
sequential_result=seq,
parallel_result=par,
speedup=45.6 / 14.2,
history_records=[],
)
output_path = tmp_path / "report.csv"
exporter = ReportExporter()
exporter.export_csv(data, output_path)
content = output_path.read_text(encoding="utf-8")
assert "benchmark_summary" in content
# no history data rows (lines starting with the record id)
data_lines = [l for l in content.splitlines() if l.startswith("benchmark_2")]
assert len(data_lines) == 0
def test_auto_creates_parent_directory(self, tmp_path):
"""若輸出路徑的父目錄不存在export_csv() 應自動建立"""
data = _make_report_data_with_benchmark()
output_path = tmp_path / "subdir" / "report.csv"
exporter = ReportExporter()
exporter.export_csv(data, output_path)
assert output_path.exists()
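Concretely, the layout these cases describe is two sections separated by one blank line; an output file would look roughly like this (all numbers illustrative):

    section,metric,sequential,parallel,diff_pct
    benchmark_summary,fps,14.2,45.6,221.1
    benchmark_summary,avg_latency_ms,70.4,22.1,-68.6

    id,timestamp,mode,fps,avg_latency_ms,p95_latency_ms,total_frames
    benchmark_20260405_143022_sequential,2026-04-05 14:30:22,sequential,14.2,70.4,95.0,426
    benchmark_20260405_143022_parallel,2026-04-05 14:30:22,parallel,45.6,22.1,35.0,426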
# ---------------------------------------------------------------------------
# export_pdf
# ---------------------------------------------------------------------------
class TestExportPdf:
def test_creates_file_at_given_path(self, tmp_path):
"""export_pdf() 應在指定路徑建立 PDF 檔案(不驗證內容,只驗證存在)"""
reportlab = pytest.importorskip("reportlab")
data = _make_report_data_with_benchmark()
output_path = tmp_path / "report.pdf"
exporter = ReportExporter()
result = exporter.export_pdf(data, output_path)
assert output_path.exists()
assert result == output_path
def test_auto_creates_parent_directory(self, tmp_path):
"""若輸出路徑的父目錄不存在export_pdf() 應自動建立"""
pytest.importorskip("reportlab")
data = _make_report_data_with_benchmark()
output_path = tmp_path / "subdir" / "report.pdf"
exporter = ReportExporter()
exporter.export_pdf(data, output_path)
assert output_path.exists()
def test_without_chart_image_does_not_raise(self, tmp_path):
"""chart_image_bytes 為 None 時PDF 匯出不應拋出例外"""
pytest.importorskip("reportlab")
data = _make_report_data_with_benchmark()
data.chart_image_bytes = None
output_path = tmp_path / "report.pdf"
exporter = ReportExporter()
# must not raise
exporter.export_pdf(data, output_path)
def test_raises_import_error_when_reportlab_missing(self, tmp_path):
"""reportlab 未安裝時export_pdf() 應拋出 ImportError"""
import core.performance.report_exporter as re_mod
data = _make_report_data_with_benchmark()
output_path = tmp_path / "report.pdf"
exporter = ReportExporter()
with patch.object(re_mod, "_REPORTLAB_AVAILABLE", False):
with pytest.raises(ImportError, match="reportlab"):
exporter.export_pdf(data, output_path)


@ -0,0 +1,88 @@
"""
Tests for ResultSerializer JSON serialization of inference result objects.
"""
import dataclasses
import pytest
from unittest.mock import MagicMock
from core.functions.result_handler import ResultSerializer
# Minimal stand-ins for the SDK dataclasses (no kp import needed)
@dataclasses.dataclass
class FakeBoundingBox:
x1: int = 0
y1: int = 0
x2: int = 100
y2: int = 100
class_name: str = "fire"
score: float = 0.9
@dataclasses.dataclass
class FakeObjectDetectionResult:
class_count: int = 1
box_count: int = 1
box_list: list = dataclasses.field(default_factory=list)
@dataclasses.dataclass
class FakeClassificationResult:
probability: float = 0.85
class_name: str = "fire"
class_num: int = 0
class TestResultSerializerToJson:
def setup_method(self):
self.serializer = ResultSerializer()
def should_serialize_plain_dict(self):
data = {"fps": 30.0, "pipeline_id": "p1"}
result = self.serializer.to_json(data)
assert '"fps"' in result
assert "30.0" in result
def should_serialize_dict_containing_dataclass_object(self):
"""Bug reproduction: ObjectDetectionResult in result dict caused TypeError."""
det = FakeObjectDetectionResult(
class_count=1,
box_count=1,
box_list=[FakeBoundingBox()]
)
data = {"stage_results": {"stage_0": det}}
# Should NOT raise TypeError: Object of type FakeObjectDetectionResult is not JSON serializable
result = self.serializer.to_json(data)
assert result is not None
assert "stage_0" in result
def should_serialize_dict_containing_classification_result(self):
"""ClassificationResult must also be handled."""
clf = FakeClassificationResult(probability=0.85, class_name="fire")
data = {"stage_results": {"stage_0": clf}}
result = self.serializer.to_json(data)
assert "stage_0" in result
def should_serialize_nested_dataclass_in_list(self):
"""box_list inside ObjectDetectionResult contains BoundingBox dataclasses."""
det = FakeObjectDetectionResult(
box_count=1,
box_list=[FakeBoundingBox(x1=10, y1=20, x2=110, y2=120, class_name="fire")]
)
data = {"detections": det}
result = self.serializer.to_json(data)
assert "fire" in result
def should_preserve_primitive_values_unchanged(self):
data = {"fps": 45.2, "count": 3, "name": "test", "flag": True}
import json
result = json.loads(self.serializer.to_json(data))
assert result["fps"] == 45.2
assert result["count"] == 3
assert result["name"] == "test"
assert result["flag"] is True
def should_handle_none_values(self):
data = {"result": None, "stage": "stage_0"}
result = self.serializer.to_json(data)
assert "null" in result


@ -0,0 +1,231 @@
"""
tests/unit/test_template_manager.py
TDD Phase 3.3.2 TemplateManager 單元測試
覆蓋範圍
- get_builtin_templates 回傳 3 個範本
- load_template 正確載入內建範本
- load_template 對不存在的 ID 拋出 ValueError
- save_as_template 建立新範本並可被 load_template 讀取
"""
import pytest
from core.templates.manager import TemplateManager, PipelineTemplate
# ---------------------------------------------------------------------------
# Fixtures
# ---------------------------------------------------------------------------
@pytest.fixture
def manager():
return TemplateManager()
# ---------------------------------------------------------------------------
# get_builtin_templates
# ---------------------------------------------------------------------------
class TestGetBuiltinTemplates:
def test_should_return_exactly_three_builtin_templates(self, manager):
templates = manager.get_builtin_templates()
assert len(templates) == 3
def test_should_return_list_of_pipeline_template_instances(self, manager):
templates = manager.get_builtin_templates()
for t in templates:
assert isinstance(t, PipelineTemplate)
def test_should_include_yolov5_detection_template(self, manager):
templates = manager.get_builtin_templates()
ids = [t.template_id for t in templates]
assert "yolov5_detection" in ids
def test_should_include_fire_detection_template(self, manager):
templates = manager.get_builtin_templates()
ids = [t.template_id for t in templates]
assert "fire_detection" in ids
def test_should_include_dual_model_cascade_template(self, manager):
templates = manager.get_builtin_templates()
ids = [t.template_id for t in templates]
assert "dual_model_cascade" in ids
def test_each_template_has_non_empty_name_and_description(self, manager):
templates = manager.get_builtin_templates()
for t in templates:
assert t.name
assert t.description
def test_each_template_has_nodes_list(self, manager):
templates = manager.get_builtin_templates()
for t in templates:
assert isinstance(t.nodes, list)
assert len(t.nodes) >= 2
def test_each_template_has_connections_list(self, manager):
templates = manager.get_builtin_templates()
for t in templates:
assert isinstance(t.connections, list)
# ---------------------------------------------------------------------------
# load_template — built-in templates
# ---------------------------------------------------------------------------
class TestLoadTemplate:
def test_should_load_yolov5_detection_by_id(self, manager):
t = manager.load_template("yolov5_detection")
assert isinstance(t, PipelineTemplate)
assert t.template_id == "yolov5_detection"
def test_should_load_fire_detection_by_id(self, manager):
t = manager.load_template("fire_detection")
assert t.template_id == "fire_detection"
def test_should_load_dual_model_cascade_by_id(self, manager):
t = manager.load_template("dual_model_cascade")
assert t.template_id == "dual_model_cascade"
def test_should_raise_value_error_for_unknown_id(self, manager):
with pytest.raises(ValueError, match="not found"):
manager.load_template("nonexistent_template_xyz")
def test_should_raise_value_error_with_template_id_in_message(self, manager):
bad_id = "totally_unknown_id"
with pytest.raises(ValueError, match=bad_id):
manager.load_template(bad_id)
# ---------------------------------------------------------------------------
# yolov5_detection node structure
# ---------------------------------------------------------------------------
class TestYolov5DetectionStructure:
"""Input → Preprocess → Model → Postprocess → Output 順序。"""
def test_should_have_five_nodes(self, manager):
t = manager.load_template("yolov5_detection")
assert len(t.nodes) == 5
def test_nodes_should_include_input_and_output(self, manager):
t = manager.load_template("yolov5_detection")
node_types = [n["type"] for n in t.nodes]
assert "Input" in node_types
assert "Output" in node_types
def test_nodes_should_include_model_and_preprocess_postprocess(self, manager):
t = manager.load_template("yolov5_detection")
node_types = [n["type"] for n in t.nodes]
assert "Model" in node_types
assert "Preprocess" in node_types
assert "Postprocess" in node_types
# ---------------------------------------------------------------------------
# fire_detection node structure
# ---------------------------------------------------------------------------
class TestFireDetectionStructure:
"""Input → Model → Postprocess → Output 順序。"""
def test_should_have_four_nodes(self, manager):
t = manager.load_template("fire_detection")
assert len(t.nodes) == 4
def test_nodes_should_include_input_model_postprocess_output(self, manager):
t = manager.load_template("fire_detection")
node_types = [n["type"] for n in t.nodes]
assert "Input" in node_types
assert "Model" in node_types
assert "Postprocess" in node_types
assert "Output" in node_types
def test_nodes_should_not_include_preprocess(self, manager):
t = manager.load_template("fire_detection")
node_types = [n["type"] for n in t.nodes]
assert "Preprocess" not in node_types
# ---------------------------------------------------------------------------
# dual_model_cascade node structure
# ---------------------------------------------------------------------------
class TestDualModelCascadeStructure:
"""Input → Model1 → Postprocess1 → Model2 → Postprocess2 → Output 順序。"""
def test_should_have_six_nodes(self, manager):
t = manager.load_template("dual_model_cascade")
assert len(t.nodes) == 6
def test_should_have_two_model_nodes(self, manager):
t = manager.load_template("dual_model_cascade")
model_nodes = [n for n in t.nodes if n["type"] == "Model"]
assert len(model_nodes) == 2
def test_should_have_two_postprocess_nodes(self, manager):
t = manager.load_template("dual_model_cascade")
pp_nodes = [n for n in t.nodes if n["type"] == "Postprocess"]
assert len(pp_nodes) == 2
# ---------------------------------------------------------------------------
# save_as_template
# ---------------------------------------------------------------------------
class TestSaveAsTemplate:
def _sample_config(self):
return {
"nodes": [
{"id": "n1", "type": "Input"},
{"id": "n2", "type": "Output"},
],
"connections": [
{"from": "n1", "to": "n2"},
],
}
def test_should_return_pipeline_template_instance(self, manager):
t = manager.save_as_template(
self._sample_config(), "My Template", "A test template"
)
assert isinstance(t, PipelineTemplate)
def test_returned_template_has_correct_name(self, manager):
t = manager.save_as_template(self._sample_config(), "Custom Pipeline", "desc")
assert t.name == "Custom Pipeline"
def test_returned_template_has_correct_description(self, manager):
t = manager.save_as_template(self._sample_config(), "name", "My description")
assert t.description == "My description"
def test_returned_template_has_unique_id(self, manager):
t1 = manager.save_as_template(self._sample_config(), "T1", "desc")
t2 = manager.save_as_template(self._sample_config(), "T2", "desc")
assert t1.template_id != t2.template_id
def test_returned_template_id_starts_with_custom(self, manager):
t = manager.save_as_template(self._sample_config(), "My Template", "desc")
assert t.template_id.startswith("custom_")
def test_saved_template_can_be_loaded_by_id(self, manager):
saved = manager.save_as_template(self._sample_config(), "Loadable", "desc")
loaded = manager.load_template(saved.template_id)
assert loaded.template_id == saved.template_id
assert loaded.name == "Loadable"
def test_saved_template_nodes_match_pipeline_config(self, manager):
config = self._sample_config()
saved = manager.save_as_template(config, "Node Test", "desc")
assert saved.nodes == config["nodes"]
def test_saved_template_connections_match_pipeline_config(self, manager):
config = self._sample_config()
saved = manager.save_as_template(config, "Conn Test", "desc")
assert saved.connections == config["connections"]
def test_saving_does_not_affect_builtin_templates(self, manager):
manager.save_as_template(self._sample_config(), "Extra", "desc")
builtins = manager.get_builtin_templates()
assert len(builtins) == 3
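In practice, the save/load cycle exercised above reads like this (a usage sketch based only on the tested API):

    manager = TemplateManager()
    template = manager.save_as_template(
        {
            "nodes": [{"id": "n1", "type": "Input"}, {"id": "n2", "type": "Output"}],
            "connections": [{"from": "n1", "to": "n2"}],
        },
        "My Pipeline",
        "saved from the editor",
    )
    assert template.template_id.startswith("custom_")
    assert manager.load_template(template.template_id).name == "My Pipeline"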


@ -0,0 +1,123 @@
"""
ui/components/device_management_panel.py
DeviceManagementPanel: a QWidget that displays the status of all connected
NPU dongles and provides manual/automatic assignment controls.
"""
from __future__ import annotations
from typing import List, Optional
from PyQt5.QtCore import QTimer, pyqtSignal
from PyQt5.QtWidgets import (
QHBoxLayout,
QLabel,
QPushButton,
QVBoxLayout,
QWidget,
)
from core.device.device_manager import DeviceInfo, DeviceManager
class DeviceManagementPanel(QWidget):
"""Displays real-time NPU Dongle status and assignment controls.
Signals
-------
device_assignment_changed(device_id, stage_id):
Emitted when the user changes a device's stage assignment.
"""
device_assignment_changed = pyqtSignal(str, str)
def __init__(
self,
device_manager: DeviceManager,
parent: Optional[QWidget] = None,
) -> None:
super().__init__(parent)
self._device_manager = device_manager
self._devices: List[DeviceInfo] = []
self._auto_refresh_interval_ms: int = 0
self._timer: Optional[QTimer] = None
self._setup_ui()
self.refresh()
# ------------------------------------------------------------------
# UI construction
# ------------------------------------------------------------------
def _setup_ui(self) -> None:
layout = QVBoxLayout()
# Toolbar row: Auto Balance button
toolbar = QHBoxLayout()
self.auto_balance_button = QPushButton("Auto Balance")
self.auto_balance_button.clicked.connect(self._on_auto_balance)
toolbar.addWidget(self.auto_balance_button)
toolbar.addStretch()
# Device cards area
self._cards_layout = QVBoxLayout()
self._no_device_label = QLabel("No devices connected")
layout.addLayout(toolbar)
layout.addWidget(self._no_device_label)
layout.addLayout(self._cards_layout)
self.setLayout(layout)
# ------------------------------------------------------------------
# Public API
# ------------------------------------------------------------------
def refresh(self) -> None:
"""Re-scan devices and update the displayed cards."""
self._devices = self._device_manager.scan_devices()
self._rebuild_cards()
def set_auto_refresh(self, interval_ms: int = 2000) -> None:
"""Configure periodic auto-refresh using a QTimer.
Parameters
----------
interval_ms:
Refresh interval in milliseconds. Defaults to 2000 ms.
"""
if interval_ms <= 0:
if self._timer is not None:
self._timer.stop()
return
self._auto_refresh_interval_ms = interval_ms
if self._timer is None:
self._timer = QTimer(self)
self._timer.timeout.connect(self.refresh)
self._timer.start(interval_ms)
# ------------------------------------------------------------------
# Private helpers
# ------------------------------------------------------------------
def _rebuild_cards(self) -> None:
"""Recreate device card widgets from the current device list."""
if not self._devices:
self._no_device_label.setVisible(True)
return
self._no_device_label.setVisible(False)
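# NOTE: for now this only toggles the empty-state label; per-device card
# widgets are presumably appended to _cards_layout in a later iteration.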
def _on_auto_balance(self) -> None:
"""Handle Auto Balance button click."""
if not self._devices:
return
stage_ids = [
d.assigned_stage for d in self._devices if d.assigned_stage
]
if not stage_ids:
return
recommendations = self._device_manager.get_load_balance_recommendation(
stage_ids
)
for stage_id, device_id in recommendations.items():
if device_id:
self.device_assignment_changed.emit(device_id, stage_id)


@ -0,0 +1,97 @@
"""
ui/components/performance_dashboard.py
PerformanceDashboard 顯示即時 FPS 與延遲數值的 QWidget
使用 pyqtgraph 繪製折線圖如可用否則降級為純 QLabel 顯示數值
避免 import error 導致應用崩潰
"""
from typing import Any, Dict, Optional
from PyQt5.QtCore import pyqtSignal
from PyQt5.QtWidgets import QHBoxLayout, QLabel, QVBoxLayout, QWidget
try:
import pyqtgraph as pg # type: ignore
_PYQTGRAPH_AVAILABLE = True
except ImportError:
_PYQTGRAPH_AVAILABLE = False
# TODO: Phase 2 - once pyqtgraph is available, switch to line charts of historical FPS/latency
class PerformanceDashboard(QWidget):
"""即時效能儀錶板元件。
顯示當前 FPS平均延遲與 p95 延遲
接受 update_stats(stats) 推送的數據並更新 QLabel 顯示值
"""
update_requested = pyqtSignal(dict)
def __init__(self, parent: Optional[QWidget] = None) -> None:
super().__init__(parent)
# internal state
self.current_fps: float = 0.0
self.current_avg_latency_ms: float = 0.0
self.current_p95_latency_ms: float = 0.0
self.display_window_seconds: int = 60
# UI elements (dynamic value labels; static labels provide the "FPS:" style prefixes)
self.fps_label = QLabel("0.0")
self.avg_latency_label = QLabel("0.0")
self.p95_latency_label = QLabel("0.0")
self._setup_ui()
def _setup_ui(self) -> None:
layout = QVBoxLayout()
fps_row = QHBoxLayout()
fps_row.addWidget(QLabel("FPS:"))
fps_row.addWidget(self.fps_label)
avg_row = QHBoxLayout()
avg_row.addWidget(QLabel("Avg Latency:"))
avg_row.addWidget(self.avg_latency_label)
p95_row = QHBoxLayout()
p95_row.addWidget(QLabel("P95 Latency:"))
p95_row.addWidget(self.p95_latency_label)
layout.addLayout(fps_row)
layout.addLayout(avg_row)
layout.addLayout(p95_row)
self.setLayout(layout)
def update_stats(self, stats: Dict[str, Any]) -> None:
"""接收效能數據並更新顯示。
Args:
stats: 包含 "fps""avg_latency_ms""p95_latency_ms" 的字典
"""
self.current_fps = float(stats.get("fps", 0.0))
self.current_avg_latency_ms = float(stats.get("avg_latency_ms", 0.0))
self.current_p95_latency_ms = float(stats.get("p95_latency_ms", 0.0))
self.fps_label.setText(f"{self.current_fps:.1f} FPS")
self.avg_latency_label.setText(f"{self.current_avg_latency_ms:.1f} ms")
self.p95_latency_label.setText(f"{self.current_p95_latency_ms:.1f} ms")
def reset(self) -> None:
"""清空所有顯示值回到初始狀態0"""
self.current_fps = 0.0
self.current_avg_latency_ms = 0.0
self.current_p95_latency_ms = 0.0
self.fps_label.setText("0.0 FPS")
self.avg_latency_label.setText("0.0 ms")
self.p95_latency_label.setText("0.0 ms")
def set_display_window(self, seconds: int = 60) -> None:
"""設定圖表顯示的時間視窗(秒)。
Args:
seconds: 要顯示的歷史時間範圍預設 60
"""
self.display_window_seconds = seconds


@ -0,0 +1,207 @@
"""
ui/dialogs/benchmark_dialog.py
BenchmarkDialog 一鍵啟動 Benchmark QDialog
顯示三階段進度條熱機/循序/平行即時 FPS完成後加速倍數大字體
以及循序 vs 平行的 FPS 與延遲對比表
Benchmark 執行透過 QThread 進行避免 UI 凍結
pipeline_config 為空顯示提示訊息並禁用開始按鈕
"""
from typing import Any, List, Optional
from PyQt5.QtCore import QThread, pyqtSignal
from PyQt5.QtWidgets import (
QDialog,
QHBoxLayout,
QLabel,
QProgressBar,
QPushButton,
QTableWidget,
QTableWidgetItem,
QVBoxLayout,
QWidget,
)
class _BenchmarkWorker(QThread):
"""在背景執行緒執行 benchmark避免 UI 凍結。"""
progress_updated = pyqtSignal(str, int)
result_ready = pyqtSignal(object, object, float)
error_occurred = pyqtSignal(str)
def __init__(self, benchmarker: Any) -> None:
super().__init__()
self._benchmarker = benchmarker
def run(self) -> None:
try:
seq_result, par_result, speedup = self._benchmarker.run_full_benchmark(
progress_callback=self._on_progress
)
self.result_ready.emit(seq_result, par_result, speedup)
except Exception as exc:
self.error_occurred.emit(str(exc))
def _on_progress(self, phase: str, value: int) -> None:
self.progress_updated.emit(phase, value)
class BenchmarkDialog(QDialog):
"""Benchmark 觸發與結果顯示對話框。
Args:
parent: 父視窗
pipeline_config: 目前的 pipeline Stage 設定列表若為空禁用開始按鈕
"""
def __init__(
self,
parent: Optional[QWidget],
pipeline_config: List[Any],
) -> None:
super().__init__(parent)
self._pipeline_config = pipeline_config
self.seq_result: Optional[Any] = None
self.par_result: Optional[Any] = None
self.current_phase: str = ""
self._worker: Optional[_BenchmarkWorker] = None
self.setWindowTitle("Performance Benchmark")
# UI elements
self.info_label = QLabel("")
self.progress_bar = QProgressBar()
self.progress_bar.setMinimum(0)
self.progress_bar.setMaximum(100)
self.fps_label = QLabel("FPS: —")
self.phase_label = QLabel("")
self.speedup_label = QLabel("")
self.result_table = QTableWidget(2, 3)
self.result_table.setHorizontalHeaderLabels(["模式", "FPS", "Avg Latency (ms)"])
self.start_button = QPushButton("開始 Benchmark")
self.close_button = QPushButton("關閉")
self._setup_ui()
self._apply_initial_state()
def _setup_ui(self) -> None:
layout = QVBoxLayout()
layout.addWidget(self.info_label)
progress_row = QHBoxLayout()
progress_row.addWidget(self.progress_bar)
progress_row.addWidget(self.phase_label)
layout.addLayout(progress_row)
fps_row = QHBoxLayout()
fps_row.addWidget(QLabel("即時 FPS"))
fps_row.addWidget(self.fps_label)
layout.addLayout(fps_row)
layout.addWidget(self.speedup_label)
layout.addWidget(self.result_table)
btn_row = QHBoxLayout()
btn_row.addWidget(self.start_button)
btn_row.addWidget(self.close_button)
layout.addLayout(btn_row)
self.setLayout(layout)
def _apply_initial_state(self) -> None:
if not self._pipeline_config:
self.info_label.setText("尚未設定 Pipeline請先在 Pipeline Editor 中建立 Stage。")
self.start_button.setEnabled(False)
else:
self.info_label.setText(f"已載入 {len(self._pipeline_config)} 個 Stage可開始 Benchmark。")
self.start_button.setEnabled(True)
def start_benchmark(self, benchmarker: Any) -> None:
"""在 QThread 中執行 benchmark避免 UI 凍結。
Args:
benchmarker: PerformanceBenchmarker 實例
"""
self._worker = _BenchmarkWorker(benchmarker)
self._worker.progress_updated.connect(self.update_progress)
self._worker.result_ready.connect(self._on_result_ready)
self._worker.error_occurred.connect(self._on_error)
self._worker.finished.connect(self._worker.deleteLater)
self.start_button.setEnabled(False)
self._worker.start()
def update_progress(self, phase: str, value: int) -> None:
"""更新進度條與當前階段。
Args:
phase: 當前階段名稱"warmup" / "sequential" / "parallel"
value: 進度值0100
"""
_PHASE_LABELS = {
"warmup": "熱機中...",
"sequential": "循序測試...",
"parallel": "平行測試...",
}
self.current_phase = phase
self.progress_bar.setValue(value)
self.phase_label.setText(_PHASE_LABELS.get(phase, phase))
def show_result(
self,
seq_result: Any,
par_result: Any,
speedup: float,
) -> None:
"""顯示 benchmark 結果。
Args:
seq_result: 循序模式的 BenchmarkResult
par_result: 平行模式的 BenchmarkResult
speedup: 加速倍數par.fps / seq.fps
"""
self.seq_result = seq_result
self.par_result = par_result
font = self.speedup_label.font()
font.setPointSize(20)
font.setBold(True)
self.speedup_label.setFont(font)
self.speedup_label.setText(f"{speedup:.1f}x FASTER")
self._populate_table(seq_result, par_result)
def _populate_table(self, seq_result: Any, par_result: Any) -> None:
rows = [
("循序", seq_result),
("平行", par_result),
]
for row_idx, (mode_label, result) in enumerate(rows):
self.result_table.setItem(row_idx, 0, QTableWidgetItem(mode_label))
try:
self.result_table.setItem(row_idx, 1, QTableWidgetItem(f"{result.fps:.1f}"))
self.result_table.setItem(
row_idx, 2, QTableWidgetItem(f"{result.avg_latency_ms:.1f}")
)
except (AttributeError, TypeError):
pass
def _on_result_ready(
self,
seq_result: Any,
par_result: Any,
speedup: float,
) -> None:
self.show_result(seq_result, par_result, speedup)
def _on_error(self, message: str) -> None:
self.info_label.setText(f"Benchmark 失敗:{message}")
self.progress_bar.setValue(0)
self._worker = None
self.start_button.setEnabled(True)
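Typical wiring from the dashboard side would be along these lines (a sketch; the benchmarker and stage_configs names are assumptions):

    dialog = BenchmarkDialog(parent=self, pipeline_config=stage_configs)
    dialog.show()
    dialog.start_benchmark(benchmarker)  # worker signals drive progress and results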


@ -1163,22 +1163,22 @@ Stage Configurations:
def update_terminal_output(self, terminal_text: str):
"""Update the terminal output display with new text."""
try:
# Use append() instead of setPlainText() for better performance and no truncation
self.terminal_output_display.append(terminal_text.rstrip('\n'))
# Auto-scroll to bottom
scrollbar = self.terminal_output_display.verticalScrollBar()
scrollbar.setValue(scrollbar.maximum())
- # Optional: Limit total lines to prevent excessive memory usage
- # Only trim if we have way too many lines (e.g., > 1000)
+ # Limit total lines to prevent excessive memory usage.
+ # Use toPlainText/setPlainText to avoid QTextCursor cross-thread warnings.
document = self.terminal_output_display.document()
if document.lineCount() > 1000:
- cursor = self.terminal_output_display.textCursor()
- cursor.movePosition(cursor.Start)
- cursor.movePosition(cursor.Down, cursor.KeepAnchor, 200)  # Select first 200 lines
- cursor.removeSelectedText()
+ lines = self.terminal_output_display.toPlainText().split('\n')
+ trimmed = '\n'.join(lines[-800:])  # Keep last 800 lines
+ self.terminal_output_display.setPlainText(trimmed)
+ # Restore scroll to bottom after setPlainText resets it
+ scrollbar.setValue(scrollbar.maximum())
except Exception as e:
print(f"Error updating terminal output: {e}")


@ -0,0 +1,238 @@
"""
ui/dialogs/export_report_dialog.py 效能報告匯出對話框
提供 ExportReportDialog(QDialog)讓使用者選擇報告格式PDF/CSV與儲存路徑
然後觸發 ReportExporter 執行匯出
設計重點
- _collect_report_data() 從各模組收集資料每個來源都用 try/except 保護
- 不在此模組執行實際 benchmark只使用 history 的最新一筆作為 parallel_result
- chart_image_bytes None截圖整合留未來
"""
from __future__ import annotations
from typing import TYPE_CHECKING, List, Optional
from PyQt5.QtWidgets import (
QDialog,
QFileDialog,
QHBoxLayout,
QLabel,
QPushButton,
QRadioButton,
QVBoxLayout,
QWidget,
QLineEdit,
QGroupBox,
QProgressBar,
)
from PyQt5.QtCore import Qt
from core.performance.report_exporter import DeviceSummary, ReportData, ReportExporter
if TYPE_CHECKING:
from core.performance.benchmarker import PerformanceBenchmarker
from core.performance.history import PerformanceHistory
class ExportReportDialog(QDialog):
"""
效能報告匯出對話框
使用者可選擇格式PDF / CSV指定儲存路徑後按匯出
對話框會呼叫 ReportExporter 產出檔案並顯示結果
"""
def __init__(
self,
parent: Optional[QWidget],
benchmarker, # PerformanceBenchmarker | None
history, # PerformanceHistory | None
device_manager, # DeviceManager | None
dashboard, # PerformanceDashboard | None
) -> None:
super().__init__(parent)
self._benchmarker = benchmarker
self._history = history
self._device_manager = device_manager
self._dashboard = dashboard
self._exporter = ReportExporter()
# default format is PDF
self._selected_format: str = "pdf"
self._setup_ui()
# ------------------------------------------------------------------
# UI construction
# ------------------------------------------------------------------
def _setup_ui(self) -> None:
"""建立對話框 UI。"""
self.setWindowTitle("匯出效能報告")
main_layout = QVBoxLayout()
# format selection
format_group = QGroupBox("匯出格式")
format_layout = QHBoxLayout()
self._pdf_radio = QRadioButton("PDF")
self._csv_radio = QRadioButton("CSV")
self._pdf_radio.setChecked(True)
self._pdf_radio.clicked.connect(lambda: self._set_format("pdf"))
self._csv_radio.clicked.connect(lambda: self._set_format("csv"))
format_layout.addWidget(self._pdf_radio)
format_layout.addWidget(self._csv_radio)
format_group.setLayout(format_layout)
main_layout.addWidget(format_group)
# save path
path_layout = QHBoxLayout()
self._path_input = QLineEdit()
self._path_input.setPlaceholderText("儲存路徑…")
self._browse_btn = QPushButton("瀏覽")
self._browse_btn.clicked.connect(self._on_browse)
path_layout.addWidget(self._path_input)
path_layout.addWidget(self._browse_btn)
main_layout.addLayout(path_layout)
# progress bar
self._progress_bar = QProgressBar()
self._progress_bar.setVisible(False)
main_layout.addWidget(self._progress_bar)
# export button
self._export_btn = QPushButton("匯出")
self._export_btn.clicked.connect(self._on_export)
main_layout.addWidget(self._export_btn)
# status label
self._status_label = QLabel("")
main_layout.addWidget(self._status_label)
self.setLayout(main_layout)
# ------------------------------------------------------------------
# format setting
# ------------------------------------------------------------------
def _set_format(self, fmt: str) -> None:
"""設定匯出格式('pdf''csv')。"""
self._selected_format = fmt
# ------------------------------------------------------------------
# event handlers
# ------------------------------------------------------------------
def _on_browse(self) -> None:
"""開啟 QFileDialog 讓使用者選擇儲存路徑。"""
if self._selected_format == "pdf":
file_filter = "PDF 檔案 (*.pdf)"
default_suffix = ".pdf"
else:
file_filter = "CSV 檔案 (*.csv)"
default_suffix = ".csv"
path, _ = QFileDialog.getSaveFileName(
self,
"選擇儲存位置",
f"performance_report{default_suffix}",
file_filter,
)
if path:
self._path_input.setText(path)
def _on_export(self) -> None:
"""執行匯出:收集資料 -> 呼叫 ReportExporter。"""
output_path = self._path_input.text().strip()
if not output_path:
self._status_label.setText("請先指定儲存路徑。")
return
data = self._collect_report_data()
try:
if self._selected_format == "pdf":
result = self._exporter.export_pdf(data, output_path)
else:
result = self._exporter.export_csv(data, output_path)
self._status_label.setText(f"匯出成功:{result}")
except ImportError as e:
self._status_label.setText(f"匯出失敗(缺少函式庫):{e}")
except ValueError as e:
self._status_label.setText(f"匯出失敗(資料不足):{e}")
except Exception as e:
self._status_label.setText(f"匯出失敗:{e}")
# ------------------------------------------------------------------
# Data collection
# ------------------------------------------------------------------
def _collect_report_data(self) -> ReportData:
"""
從各模組收集資料組裝 ReportData
每個來源都用 try/except 保護失敗時使用 None / 空值
不實際執行 benchmark只使用 history 的最新一筆作為 parallel_result
"""
# History records; also pick the most recent sequential / parallel entries from them as results
history_records: list = []
seq_result = None
par_result = None
try:
records = self._history.get_history(limit=20) if self._history else []
history_records = list(records) if records else []
seq_result = next((r for r in history_records if r.mode == "sequential"), None)
par_result = next((r for r in history_records if r.mode == "parallel"), None)
except Exception:
history_records, seq_result, par_result = [], None, None
# Fallback: take the latest entry from benchmarker.history as parallel_result (no new benchmark is run)
if par_result is None:
try:
if self._benchmarker is not None:
hist = self._benchmarker.history
if hist:
par_result = hist[-1]
except Exception:
par_result = None
# Device info
devices: List[DeviceSummary] = []
try:
if self._device_manager is not None:
raw_devices = self._device_manager.scan_devices() or []
devices = self._convert_devices(raw_devices)
except Exception:
devices = []
return ReportData(
sequential_result=seq_result,
parallel_result=par_result,
speedup=None,
history_records=history_records,
devices=devices,
chart_image_bytes=None,  # screenshot integration deferred to the future
)
@staticmethod
def _convert_devices(raw_devices: list) -> List[DeviceSummary]:
"""
DeviceManager 回傳的裝置列表轉換為 DeviceSummary 列表
若轉換失敗略過該裝置
"""
result: List[DeviceSummary] = []
for dev in raw_devices:
try:
result.append(DeviceSummary(
device_id=str(getattr(dev, "device_id", getattr(dev, "id", "unknown"))),
product_name=str(getattr(dev, "product_name", getattr(dev, "model", "unknown"))),
firmware_version=str(getattr(dev, "firmware_version", "unknown")),
is_active=bool(getattr(dev, "is_active", True)),
))
except Exception:
continue
return result
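# ------------------------------------------------------------------
# Usage sketch (illustrative only, not part of the dialog module): how a
# parent window might open the dialog. All collaborator names below are
# assumptions; each may be None, since _collect_report_data() guards every
# source with try/except.
#
#     dialog = ExportReportDialog(
#         parent=main_window,         # any QWidget, or None
#         benchmarker=benchmarker,    # PerformanceBenchmarker | None
#         history=perf_history,       # PerformanceHistory | None
#         device_manager=device_mgr,  # DeviceManager | None
#         dashboard=dashboard,        # PerformanceDashboard | None
#     )
#     dialog.exec_()
# ------------------------------------------------------------------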

View File

@ -59,6 +59,25 @@ from core.nodes.exact_nodes import (
ExactPostprocessNode, ExactOutputNode, EXACT_NODE_TYPES
)
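# Optional-feature guards: each new component import below is wrapped in
# try/except with an *_AVAILABLE flag, so a missing module disables that
# feature instead of breaking the dashboard at import time.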
try:
from ui.components.performance_dashboard import PerformanceDashboard
PERFORMANCE_DASHBOARD_AVAILABLE = True
except ImportError:
PERFORMANCE_DASHBOARD_AVAILABLE = False
try:
from ui.components.device_management_panel import DeviceManagementPanel
from core.device.device_manager import DeviceManager
DEVICE_MANAGEMENT_AVAILABLE = True
except ImportError:
DEVICE_MANAGEMENT_AVAILABLE = False
try:
from ui.dialogs.export_report_dialog import ExportReportDialog
EXPORT_REPORT_AVAILABLE = True
except ImportError:
EXPORT_REPORT_AVAILABLE = False
# Import pipeline analysis functions
try:
from core.pipeline import get_stage_count, analyze_pipeline_stages, get_pipeline_summary
@ -158,6 +177,8 @@ class IntegratedPipelineDashboard(QMainWindow):
self.props_instructions = None
self.node_props_container = None
self.node_props_layout = None
self.device_manager = None
self.device_management_panel = None
self.fps_label = None
self.latency_label = None
self.memory_label = None
@ -895,7 +916,20 @@ class IntegratedPipelineDashboard(QMainWindow):
metrics_layout.addRow("Memory Usage:", self.memory_label)
layout.addWidget(metrics_group)
# Real-time performance monitor
perf_dashboard_group = QGroupBox("即時效能監控")
perf_dashboard_layout = QVBoxLayout(perf_dashboard_group)
if PERFORMANCE_DASHBOARD_AVAILABLE:
self.performance_dashboard = PerformanceDashboard()
else:
self.performance_dashboard = None
if self.performance_dashboard:
perf_dashboard_layout.addWidget(self.performance_dashboard)
else:
perf_dashboard_layout.addWidget(QLabel("PerformanceDashboard 不可用"))
layout.addWidget(perf_dashboard_group)
# Suggestions
suggestions_group = QGroupBox("Optimization Suggestions")
suggestions_layout = QVBoxLayout(suggestions_group)
@ -907,6 +941,26 @@ class IntegratedPipelineDashboard(QMainWindow):
layout.addWidget(suggestions_group)
# Benchmark section
benchmark_group = QGroupBox("效能 Benchmark")
benchmark_layout = QVBoxLayout(benchmark_group)
self.benchmark_button = QPushButton("執行 Benchmark")
self.benchmark_button.setToolTip("比較單 Dongle vs 多 Dongle 的效能差異")
self.benchmark_button.clicked.connect(self.open_benchmark_dialog)
benchmark_layout.addWidget(self.benchmark_button)
layout.addWidget(benchmark_group)
if EXPORT_REPORT_AVAILABLE:
export_group = QGroupBox("報告匯出")
export_layout = QVBoxLayout(export_group)
self.export_report_button = QPushButton("匯出效能報告PDF/CSV")
self.export_report_button.setToolTip("將 Benchmark 結果與歷史記錄匯出為 PDF 或 CSV")
self.export_report_button.clicked.connect(self.open_export_report_dialog)
export_layout.addWidget(self.export_report_button)
layout.addWidget(export_group)
# Deploy section
deploy_group = QGroupBox("Pipeline Deployment")
deploy_layout = QVBoxLayout(deploy_group)
@ -977,12 +1031,23 @@ class IntegratedPipelineDashboard(QMainWindow):
self.dongles_list.addItem("No dongles detected. Click 'Detect Dongles' to scan.")
layout.addWidget(self.dongles_list)
if DEVICE_MANAGEMENT_AVAILABLE:
try:
self.device_manager = DeviceManager()
self.device_management_panel = DeviceManagementPanel(self.device_manager)
self.device_management_panel.set_auto_refresh(3000)
layout.addWidget(self.device_management_panel)
except Exception as e:
err_label = QLabel(f"裝置管理面板初始化失敗:{e}")
err_label.setStyleSheet("color: #f38ba8; font-size: 11px;")
layout.addWidget(err_label)
layout.addStretch()
widget.setWidget(content)
widget.setWidgetResizable(True)
return widget
def setup_menu(self):
"""Setup the menu bar."""
menubar = self.menuBar()
@ -1925,7 +1990,58 @@ class IntegratedPipelineDashboard(QMainWindow):
suggestions.append("Pipeline configuration looks good for optimal performance.")
self.suggestions_text.setPlainText("\n".join(suggestions))
# Update PerformanceDashboard (if available)
if hasattr(self, 'performance_dashboard') and self.performance_dashboard:
self.performance_dashboard.update_stats({
"fps": float(estimated_fps),
"avg_latency_ms": float(estimated_latency),
"p95_latency_ms": float(estimated_latency * 1.5) # 估算 p95
})
def open_benchmark_dialog(self):
"""開啟 Benchmark 對話框。"""
try:
from ui.dialogs.benchmark_dialog import BenchmarkDialog
from core.pipeline import analyze_pipeline_stages
if not self.graph:
QMessageBox.warning(self, "無 Pipeline", "請先建立 Pipeline 再執行 Benchmark。")
return
stages = analyze_pipeline_stages(self.graph)
# analyze_pipeline_stages returns List[PipelineStage]
pipeline_config = stages if stages else []
dialog = BenchmarkDialog(self, pipeline_config)
dialog.exec_()
except ImportError as e:
QMessageBox.warning(self, "功能未啟用", f"Benchmark 功能暫不可用:{e}")
def open_export_report_dialog(self):
"""開啟效能報告匯出對話框。"""
try:
from ui.dialogs.export_report_dialog import ExportReportDialog
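# NOTE: the three imports below are not referenced in this method; they
# presumably act as availability probes so a missing performance stack
# raises here and is reported by the generic except handler.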
from core.performance.benchmarker import PerformanceBenchmarker
from core.performance.history import PerformanceHistory
from core.device.device_manager import DeviceManager
benchmarker = getattr(self, '_benchmarker', None)
history = getattr(self, '_perf_history', None)
device_manager = getattr(self, 'device_manager', None)
dashboard = getattr(self, 'performance_dashboard', None)
dialog = ExportReportDialog(
parent=self,
benchmarker=benchmarker,
history=history,
device_manager=device_manager,
dashboard=dashboard,
)
dialog.exec_()
except Exception as e:
QMessageBox.warning(self, "匯出功能", f"無法開啟報告匯出:{e}")
def delete_selected_nodes(self):
"""Delete selected nodes from the graph."""
if not self.graph: