Add comprehensive inference pipeline system with UI framework

- Add InferencePipeline: Multi-stage inference orchestrator with thread-safe queue management
- Add MultiDongle: Hardware abstraction layer for Kneron NPU devices
- Add comprehensive UI framework with node-based pipeline editor
- Add performance estimation and monitoring capabilities
- Add extensive documentation and examples
- Update project structure and dependencies

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Masonmason 2025-07-04 23:33:16 +08:00
parent c85407c074
commit 0ae1f1c0e2
12 changed files with 6502 additions and 2 deletions

191
CLAUDE.md Normal file

@ -0,0 +1,191 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
**cluster4npu** is a high-performance multi-stage inference pipeline system for Kneron NPU dongles. The project enables flexible single-stage and cascaded multi-stage AI inference workflows optimized for real-time video processing and high-throughput scenarios.
### Core Architecture
- **InferencePipeline**: Main orchestrator managing multi-stage workflows with automatic queue management and thread coordination
- **MultiDongle**: Hardware abstraction layer for Kneron NPU devices (KL520, KL720, etc.)
- **StageConfig**: Configuration system for individual pipeline stages
- **PipelineData**: Data structure that flows through pipeline stages, accumulating results
- **PreProcessor/PostProcessor**: Flexible data transformation components for inter-stage processing
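For orientation, here is a minimal sketch of how a `PipelineData` instance accumulates results as it moves through stages. Field names follow the definitions in `InferencePipeline.py`; the stage names and values are illustrative only, and the snippet assumes the module (and the Kneron `kp` package it pulls in) is importable:
```python
# Hypothetical sketch: how stage results accumulate in PipelineData.
# Stage names and probability values are illustrative, not real output.
import time
from InferencePipeline import PipelineData

item = PipelineData(
    data=None,                 # payload; normally a frame (np.ndarray)
    metadata={"start_timestamp": time.time()},
    stage_results={},
    pipeline_id="pipeline_0",
    timestamp=time.time(),
)
# After stage 1 runs, its result is recorded and becomes the payload for stage 2:
item.stage_results["detection"] = {"probability": 0.91, "result": "Fire"}
# After stage 2:
item.stage_results["classification"] = {"combined_probability": 0.84}
```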
### Key Design Patterns
- **Producer-Consumer**: Each stage runs in separate threads with input/output queues
- **Pipeline Architecture**: Linear data flow through configurable stages with result accumulation
- **Hardware Abstraction**: MultiDongle encapsulates Kneron SDK complexity
- **Callback-Based**: Asynchronous result handling via configurable callbacks
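The producer-consumer pattern, stripped to its core; this is a hedged simplification of what `PipelineStage` does, without the MultiDongle integration:
```python
# Minimal producer-consumer sketch of one stage: queues, a worker thread,
# and a None sentinel for shutdown. Simplified for illustration.
import queue
import threading

def run_stage(process, in_q: queue.Queue, out_q: queue.Queue) -> None:
    while True:
        item = in_q.get()
        if item is None:          # sentinel: shut down cleanly
            break
        out_q.put(process(item))  # hand result to the next stage

in_q, out_q = queue.Queue(maxsize=50), queue.Queue(maxsize=50)
worker = threading.Thread(target=run_stage, args=(lambda x: x * 2, in_q, out_q), daemon=True)
worker.start()
in_q.put(21)
print(out_q.get())  # -> 42
in_q.put(None)      # stop the worker
worker.join()
```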
## Development Commands
### Environment Setup
```bash
# Setup virtual environment with uv
uv venv
source .venv/bin/activate # Windows: .venv\Scripts\activate
# Install dependencies
uv pip install -r requirements.txt
```
### Running Examples
```bash
# Single-stage pipeline
uv run python src/cluster4npu/test.py --example single
# Two-stage cascade pipeline
uv run python src/cluster4npu/test.py --example cascade
# Complex multi-stage pipeline
uv run python src/cluster4npu/test.py --example complex
# Basic MultiDongle usage
uv run python src/cluster4npu/Multidongle.py
# Complete UI application with full workflow
uv run python UI.py
# UI integration examples
uv run python ui_integration_example.py
# Test UI configuration system
uv run python ui_config.py
```
### UI Application Workflow
UI.py provides a complete visual workflow:
1. **Dashboard/Home** - Main entry point with recent files
2. **Pipeline Editor** - Visual node-based pipeline design
3. **Stage Configuration** - Dongle allocation and hardware setup
4. **Performance Estimation** - FPS calculations and optimization
5. **Save & Deploy** - Export configurations and cost estimation
6. **Monitoring & Management** - Real-time pipeline monitoring
```bash
# Access different workflow stages directly:
# 1. Create new pipeline → Pipeline Editor
# 2. Configure Stages & Deploy → Stage Configuration
# 3. Pipeline menu → Performance Analysis → Performance Panel
# 4. Pipeline menu → Deploy Pipeline → Save & Deploy Dialog
```
### Testing
```bash
# Run pipeline tests
uv run python test_pipeline.py
# Test MultiDongle functionality
uv run python src/cluster4npu/test.py
```
## Hardware Requirements
- **Kneron NPU dongles**: KL520, KL720, etc.
- **Firmware files**: `fw_scpu.bin`, `fw_ncpu.bin`
- **Models**: `.nef` format files
- **USB ports**: Multiple ports required for multi-dongle setups
## Critical Implementation Notes
### Pipeline Configuration
- Each stage requires unique `stage_id` and dedicated `port_ids`
- Queue sizes (`max_queue_size`) must be balanced between memory usage and throughput
- Stages process sequentially - output from stage N becomes input to stage N+1
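For example, a two-stage configuration that respects these rules might look like the following; the port IDs and file names are placeholders to replace with your own:
```python
# Hypothetical two-stage configuration; port IDs and file paths are
# placeholders and must match your actual hardware and model files.
from InferencePipeline import InferencePipeline, StageConfig

stages = [
    StageConfig(stage_id="detect", port_ids=[28, 30],    # dongles for stage 1
                scpu_fw_path="fw_scpu.bin", ncpu_fw_path="fw_ncpu.bin",
                model_path="detect.nef", upload_fw=True, max_queue_size=30),
    StageConfig(stage_id="classify", port_ids=[32, 34],  # disjoint ports for stage 2
                scpu_fw_path="fw_scpu.bin", ncpu_fw_path="fw_ncpu.bin",
                model_path="classify.nef", upload_fw=True, max_queue_size=30),
]
pipeline = InferencePipeline(stages)  # output of "detect" feeds "classify"
```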
### Thread Safety
- All pipeline operations are thread-safe
- Each stage runs in isolated worker threads
- Use callbacks for result handling, not direct queue access
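Continuing the sketch above, result handling should go through a callback; the callback runs on a pipeline thread, so keep it lightweight:
```python
# Register a result callback instead of reading stage queues directly.
def on_result(pipeline_data):
    # Runs on a pipeline thread; keep this handler lightweight.
    print(pipeline_data.stage_results)

pipeline.set_result_callback(on_result)
```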
### Data Flow
```
Input → Stage1    → Stage2    → ...       → StageN → Output
          ↓           ↓           ↓            ↓
        Queue       Process     Process      Result
                    + Results   + Results    Callback
```
### Hardware Management
- Always call `initialize()` before `start()`
- Always call `stop()` for clean shutdown
- Firmware upload (`upload_fw=True`) only needed once per session
- Port IDs must match actual USB connections
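A minimal lifecycle sketch that follows these rules; `config` is a `StageConfig` as sketched above, and `frame` is assumed to be a BGR `np.ndarray`:
```python
# Lifecycle sketch: initialize() before start(), stop() guaranteed via finally.
pipeline = InferencePipeline([config])  # config: a StageConfig as sketched above
pipeline.initialize()                   # connects dongles, uploads firmware/model
pipeline.start()
try:
    pipeline.put_data(frame)            # frame: assumed BGR np.ndarray
finally:
    pipeline.stop()                     # always runs, even on error
```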
### Error Handling
- Pipeline continues on individual stage errors
- Failed stages return error results rather than blocking
- Comprehensive statistics available via `get_pipeline_statistics()`
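For example, to check error counts and per-stage timing (keys as defined by `get_pipeline_statistics()`):
```python
# Inspect pipeline health via the built-in statistics dict.
stats = pipeline.get_pipeline_statistics()
print(stats["pipeline_errors"], "errors,",
      stats["pipeline_completed"], "completed")
for s in stats["stage_statistics"]:
    print(s["stage_id"], "avg", f"{s['avg_processing_time']:.3f}s")
```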
## UI Application Architecture
### Complete Workflow Components
- **DashboardLogin**: Main entry point with project management
- **PipelineEditor**: Node-based visual pipeline design using NodeGraphQt
- **StageConfigurationDialog**: Hardware allocation and dongle assignment
- **PerformanceEstimationPanel**: Real-time performance analysis and optimization
- **SaveDeployDialog**: Export configurations and deployment cost estimation
- **MonitoringDashboard**: Live pipeline monitoring and cluster management
### UI Integration System
- **ui_config.py**: Configuration management and UI/core integration
- **ui_integration_example.py**: Demonstrates conversion from UI to core tools
- **UIIntegration class**: Bridges UI configurations to InferencePipeline
### Key UI Features
- **Auto-dongle allocation**: Smart assignment of dongles to pipeline stages
- **Performance estimation**: Real-time FPS and latency calculations
- **Cost analysis**: Hardware and operational cost projections
- **Export formats**: Python scripts, JSON configs, YAML, Docker containers
- **Live monitoring**: Real-time metrics and cluster scaling controls
## Code Patterns
### Basic Pipeline Setup
```python
config = StageConfig(
    stage_id="unique_name",
    port_ids=[28, 32],
    scpu_fw_path="fw_scpu.bin",
    ncpu_fw_path="fw_ncpu.bin",
    model_path="model.nef",
    upload_fw=True
)
pipeline = InferencePipeline([config])
pipeline.initialize()
pipeline.start()
pipeline.set_result_callback(callback_func)
# ... processing ...
pipeline.stop()
```
### Inter-Stage Processing
```python
# Custom preprocessing for stage input
preprocessor = PreProcessor(resize_fn=custom_resize_func)
# Custom postprocessing for stage output
postprocessor = PostProcessor(process_fn=custom_process_func)
config = StageConfig(
    # ... basic config ...
    input_preprocessor=preprocessor,
    output_postprocessor=postprocessor
)
```
## Performance Considerations
- **Queue Sizing**: Smaller queues = lower latency, larger queues = higher throughput
- **Dongle Distribution**: Spread dongles across stages for optimal parallelization
- **Processing Functions**: Keep preprocessors/postprocessors lightweight
- **Memory Management**: Monitor queue sizes to prevent memory buildup
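A hedged sketch of queue monitoring built on the statistics API; the 80% threshold is an arbitrary example, not a project convention:
```python
# Watch queue depths to catch backpressure early (illustrative threshold).
MAX_QUEUE_SIZE = 50  # whatever max_queue_size you configured
stats = pipeline.get_pipeline_statistics()
for s in stats["stage_statistics"]:
    if s["input_queue_size"] > 0.8 * MAX_QUEUE_SIZE:
        print(f"Stage {s['stage_id']} is backing up; consider more dongles")
```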

BIN
Flowchart.jpg Normal file

Binary file not shown. (new file, 210 KiB)

489
README.md

@ -1 +1,488 @@
# InferencePipeline
A high-performance multi-stage inference pipeline system designed for Kneron NPU dongles, enabling flexible single-stage and cascaded multi-stage AI inference workflows.
<!-- ## Features
- **Single-stage inference**: Direct replacement for MultiDongle with enhanced features
- **Multi-stage cascaded pipelines**: Chain multiple AI models for complex workflows
- **Flexible preprocessing/postprocessing**: Custom data transformation between stages
- **Thread-safe design**: Concurrent processing with automatic queue management
- **Real-time performance**: Optimized for live video streams and high-throughput scenarios
- **Comprehensive statistics**: Built-in performance monitoring and metrics -->
## Installation
This project uses [uv](https://github.com/astral-sh/uv) for fast Python package management.
```bash
# Install uv if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh
# Create and activate virtual environment
uv venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
# Install dependencies
uv pip install -r requirements.txt
```
### Requirements
```txt
"numpy>=2.2.6",
"opencv-python>=4.11.0.86",
```
### Hardware Requirements
- Kneron AI dongles (KL520, KL720, etc.)
- USB ports for device connections
- Compatible firmware files (`fw_scpu.bin`, `fw_ncpu.bin`)
- Trained model files (`.nef` format)
## Quick Start
### Single-Stage Pipeline
Replace your existing MultiDongle usage with InferencePipeline for enhanced features:
```python
from InferencePipeline import InferencePipeline, StageConfig
# Configure single stage
stage_config = StageConfig(
    stage_id="fire_detection",
    port_ids=[28, 32],  # USB port IDs for your dongles
    scpu_fw_path="fw_scpu.bin",
    ncpu_fw_path="fw_ncpu.bin",
    model_path="fire_detection_520.nef",
    upload_fw=True
)
# Create and start pipeline
pipeline = InferencePipeline([stage_config], pipeline_name="FireDetection")
pipeline.initialize()
pipeline.start()
# Set up result callback
def handle_result(pipeline_data):
    result = pipeline_data.stage_results.get("fire_detection", {})
    print(f"🔥 Detection: {result.get('result', 'Unknown')} "
          f"(Probability: {result.get('probability', 0.0):.3f})")
pipeline.set_result_callback(handle_result)
# Process frames
import cv2
cap = cv2.VideoCapture(0)
try:
    while True:
        ret, frame = cap.read()
        if ret:
            pipeline.put_data(frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
finally:
    cap.release()
    pipeline.stop()
```
### Multi-Stage Cascade Pipeline
Chain multiple models for complex workflows:
```python
from InferencePipeline import InferencePipeline, StageConfig
from Multidongle import PreProcessor, PostProcessor
import cv2

# Custom preprocessing for second stage
def roi_extraction(frame, target_size):
    """Extract region of interest from detection results"""
    # Extract center region as example
    h, w = frame.shape[:2]
    center_crop = frame[h//4:3*h//4, w//4:3*w//4]
    return cv2.resize(center_crop, target_size)
# Custom result fusion
def combine_results(raw_output, **kwargs):
    """Combine detection + classification results"""
    classification_prob = float(raw_output[0]) if raw_output.size > 0 else 0.0
    detection_conf = kwargs.get('detection_conf', 0.5)
    # Weighted combination
    combined_score = (classification_prob * 0.7) + (detection_conf * 0.3)
    return {
        'combined_probability': combined_score,
        'classification_prob': classification_prob,
        'detection_conf': detection_conf,
        'result': 'Fire Detected' if combined_score > 0.6 else 'No Fire',
        'confidence': 'High' if combined_score > 0.8 else 'Low'
    }
# Stage 1: Object Detection
detection_stage = StageConfig(
    stage_id="object_detection",
    port_ids=[28, 30],
    scpu_fw_path="fw_scpu.bin",
    ncpu_fw_path="fw_ncpu.bin",
    model_path="object_detection_520.nef",
    upload_fw=True
)

# Stage 2: Fire Classification with preprocessing
classification_stage = StageConfig(
    stage_id="fire_classification",
    port_ids=[32, 34],
    scpu_fw_path="fw_scpu.bin",
    ncpu_fw_path="fw_ncpu.bin",
    model_path="fire_classification_520.nef",
    upload_fw=True,
    input_preprocessor=PreProcessor(resize_fn=roi_extraction),
    output_postprocessor=PostProcessor(process_fn=combine_results)
)

# Create two-stage pipeline
pipeline = InferencePipeline(
    [detection_stage, classification_stage],
    pipeline_name="DetectionClassificationCascade"
)

# Enhanced result handler
def handle_cascade_result(pipeline_data):
    detection = pipeline_data.stage_results.get("object_detection", {})
    classification = pipeline_data.stage_results.get("fire_classification", {})
    print(f"🎯 Detection: {detection.get('result', 'Unknown')} "
          f"(Conf: {detection.get('probability', 0.0):.3f})")
    print(f"🔥 Classification: {classification.get('result', 'Unknown')} "
          f"(Combined: {classification.get('combined_probability', 0.0):.3f})")
    print(f"⏱️ Processing Time: {pipeline_data.metadata.get('total_processing_time', 0.0):.3f}s")
    print("-" * 50)

pipeline.set_result_callback(handle_cascade_result)
pipeline.initialize()
pipeline.start()
# Your processing loop here...
```
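One possible processing loop for the cascade above; the camera index and the ~30 FPS pacing are assumptions to adapt to your source:
```python
# One possible processing loop for the cascade; camera index 0 and the
# ~30 FPS pacing are assumptions, adjust for your source.
import time
import cv2

cap = cv2.VideoCapture(0)
try:
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        if not pipeline.put_data(frame, timeout=0.5):
            pass  # input queue full: drop the frame rather than block
        time.sleep(0.033)
except KeyboardInterrupt:
    pass
finally:
    cap.release()
    pipeline.stop()
```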
## Usage Examples
### Example 1: Real-time Webcam Processing
```python
from InferencePipeline import InferencePipeline, StageConfig
from Multidongle import WebcamSource
import time

def run_realtime_detection():
    # Configure pipeline
    config = StageConfig(
        stage_id="realtime_detection",
        port_ids=[28, 32],
        scpu_fw_path="fw_scpu.bin",
        ncpu_fw_path="fw_ncpu.bin",
        model_path="your_model.nef",
        upload_fw=True,
        max_queue_size=30  # Prevent memory buildup
    )
    pipeline = InferencePipeline([config])
    pipeline.initialize()
    pipeline.start()

    # Use webcam source
    source = WebcamSource(camera_id=0)
    source.start()

    def display_results(pipeline_data):
        result = pipeline_data.stage_results["realtime_detection"]
        probability = result.get('probability', 0.0)
        detection = result.get('result', 'Unknown')
        # Your visualization logic here
        print(f"Detection: {detection} ({probability:.3f})")

    pipeline.set_result_callback(display_results)

    try:
        while True:
            frame = source.get_frame()
            if frame is not None:
                pipeline.put_data(frame)
            time.sleep(0.033)  # ~30 FPS
    except KeyboardInterrupt:
        print("Stopping...")
    finally:
        source.stop()
        pipeline.stop()

if __name__ == "__main__":
    run_realtime_detection()
```
### Example 2: Complex Multi-Modal Pipeline
```python
import cv2
from InferencePipeline import InferencePipeline, StageConfig
from Multidongle import PreProcessor, PostProcessor

def run_multimodal_pipeline():
    """Multi-modal fire detection with RGB, edge, and thermal-like analysis"""

    def edge_preprocessing(frame, target_size):
        """Extract edge features"""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        edges_3ch = cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)
        return cv2.resize(edges_3ch, target_size)

    def thermal_preprocessing(frame, target_size):
        """Simulate thermal processing"""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        thermal_like = hsv[:, :, 2]  # Value channel
        thermal_3ch = cv2.cvtColor(thermal_like, cv2.COLOR_GRAY2BGR)
        return cv2.resize(thermal_3ch, target_size)

    def fusion_postprocessing(raw_output, **kwargs):
        """Fuse results from multiple modalities"""
        if raw_output.size > 0:
            current_prob = float(raw_output[0])
            rgb_conf = kwargs.get('rgb_conf', 0.5)
            edge_conf = kwargs.get('edge_conf', 0.5)
            # Weighted fusion
            fused_prob = (current_prob * 0.5) + (rgb_conf * 0.3) + (edge_conf * 0.2)
            return {
                'fused_probability': fused_prob,
                'modality_scores': {
                    'thermal': current_prob,
                    'rgb': rgb_conf,
                    'edge': edge_conf
                },
                'result': 'Fire Detected' if fused_prob > 0.6 else 'No Fire',
                'confidence': 'Very High' if fused_prob > 0.9 else 'High' if fused_prob > 0.7 else 'Medium'
            }
        return {'fused_probability': 0.0, 'result': 'No Fire'}

    # Define stages
    stages = [
        StageConfig("rgb_analysis", [28, 30], "fw_scpu.bin", "fw_ncpu.bin", "rgb_model.nef", True),
        StageConfig("edge_analysis", [32, 34], "fw_scpu.bin", "fw_ncpu.bin", "edge_model.nef", True,
                    input_preprocessor=PreProcessor(resize_fn=edge_preprocessing)),
        StageConfig("thermal_analysis", [36, 38], "fw_scpu.bin", "fw_ncpu.bin", "thermal_model.nef", True,
                    input_preprocessor=PreProcessor(resize_fn=thermal_preprocessing)),
        StageConfig("fusion", [40, 42], "fw_scpu.bin", "fw_ncpu.bin", "fusion_model.nef", True,
                    output_postprocessor=PostProcessor(process_fn=fusion_postprocessing))
    ]
    pipeline = InferencePipeline(stages, pipeline_name="MultiModalFireDetection")

    def handle_multimodal_result(pipeline_data):
        print(f"\n🔥 Multi-Modal Fire Detection Results:")
        for stage_id, result in pipeline_data.stage_results.items():
            if 'probability' in result:
                print(f"  {stage_id}: {result['result']} ({result['probability']:.3f})")
        if 'fusion' in pipeline_data.stage_results:
            fusion = pipeline_data.stage_results['fusion']
            print(f"  🎯 FINAL: {fusion['result']} (Fused: {fusion['fused_probability']:.3f})")
            print(f"  Confidence: {fusion.get('confidence', 'Unknown')}")

    pipeline.set_result_callback(handle_multimodal_result)

    # Start pipeline
    pipeline.initialize()
    pipeline.start()
    # Your processing logic here...
```
### Example 3: Batch Processing
```python
import time
import cv2
from InferencePipeline import InferencePipeline, StageConfig

def process_image_batch(image_paths):
    """Process a batch of images through pipeline"""
    config = StageConfig(
        stage_id="batch_processing",
        port_ids=[28, 32],
        scpu_fw_path="fw_scpu.bin",
        ncpu_fw_path="fw_ncpu.bin",
        model_path="batch_model.nef",
        upload_fw=True
    )
    pipeline = InferencePipeline([config])
    pipeline.initialize()
    pipeline.start()

    results = []

    def collect_result(pipeline_data):
        result = pipeline_data.stage_results["batch_processing"]
        results.append({
            'pipeline_id': pipeline_data.pipeline_id,
            'result': result,
            'processing_time': pipeline_data.metadata.get('total_processing_time', 0.0)
        })

    pipeline.set_result_callback(collect_result)

    # Submit all images; count only the ones that actually loaded,
    # so the wait below cannot hang on unreadable files
    submitted = 0
    for img_path in image_paths:
        image = cv2.imread(img_path)
        if image is not None:
            pipeline.put_data(image)
            submitted += 1

    # Wait for all results
    while len(results) < submitted:
        time.sleep(0.1)

    pipeline.stop()
    return results
```
## Configuration
### StageConfig Parameters
```python
StageConfig(
    stage_id="unique_stage_name",   # Required: Unique identifier
    port_ids=[28, 32],              # Required: USB port IDs for dongles
    scpu_fw_path="fw_scpu.bin",     # Required: SCPU firmware path
    ncpu_fw_path="fw_ncpu.bin",     # Required: NCPU firmware path
    model_path="model.nef",         # Required: Model file path
    upload_fw=True,                 # Upload firmware on init
    max_queue_size=50,              # Queue size limit
    input_preprocessor=None,        # Optional: Inter-stage preprocessing
    output_postprocessor=None,      # Optional: Inter-stage postprocessing
    stage_preprocessor=None,        # Optional: MultiDongle preprocessing
    stage_postprocessor=None        # Optional: MultiDongle postprocessing
)
```
### Performance Tuning
```python
# For high-throughput scenarios
config = StageConfig(
    stage_id="high_performance",
    port_ids=[28, 30, 32, 34],  # Use more dongles
    max_queue_size=100,         # Larger queues
    # ... other params
)

# For low-latency scenarios
config = StageConfig(
    stage_id="low_latency",
    port_ids=[28, 32],
    max_queue_size=10,          # Smaller queues
    # ... other params
)
```
## Statistics and Monitoring
```python
# Enable statistics reporting
def print_stats(stats):
    print(f"\n📊 Pipeline Statistics:")
    print(f"  Input: {stats['pipeline_input_submitted']}")
    print(f"  Completed: {stats['pipeline_completed']}")
    print(f"  Success Rate: {stats['pipeline_completed']/max(stats['pipeline_input_submitted'], 1)*100:.1f}%")
    for stage_stat in stats['stage_statistics']:
        print(f"  Stage {stage_stat['stage_id']}: "
              f"Processed={stage_stat['processed_count']}, "
              f"AvgTime={stage_stat['avg_processing_time']:.3f}s")

pipeline.set_stats_callback(print_stats)
pipeline.start_stats_reporting(interval=5.0)  # Report every 5 seconds
```
## Running Examples
The project includes comprehensive examples in `test.py`:
```bash
# Single-stage pipeline
uv run python test.py --example single
# Two-stage cascade pipeline
uv run python test.py --example cascade
# Complex multi-stage pipeline
uv run python test.py --example complex
```
## API Reference
### InferencePipeline
Main pipeline orchestrator class.
**Methods:**
- `initialize()`: Initialize all pipeline stages
- `start()`: Start pipeline processing threads
- `stop()`: Gracefully stop pipeline
- `put_data(data, timeout=1.0)`: Submit data for processing
- `get_result(timeout=0.1)`: Get processed results
- `set_result_callback(callback)`: Set success callback
- `set_error_callback(callback)`: Set error callback
- `get_pipeline_statistics()`: Get performance metrics
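Callbacks are the recommended path, but results can also be polled; a minimal sketch, assuming `frame` is a BGR `np.ndarray`:
```python
# Polling alternative to callbacks: submit, then drain results.
pipeline.put_data(frame)                  # frame: np.ndarray as elsewhere
result = pipeline.get_result(timeout=1.0)
if result is not None:
    print(result.pipeline_id, result.stage_results)
```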
### StageConfig
Configuration for individual pipeline stages.
### PipelineData
Data structure flowing through pipeline stages.
**Attributes:**
- `data`: Main data payload
- `metadata`: Processing metadata
- `stage_results`: Results from each stage
- `pipeline_id`: Unique identifier
- `timestamp`: Creation timestamp
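A short sketch of reading these attributes inside a result callback:
```python
# Reading PipelineData fields inside a result callback.
def on_result(pd):
    latency = pd.metadata.get("total_processing_time", 0.0)
    print(f"{pd.pipeline_id} @ {pd.timestamp:.0f}: "
          f"{len(pd.stage_results)} stage results, {latency:.3f}s total")
```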
## Performance Considerations
1. **Queue Sizing**: Balance memory usage vs. throughput with `max_queue_size`
2. **Dongle Distribution**: Distribute dongles across stages for optimal performance
3. **Preprocessing**: Minimize expensive operations in preprocessors
4. **Memory Management**: Monitor queue sizes and processing times
5. **Threading**: Pipeline uses multiple threads - ensure thread-safe operations
## Troubleshooting
### Common Issues
**Pipeline hangs or stops processing:**
- Check dongle connections and firmware compatibility
- Monitor queue sizes for bottlenecks
- Verify model file paths and formats
**High memory usage:**
- Reduce `max_queue_size` parameters
- Ensure proper cleanup in custom processors
- Monitor statistics for processing times
**Poor performance:**
- Distribute dongles optimally across stages
- Profile preprocessing/postprocessing functions
- Consider batch processing for high throughput
### Debug Mode
Enable detailed logging for troubleshooting:
```python
import logging
logging.basicConfig(level=logging.DEBUG)
# Pipeline will output detailed processing information
```

3346
UI.py Normal file

File diff suppressed because it is too large

pyproject.toml

@ -3,8 +3,11 @@ name = "cluster4npu"
version = "0.1.0" version = "0.1.0"
description = "Add your description here" description = "Add your description here"
readme = "README.md" readme = "README.md"
requires-python = ">=3.12" requires-python = "<=3.12"
dependencies = [ dependencies = [
"nodegraphqt>=0.6.38",
"numpy>=2.2.6", "numpy>=2.2.6",
"odengraphqt>=0.7.4",
"opencv-python>=4.11.0.86", "opencv-python>=4.11.0.86",
"pyqt5>=5.15.11",
] ]

563
src/cluster4npu/InferencePipeline.py Normal file

@ -0,0 +1,563 @@
from typing import List, Dict, Any, Optional, Callable, Union
import threading
import queue
import time
import traceback
from dataclasses import dataclass
from concurrent.futures import ThreadPoolExecutor
import numpy as np
from Multidongle import MultiDongle, PreProcessor, PostProcessor, DataProcessor
@dataclass
class StageConfig:
    """Configuration for a single pipeline stage"""
    stage_id: str
    port_ids: List[int]
    scpu_fw_path: str
    ncpu_fw_path: str
    model_path: str
    upload_fw: bool = False
    max_queue_size: int = 50
    # Inter-stage processing
    input_preprocessor: Optional[PreProcessor] = None    # Before this stage
    output_postprocessor: Optional[PostProcessor] = None # After this stage
    # Stage-specific processing
    stage_preprocessor: Optional[PreProcessor] = None    # MultiDongle preprocessor
    stage_postprocessor: Optional[PostProcessor] = None  # MultiDongle postprocessor

@dataclass
class PipelineData:
    """Data structure flowing through pipeline"""
    data: Any                      # Main data (image, features, etc.)
    metadata: Dict[str, Any]       # Additional info
    stage_results: Dict[str, Any]  # Results from each stage
    pipeline_id: str               # Unique identifier for this data flow
    timestamp: float
class PipelineStage:
    """Single stage in the inference pipeline"""
    def __init__(self, config: StageConfig):
        self.config = config
        self.stage_id = config.stage_id
        # Initialize MultiDongle for this stage
        self.multidongle = MultiDongle(
            port_id=config.port_ids,
            scpu_fw_path=config.scpu_fw_path,
            ncpu_fw_path=config.ncpu_fw_path,
            model_path=config.model_path,
            upload_fw=config.upload_fw,
            preprocessor=config.stage_preprocessor,
            postprocessor=config.stage_postprocessor,
            max_queue_size=config.max_queue_size
        )
        # Inter-stage processors
        self.input_preprocessor = config.input_preprocessor
        self.output_postprocessor = config.output_postprocessor
        # Threading for this stage
        self.input_queue = queue.Queue(maxsize=config.max_queue_size)
        self.output_queue = queue.Queue(maxsize=config.max_queue_size)
        self.worker_thread = None
        self.running = False
        self._stop_event = threading.Event()
        # Statistics
        self.processed_count = 0
        self.error_count = 0
        self.processing_times = []

    def initialize(self):
        """Initialize the stage"""
        print(f"[Stage {self.stage_id}] Initializing...")
        try:
            self.multidongle.initialize()
            self.multidongle.start()
            print(f"[Stage {self.stage_id}] Initialized successfully")
        except Exception as e:
            print(f"[Stage {self.stage_id}] Initialization failed: {e}")
            raise

    def start(self):
        """Start the stage worker thread"""
        if self.worker_thread and self.worker_thread.is_alive():
            return
        self.running = True
        self._stop_event.clear()
        self.worker_thread = threading.Thread(target=self._worker_loop, daemon=True)
        self.worker_thread.start()
        print(f"[Stage {self.stage_id}] Worker thread started")

    def stop(self):
        """Stop the stage gracefully"""
        print(f"[Stage {self.stage_id}] Stopping...")
        self.running = False
        self._stop_event.set()
        # Put sentinel to unblock worker
        try:
            self.input_queue.put(None, timeout=1.0)
        except queue.Full:
            pass
        # Wait for worker thread
        if self.worker_thread and self.worker_thread.is_alive():
            self.worker_thread.join(timeout=3.0)
            if self.worker_thread.is_alive():
                print(f"[Stage {self.stage_id}] Warning: Worker thread didn't stop cleanly")
        # Stop MultiDongle
        self.multidongle.stop()
        print(f"[Stage {self.stage_id}] Stopped")

    def _worker_loop(self):
        """Main worker loop for processing data"""
        print(f"[Stage {self.stage_id}] Worker loop started")
        while self.running and not self._stop_event.is_set():
            try:
                # Get input data
                try:
                    pipeline_data = self.input_queue.get(timeout=0.1)
                    if pipeline_data is None:  # Sentinel value
                        continue
                except queue.Empty:
                    continue

                start_time = time.time()
                # Process data through this stage
                processed_data = self._process_data(pipeline_data)
                # Record processing time
                processing_time = time.time() - start_time
                self.processing_times.append(processing_time)
                if len(self.processing_times) > 1000:  # Keep only recent times
                    self.processing_times = self.processing_times[-500:]
                self.processed_count += 1

                # Put result to output queue
                try:
                    self.output_queue.put(processed_data, block=False)
                except queue.Full:
                    # Drop oldest and add new
                    try:
                        self.output_queue.get_nowait()
                        self.output_queue.put(processed_data, block=False)
                    except queue.Empty:
                        pass
            except Exception as e:
                self.error_count += 1
                print(f"[Stage {self.stage_id}] Processing error: {e}")
                traceback.print_exc()
        print(f"[Stage {self.stage_id}] Worker loop stopped")
    def _process_data(self, pipeline_data: PipelineData) -> PipelineData:
        """Process data through this stage"""
        try:
            current_data = pipeline_data.data
            # Debug: Print data info
            if isinstance(current_data, np.ndarray):
                print(f"[Stage {self.stage_id}] Input data: shape={current_data.shape}, dtype={current_data.dtype}")

            # Step 1: Input preprocessing (inter-stage)
            if self.input_preprocessor:
                if isinstance(current_data, np.ndarray):
                    print(f"[Stage {self.stage_id}] Applying input preprocessor...")
                    current_data = self.input_preprocessor.process(
                        current_data,
                        self.multidongle.model_input_shape,
                        'BGR565'  # Default format
                    )
                    print(f"[Stage {self.stage_id}] After input preprocess: shape={current_data.shape}, dtype={current_data.dtype}")

            # Step 2: Always preprocess image data for MultiDongle
            processed_data = None
            if isinstance(current_data, np.ndarray) and len(current_data.shape) == 3:
                # Always use MultiDongle's preprocess_frame to ensure correct format
                print(f"[Stage {self.stage_id}] Preprocessing frame for MultiDongle...")
                processed_data = self.multidongle.preprocess_frame(current_data, 'BGR565')
                print(f"[Stage {self.stage_id}] After MultiDongle preprocess: shape={processed_data.shape}, dtype={processed_data.dtype}")
                # Validate processed data
                if processed_data is None:
                    raise ValueError("MultiDongle preprocess_frame returned None")
                if not isinstance(processed_data, np.ndarray):
                    raise ValueError(f"MultiDongle preprocess_frame returned {type(processed_data)}, expected np.ndarray")
            elif isinstance(current_data, dict) and 'raw_output' in current_data:
                # This is result from previous stage, not suitable for direct inference
                print(f"[Stage {self.stage_id}] Warning: Received processed result instead of image data")
                processed_data = current_data
            else:
                print(f"[Stage {self.stage_id}] Warning: Unexpected data type: {type(current_data)}")
                processed_data = current_data

            # Step 3: MultiDongle inference
            inference_result = {}
            if isinstance(processed_data, np.ndarray):
                print(f"[Stage {self.stage_id}] Sending to MultiDongle: shape={processed_data.shape}, dtype={processed_data.dtype}")
                self.multidongle.put_input(processed_data, 'BGR565')
                # Get inference result with timeout; normalize the
                # (probability, label) tuple MultiDongle returns into a dict
                timeout_start = time.time()
                while time.time() - timeout_start < 5.0:  # 5 second timeout
                    probability, label = self.multidongle.get_latest_inference_result(timeout=0.1)
                    if probability is not None:
                        inference_result = {'probability': probability, 'result': label}
                        break
                    time.sleep(0.01)
            if not inference_result:
                print(f"[Stage {self.stage_id}] Warning: No inference result received")
                inference_result = {'probability': 0.0, 'result': 'No Result'}

            # Step 4: Output postprocessing (inter-stage)
            processed_result = inference_result
            if self.output_postprocessor:
                if 'raw_output' in inference_result:
                    processed_result = self.output_postprocessor.process(
                        inference_result['raw_output']
                    )
                    # Merge with original result
                    processed_result.update(inference_result)

            # Step 5: Update pipeline data
            pipeline_data.stage_results[self.stage_id] = processed_result
            pipeline_data.data = processed_result  # Pass result as data to next stage
            pipeline_data.metadata[f'{self.stage_id}_timestamp'] = time.time()
            return pipeline_data
        except Exception as e:
            print(f"[Stage {self.stage_id}] Data processing error: {e}")
            # Return data with error info
            pipeline_data.stage_results[self.stage_id] = {
                'error': str(e),
                'probability': 0.0,
                'result': 'Processing Error'
            }
            return pipeline_data

    def put_data(self, data: PipelineData, timeout: float = 1.0) -> bool:
        """Put data into this stage's input queue"""
        try:
            self.input_queue.put(data, timeout=timeout)
            return True
        except queue.Full:
            return False

    def get_result(self, timeout: float = 0.1) -> Optional[PipelineData]:
        """Get result from this stage's output queue"""
        try:
            return self.output_queue.get(timeout=timeout)
        except queue.Empty:
            return None

    def get_statistics(self) -> Dict[str, Any]:
        """Get stage statistics"""
        avg_processing_time = (
            sum(self.processing_times) / len(self.processing_times)
            if self.processing_times else 0.0
        )
        multidongle_stats = self.multidongle.get_statistics()
        return {
            'stage_id': self.stage_id,
            'processed_count': self.processed_count,
            'error_count': self.error_count,
            'avg_processing_time': avg_processing_time,
            'input_queue_size': self.input_queue.qsize(),
            'output_queue_size': self.output_queue.qsize(),
            'multidongle_stats': multidongle_stats
        }
class InferencePipeline:
    """Multi-stage inference pipeline"""
    def __init__(self, stage_configs: List[StageConfig],
                 final_postprocessor: Optional[PostProcessor] = None,
                 pipeline_name: str = "InferencePipeline"):
        """
        Initialize inference pipeline
        :param stage_configs: List of stage configurations
        :param final_postprocessor: Final postprocessor after all stages
        :param pipeline_name: Name for this pipeline instance
        """
        self.pipeline_name = pipeline_name
        self.stage_configs = stage_configs
        self.final_postprocessor = final_postprocessor
        # Create stages
        self.stages: List[PipelineStage] = []
        for config in stage_configs:
            stage = PipelineStage(config)
            self.stages.append(stage)
        # Pipeline coordinator
        self.coordinator_thread = None
        self.running = False
        self._stop_event = threading.Event()
        # Input/Output queues for the entire pipeline
        self.pipeline_input_queue = queue.Queue(maxsize=100)
        self.pipeline_output_queue = queue.Queue(maxsize=100)
        # Callbacks
        self.result_callback = None
        self.error_callback = None
        self.stats_callback = None
        # Statistics
        self.pipeline_counter = 0
        self.completed_counter = 0
        self.error_counter = 0

    def initialize(self):
        """Initialize all stages"""
        print(f"[{self.pipeline_name}] Initializing pipeline with {len(self.stages)} stages...")
        for i, stage in enumerate(self.stages):
            try:
                stage.initialize()
                print(f"[{self.pipeline_name}] Stage {i+1}/{len(self.stages)} initialized")
            except Exception as e:
                print(f"[{self.pipeline_name}] Failed to initialize stage {stage.stage_id}: {e}")
                # Cleanup already initialized stages
                for j in range(i):
                    self.stages[j].stop()
                raise
        print(f"[{self.pipeline_name}] All stages initialized successfully")

    def start(self):
        """Start the pipeline"""
        print(f"[{self.pipeline_name}] Starting pipeline...")
        # Start all stages
        for stage in self.stages:
            stage.start()
        # Start coordinator
        self.running = True
        self._stop_event.clear()
        self.coordinator_thread = threading.Thread(target=self._coordinator_loop, daemon=True)
        self.coordinator_thread.start()
        print(f"[{self.pipeline_name}] Pipeline started successfully")

    def stop(self):
        """Stop the pipeline gracefully"""
        print(f"[{self.pipeline_name}] Stopping pipeline...")
        self.running = False
        self._stop_event.set()
        # Stop coordinator
        if self.coordinator_thread and self.coordinator_thread.is_alive():
            try:
                self.pipeline_input_queue.put(None, timeout=1.0)
            except queue.Full:
                pass
            self.coordinator_thread.join(timeout=3.0)
        # Stop all stages
        for stage in self.stages:
            stage.stop()
        print(f"[{self.pipeline_name}] Pipeline stopped")

    def _coordinator_loop(self):
        """Coordinate data flow between stages"""
        print(f"[{self.pipeline_name}] Coordinator started")
        while self.running and not self._stop_event.is_set():
            try:
                # Get input data
                try:
                    input_data = self.pipeline_input_queue.get(timeout=0.1)
                    if input_data is None:  # Sentinel
                        continue
                except queue.Empty:
                    continue

                # Create pipeline data
                pipeline_data = PipelineData(
                    data=input_data,
                    metadata={'start_timestamp': time.time()},
                    stage_results={},
                    pipeline_id=f"pipeline_{self.pipeline_counter}",
                    timestamp=time.time()
                )
                self.pipeline_counter += 1

                # Process through each stage
                current_data = pipeline_data
                success = True
                for i, stage in enumerate(self.stages):
                    # Send data to stage
                    if not stage.put_data(current_data, timeout=1.0):
                        print(f"[{self.pipeline_name}] Stage {stage.stage_id} input queue full, dropping data")
                        success = False
                        break
                    # Get result from stage
                    result_data = None
                    timeout_start = time.time()
                    while time.time() - timeout_start < 10.0:  # 10 second timeout per stage
                        result_data = stage.get_result(timeout=0.1)
                        if result_data:
                            break
                        if self._stop_event.is_set():
                            break
                        time.sleep(0.01)
                    if not result_data:
                        print(f"[{self.pipeline_name}] Stage {stage.stage_id} timeout")
                        success = False
                        break
                    current_data = result_data

                # Final postprocessing
                if success and self.final_postprocessor:
                    try:
                        if isinstance(current_data.data, dict) and 'raw_output' in current_data.data:
                            final_result = self.final_postprocessor.process(current_data.data['raw_output'])
                            current_data.stage_results['final'] = final_result
                            current_data.data = final_result
                    except Exception as e:
                        print(f"[{self.pipeline_name}] Final postprocessing error: {e}")

                # Output result
                if success:
                    current_data.metadata['end_timestamp'] = time.time()
                    current_data.metadata['total_processing_time'] = (
                        current_data.metadata['end_timestamp'] -
                        current_data.metadata['start_timestamp']
                    )
                    try:
                        self.pipeline_output_queue.put(current_data, block=False)
                        self.completed_counter += 1
                        # Call result callback
                        if self.result_callback:
                            self.result_callback(current_data)
                    except queue.Full:
                        # Drop oldest and add new
                        try:
                            self.pipeline_output_queue.get_nowait()
                            self.pipeline_output_queue.put(current_data, block=False)
                        except queue.Empty:
                            pass
                else:
                    self.error_counter += 1
                    if self.error_callback:
                        self.error_callback(current_data)
            except Exception as e:
                print(f"[{self.pipeline_name}] Coordinator error: {e}")
                traceback.print_exc()
                self.error_counter += 1
        print(f"[{self.pipeline_name}] Coordinator stopped")

    def put_data(self, data: Any, timeout: float = 1.0) -> bool:
        """Put data into pipeline"""
        try:
            self.pipeline_input_queue.put(data, timeout=timeout)
            return True
        except queue.Full:
            return False

    def get_result(self, timeout: float = 0.1) -> Optional[PipelineData]:
        """Get result from pipeline"""
        try:
            return self.pipeline_output_queue.get(timeout=timeout)
        except queue.Empty:
            return None

    def set_result_callback(self, callback: Callable[[PipelineData], None]):
        """Set callback for successful results"""
        self.result_callback = callback

    def set_error_callback(self, callback: Callable[[PipelineData], None]):
        """Set callback for errors"""
        self.error_callback = callback

    def set_stats_callback(self, callback: Callable[[Dict[str, Any]], None]):
        """Set callback for statistics"""
        self.stats_callback = callback

    def get_pipeline_statistics(self) -> Dict[str, Any]:
        """Get comprehensive pipeline statistics"""
        stage_stats = []
        for stage in self.stages:
            stage_stats.append(stage.get_statistics())
        return {
            'pipeline_name': self.pipeline_name,
            'total_stages': len(self.stages),
            'pipeline_input_submitted': self.pipeline_counter,
            'pipeline_completed': self.completed_counter,
            'pipeline_errors': self.error_counter,
            'pipeline_input_queue_size': self.pipeline_input_queue.qsize(),
            'pipeline_output_queue_size': self.pipeline_output_queue.qsize(),
            'stage_statistics': stage_stats
        }

    def start_stats_reporting(self, interval: float = 5.0):
        """Start periodic statistics reporting"""
        def stats_loop():
            while self.running:
                if self.stats_callback:
                    stats = self.get_pipeline_statistics()
                    self.stats_callback(stats)
                time.sleep(interval)
        stats_thread = threading.Thread(target=stats_loop, daemon=True)
        stats_thread.start()
# Utility functions for common inter-stage processing
def create_feature_extractor_preprocessor() -> PreProcessor:
    """Create preprocessor for feature extraction stage"""
    def extract_features(frame, target_size):
        # Example: extract edges, keypoints, etc.
        import cv2
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        return cv2.resize(edges, target_size)
    return PreProcessor(resize_fn=extract_features)

def create_result_aggregator_postprocessor() -> PostProcessor:
    """Create postprocessor for aggregating multiple stage results"""
    def aggregate_results(raw_output, **kwargs):
        # Example: combine results from multiple stages
        if isinstance(raw_output, dict):
            # If raw_output is already processed results
            return raw_output
        # Standard processing
        if raw_output.size > 0:
            probability = float(raw_output[0])
            return {
                'aggregated_probability': probability,
                'confidence': 'High' if probability > 0.8 else 'Medium' if probability > 0.5 else 'Low',
                'result': 'Detected' if probability > 0.5 else 'Not Detected'
            }
        return {'aggregated_probability': 0.0, 'confidence': 'Low', 'result': 'Not Detected'}
    return PostProcessor(process_fn=aggregate_results)

505
src/cluster4npu/Multidongle.py Normal file

@ -0,0 +1,505 @@
from typing import Union, Tuple, Callable, Optional, Any, Dict
from abc import ABC, abstractmethod
import os
import sys
import argparse
import time
import threading
import queue

import numpy as np
import kp
import cv2
# DataProcessor and PostProcessor are minimal reconstructions so that the
# imports in InferencePipeline.py and test.py resolve.
class DataProcessor(ABC):
    """Minimal shared interface for frame pre/post processors."""
    @abstractmethod
    def process(self, *args, **kwargs) -> Any: ...

class PreProcessor(DataProcessor):
    def __init__(self, resize_fn: Optional[Callable] = None,
                 format_convert_fn: Optional[Callable] = None):
        self.resize_fn = resize_fn or self._default_resize
        self.format_convert_fn = format_convert_fn or self._default_format_convert

    def process(self, frame: np.ndarray, target_size: tuple, target_format: str) -> np.ndarray:
        """Main processing pipeline"""
        resized = self.resize_fn(frame, target_size)
        return self.format_convert_fn(resized, target_format)

    def _default_resize(self, frame: np.ndarray, target_size: tuple) -> np.ndarray:
        """Default resize implementation"""
        return cv2.resize(frame, target_size)

    def _default_format_convert(self, frame: np.ndarray, target_format: str) -> np.ndarray:
        """Default format conversion"""
        if target_format == 'BGR565':
            return cv2.cvtColor(frame, cv2.COLOR_BGR2BGR565)
        elif target_format == 'RGB8888':
            return cv2.cvtColor(frame, cv2.COLOR_BGR2RGBA)
        return frame

class PostProcessor(DataProcessor):
    """Minimal reconstruction: wraps a user-supplied function over raw model output."""
    def __init__(self, process_fn: Optional[Callable] = None):
        self.process_fn = process_fn

    def process(self, raw_output: np.ndarray, **kwargs) -> Any:
        return self.process_fn(raw_output, **kwargs) if self.process_fn else raw_output
class MultiDongle:
    # Currently, only BGR565, RGB8888, YUYV, and RAW8 formats are supported
    _FORMAT_MAPPING = {
        'BGR565': kp.ImageFormat.KP_IMAGE_FORMAT_RGB565,
        'RGB8888': kp.ImageFormat.KP_IMAGE_FORMAT_RGBA8888,
        'YUYV': kp.ImageFormat.KP_IMAGE_FORMAT_YUYV,
        'RAW8': kp.ImageFormat.KP_IMAGE_FORMAT_RAW8,
        # 'YCBCR422_CRY1CBY0': kp.ImageFormat.KP_IMAGE_FORMAT_YCBCR422_CRY1CBY0,
        # 'YCBCR422_CBY1CRY0': kp.ImageFormat.KP_IMAGE_FORMAT_CBY1CRY0,
        # 'YCBCR422_Y1CRY0CB': kp.ImageFormat.KP_IMAGE_FORMAT_Y1CRY0CB,
        # 'YCBCR422_Y1CBY0CR': kp.ImageFormat.KP_IMAGE_FORMAT_Y1CBY0CR,
        # 'YCBCR422_CRY0CBY1': kp.ImageFormat.KP_IMAGE_FORMAT_CRY0CBY1,
        # 'YCBCR422_CBY0CRY1': kp.ImageFormat.KP_IMAGE_FORMAT_CBY0CRY1,
        # 'YCBCR422_Y0CRY1CB': kp.ImageFormat.KP_IMAGE_FORMAT_Y0CRY1CB,
        # 'YCBCR422_Y0CBY1CR': kp.ImageFormat.KP_IMAGE_FORMAT_Y0CBY1CR,
    }

    def __init__(self, port_id: list, scpu_fw_path: str, ncpu_fw_path: str, model_path: str,
                 upload_fw: bool = False,
                 preprocessor: Optional[PreProcessor] = None,
                 postprocessor: Optional[PostProcessor] = None,
                 max_queue_size: int = 50):
        """
        Initialize the MultiDongle class.
        :param port_id: List of USB port IDs for the same layer's devices.
        :param scpu_fw_path: Path to the SCPU firmware file.
        :param ncpu_fw_path: Path to the NCPU firmware file.
        :param model_path: Path to the model file.
        :param upload_fw: Flag to indicate whether to upload firmware.
        :param preprocessor: Optional stage-level preprocessor (stored for callers).
        :param postprocessor: Optional stage-level postprocessor (stored for callers).
        :param max_queue_size: Maximum size of the input queue.
        """
        self.port_id = port_id
        self.upload_fw = upload_fw
        # Check if the firmware is needed
        if self.upload_fw:
            self.scpu_fw_path = scpu_fw_path
            self.ncpu_fw_path = ncpu_fw_path
        self.model_path = model_path
        self.device_group = None
        # Accepted so PipelineStage can pass stage-level processors;
        # they are stored for callers, not applied automatically here
        self.preprocessor = preprocessor
        self.postprocessor = postprocessor
        # generic_inference_input_descriptor will be prepared in initialize
        self.model_nef_descriptor = None
        self.generic_inference_input_descriptor = None
        # Queues for data
        # Input queue for images to be sent (bounded to honor max_queue_size)
        self._input_queue = queue.Queue(maxsize=max_queue_size)
        # Output queue for received results (unbounded so the receive
        # thread's blocking put can never deadlock on shutdown)
        self._output_queue = queue.Queue()
        # Threading attributes
        self._send_thread = None
        self._receive_thread = None
        self._stop_event = threading.Event()  # Event to signal threads to stop
        self._inference_counter = 0
    def initialize(self):
        """
        Connect devices, upload firmware (if upload_fw is True), and upload model.
        Must be called before start().
        """
        # Connect device and assign to self.device_group
        try:
            print('[Connect Device]')
            self.device_group = kp.core.connect_devices(usb_port_ids=self.port_id)
            print(' - Success')
        except kp.ApiKPException as exception:
            print('Error: connect device fail, port ID = \'{}\', error msg: [{}]'.format(self.port_id, str(exception)))
            sys.exit(1)

        # setting timeout of the usb communication with the device
        # print('[Set Device Timeout]')
        # kp.core.set_timeout(device_group=self.device_group, milliseconds=5000)
        # print(' - Success')

        if self.upload_fw:
            try:
                print('[Upload Firmware]')
                kp.core.load_firmware_from_file(device_group=self.device_group,
                                                scpu_fw_path=self.scpu_fw_path,
                                                ncpu_fw_path=self.ncpu_fw_path)
                print(' - Success')
            except kp.ApiKPException as exception:
                print('Error: upload firmware failed, error = \'{}\''.format(str(exception)))
                sys.exit(1)

        # upload model to device
        try:
            print('[Upload Model]')
            self.model_nef_descriptor = kp.core.load_model_from_file(device_group=self.device_group,
                                                                     file_path=self.model_path)
            print(' - Success')
        except kp.ApiKPException as exception:
            print('Error: upload model failed, error = \'{}\''.format(str(exception)))
            sys.exit(1)

        # Extract model input dimensions automatically from model metadata
        if self.model_nef_descriptor and self.model_nef_descriptor.models:
            model = self.model_nef_descriptor.models[0]
            if hasattr(model, 'input_nodes') and model.input_nodes:
                input_node = model.input_nodes[0]
                # shape_npu is [batch, channels, height, width] -> take (width, height)
                shape = input_node.tensor_shape_info.data.shape_npu
                self.model_input_shape = (shape[3], shape[2])  # (width, height)
                self.model_input_channels = shape[1]  # 3 for RGB
                print(f"Model input shape detected: {self.model_input_shape}, channels: {self.model_input_channels}")
            else:
                self.model_input_shape = (128, 128)  # fallback
                self.model_input_channels = 3
                print("Using default input shape (128, 128)")
        else:
            self.model_input_shape = (128, 128)
            self.model_input_channels = 3
            print("Model info not available, using default shape")

        # Prepare generic inference input descriptor after model is loaded
        if self.model_nef_descriptor:
            self.generic_inference_input_descriptor = kp.GenericImageInferenceDescriptor(
                model_id=self.model_nef_descriptor.models[0].id,
            )
        else:
            print("Warning: Could not get generic inference input descriptor from model.")
            self.generic_inference_input_descriptor = None
    def preprocess_frame(self, frame: np.ndarray, target_format: str = 'BGR565') -> np.ndarray:
        """
        Preprocess frame for inference
        """
        resized_frame = cv2.resize(frame, self.model_input_shape)
        if target_format == 'BGR565':
            return cv2.cvtColor(resized_frame, cv2.COLOR_BGR2BGR565)
        elif target_format == 'RGB8888':
            return cv2.cvtColor(resized_frame, cv2.COLOR_BGR2RGBA)
        elif target_format == 'YUYV':
            return cv2.cvtColor(resized_frame, cv2.COLOR_BGR2YUV_YUYV)
        else:
            return resized_frame  # RAW8 or other formats

    def get_latest_inference_result(self, timeout: float = 0.01) -> Tuple[Optional[float], Optional[str]]:
        """
        Get the latest inference result
        Returns: (probability, result_string) or (None, None) if no result
        """
        output_descriptor = self.get_output(timeout=timeout)
        if not output_descriptor:
            return None, None
        # Process the output descriptor
        if hasattr(output_descriptor, 'header') and \
           hasattr(output_descriptor.header, 'num_output_node') and \
           hasattr(output_descriptor.header, 'inference_number'):
            inf_node_output_list = []
            retrieval_successful = True
            for node_idx in range(output_descriptor.header.num_output_node):
                try:
                    inference_float_node_output = kp.inference.generic_inference_retrieve_float_node(
                        node_idx=node_idx,
                        generic_raw_result=output_descriptor,
                        channels_ordering=kp.ChannelOrdering.KP_CHANNEL_ORDERING_CHW
                    )
                    inf_node_output_list.append(inference_float_node_output.ndarray.copy())
                except kp.ApiKPException:
                    retrieval_successful = False
                    break
                except Exception:
                    retrieval_successful = False
                    break
            if retrieval_successful and inf_node_output_list:
                # Process output nodes
                if output_descriptor.header.num_output_node == 1:
                    raw_output_array = inf_node_output_list[0].flatten()
                else:
                    concatenated_outputs = [arr.flatten() for arr in inf_node_output_list]
                    raw_output_array = np.concatenate(concatenated_outputs) if concatenated_outputs else np.array([])
                if raw_output_array.size > 0:
                    probability = postprocess(raw_output_array)
                    result_str = "Fire" if probability > 0.5 else "No Fire"
                    return probability, result_str
        return None, None
    # Modified _send_thread_func to get data from input queue
    def _send_thread_func(self):
        """Internal function run by the send thread, gets images from input queue."""
        print("Send thread started.")
        while not self._stop_event.is_set():
            if self.generic_inference_input_descriptor is None:
                # Wait for descriptor to be ready or stop
                self._stop_event.wait(0.1)  # Avoid busy waiting
                continue
            try:
                # Get image and format from the input queue
                # Blocks until an item is available or stop event is set/timeout occurs
                try:
                    # Use get with timeout or check stop event in a loop
                    # This pattern allows thread to check stop event while waiting on queue
                    item = self._input_queue.get(block=True, timeout=0.1)
                    # Check if this is our sentinel value
                    if item is None:
                        continue
                    # Now safely unpack the tuple
                    image_data, image_format_enum = item
                except queue.Empty:
                    # If queue is empty after timeout, check stop event and continue loop
                    continue

                # Configure and send the image
                self._inference_counter += 1  # Increment counter for each image
                self.generic_inference_input_descriptor.inference_number = self._inference_counter
                self.generic_inference_input_descriptor.input_node_image_list = [kp.GenericInputNodeImage(
                    image=image_data,
                    image_format=image_format_enum,  # Use the format from the queue
                    resize_mode=kp.ResizeMode.KP_RESIZE_ENABLE,
                    padding_mode=kp.PaddingMode.KP_PADDING_CORNER,
                    normalize_mode=kp.NormalizeMode.KP_NORMALIZE_KNERON
                )]
                kp.inference.generic_image_inference_send(device_group=self.device_group,
                                                          generic_inference_input_descriptor=self.generic_inference_input_descriptor)
                # print("Image sent.")  # Optional: add log
                # No need for sleep here usually, as queue.get is blocking
            except kp.ApiKPException as exception:
                print(f' - Error in send thread: inference send failed, error = {exception}')
                self._stop_event.set()  # Signal other thread to stop
            except Exception as e:
                print(f' - Unexpected error in send thread: {e}')
                self._stop_event.set()
        print("Send thread stopped.")

    # _receive_thread_func remains the same
    def _receive_thread_func(self):
        """Internal function run by the receive thread, puts results into output queue."""
        print("Receive thread started.")
        while not self._stop_event.is_set():
            try:
                generic_inference_output_descriptor = kp.inference.generic_image_inference_receive(device_group=self.device_group)
                self._output_queue.put(generic_inference_output_descriptor)
            except kp.ApiKPException as exception:
                if not self._stop_event.is_set():  # Avoid printing error if we are already stopping
                    print(f' - Error in receive thread: inference receive failed, error = {exception}')
                self._stop_event.set()
            except Exception as e:
                print(f' - Unexpected error in receive thread: {e}')
                self._stop_event.set()
        print("Receive thread stopped.")

    def start(self):
        """
        Start the send and receive threads.
        Must be called after initialize().
        """
        if self.device_group is None:
            raise RuntimeError("MultiDongle not initialized. Call initialize() first.")
        if self._send_thread is None or not self._send_thread.is_alive():
            self._stop_event.clear()  # Clear stop event for a new start
            self._send_thread = threading.Thread(target=self._send_thread_func, daemon=True)
            self._send_thread.start()
            print("Send thread started.")
        if self._receive_thread is None or not self._receive_thread.is_alive():
            self._receive_thread = threading.Thread(target=self._receive_thread_func, daemon=True)
            self._receive_thread.start()
            print("Receive thread started.")

    def stop(self):
        """Improved stop method with better cleanup"""
        if self._stop_event.is_set():
            return  # Already stopping
        print("Stopping threads...")
        self._stop_event.set()
        # Clear queues to unblock threads
        while not self._input_queue.empty():
            try:
                self._input_queue.get_nowait()
            except queue.Empty:
                break
        # Signal send thread to wake up
        self._input_queue.put(None)
        # Join threads with timeout
        for thread, name in [(self._send_thread, "Send"), (self._receive_thread, "Receive")]:
            if thread and thread.is_alive():
                thread.join(timeout=2.0)
                if thread.is_alive():
                    print(f"Warning: {name} thread didn't stop cleanly")
    def put_input(self, image: Union[str, np.ndarray], format: str, target_size: Optional[Tuple[int, int]] = None):
        """
        Put an image into the input queue with flexible preprocessing
        """
        if isinstance(image, str):
            image_data = cv2.imread(image)
            if image_data is None:
                raise FileNotFoundError(f"Image file not found at {image}")
            if target_size:
                image_data = cv2.resize(image_data, target_size)
        elif isinstance(image, np.ndarray):
            # Don't modify original array, make copy if needed
            image_data = image.copy() if target_size is None else cv2.resize(image, target_size)
        else:
            raise ValueError("Image must be a file path (str) or a numpy array (ndarray).")
        if format in self._FORMAT_MAPPING:
            image_format_enum = self._FORMAT_MAPPING[format]
        else:
            raise ValueError(f"Unsupported format: {format}")
        self._input_queue.put((image_data, image_format_enum))

    def get_output(self, timeout: float = None):
        """
        Get the next received data from the output queue.
        This method is non-blocking by default unless a timeout is specified.
        :param timeout: Time in seconds to wait for data. If None, it's non-blocking.
        :return: Received data (e.g., kp.GenericInferenceOutputDescriptor) or None if no data available within timeout.
        """
        try:
            return self._output_queue.get(block=timeout is not None, timeout=timeout)
        except queue.Empty:
            return None

    def get_statistics(self) -> Dict[str, Any]:
        """Basic runtime statistics, as expected by PipelineStage.get_statistics()."""
        return {
            'inference_count': self._inference_counter,
            'input_queue_size': self._input_queue.qsize(),
            'output_queue_size': self._output_queue.qsize(),
        }

    def __del__(self):
        """Ensure resources are released when the object is garbage collected."""
        self.stop()
        if self.device_group:
            try:
                kp.core.disconnect_devices(device_group=self.device_group)
                print("Device group disconnected in destructor.")
            except Exception as e:
                print(f"Error disconnecting device group in destructor: {e}")
def postprocess(raw_model_output) -> float:
    """
    Post-processes the raw model output.
    Assumes the model output is a list/array where the first element is the desired probability.
    """
    # Explicit None/length check: truth-testing a multi-element ndarray raises ValueError
    if raw_model_output is not None and len(raw_model_output) > 0:
        probability = raw_model_output[0]
        return float(probability)
    return 0.0  # Default or error value
class WebcamInferenceRunner:
    def __init__(self, multidongle: MultiDongle, image_format: str = 'BGR565'):
        self.multidongle = multidongle
        self.image_format = image_format
        self.latest_probability = 0.0
        self.result_str = "No Fire"
        # Statistics tracking
        self.processed_inference_count = 0
        self.inference_fps_start_time = None
        self.display_fps_start_time = None
        self.display_frame_counter = 0

    def run(self, camera_id: int = 0):
        cap = cv2.VideoCapture(camera_id)
        if not cap.isOpened():
            raise RuntimeError("Cannot open webcam")
        try:
            while True:
                ret, frame = cap.read()
                if not ret:
                    break
                # Track display FPS
                if self.display_fps_start_time is None:
                    self.display_fps_start_time = time.time()
                self.display_frame_counter += 1
                # Preprocess and send frame
                processed_frame = self.multidongle.preprocess_frame(frame, self.image_format)
                self.multidongle.put_input(processed_frame, self.image_format)
                # Get inference result
                prob, result = self.multidongle.get_latest_inference_result()
                if prob is not None:
                    # Track inference FPS
                    if self.inference_fps_start_time is None:
                        self.inference_fps_start_time = time.time()
                    self.processed_inference_count += 1
                    self.latest_probability = prob
                    self.result_str = result
                # Display frame with results
                self._display_results(frame)
                if cv2.waitKey(1) & 0xFF == ord('q'):
                    break
        finally:
            # self._print_statistics()
            cap.release()
            cv2.destroyAllWindows()

    def _display_results(self, frame):
        display_frame = frame.copy()
        # Exact match: '"Fire" in ...' would also match "No Fire"
        text_color = (0, 255, 0) if self.result_str == "Fire" else (0, 0, 255)
        # Display inference result
        cv2.putText(display_frame, f"{self.result_str} (Prob: {self.latest_probability:.2f})",
                    (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.7, text_color, 2)
        # Calculate and display inference FPS
        if self.inference_fps_start_time and self.processed_inference_count > 0:
            elapsed_time = time.time() - self.inference_fps_start_time
            if elapsed_time > 0:
                inference_fps = self.processed_inference_count / elapsed_time
                cv2.putText(display_frame, f"Inference FPS: {inference_fps:.2f}",
                            (10, 60), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 255), 2)
        cv2.imshow('Fire Detection', display_frame)

    # def _print_statistics(self):
    #     """Print final statistics"""
    #     print(f"\n--- Summary ---")
    #     print(f"Total inferences processed: {self.processed_inference_count}")
    #     if self.inference_fps_start_time and self.processed_inference_count > 0:
    #         elapsed = time.time() - self.inference_fps_start_time
    #         if elapsed > 0:
    #             avg_inference_fps = self.processed_inference_count / elapsed
    #             print(f"Average Inference FPS: {avg_inference_fps:.2f}")
    #     if self.display_fps_start_time and self.display_frame_counter > 0:
    #         elapsed = time.time() - self.display_fps_start_time
    #         if elapsed > 0:
    #             avg_display_fps = self.display_frame_counter / elapsed
    #             print(f"Average Display FPS: {avg_display_fps:.2f}")
if __name__ == "__main__":
    PORT_IDS = [28, 32]
    SCPU_FW = r'fw_scpu.bin'
    NCPU_FW = r'fw_ncpu.bin'
    MODEL_PATH = r'fire_detection_520.nef'
    try:
        # Initialize inference engine
        print("Initializing MultiDongle...")
        multidongle = MultiDongle(PORT_IDS, SCPU_FW, NCPU_FW, MODEL_PATH, upload_fw=True)
        multidongle.initialize()
        multidongle.start()
        # Run using the new runner class
        print("Starting webcam inference...")
        runner = WebcamInferenceRunner(multidongle, 'BGR565')
        runner.run()
    except Exception as e:
        print(f"Error: {e}")
        import traceback
        traceback.print_exc()
    finally:
        if 'multidongle' in locals():
            multidongle.stop()

407
src/cluster4npu/test.py Normal file

@ -0,0 +1,407 @@
"""
InferencePipeline Usage Examples
================================
This file demonstrates how to use the InferencePipeline for various scenarios:
1. Single stage (equivalent to MultiDongle)
2. Two-stage cascade (detection -> classification)
3. Multi-stage complex pipeline
"""
import cv2
import numpy as np
import time
from InferencePipeline import (
    InferencePipeline, StageConfig,
    create_feature_extractor_preprocessor,
    create_result_aggregator_postprocessor
)
from Multidongle import PreProcessor, PostProcessor, WebcamSource, RTSPSource
# =============================================================================
# Example 1: Single Stage Pipeline (Basic Usage)
# =============================================================================
def example_single_stage():
    """Single stage pipeline - equivalent to using MultiDongle directly"""
    print("=== Single Stage Pipeline Example ===")
    # Create stage configuration
    stage_config = StageConfig(
        stage_id="fire_detection",
        port_ids=[28, 32],
        scpu_fw_path="fw_scpu.bin",
        ncpu_fw_path="fw_ncpu.bin",
        model_path="fire_detection_520.nef",
        upload_fw=True,
        max_queue_size=30
        # Note: No inter-stage processors needed for single stage
        # MultiDongle will handle internal preprocessing/postprocessing
    )
    # Create pipeline with single stage
    pipeline = InferencePipeline(
        stage_configs=[stage_config],
        pipeline_name="SingleStageFireDetection"
    )
    # Initialize and start
    pipeline.initialize()
    pipeline.start()
    # Process some data
    data_source = WebcamSource(camera_id=0)
    data_source.start()

    def handle_result(pipeline_data):
        result = pipeline_data.stage_results.get("fire_detection", {})
        print(f"Fire Detection: {result.get('result', 'Unknown')} "
              f"(Prob: {result.get('probability', 0.0):.3f})")

    def handle_error(pipeline_data):
        print(f"❌ Error: {pipeline_data.stage_results}")

    pipeline.set_result_callback(handle_result)
    pipeline.set_error_callback(handle_error)

    try:
        print("🚀 Starting single stage pipeline...")
        for i in range(100):  # Process 100 frames
            frame = data_source.get_frame()
            if frame is not None:
                success = pipeline.put_data(frame, timeout=1.0)
                if not success:
                    print("Pipeline input queue full, dropping frame")
            time.sleep(0.1)
    except KeyboardInterrupt:
        print("\nStopping...")
    finally:
        data_source.stop()
        pipeline.stop()
        print("Single stage pipeline test completed")
# =============================================================================
# Example 2: Two-Stage Cascade Pipeline
# =============================================================================
def example_two_stage_cascade():
"""Two-stage cascade: Object Detection -> Fire Classification"""
print("=== Two-Stage Cascade Pipeline Example ===")
# Custom preprocessor for second stage
def roi_extraction_preprocess(frame, target_size):
"""Extract ROI from detection results and prepare for classification"""
# This would normally extract bounding box from first stage results
# For demo, we'll just do center crop
h, w = frame.shape[:2]  # shape[:2] is (h, w) for both grayscale and color frames
center_x, center_y = w // 2, h // 2
crop_size = min(w, h) // 2
x1 = max(0, center_x - crop_size // 2)
y1 = max(0, center_y - crop_size // 2)
x2 = min(w, center_x + crop_size // 2)
y2 = min(h, center_y + crop_size // 2)
# The same slice works for both grayscale and color frames
cropped = frame[y1:y2, x1:x2]
return cv2.resize(cropped, target_size)
# Custom postprocessor for combining results
def combine_detection_classification(raw_output, **kwargs):
"""Combine detection and classification results"""
if raw_output.size > 0:
classification_prob = float(raw_output[0])
# Get detection result from metadata (would be passed from first stage)
detection_confidence = kwargs.get('detection_conf', 0.5)
# Combined confidence
combined_prob = (classification_prob * 0.7) + (detection_confidence * 0.3)
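# Worked example (illustrative numbers): classification 0.9, detection 0.5
# -> 0.9 * 0.7 + 0.5 * 0.3 = 0.78, i.e. 'Fire Detected' at 'Medium' confidence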
return {
'combined_probability': combined_prob,
'classification_prob': classification_prob,
'detection_conf': detection_confidence,
'result': 'Fire Detected' if combined_prob > 0.6 else 'No Fire',
'confidence': 'High' if combined_prob > 0.8 else 'Medium' if combined_prob > 0.5 else 'Low'
}
return {'combined_probability': 0.0, 'result': 'No Fire', 'confidence': 'Low'}
# Set up callbacks
def handle_cascade_result(pipeline_data):
"""Handle results from cascade pipeline"""
detection_result = pipeline_data.stage_results.get("object_detection", {})
classification_result = pipeline_data.stage_results.get("fire_classification", {})
print(f"Detection: {detection_result.get('result', 'Unknown')} "
f"(Prob: {detection_result.get('probability', 0.0):.3f})")
print(f"Classification: {classification_result.get('result', 'Unknown')} "
f"(Combined: {classification_result.get('combined_probability', 0.0):.3f})")
print(f"Processing Time: {pipeline_data.metadata.get('total_processing_time', 0.0):.3f}s")
print("-" * 50)
def handle_pipeline_stats(stats):
"""Handle pipeline statistics"""
print(f"\n📊 Pipeline Stats:")
print(f" Submitted: {stats['pipeline_input_submitted']}")
print(f" Completed: {stats['pipeline_completed']}")
print(f" Errors: {stats['pipeline_errors']}")
for stage_stat in stats['stage_statistics']:
print(f" Stage {stage_stat['stage_id']}: "
f"Processed={stage_stat['processed_count']}, "
f"AvgTime={stage_stat['avg_processing_time']:.3f}s")
# Stage 1: Object Detection
stage1_config = StageConfig(
stage_id="object_detection",
port_ids=[28, 30], # First set of dongles
scpu_fw_path="fw_scpu.bin",
ncpu_fw_path="fw_ncpu.bin",
model_path="object_detection_520.nef",
upload_fw=True,
max_queue_size=30
)
# Stage 2: Fire Classification
stage2_config = StageConfig(
stage_id="fire_classification",
port_ids=[32, 34], # Second set of dongles
scpu_fw_path="fw_scpu.bin",
ncpu_fw_path="fw_ncpu.bin",
model_path="fire_classification_520.nef",
upload_fw=True,
max_queue_size=30,
# Inter-stage processing
input_preprocessor=PreProcessor(resize_fn=roi_extraction_preprocess),
output_postprocessor=PostProcessor(process_fn=combine_detection_classification)
)
# Create two-stage pipeline
pipeline = InferencePipeline(
stage_configs=[stage1_config, stage2_config],
pipeline_name="TwoStageCascade"
)
pipeline.set_result_callback(handle_cascade_result)
pipeline.set_stats_callback(handle_pipeline_stats)
# Initialize and start
pipeline.initialize()
pipeline.start()
pipeline.start_stats_reporting(interval=10.0) # Stats every 10 seconds
# Process data
# data_source = RTSPSource("rtsp://your-camera-url")
data_source = WebcamSource(0)
data_source.start()
try:
frame_count = 0
while frame_count < 200:
frame = data_source.get_frame()
if frame is not None:
if pipeline.put_data(frame, timeout=1.0):
frame_count += 1
else:
print("Pipeline input queue full, dropping frame")
time.sleep(0.05)
except KeyboardInterrupt:
print("\nStopping cascade pipeline...")
finally:
data_source.stop()
pipeline.stop()
# =============================================================================
# Example 3: Complex Multi-Stage Pipeline
# =============================================================================
def example_complex_pipeline():
"""Complex multi-stage pipeline with feature extraction and fusion"""
print("=== Complex Multi-Stage Pipeline Example ===")
# Custom processors for different stages
def edge_detection_preprocess(frame, target_size):
"""Extract edge features"""
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)
edges_3ch = cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)
return cv2.resize(edges_3ch, target_size)
def thermal_simulation_preprocess(frame, target_size):
"""Simulate thermal-like processing"""
# Convert to HSV and extract V channel as pseudo-thermal
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
thermal_like = hsv[:, :, 2] # Value channel
thermal_3ch = cv2.cvtColor(thermal_like, cv2.COLOR_GRAY2BGR)
return cv2.resize(thermal_3ch, target_size)
def fusion_postprocess(raw_output, **kwargs):
"""Fuse results from multiple modalities"""
if raw_output.size > 0:
current_prob = float(raw_output[0])
# This would get previous stage results from pipeline metadata
# For demo, we'll simulate
rgb_confidence = kwargs.get('rgb_conf', 0.5)
edge_confidence = kwargs.get('edge_conf', 0.5)
# Weighted fusion
fused_prob = (current_prob * 0.5) + (rgb_confidence * 0.3) + (edge_confidence * 0.2)
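# Worked example (illustrative numbers): thermal 0.8, rgb 0.6, edge 0.4
# -> 0.8 * 0.5 + 0.6 * 0.3 + 0.4 * 0.2 = 0.66, i.e. 'Fire Detected' at 'Medium' confidence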
return {
'fused_probability': fused_prob,
'individual_probs': {
'thermal': current_prob,
'rgb': rgb_confidence,
'edge': edge_confidence
},
'result': 'Fire Detected' if fused_prob > 0.6 else 'No Fire',
'confidence': 'Very High' if fused_prob > 0.9 else 'High' if fused_prob > 0.7 else 'Medium' if fused_prob > 0.5 else 'Low'
}
return {'fused_probability': 0.0, 'result': 'No Fire', 'confidence': 'Low'}
# Stage 1: RGB Analysis
rgb_stage = StageConfig(
stage_id="rgb_analysis",
port_ids=[28, 30],
scpu_fw_path="fw_scpu.bin",
ncpu_fw_path="fw_ncpu.bin",
model_path="rgb_fire_detection_520.nef",
upload_fw=True
)
# Stage 2: Edge Feature Analysis
edge_stage = StageConfig(
stage_id="edge_analysis",
port_ids=[32, 34],
scpu_fw_path="fw_scpu.bin",
ncpu_fw_path="fw_ncpu.bin",
model_path="edge_fire_detection_520.nef",
upload_fw=True,
input_preprocessor=PreProcessor(resize_fn=edge_detection_preprocess)
)
# Stage 3: Thermal-like Analysis
thermal_stage = StageConfig(
stage_id="thermal_analysis",
port_ids=[36, 38],
scpu_fw_path="fw_scpu.bin",
ncpu_fw_path="fw_ncpu.bin",
model_path="thermal_fire_detection_520.nef",
upload_fw=True,
input_preprocessor=PreProcessor(resize_fn=thermal_simulation_preprocess)
)
# Stage 4: Fusion
fusion_stage = StageConfig(
stage_id="result_fusion",
port_ids=[40, 42],
scpu_fw_path="fw_scpu.bin",
ncpu_fw_path="fw_ncpu.bin",
model_path="fusion_520.nef",
upload_fw=True,
output_postprocessor=PostProcessor(process_fn=fusion_postprocess)
)
# Create complex pipeline
pipeline = InferencePipeline(
stage_configs=[rgb_stage, edge_stage, thermal_stage, fusion_stage],
pipeline_name="ComplexMultiModalPipeline"
)
# Advanced result handling
def handle_complex_result(pipeline_data):
"""Handle complex pipeline results"""
print(f"\n🔥 Multi-Modal Fire Detection Results:")
print(f" Pipeline ID: {pipeline_data.pipeline_id}")
for stage_id, result in pipeline_data.stage_results.items():
if 'probability' in result:
print(f" {stage_id}: {result.get('result', 'Unknown')} "
f"(Prob: {result.get('probability', 0.0):.3f})")
# Final fused result
if 'result_fusion' in pipeline_data.stage_results:
fusion_result = pipeline_data.stage_results['result_fusion']
print(f" 🎯 FINAL: {fusion_result.get('result', 'Unknown')} "
f"(Fused: {fusion_result.get('fused_probability', 0.0):.3f})")
print(f" Confidence: {fusion_result.get('confidence', 'Unknown')}")
print(f" Total Processing Time: {pipeline_data.metadata.get('total_processing_time', 0.0):.3f}s")
print("=" * 60)
def handle_error(pipeline_data):
"""Handle pipeline errors"""
print(f"❌ Pipeline Error for {pipeline_data.pipeline_id}")
for stage_id, result in pipeline_data.stage_results.items():
if 'error' in result:
print(f" Stage {stage_id} error: {result['error']}")
pipeline.set_result_callback(handle_complex_result)
pipeline.set_error_callback(handle_error)
# Initialize and start
try:
pipeline.initialize()
pipeline.start()
# Simulate data input
data_source = WebcamSource(camera_id=0)
data_source.start()
print("🚀 Complex pipeline started. Processing frames...")
frame_count = 0
start_time = time.time()
while frame_count < 50: # Process 50 frames for demo
frame = data_source.get_frame()
if frame is not None:
if pipeline.put_data(frame):
frame_count += 1
if frame_count % 10 == 0:
elapsed = time.time() - start_time
fps = frame_count / elapsed
print(f"📈 Processed {frame_count} frames, Pipeline FPS: {fps:.2f}")
time.sleep(0.1)
except Exception as e:
print(f"Error in complex pipeline: {e}")
finally:
data_source.stop()
pipeline.stop()
# Final statistics
final_stats = pipeline.get_pipeline_statistics()
print(f"\n📊 Final Pipeline Statistics:")
print(f" Total Input: {final_stats['pipeline_input_submitted']}")
print(f" Completed: {final_stats['pipeline_completed']}")
print(f" Success Rate: {final_stats['pipeline_completed']/max(final_stats['pipeline_input_submitted'], 1)*100:.1f}%")
# =============================================================================
# Main Function - Run Examples
# =============================================================================
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser(description="InferencePipeline Examples")
parser.add_argument("--example", choices=["single", "cascade", "complex"],
default="single", help="Which example to run")
args = parser.parse_args()
if args.example == "single":
example_single_stage()
elif args.example == "cascade":
example_two_stage_cascade()
elif args.example == "complex":
example_complex_pipeline()
else:
print("Available examples:")
print(" python pipeline_example.py --example single")
print(" python pipeline_example.py --example cascade")
print(" python pipeline_example.py --example complex")

40
test_ui.py Normal file
View File

@ -0,0 +1,40 @@
#!/usr/bin/env python3
"""
Simple test script to verify UI functionality
"""
import sys
import os
# Add the current directory to the path
sys.path.insert(0, os.path.dirname(__file__))
from PyQt5.QtWidgets import QApplication
from UI import DashboardLogin
def main():
app = QApplication(sys.argv)
# Create and show the dashboard
dashboard = DashboardLogin()
dashboard.show()
print("✅ UI Application Started Successfully!")
print("📋 Available buttons on main screen:")
print(" 1. 🚀 Create New Pipeline")
print(" 2. 📁 Open Existing Pipeline")
print(" 3. ⚙️ Configure Stages & Deploy")
print()
print("🎯 Click the third button 'Configure Stages & Deploy' to test the new workflow!")
print(" This will open the Stage Configuration dialog with:")
print(" • Dongle allocation controls")
print(" • Performance estimation")
print(" • Save & Deploy functionality")
print()
print("Press Ctrl+C or close the window to exit")
# Run the application
sys.exit(app.exec_())
if __name__ == "__main__":
main()

415
ui_config.py Normal file
View File

@ -0,0 +1,415 @@
#!/usr/bin/env python3
"""
UI Configuration and Integration Settings
=========================================
This module provides configuration settings and helper functions for integrating
the UI application with cluster4npu tools.
"""
import os
import json
from typing import Dict, List, Any, Optional
from dataclasses import dataclass, asdict
@dataclass
class UISettings:
"""UI application settings"""
theme: str = "harmonious_dark"
auto_save_interval: int = 300 # seconds
max_recent_files: int = 10
default_dongle_count: int = 16
default_fw_paths: Optional[Dict[str, str]] = None
def __post_init__(self):
if self.default_fw_paths is None:
self.default_fw_paths = {
"scpu": "fw_scpu.bin",
"ncpu": "fw_ncpu.bin"
}
@dataclass
class ClusterConfig:
"""Cluster hardware configuration"""
available_dongles: int = 16
dongle_series: str = "KL520"
port_range_start: int = 28
port_range_end: int = 60
power_limit_watts: int = 200
cooling_type: str = "standard"
class UIIntegration:
"""Integration layer between UI and cluster4npu tools"""
def __init__(self, config_path: Optional[str] = None):
self.config_path = config_path or os.path.expanduser("~/.cluster4npu_ui_config.json")
self.ui_settings = UISettings()
self.cluster_config = ClusterConfig()
self.load_config()
def load_config(self):
"""Load configuration from file"""
try:
if os.path.exists(self.config_path):
with open(self.config_path, 'r') as f:
data = json.load(f)
if 'ui_settings' in data:
self.ui_settings = UISettings(**data['ui_settings'])
if 'cluster_config' in data:
self.cluster_config = ClusterConfig(**data['cluster_config'])
except Exception as e:
print(f"Warning: Could not load UI config: {e}")
def save_config(self):
"""Save configuration to file"""
try:
data = {
'ui_settings': asdict(self.ui_settings),
'cluster_config': asdict(self.cluster_config)
}
with open(self.config_path, 'w') as f:
json.dump(data, f, indent=2)
except Exception as e:
print(f"Warning: Could not save UI config: {e}")
def get_available_ports(self) -> List[int]:
"""Get list of available USB ports"""
return list(range(
self.cluster_config.port_range_start,
self.cluster_config.port_range_end + 1,
2 # Even numbers only for dongles
))
def validate_stage_config(self, stage_config: Dict[str, Any]) -> Dict[str, Any]:
"""
Validate and normalize a stage configuration from UI
Args:
stage_config: Raw stage configuration from UI
Returns:
Validated and normalized configuration
"""
# Ensure required fields
normalized = {
'name': stage_config.get('name', 'Unnamed Stage'),
'dongles': max(1, min(stage_config.get('dongles', 2), self.cluster_config.available_dongles)),
'port_ids': stage_config.get('port_ids', 'auto'),
'model_path': stage_config.get('model_path', ''),
}
# Auto-assign ports if needed
if normalized['port_ids'] == 'auto':
available_ports = self.get_available_ports()
dongles_needed = normalized['dongles']
normalized['port_ids'] = ','.join(map(str, available_ports[:dongles_needed]))
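# e.g. dongles=3 with the default port range -> port_ids '28,30,32'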
# Validate model path
if normalized['model_path'] and not os.path.exists(normalized['model_path']):
print(f"Warning: Model file not found: {normalized['model_path']}")
return normalized
def convert_ui_to_inference_config(self, ui_stages: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
"""
Convert UI stage configurations to InferencePipeline StageConfig format
Args:
ui_stages: List of stage configurations from UI
Returns:
List of configurations ready for InferencePipeline
"""
inference_configs = []
for stage in ui_stages:
validated = self.validate_stage_config(stage)
# Parse port IDs
if isinstance(validated['port_ids'], str):
port_ids = [int(p.strip()) for p in validated['port_ids'].split(',') if p.strip()]
else:
port_ids = validated['port_ids']
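# e.g. '32,34,36,38' -> [32, 34, 36, 38]; a list passes through unchanged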
config = {
'stage_id': validated['name'].lower().replace(' ', '_').replace('-', '_'),
'port_ids': port_ids,
'scpu_fw_path': self.ui_settings.default_fw_paths['scpu'],
'ncpu_fw_path': self.ui_settings.default_fw_paths['ncpu'],
'model_path': validated['model_path'] or f"default_{len(inference_configs)}.nef",
'upload_fw': True,
'max_queue_size': 50
}
inference_configs.append(config)
return inference_configs
def estimate_performance(self, ui_stages: List[Dict[str, Any]]) -> Dict[str, Any]:
"""
Estimate performance metrics for given stage configurations
Args:
ui_stages: List of stage configurations from UI
Returns:
Performance metrics dictionary
"""
total_dongles = sum(stage.get('dongles', 2) for stage in ui_stages)
# Performance estimation based on dongle series
fps_per_dongle = {
'KL520': 30,
'KL720': 45,
'KL1080': 60
}.get(self.cluster_config.dongle_series, 30)
stage_fps = []
stage_latencies = []
for stage in ui_stages:
dongles = stage.get('dongles', 2)
stage_fps_val = dongles * fps_per_dongle
stage_latency = 1000 / stage_fps_val # ms
stage_fps.append(stage_fps_val)
stage_latencies.append(stage_latency)
# Pipeline metrics
pipeline_fps = min(stage_fps) if stage_fps else 0
total_latency = sum(stage_latencies)
# Resource utilization
utilization = (total_dongles / self.cluster_config.available_dongles) * 100
# Power estimation (simplified)
estimated_power = total_dongles * 5 # 5W per dongle
return {
'total_dongles': total_dongles,
'available_dongles': self.cluster_config.available_dongles,
'utilization_percent': utilization,
'pipeline_fps': pipeline_fps,
'total_latency': total_latency,
'stage_fps': stage_fps,
'stage_latencies': stage_latencies,
'estimated_power_watts': estimated_power,
'power_limit_watts': self.cluster_config.power_limit_watts,
'within_power_budget': estimated_power <= self.cluster_config.power_limit_watts
}
def generate_deployment_script(self, ui_stages: List[Dict[str, Any]],
script_format: str = "python") -> str:
"""
Generate deployment script from UI configurations
Args:
ui_stages: List of stage configurations from UI
script_format: Format for the script ("python", "json", "yaml")
Returns:
Generated script content
"""
inference_configs = self.convert_ui_to_inference_config(ui_stages)
if script_format == "python":
return self._generate_python_script(inference_configs)
elif script_format == "json":
return json.dumps({
"pipeline_name": "UI_Generated_Pipeline",
"stages": inference_configs,
"ui_settings": asdict(self.ui_settings),
"cluster_config": asdict(self.cluster_config)
}, indent=2)
elif script_format == "yaml":
return self._generate_yaml_script(inference_configs)
else:
raise ValueError(f"Unsupported script format: {script_format}")
def _generate_python_script(self, inference_configs: List[Dict[str, Any]]) -> str:
"""Generate Python deployment script"""
script = '''#!/usr/bin/env python3
"""
Generated Deployment Script
Created by cluster4npu UI
"""
import sys
import os
import time
sys.path.append(os.path.join(os.path.dirname(__file__), 'src'))
from src.cluster4npu.InferencePipeline import InferencePipeline, StageConfig
def create_pipeline():
"""Create and configure the inference pipeline"""
stage_configs = [
'''
for config in inference_configs:
script += f''' StageConfig(
stage_id="{config['stage_id']}",
port_ids={config['port_ids']},
scpu_fw_path="{config['scpu_fw_path']}",
ncpu_fw_path="{config['ncpu_fw_path']}",
model_path="{config['model_path']}",
upload_fw={config['upload_fw']},
max_queue_size={config['max_queue_size']}
),
'''
script += ''' ]
return InferencePipeline(stage_configs, pipeline_name="UI_Generated_Pipeline")
def main():
"""Main execution function"""
print("🚀 Starting UI-generated pipeline...")
pipeline = create_pipeline()
try:
print("⚡ Initializing pipeline...")
pipeline.initialize()
print("▶️ Starting pipeline...")
pipeline.start()
# Set up callbacks
def handle_results(pipeline_data):
print(f"📊 Results: {pipeline_data.stage_results}")
def handle_errors(pipeline_data):
print(f"❌ Error: {pipeline_data.stage_results}")
pipeline.set_result_callback(handle_results)
pipeline.set_error_callback(handle_errors)
print("✅ Pipeline running. Press Ctrl+C to stop.")
# Run until interrupted
while True:
time.sleep(1)
except KeyboardInterrupt:
print("\\n🛑 Stopping pipeline...")
except Exception as e:
print(f"❌ Pipeline error: {e}")
finally:
pipeline.stop()
print("✅ Pipeline stopped.")
if __name__ == "__main__":
main()
'''
return script
def _generate_yaml_script(self, inference_configs: List[Dict[str, Any]]) -> str:
"""Generate YAML configuration"""
yaml_content = '''# cluster4npu Pipeline Configuration
# Generated by UI Application
pipeline:
name: "UI_Generated_Pipeline"
stages:
'''
for config in inference_configs:
yaml_content += f''' - stage_id: "{config['stage_id']}"
port_ids: {config['port_ids']}
scpu_fw_path: "{config['scpu_fw_path']}"
ncpu_fw_path: "{config['ncpu_fw_path']}"
model_path: "{config['model_path']}"
upload_fw: {str(config['upload_fw']).lower()}
max_queue_size: {config['max_queue_size']}
'''
yaml_content += f'''
# Cluster Configuration
cluster:
available_dongles: {self.cluster_config.available_dongles}
dongle_series: "{self.cluster_config.dongle_series}"
power_limit_watts: {self.cluster_config.power_limit_watts}
# UI Settings
ui:
theme: "{self.ui_settings.theme}"
auto_save_interval: {self.ui_settings.auto_save_interval}
'''
return yaml_content
# Global integration instance
ui_integration = UIIntegration()
def get_integration() -> UIIntegration:
"""Get the global UI integration instance"""
return ui_integration
# Convenience functions for UI components
def validate_stage_configs(ui_stages: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
"""Validate UI stage configurations"""
return [ui_integration.validate_stage_config(stage) for stage in ui_stages]
def estimate_pipeline_performance(ui_stages: List[Dict[str, Any]]) -> Dict[str, Any]:
"""Estimate performance for UI stage configurations"""
return ui_integration.estimate_performance(ui_stages)
def export_pipeline_config(ui_stages: List[Dict[str, Any]], format_type: str = "python") -> str:
"""Export UI configurations to deployment scripts"""
return ui_integration.generate_deployment_script(ui_stages, format_type)
def get_available_ports() -> List[int]:
"""Get list of available dongle ports"""
return ui_integration.get_available_ports()
def save_ui_settings():
"""Save current UI settings"""
ui_integration.save_config()
if __name__ == "__main__":
# Test the integration
print("🧪 Testing UI Integration...")
# Sample UI stage configurations
test_stages = [
{'name': 'Input Stage', 'dongles': 2, 'port_ids': 'auto', 'model_path': 'input.nef'},
{'name': 'Processing Stage', 'dongles': 4, 'port_ids': '32,34,36,38', 'model_path': 'process.nef'},
{'name': 'Output Stage', 'dongles': 2, 'port_ids': 'auto', 'model_path': 'output.nef'}
]
# Test validation
validated = validate_stage_configs(test_stages)
print(f"✅ Validated {len(validated)} stages")
# Test performance estimation
performance = estimate_pipeline_performance(test_stages)
print(f"📊 Pipeline FPS: {performance['pipeline_fps']:.1f}")
print(f"📊 Total Latency: {performance['total_latency']:.1f} ms")
print(f"📊 Power Usage: {performance['estimated_power_watts']} W")
# Test script generation
python_script = export_pipeline_config(test_stages, "python")
print(f"🐍 Generated Python script ({len(python_script)} chars)")
json_config = export_pipeline_config(test_stages, "json")
print(f"📄 Generated JSON config ({len(json_config)} chars)")
print("✅ Integration test completed!")

359
ui_integration_example.py Normal file
View File

@ -0,0 +1,359 @@
#!/usr/bin/env python3
"""
UI Integration Example for cluster4npu Tools
============================================
This file demonstrates how to integrate the UI application with the core cluster4npu tools:
- InferencePipeline
- Multidongle
- StageConfig
Usage:
python ui_integration_example.py
This example shows how stage configurations from the UI can be converted
to actual InferencePipeline configurations and executed.
"""
import sys
import os
sys.path.append(os.path.join(os.path.dirname(__file__), 'src'))
try:
from src.cluster4npu.InferencePipeline import InferencePipeline, StageConfig
from src.cluster4npu.Multidongle import PreProcessor, PostProcessor
CLUSTER4NPU_AVAILABLE = True
except ImportError:
print("cluster4npu modules not available - running in simulation mode")
CLUSTER4NPU_AVAILABLE = False
# Mock classes for demonstration
class StageConfig:
def __init__(self, **kwargs):
self.__dict__.update(kwargs)
class InferencePipeline:
def __init__(self, stages, **kwargs):
self.stages = stages
def initialize(self):
print("Mock: Initializing pipeline...")
def start(self):
print("Mock: Starting pipeline...")
def stop(self):
print("Mock: Stopping pipeline...")
def convert_ui_config_to_pipeline(stage_configs):
"""
Convert UI stage configurations to InferencePipeline configurations
Args:
stage_configs: List of stage configurations from UI
Returns:
List of StageConfig objects for InferencePipeline
"""
pipeline_stages = []
for stage_idx, config in enumerate(stage_configs):
# Parse port IDs
if config['port_ids'] == 'auto':
# Auto-assign ports based on stage index (enumerate avoids the
# first-match ambiguity of list.index() when stages share a config)
port_ids = [28 + (stage_idx * 2), 30 + (stage_idx * 2)]
else:
# Parse comma-separated port IDs
port_ids = [int(p.strip()) for p in config['port_ids'].split(',') if p.strip()]
# Create StageConfig
stage_config = StageConfig(
stage_id=config['name'].lower().replace(' ', '_'),
port_ids=port_ids,
scpu_fw_path="fw_scpu.bin", # Default firmware paths
ncpu_fw_path="fw_ncpu.bin",
model_path=config['model_path'] or "default_model.nef",
upload_fw=True,
max_queue_size=50
)
pipeline_stages.append(stage_config)
print(f"✓ Created stage: {config['name']}")
print(f" - Dongles: {config['dongles']}")
print(f" - Ports: {port_ids}")
print(f" - Model: {config['model_path'] or 'default_model.nef'}")
print()
return pipeline_stages
def create_sample_ui_config():
"""Create a sample UI configuration for testing"""
return [
{
'name': 'Input Processing',
'dongles': 2,
'port_ids': '28,30',
'model_path': 'models/input_processor.nef'
},
{
'name': 'Main Inference',
'dongles': 4,
'port_ids': '32,34,36,38',
'model_path': 'models/main_model.nef'
},
{
'name': 'Post Processing',
'dongles': 2,
'port_ids': 'auto',
'model_path': 'models/post_processor.nef'
}
]
def run_pipeline_from_ui_config(stage_configs):
"""
Run an InferencePipeline based on UI stage configurations
Args:
stage_configs: List of stage configurations from UI
"""
print("🚀 Converting UI Configuration to Pipeline...")
print("=" * 50)
# Convert UI config to pipeline stages
pipeline_stages = convert_ui_config_to_pipeline(stage_configs)
print(f"📊 Created {len(pipeline_stages)} pipeline stages")
print()
# Create and run pipeline
try:
print("🔧 Initializing InferencePipeline...")
pipeline = InferencePipeline(
stage_configs=pipeline_stages,
pipeline_name="UI_Generated_Pipeline"
)
if CLUSTER4NPU_AVAILABLE:
print("⚡ Starting pipeline (real hardware)...")
pipeline.initialize()
pipeline.start()
# Set up result callback
def handle_results(pipeline_data):
print(f"📊 Pipeline Results: {pipeline_data.stage_results}")
pipeline.set_result_callback(handle_results)
print("✅ Pipeline running! Press Ctrl+C to stop...")
try:
import time
while True:
time.sleep(1)
except KeyboardInterrupt:
print("\n🛑 Stopping pipeline...")
pipeline.stop()
print("✅ Pipeline stopped successfully")
else:
print("🎭 Running in simulation mode...")
pipeline.initialize()
pipeline.start()
# Simulate some processing
import time
for i in range(5):
print(f"⏳ Processing frame {i+1}...")
time.sleep(1)
pipeline.stop()
print("✅ Simulation complete")
except Exception as e:
print(f"❌ Error running pipeline: {e}")
return False
return True
def calculate_performance_metrics(stage_configs):
"""
Calculate performance metrics based on stage configurations
Args:
stage_configs: List of stage configurations from UI
Returns:
Dict with performance metrics
"""
total_dongles = sum(config['dongles'] for config in stage_configs)
# Simple performance estimation
base_fps_per_dongle = 30
stage_fps = []
for config in stage_configs:
stage_fps.append(config['dongles'] * base_fps_per_dongle)
# Pipeline FPS is limited by slowest stage
pipeline_fps = min(stage_fps) if stage_fps else 0
# Total latency is sum of stage latencies
total_latency = sum(1000 / fps for fps in stage_fps) # ms
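# e.g. stages with 1 and 3 dongles -> stage FPS [30, 90], pipeline FPS = 30,
# latency ≈ 33.3 + 11.1 = 44.4 ms, and the 1-dongle stage is the bottleneck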
return {
'total_dongles': total_dongles,
'pipeline_fps': pipeline_fps,
'total_latency': total_latency,
'stage_fps': stage_fps,
'bottleneck_stage': stage_configs[stage_fps.index(min(stage_fps))]['name'] if stage_fps else None
}
def export_configuration(stage_configs, format_type="python"):
"""
Export stage configuration to various formats
Args:
stage_configs: List of stage configurations from UI
format_type: Export format ("python", "json", "yaml")
"""
if format_type == "python":
return generate_python_script(stage_configs)
elif format_type == "json":
import json
return json.dumps(stage_configs, indent=2)
elif format_type == "yaml":
yaml_content = "# Pipeline Configuration\nstages:\n"
for config in stage_configs:
yaml_content += f" - name: {config['name']}\n"
yaml_content += f" dongles: {config['dongles']}\n"
yaml_content += f" port_ids: '{config['port_ids']}'\n"
yaml_content += f" model_path: '{config['model_path']}'\n"
return yaml_content
else:
raise ValueError(f"Unsupported format: {format_type}")
def generate_python_script(stage_configs):
"""Generate a standalone Python script from stage configurations"""
script = '''#!/usr/bin/env python3
"""
Generated Pipeline Script
Auto-generated from UI configuration
"""
from src.cluster4npu.InferencePipeline import InferencePipeline, StageConfig
import time
def main():
# Stage configurations generated from UI
stage_configs = [
'''
for config in stage_configs:
port_ids = ([int(p.strip()) for p in config['port_ids'].split(',') if p.strip()]
if ',' in config['port_ids'] else [28, 30])  # parse to ints; default pair for 'auto'
script += f''' StageConfig(
stage_id="{config['name'].lower().replace(' ', '_')}",
port_ids={port_ids},
scpu_fw_path="fw_scpu.bin",
ncpu_fw_path="fw_ncpu.bin",
model_path="{config['model_path']}",
upload_fw=True,
max_queue_size=50
),
'''
script += ''' ]
# Create and run pipeline
pipeline = InferencePipeline(stage_configs, pipeline_name="GeneratedPipeline")
try:
print("Initializing pipeline...")
pipeline.initialize()
print("Starting pipeline...")
pipeline.start()
def handle_results(pipeline_data):
print(f"Results: {pipeline_data.stage_results}")
pipeline.set_result_callback(handle_results)
print("Pipeline running. Press Ctrl+C to stop.")
while True:
time.sleep(1)
except KeyboardInterrupt:
print("Stopping pipeline...")
finally:
pipeline.stop()
print("Pipeline stopped.")
if __name__ == "__main__":
main()
'''
return script
def main():
"""Main function demonstrating UI integration"""
print("🎯 cluster4npu UI Integration Example")
print("=" * 40)
print()
# Create sample configuration (as would come from UI)
stage_configs = create_sample_ui_config()
print("📋 Sample UI Configuration:")
for i, config in enumerate(stage_configs, 1):
print(f" {i}. {config['name']}: {config['dongles']} dongles, ports {config['port_ids']}")
print()
# Calculate performance metrics
metrics = calculate_performance_metrics(stage_configs)
print("📊 Performance Metrics:")
print(f" • Total Dongles: {metrics['total_dongles']}")
print(f" • Pipeline FPS: {metrics['pipeline_fps']:.1f}")
print(f" • Total Latency: {metrics['total_latency']:.1f} ms")
print(f" • Bottleneck Stage: {metrics['bottleneck_stage']}")
print()
# Export configuration
print("📄 Export Examples:")
print("\n--- Python Script ---")
python_script = export_configuration(stage_configs, "python")
print(python_script[:300] + "...")
print("\n--- JSON Config ---")
json_config = export_configuration(stage_configs, "json")
print(json_config)
print("\n--- YAML Config ---")
yaml_config = export_configuration(stage_configs, "yaml")
print(yaml_config)
# Ask user if they want to run the pipeline
try:
user_input = input("\n🚀 Run the pipeline? (y/N): ").strip().lower()
if user_input == 'y':
success = run_pipeline_from_ui_config(stage_configs)
if success:
print("✅ Integration example completed successfully!")
else:
print("❌ Integration example failed.")
else:
print("✅ Integration example completed (pipeline not run).")
except (KeyboardInterrupt, EOFError):
print("\n✅ Integration example completed.")
if __name__ == "__main__":
main()

184
uv.lock generated
View File

@ -12,14 +12,34 @@ name = "cluster4npu"
version = "0.1.0" version = "0.1.0"
source = { virtual = "." } source = { virtual = "." }
dependencies = [ dependencies = [
{ name = "nodegraphqt" },
{ name = "numpy" }, { name = "numpy" },
{ name = "odengraphqt" },
{ name = "opencv-python" }, { name = "opencv-python" },
{ name = "pyqt5" },
{ name = "qt-py" },
]
[package.metadata]
requires-dist = [
{ name = "nodegraphqt", specifier = ">=0.6.38" },
{ name = "numpy", specifier = ">=2.2.6" }, { name = "numpy", specifier = ">=2.2.6" },
{ name = "odengraphqt", specifier = ">=0.7.4" },
{ name = "opencv-python", specifier = ">=4.11.0.86" }, { name = "opencv-python", specifier = ">=4.11.0.86" },
{ name = "pyqt5", specifier = ">=5.15.11" },
{ name = "qt-py", specifier = ">=1.4.6" },
]
[[package]]
name = "nodegraphqt"
version = "0.6.38"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "qt-py" },
]
sdist = { url = "https://files.pythonhosted.org/packages/02/49/b00e0c38a705890a6a121fdc25cc8d1590464a5556f2a912acb617b00cf7/nodegraphqt-0.6.38.tar.gz", hash = "sha256:918fb5e35622804c76095ff254bf7552c87628dca72ebc0adb0bcbf703a19a73", size = 111150, upload-time = "2024-10-07T01:55:05.574Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/dc/9a/06d9a6785d46f1b9f4873f0b125a1114e239224b857644626addba2aafe6/NodeGraphQt-0.6.38-py3-none-any.whl", hash = "sha256:de79eee416fbce80e1787e5ece526a840e47eb8bbc9dc913629944f6a23951e3", size = 135105, upload-time = "2024-10-07T01:55:03.754Z" },
]
[[package]]
@ -60,6 +80,20 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/67/0e/35082d13c09c02c011cf21570543d202ad929d961c02a147493cb0c2bdf5/numpy-2.2.6-cp313-cp313t-win_amd64.whl", hash = "sha256:6031dd6dfecc0cf9f668681a37648373bddd6421fff6c66ec1624eed0180ee06", size = 12771374, upload-time = "2025-05-17T21:43:35.479Z" }, { url = "https://files.pythonhosted.org/packages/67/0e/35082d13c09c02c011cf21570543d202ad929d961c02a147493cb0c2bdf5/numpy-2.2.6-cp313-cp313t-win_amd64.whl", hash = "sha256:6031dd6dfecc0cf9f668681a37648373bddd6421fff6c66ec1624eed0180ee06", size = 12771374, upload-time = "2025-05-17T21:43:35.479Z" },
] ]
[[package]]
name = "odengraphqt"
version = "0.7.4"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "pyside6" },
{ name = "qtpy" },
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/0d/47/d4656eb0042a1a7d51c6f969c6a93a693c24b5682dc05fd1bb8eb3f87187/OdenGraphQt-0.7.4.tar.gz", hash = "sha256:91a8238620e3616a680d15832db44c412f96563472f0bd5296da2ff6460a06fe", size = 119687, upload-time = "2024-04-02T10:09:45.351Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/55/24/891913458f9909cd2a7aab55de2ca0143c1f1ad7d0d6deca65a58542412c/OdenGraphQt-0.7.4-py3-none-any.whl", hash = "sha256:999a355536e06eaa17cb0d3fa754927b497a945f5b7e4e21e46541af06dc21cb", size = 142848, upload-time = "2024-04-02T10:09:43.939Z" },
]
[[package]]
name = "opencv-python"
version = "4.11.0.86"
@ -76,3 +110,153 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/fb/d7/1d5941a9dde095468b288d989ff6539dd69cd429dbf1b9e839013d21b6f0/opencv_python-4.11.0.86-cp37-abi3-win32.whl", hash = "sha256:810549cb2a4aedaa84ad9a1c92fbfdfc14090e2749cedf2c1589ad8359aa169b", size = 29384337, upload-time = "2025-01-16T13:52:13.549Z" }, { url = "https://files.pythonhosted.org/packages/fb/d7/1d5941a9dde095468b288d989ff6539dd69cd429dbf1b9e839013d21b6f0/opencv_python-4.11.0.86-cp37-abi3-win32.whl", hash = "sha256:810549cb2a4aedaa84ad9a1c92fbfdfc14090e2749cedf2c1589ad8359aa169b", size = 29384337, upload-time = "2025-01-16T13:52:13.549Z" },
{ url = "https://files.pythonhosted.org/packages/a4/7d/f1c30a92854540bf789e9cd5dde7ef49bbe63f855b85a2e6b3db8135c591/opencv_python-4.11.0.86-cp37-abi3-win_amd64.whl", hash = "sha256:085ad9b77c18853ea66283e98affefe2de8cc4c1f43eda4c100cf9b2721142ec", size = 39488044, upload-time = "2025-01-16T13:52:21.928Z" }, { url = "https://files.pythonhosted.org/packages/a4/7d/f1c30a92854540bf789e9cd5dde7ef49bbe63f855b85a2e6b3db8135c591/opencv_python-4.11.0.86-cp37-abi3-win_amd64.whl", hash = "sha256:085ad9b77c18853ea66283e98affefe2de8cc4c1f43eda4c100cf9b2721142ec", size = 39488044, upload-time = "2025-01-16T13:52:21.928Z" },
] ]
[[package]]
name = "packaging"
version = "25.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/a1/d4/1fc4078c65507b51b96ca8f8c3ba19e6a61c8253c72794544580a7b6c24d/packaging-25.0.tar.gz", hash = "sha256:d443872c98d677bf60f6a1f2f8c1cb748e8fe762d2bf9d3148b5599295b0fc4f", size = 165727, upload-time = "2025-04-19T11:48:59.673Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/20/12/38679034af332785aac8774540895e234f4d07f7545804097de4b666afd8/packaging-25.0-py3-none-any.whl", hash = "sha256:29572ef2b1f17581046b3a2227d5c611fb25ec70ca1ba8554b24b0e69331a484", size = 66469, upload-time = "2025-04-19T11:48:57.875Z" },
]
[[package]]
name = "pyqt5"
version = "5.15.11"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "pyqt5-qt5" },
{ name = "pyqt5-sip" },
]
sdist = { url = "https://files.pythonhosted.org/packages/0e/07/c9ed0bd428df6f87183fca565a79fee19fa7c88c7f00a7f011ab4379e77a/PyQt5-5.15.11.tar.gz", hash = "sha256:fda45743ebb4a27b4b1a51c6d8ef455c4c1b5d610c90d2934c7802b5c1557c52", size = 3216775, upload-time = "2024-07-19T08:39:57.756Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/11/64/42ec1b0bd72d87f87bde6ceb6869f444d91a2d601f2e67cd05febc0346a1/PyQt5-5.15.11-cp38-abi3-macosx_11_0_arm64.whl", hash = "sha256:c8b03dd9380bb13c804f0bdb0f4956067f281785b5e12303d529f0462f9afdc2", size = 6579776, upload-time = "2024-07-19T08:39:19.775Z" },
{ url = "https://files.pythonhosted.org/packages/49/f5/3fb696f4683ea45d68b7e77302eff173493ac81e43d63adb60fa760b9f91/PyQt5-5.15.11-cp38-abi3-macosx_11_0_x86_64.whl", hash = "sha256:6cd75628f6e732b1ffcfe709ab833a0716c0445d7aec8046a48d5843352becb6", size = 7016415, upload-time = "2024-07-19T08:39:32.977Z" },
{ url = "https://files.pythonhosted.org/packages/b4/8c/4065950f9d013c4b2e588fe33cf04e564c2322842d84dbcbce5ba1dc28b0/PyQt5-5.15.11-cp38-abi3-manylinux_2_17_x86_64.whl", hash = "sha256:cd672a6738d1ae33ef7d9efa8e6cb0a1525ecf53ec86da80a9e1b6ec38c8d0f1", size = 8188103, upload-time = "2024-07-19T08:39:40.561Z" },
{ url = "https://files.pythonhosted.org/packages/f3/f0/ae5a5b4f9b826b29ea4be841b2f2d951bcf5ae1d802f3732b145b57c5355/PyQt5-5.15.11-cp38-abi3-win32.whl", hash = "sha256:76be0322ceda5deecd1708a8d628e698089a1cea80d1a49d242a6d579a40babd", size = 5433308, upload-time = "2024-07-19T08:39:46.932Z" },
{ url = "https://files.pythonhosted.org/packages/56/d5/68eb9f3d19ce65df01b6c7b7a577ad3bbc9ab3a5dd3491a4756e71838ec9/PyQt5-5.15.11-cp38-abi3-win_amd64.whl", hash = "sha256:bdde598a3bb95022131a5c9ea62e0a96bd6fb28932cc1619fd7ba211531b7517", size = 6865864, upload-time = "2024-07-19T08:39:53.572Z" },
]
[[package]]
name = "pyqt5-qt5"
version = "5.15.17"
source = { registry = "https://pypi.org/simple" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d3/f9/accb06e76e23fb23053d48cc24fd78dec6ed14cb4d5cbadb0fd4a0c1b02e/PyQt5_Qt5-5.15.17-py3-none-macosx_10_13_x86_64.whl", hash = "sha256:d8b8094108e748b4bbd315737cfed81291d2d228de43278f0b8bd7d2b808d2b9", size = 39972275, upload-time = "2025-05-24T11:15:42.259Z" },
{ url = "https://files.pythonhosted.org/packages/87/1a/e1601ad6934cc489b8f1e967494f23958465cf1943712f054c5a306e9029/PyQt5_Qt5-5.15.17-py3-none-macosx_11_0_arm64.whl", hash = "sha256:b68628f9b8261156f91d2f72ebc8dfb28697c4b83549245d9a68195bd2d74f0c", size = 37135109, upload-time = "2025-05-24T11:15:59.786Z" },
{ url = "https://files.pythonhosted.org/packages/ac/e1/13d25a9ff2ac236a264b4603abaa39fa8bb9a7aa430519bb5f545c5b008d/PyQt5_Qt5-5.15.17-py3-none-manylinux2014_x86_64.whl", hash = "sha256:b018f75d1cc61146396fa5af14da1db77c5d6318030e5e366f09ffdf7bd358d8", size = 61112954, upload-time = "2025-05-24T11:16:26.036Z" },
]
[[package]]
name = "pyqt5-sip"
version = "12.17.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/01/79/086b50414bafa71df494398ad277d72e58229a3d1c1b1c766d12b14c2e6d/pyqt5_sip-12.17.0.tar.gz", hash = "sha256:682dadcdbd2239af9fdc0c0628e2776b820e128bec88b49b8d692fe682f90b4f", size = 104042, upload-time = "2025-02-02T17:13:11.268Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/a3/e6/e51367c28d69b5a462f38987f6024e766fd8205f121fe2f4d8ba2a6886b9/PyQt5_sip-12.17.0-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:ea08341c8a5da00c81df0d689ecd4ee47a95e1ecad9e362581c92513f2068005", size = 124650, upload-time = "2025-02-02T17:12:50.595Z" },
{ url = "https://files.pythonhosted.org/packages/64/3b/e6d1f772b41d8445d6faf86cc9da65910484ebd9f7df83abc5d4955437d0/PyQt5_sip-12.17.0-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:4a92478d6808040fbe614bb61500fbb3f19f72714b99369ec28d26a7e3494115", size = 281893, upload-time = "2025-02-02T17:12:51.966Z" },
{ url = "https://files.pythonhosted.org/packages/ed/c5/d17fc2ddb9156a593710c88afd98abcf4055a2224b772f8bec2c6eea879c/PyQt5_sip-12.17.0-cp312-cp312-win32.whl", hash = "sha256:b0ff280b28813e9bfd3a4de99490739fc29b776dc48f1c849caca7239a10fc8b", size = 49438, upload-time = "2025-02-02T17:12:54.426Z" },
{ url = "https://files.pythonhosted.org/packages/fe/c5/1174988d52c732d07033cf9a5067142b01d76be7731c6394a64d5c3ef65c/PyQt5_sip-12.17.0-cp312-cp312-win_amd64.whl", hash = "sha256:54c31de7706d8a9a8c0fc3ea2c70468aba54b027d4974803f8eace9c22aad41c", size = 58017, upload-time = "2025-02-02T17:12:56.31Z" },
{ url = "https://files.pythonhosted.org/packages/fd/5d/f234e505af1a85189310521447ebc6052ebb697efded850d0f2b2555f7aa/PyQt5_sip-12.17.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:c7a7ff355e369616b6bcb41d45b742327c104b2bf1674ec79b8d67f8f2fa9543", size = 124580, upload-time = "2025-02-02T17:12:58.158Z" },
{ url = "https://files.pythonhosted.org/packages/cd/cb/3b2050e9644d0021bdf25ddf7e4c3526e1edd0198879e76ba308e5d44faf/PyQt5_sip-12.17.0-cp313-cp313-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:419b9027e92b0b707632c370cfc6dc1f3b43c6313242fc4db57a537029bd179c", size = 281563, upload-time = "2025-02-02T17:12:59.421Z" },
{ url = "https://files.pythonhosted.org/packages/51/61/b8ebde7e0b32d0de44c521a0ace31439885b0423d7d45d010a2f7d92808c/PyQt5_sip-12.17.0-cp313-cp313-win32.whl", hash = "sha256:351beab964a19f5671b2a3e816ecf4d3543a99a7e0650f88a947fea251a7589f", size = 49383, upload-time = "2025-02-02T17:13:00.597Z" },
{ url = "https://files.pythonhosted.org/packages/15/ed/ff94d6b2910e7627380cb1fc9a518ff966e6d78285c8e54c9422b68305db/PyQt5_sip-12.17.0-cp313-cp313-win_amd64.whl", hash = "sha256:672c209d05661fab8e17607c193bf43991d268a1eefbc2c4551fbf30fd8bb2ca", size = 58022, upload-time = "2025-02-02T17:13:01.738Z" },
]
[[package]]
name = "pyside6"
version = "6.8.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "pyside6-addons" },
{ name = "pyside6-essentials" },
{ name = "shiboken6" },
]
wheels = [
{ url = "https://files.pythonhosted.org/packages/3f/64/3a56578e01a4d282f15c42f2f0a0322c1e010d1339901d1a52880a678806/PySide6-6.8.1-cp39-abi3-macosx_12_0_universal2.whl", hash = "sha256:6d1fd95651cdbdea741af21e155350986eca31ff015fc4c721ce01c2a110a4cc", size = 531916, upload-time = "2024-12-02T08:44:13.424Z" },
{ url = "https://files.pythonhosted.org/packages/cf/9b/923e4bf34c85e04f7b60e89e27e150a08b5e6a2b5950227e3010c6d9d2ba/PySide6-6.8.1-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:7d6adc5d53313249bbe02edb673877c1d437e215d71e88da78412520653f5c9f", size = 532709, upload-time = "2024-12-02T08:44:15.976Z" },
{ url = "https://files.pythonhosted.org/packages/7e/7e/366c05e29a17a9e85edffd147dacfbabc76ee7e6e0f9583328559eb74fbb/PySide6-6.8.1-cp39-abi3-manylinux_2_39_aarch64.whl", hash = "sha256:ddeeaeca8ebd0ddb1ded30dd33e9240a40f330cc91832de346ba6c9d0cd1253e", size = 532709, upload-time = "2024-12-02T08:44:18.321Z" },
{ url = "https://files.pythonhosted.org/packages/68/e6/4cffea422cca3f5bc3d595739b3a35ee710e9864f8ca5c6cf48376864ac0/PySide6-6.8.1-cp39-abi3-win_amd64.whl", hash = "sha256:866eeaca3ffead6b9d30fa3ed395d5624da0246d7586c8b8207e77ac65d82458", size = 538388, upload-time = "2024-12-02T08:44:20.222Z" },
]
[[package]]
name = "pyside6-addons"
version = "6.8.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "pyside6-essentials" },
{ name = "shiboken6" },
]
wheels = [
{ url = "https://files.pythonhosted.org/packages/ee/3d/7fb4334d5250a9fa23ca57b81a77e60edf77d2f60bc5ca0ba9a8e3bc56fb/PySide6_Addons-6.8.1-cp39-abi3-macosx_12_0_universal2.whl", hash = "sha256:879c12346b4b76f5d5ee6499d8ca53b5666c0c998b8fdf8780f08f69ea95d6f9", size = 302212966, upload-time = "2024-12-02T08:40:14.687Z" },
{ url = "https://files.pythonhosted.org/packages/e4/f6/f3071f51e39e9fbe186aafc1c8d8a0b2a4bd9eb393fee702b73ed3eef5ae/PySide6_Addons-6.8.1-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:f80cc03c1ac54132c6f800aa461dced64acd7d1646898db164ccb56fe3c23dd4", size = 160308867, upload-time = "2024-12-02T08:41:35.782Z" },
{ url = "https://files.pythonhosted.org/packages/48/12/9ff2937b571feccde5261e5be6806bdc5208f29a826783bacec756667384/PySide6_Addons-6.8.1-cp39-abi3-manylinux_2_39_aarch64.whl", hash = "sha256:570a25016d80046274f454ed0bb06734f478ce6c21be5dec62b624773fc7504e", size = 156107988, upload-time = "2024-12-02T08:42:23.562Z" },
{ url = "https://files.pythonhosted.org/packages/ca/71/32e2cadc50996ea855d35baba03e0b783f5ed9ae82f3da67623e66ef44a5/PySide6_Addons-6.8.1-cp39-abi3-win_amd64.whl", hash = "sha256:d7c8c1e89ee0db84631d5b8fdb9129d9d2a0ffb3b4cb2f5192dc8367dd980db4", size = 127967740, upload-time = "2024-12-02T08:42:58.509Z" },
]
[[package]]
name = "pyside6-essentials"
version = "6.8.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "shiboken6" },
]
wheels = [
{ url = "https://files.pythonhosted.org/packages/2c/b9/1de4473bc02b9bd325b996352f88db3a235e7e227a3d6a8bd6d3744ebb52/PySide6_Essentials-6.8.1-cp39-abi3-macosx_12_0_universal2.whl", hash = "sha256:bd05155245e3cd1572e68d72772e78fadfd713575bbfdd2c5e060d5278e390e9", size = 164790658, upload-time = "2024-12-02T08:39:25.101Z" },
{ url = "https://files.pythonhosted.org/packages/b1/cc/5af1e0c0306cd75864fba49934977d0a96bec4b293b2244f6f80460c2ff5/PySide6_Essentials-6.8.1-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:2f600b149e65b57acd6a444edb17615adc42cc2491548ae443ccb574036d86b1", size = 95271238, upload-time = "2024-12-02T08:40:13.922Z" },
{ url = "https://files.pythonhosted.org/packages/49/65/21e45a27ec195e01b7af9935e8fa207c30f6afd5389e563fa4be2558281b/PySide6_Essentials-6.8.1-cp39-abi3-manylinux_2_39_aarch64.whl", hash = "sha256:bf8a3c9ee0b997eb18fb00cb09aacaa28b8a51ce3c295a252cc594c5530aba56", size = 93125810, upload-time = "2024-12-02T08:40:57.589Z" },
{ url = "https://files.pythonhosted.org/packages/6c/6f/bdc288149c92664a487816055ba55fa5884f1e07bc35b66c5d22530d0a6d/PySide6_Essentials-6.8.1-cp39-abi3-win_amd64.whl", hash = "sha256:d5ed4ddb149f36d65bc49ae4260b2d213ee88b2d9a309012ae27f38158c2d1b6", size = 72570590, upload-time = "2024-12-02T08:41:32.36Z" },
]
[[package]]
name = "qt-py"
version = "1.4.6"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "types-pyside2" },
]
sdist = { url = "https://files.pythonhosted.org/packages/a9/7a/7dfe58082cead77f0600f5244a9f92caab683da99f2a2e36fa24870a41ca/qt_py-1.4.6.tar.gz", hash = "sha256:d26f808a093754f0b44858745965bab138525cffc77c1296a3293171b2e2469f", size = 57847, upload-time = "2025-05-13T04:21:08.36Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/bc/0d/3486a49ee1550b048a913fe2004588f84f469714950b073cbf2261d6e349/qt_py-1.4.6-py2.py3-none-any.whl", hash = "sha256:1e0f8da9af74f2b3448904fab313f6f79cad56b82895f1a2c541243f00cc244e", size = 42358, upload-time = "2025-05-13T04:21:06.657Z" },
]
[[package]]
name = "qtpy"
version = "2.4.3"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "packaging" },
]
sdist = { url = "https://files.pythonhosted.org/packages/70/01/392eba83c8e47b946b929d7c46e0f04b35e9671f8bb6fc36b6f7945b4de8/qtpy-2.4.3.tar.gz", hash = "sha256:db744f7832e6d3da90568ba6ccbca3ee2b3b4a890c3d6fbbc63142f6e4cdf5bb", size = 66982, upload-time = "2025-02-11T15:09:25.759Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/69/76/37c0ccd5ab968a6a438f9c623aeecc84c202ab2fabc6a8fd927580c15b5a/QtPy-2.4.3-py3-none-any.whl", hash = "sha256:72095afe13673e017946cc258b8d5da43314197b741ed2890e563cf384b51aa1", size = 95045, upload-time = "2025-02-11T15:09:24.162Z" },
]
[[package]]
name = "shiboken6"
version = "6.8.1"
source = { registry = "https://pypi.org/simple" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/27/66/1acae15fe8126356e8ad460b5dfdc2a17af51de9044c1a3c0e4f9ae69356/shiboken6-6.8.1-cp39-abi3-macosx_12_0_universal2.whl", hash = "sha256:9a2f51d1ddd3b6d193a0f0fdc09f8d41f2092bc664723c9b9efc1056660d0608", size = 399604, upload-time = "2024-12-02T08:37:22.778Z" },
{ url = "https://files.pythonhosted.org/packages/58/21/e5af942e6fc5a8c6b973aac8d822415ac54041b6861c3d835be9d217f538/shiboken6-6.8.1-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:1dc4c1976809b0e68872bb98474cccd590455bdcd015f0e0639907e94af27b6a", size = 203095, upload-time = "2024-12-02T08:37:24.302Z" },
{ url = "https://files.pythonhosted.org/packages/23/a1/711c7801386d49f9261eeace3f9dbe8f21b2d28b85d4d3b9e6342379c440/shiboken6-6.8.1-cp39-abi3-manylinux_2_39_aarch64.whl", hash = "sha256:ab5b60602ca6227103138aae89c4f5df3b1b8e249cbc8ec9e6e2a57f20ad9a91", size = 200113, upload-time = "2024-12-02T08:37:25.672Z" },
{ url = "https://files.pythonhosted.org/packages/2b/5f/3e9aa2b2fd1e24ff7e99717fa1ce3198556433e7ef611728e86f1fd70f94/shiboken6-6.8.1-cp39-abi3-win_amd64.whl", hash = "sha256:3ea127fd72be113b73cacd70e06687ad6f83c1c888047833c7dcdd5cf8e7f586", size = 1149267, upload-time = "2024-12-02T08:37:27.642Z" },
]
[[package]]
name = "types-pyside2"
version = "5.15.2.1.7"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/18/b9/b9691abe89b0dd6f02e52604dda35112e202084970edf1515eba22e45ab8/types_pyside2-5.15.2.1.7.tar.gz", hash = "sha256:1d65072deb97481ad481b3414f94d02fd5da07f5e709c2d439ced14f79b2537c", size = 539112, upload-time = "2024-03-11T19:17:12.962Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/c1/19/b093a69c7964ab9abea8130fc4ca7e5f1f0f9c19433e53e2ca41a38d1285/types_pyside2-5.15.2.1.7-py2.py3-none-any.whl", hash = "sha256:a7bec4cb4657179415ca7ec7c70a45f9f9938664e22f385c85fd7cd724b07d4d", size = 572176, upload-time = "2024-03-11T19:17:11.079Z" },
]
[[package]]
name = "typing-extensions"
version = "4.14.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/d1/bc/51647cd02527e87d05cb083ccc402f93e441606ff1f01739a62c8ad09ba5/typing_extensions-4.14.0.tar.gz", hash = "sha256:8676b788e32f02ab42d9e7c61324048ae4c6d844a399eebace3d4979d75ceef4", size = 107423, upload-time = "2025-06-02T14:52:11.399Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/69/e0/552843e0d356fbb5256d21449fa957fa4eff3bbc135a74a691ee70c7c5da/typing_extensions-4.14.0-py3-none-any.whl", hash = "sha256:a1514509136dd0b477638fc68d6a91497af5076466ad0fa6c338e44e359944af", size = 43839, upload-time = "2025-06-02T14:52:10.026Z" },
]