docs: Update documentation to match current visual pipeline designer architecture

- Rebrand README from InferencePipeline to Cluster4NPU UI Visual Pipeline Designer
- Focus documentation on PyQt5-based GUI and drag-and-drop workflow
- Update PROJECT_SUMMARY with current capabilities and focused development priorities
- Streamline DEVELOPMENT_ROADMAP with 4-phase implementation plan
- Remove redundant Chinese technical summary files (STAGE_IMPROVEMENTS_SUMMARY.md, UI_FIXES_SUMMARY.md, STATUS_BAR_FIXES_SUMMARY.md)
- Align all documentation with actual three-panel UI architecture and NodeGraphQt integration

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Masonmason 2025-07-30 16:32:28 +08:00
parent cde1aac908
commit a099c56bb5
6 changed files with 409 additions and 1644 deletions

DEVELOPMENT_ROADMAP.md
@@ -1,332 +1,131 @@
# Development Roadmap

## Mission
Create an intuitive visual pipeline designer that demonstrates clear speedup benefits of parallel NPU processing through real-time performance visualization and automated optimization.

## 🎯 Core Development Goals

### 1. Performance Visualization (Critical)
- **Speedup Metrics**: Clear display of 2x, 3x, 4x performance improvements
- **Before/After Comparison**: Visual proof of parallel processing benefits
- **Device Utilization**: Real-time visualization of NPU usage
- **Execution Flow**: Visual representation of parallel processing paths

### 2. Benchmarking System (Critical)
- **Automated Testing**: One-click performance measurement
- **Comparison Charts**: Single vs multi-device performance analysis
- **Regression Testing**: Track performance over time
- **Optimization Suggestions**: Automated recommendations

### 3. Device Management (High Priority)
- **Visual Dashboard**: Device status and health monitoring
- **Manual Allocation**: Drag-and-drop device assignment
- **Load Balancing**: Optimal distribution across available NPUs
- **Performance Profiling**: Individual device performance tracking

### 4. Real-time Monitoring (High Priority)
- **Live Charts**: FPS, latency, and throughput graphs
- **Resource Monitoring**: CPU, memory, and NPU utilization
- **Bottleneck Detection**: Automated identification of performance issues
- **Alert System**: Warnings for performance degradation
## 📋 Implementation Plan

### Phase 1: Performance Visualization (Weeks 1-2)

**Core Components:**
- `PerformanceBenchmarker` class for automated testing
- `PerformanceDashboard` widget with live charts
- Speedup calculation and display widgets
- Integration with existing pipeline editor

**Deliverables:**
- Single vs multi-device benchmark comparison
- Real-time FPS and latency monitoring
- Visual speedup indicators (e.g., "3.2x FASTER")
- Performance history tracking
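The speedup figures above can be computed directly from benchmark timings. A minimal sketch (the timing values are made up for illustration; the function names are not the actual project API):

```python
def compute_speedup(single_device_time: float, parallel_time: float) -> float:
    """Classic speedup ratio: how many times faster the parallel run is."""
    if parallel_time <= 0:
        raise ValueError("parallel_time must be positive")
    return single_device_time / parallel_time

def format_speedup(speedup: float) -> str:
    """Render a speedup value the way a dashboard indicator would display it."""
    return f"{speedup:.1f}x FASTER" if speedup > 1 else f"{1 / speedup:.1f}x SLOWER"

# Example with hypothetical timings: the same workload took 60 s on one NPU
# and 18.75 s spread across four NPUs.
speedup = compute_speedup(60.0, 18.75)
print(format_speedup(speedup))  # 3.2x FASTER
```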
### Phase 2: Device Management (Weeks 3-4)

**Core Components:**
- `DeviceManager` with enhanced NPU control
- `DeviceManagementPanel` for visual allocation
- Device health monitoring and profiling
- Load balancing optimization algorithms

**Deliverables:**
- Visual device status dashboard
- Drag-and-drop device assignment interface
- Device performance profiling and history
- Automatic load balancing recommendations
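One simple starting point for the load-balancing recommendations is greedy assignment: give each stage to the currently least-loaded device. A sketch under assumed per-stage costs (stage names, costs, and device labels are all hypothetical):

```python
def allocate_stages(stage_costs: dict[str, float], devices: list[str]) -> dict[str, str]:
    """Greedily assign each stage to the device with the least accumulated cost."""
    load = {d: 0.0 for d in devices}
    allocation = {}
    # Assign expensive stages first so they spread across devices.
    for stage, cost in sorted(stage_costs.items(), key=lambda kv: -kv[1]):
        device = min(load, key=load.get)
        allocation[stage] = device
        load[device] += cost
    return allocation

stages = {"detect": 12.0, "classify": 8.0, "ocr": 7.0, "track": 3.0}
print(allocate_stages(stages, ["KL720-0", "KL720-1"]))
```

A real `DeviceManager` would refine this with measured device health and live utilization, but greedy placement already balances total cost within one stage of optimal for cases like the above.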
### Phase 3: Advanced Features (Weeks 5-6)

**Core Components:**
- `OptimizationEngine` for automated suggestions
- Pipeline analysis and bottleneck detection
- Configuration templates and presets
- Performance prediction algorithms

**Deliverables:**
- Automated pipeline optimization suggestions
- Configuration templates for common use cases
- Performance prediction before execution
- Bottleneck identification and resolution
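Because stages in a pipelined design run concurrently, steady-state throughput is governed by the slowest stage. A hypothetical sketch of how an optimization engine might flag the bottleneck and predict pipeline FPS (latency figures are illustrative):

```python
def predict_throughput(stage_latencies_ms: dict[str, float]) -> tuple[str, float]:
    """Return the bottleneck stage and the pipeline FPS it limits us to.

    With every stage working in parallel, steady-state throughput is
    1 / (latency of the slowest stage).
    """
    bottleneck = max(stage_latencies_ms, key=stage_latencies_ms.get)
    fps = 1000.0 / stage_latencies_ms[bottleneck]
    return bottleneck, fps

stage, fps = predict_throughput({"preprocess": 4.0, "model": 25.0, "postprocess": 2.0})
print(f"bottleneck={stage}, max ~{fps:.0f} FPS")  # bottleneck=model, max ~40 FPS
```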
### Phase 4: Professional Polish (Weeks 7-8)

**Core Components:**
- Advanced visualization and reporting
- Export and documentation features
- Performance analytics and insights
- User experience refinements

**Deliverables:**
- Professional performance reports
- Advanced analytics and trending
- Export capabilities for results
- Comprehensive user documentation
## 🎨 Target User Experience

### Ideal Workflow
1. **Design** (< 5 minutes): Drag-and-drop pipeline creation
2. **Configure**: Automatic device detection and optimal allocation
3. **Benchmark**: One-click performance measurement
4. **Monitor**: Real-time speedup visualization during execution
5. **Optimize**: Automated suggestions for performance improvements

### Success Metrics
- **Speedup Visibility**: Clear before/after performance comparison
- **Ease of Use**: Intuitive interface requiring minimal training
- **Performance Gains**: Measurable improvements from optimization
- **Professional Quality**: Enterprise-ready monitoring and reporting
## 🛠 Technical Approach

### Extend Current Architecture
- Build on existing `InferencePipeline` and `Multidongle` classes
- Enhance UI with new performance panels and dashboards
- Integrate visualization libraries (matplotlib/pyqtgraph)
- Add benchmarking automation and result storage

### Key Technical Components
- **Performance Engine**: Automated benchmarking and comparison
- **Visualization Layer**: Real-time charts and progress indicators
- **Device Abstraction**: Enhanced NPU management and allocation
- **Optimization Logic**: Automated analysis and suggestions
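Live charts in the visualization layer need a bounded sample history so redraws stay cheap. A minimal ring-buffer sketch (class and method names are illustrative; a real implementation would feed this into a pyqtgraph or matplotlib widget):

```python
from collections import deque

class MetricsBuffer:
    """Fixed-size rolling window of samples for a live chart."""

    def __init__(self, maxlen: int = 300):  # e.g. ~5 min of 1 Hz samples
        self.samples = deque(maxlen=maxlen)

    def push(self, value: float) -> None:
        # deque with maxlen silently drops the oldest sample when full
        self.samples.append(value)

    def latest(self) -> float:
        return self.samples[-1]

    def average(self) -> float:
        return sum(self.samples) / len(self.samples)

buf = MetricsBuffer(maxlen=3)
for fps in (28.0, 30.0, 32.0, 34.0):  # the first sample falls out of the window
    buf.push(fps)
print(buf.latest(), round(buf.average(), 1))  # 34.0 32.0
```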
## 📈 Expected Impact

### For Users
- **Simplified Setup**: No coding required for parallel processing
- **Clear Benefits**: Visual proof of performance improvements
- **Optimal Performance**: Automated hardware utilization
- **Professional Tools**: Enterprise-grade monitoring and analytics

### For Platform
- **Competitive Advantage**: Unique visual approach to parallel AI inference
- **Market Expansion**: Lower barrier to entry for non-technical users
- **Performance Leadership**: Systematic optimization of NPU utilization
- **Enterprise Ready**: Foundation for advanced features and scaling

PROJECT_SUMMARY.md
@@ -1,217 +1,138 @@
# Cluster4NPU UI - Project Summary

## Vision
Create an intuitive visual tool that enables users to design parallel AI inference pipelines for Kneron NPU dongles without coding knowledge, with clear visualization of performance benefits and hardware utilization.

## Current System Status

### ✅ Current Capabilities

**Visual Pipeline Designer:**
- Drag-and-drop node-based interface using NodeGraphQt
- 5 node types: Input, Model, Preprocess, Postprocess, Output
- Real-time pipeline validation and stage counting
- Property configuration panels with type-aware widgets
- Pipeline persistence in .mflow JSON format

**Professional UI:**
- Three-panel layout (templates, editor, configuration)
- Global status bar with live statistics
- Real-time connection analysis and error detection
- Integrated project management and recent files

**Inference Engine:**
- Multi-stage pipeline orchestration with threading
- Kneron NPU dongle integration (KL520, KL720, KL1080)
- Hardware auto-detection and device management
- Real-time performance monitoring (FPS, latency)
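The FPS side of that monitoring can be derived from frame completion timestamps. A sketch of the kind of counter this implies (class and method names are illustrative, not the actual project API):

```python
import time

class FpsCounter:
    """Derive FPS from frame completion timestamps."""

    def __init__(self):
        self.timestamps = []

    def frame_done(self, now=None):
        # Allow injecting a timestamp so the counter is testable offline.
        self.timestamps.append(time.perf_counter() if now is None else now)

    def fps(self):
        if len(self.timestamps) < 2:
            return 0.0
        span = self.timestamps[-1] - self.timestamps[0]
        return (len(self.timestamps) - 1) / span

# Simulated timestamps: one frame every 25 ms -> 40 FPS.
counter = FpsCounter()
for i in range(5):
    counter.frame_done(now=i * 0.025)
print(round(counter.fps(), 1))  # 40.0
```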
### 🎯 Core Use Cases

**Pipeline Flow:**
```
Input → Preprocess → Model → Postprocess → Output
  ↓         ↓           ↓          ↓          ↓
Camera    Resize   NPU Inference  Format   Display
```

**Supported Sources:**
- USB cameras with configurable resolution/FPS
- Video files (MP4, AVI, MOV) with frame processing
- Image files (JPG, PNG, BMP) for batch processing
- RTSP streams for live video (basic support)
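Since the .mflow persistence format mentioned above is JSON, a pipeline graph round-trips with the standard library alone. The node/edge schema below is illustrative, not the exact schema the designer writes:

```python
import json
from pathlib import Path

# Hypothetical minimal pipeline graph matching the flow shown above.
pipeline = {
    "nodes": [
        {"id": "in0", "type": "Input", "props": {"source": "camera", "fps": 30}},
        {"id": "pre0", "type": "Preprocess", "props": {"resize": [640, 640]}},
        {"id": "mdl0", "type": "Model", "props": {"device": "KL720"}},
        {"id": "post0", "type": "Postprocess", "props": {"threshold": 0.5}},
        {"id": "out0", "type": "Output", "props": {"sink": "display"}},
    ],
    "edges": [["in0", "pre0"], ["pre0", "mdl0"], ["mdl0", "post0"], ["post0", "out0"]],
}

path = Path("demo.mflow")
path.write_text(json.dumps(pipeline, indent=2))
loaded = json.loads(path.read_text())
assert loaded == pipeline  # lossless round-trip
```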
## 🚀 Development Priorities

### Immediate Goals
1. **Performance Visualization**: Show clear speedup benefits of parallel processing
2. **Device Management**: Enhanced control over NPU dongle allocation
3. **Benchmarking System**: Automated performance testing and comparison
4. **Real-time Dashboard**: Live monitoring of pipeline execution

## 🚨 Key Missing Features

### Performance Visualization
- Parallel vs sequential execution comparison
- Visual device allocation and load balancing
- Speedup calculation and metrics display
- Performance improvement charts
### Advanced Monitoring
- Live performance graphs (FPS, latency, throughput)
- Resource utilization visualization
- Bottleneck identification and alerts
- Historical performance tracking

### Device Management
- Visual device status dashboard
- Manual device assignment interface
- Device health monitoring and profiling
- Optimal allocation recommendations
### Pipeline Optimization
- Automated benchmark execution
- Performance prediction before deployment
- Configuration templates for common use cases
- Optimization suggestions based on analysis

## 🛠 Technical Architecture

### Current Foundation
- **Core Processing**: `InferencePipeline` with multi-stage orchestration
- **Hardware Integration**: `Multidongle` with NPU auto-detection
- **UI Framework**: PyQt5 with NodeGraphQt visual editor
- **Pipeline Analysis**: Real-time validation and stage detection
### Key Components Needed
1. **PerformanceBenchmarker**: Automated speedup measurement
2. **DeviceManager**: Advanced NPU allocation and monitoring
3. **VisualizationDashboard**: Live performance charts and metrics
4. **OptimizationEngine**: Automated configuration suggestions

## 🎯 Implementation Roadmap

### Phase 1: Performance Visualization
- Implement parallel vs sequential benchmarking
- Add speedup calculation and display
- Create performance comparison charts
- Build real-time monitoring dashboard
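A parallel-vs-sequential benchmark needs warmup runs and robust statistics, since the first inferences typically include model-load overhead. A hypothetical harness sketch (the workload passed in would be the actual pipeline run):

```python
import statistics
import time

def benchmark(fn, runs: int = 50, warmup: int = 5) -> dict:
    """Time fn() repeatedly, discarding warmup runs, and summarize in ms."""
    for _ in range(warmup):
        fn()  # warm caches, load models, etc.
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        times.append((time.perf_counter() - start) * 1000.0)
    times.sort()
    return {
        "mean_ms": statistics.mean(times),
        "p50_ms": times[len(times) // 2],
        "p95_ms": times[max(0, int(len(times) * 0.95) - 1)],
    }

stats = benchmark(lambda: sum(range(10_000)), runs=20)
print(sorted(stats))  # ['mean_ms', 'p50_ms', 'p95_ms']
```

Running this harness once per configuration (single device, then multi-device) yields the timing pairs the speedup display needs.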
### Phase 2: Device Management
- Visual device allocation interface
- Device health monitoring and profiling
- Manual assignment capabilities
- Load balancing optimization

### Phase 3: Advanced Features
- Pipeline optimization suggestions
- Configuration templates
- Performance prediction
- Advanced analytics and reporting
## 🎨 User Experience Goals

### Target Workflow
1. **Design**: Drag-and-drop pipeline creation (< 5 minutes)
2. **Configure**: Automatic device detection and allocation
3. **Preview**: Performance prediction before execution
4. **Monitor**: Real-time speedup visualization
5. **Optimize**: Automated suggestions for improvements

### Success Metrics
- Clear visualization of parallel processing benefits
- Intuitive interface requiring minimal training
- Measurable performance improvements from optimization
- Professional-grade monitoring and analytics
## 🔧 Implementation Strategy ## 📈 Business Value
### Immediate Next Steps **For Users:**
1. **Create performance benchmarking spec** for automated testing - No-code parallel processing setup
2. **Design parallel visualization interface** for execution monitoring - Clear ROI demonstration through speedup metrics
3. **Implement device management dashboard** for hardware control - Optimal hardware utilization without expert knowledge
4. **Build speedup calculation engine** for performance comparison
### Technical Approach **For Platform:**
- **Extend existing InferencePipeline**: Add parallel execution coordination - Unique visual approach to AI inference optimization
- **Enhance UI with new panels**: Performance dashboard and device management - Lower barrier to entry for complex parallel processing
- **Integrate visualization libraries**: Chart.js or similar for real-time graphs - Scalable foundation for enterprise features
- **Add benchmarking automation**: Systematic performance testing
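The speedup calculation at the heart of step 4 reduces to a ratio of measured throughputs. A minimal sketch, assuming benchmarks report frames per second; `BenchmarkResult` and `compute_speedup` are illustrative names, not existing code:

```python
from dataclasses import dataclass

@dataclass
class BenchmarkResult:
    """Throughput measured for one pipeline configuration (frames/sec)."""
    label: str
    fps: float

def compute_speedup(baseline: BenchmarkResult, parallel: BenchmarkResult) -> float:
    """Return the speedup factor of the parallel run over the baseline."""
    if baseline.fps <= 0:
        raise ValueError("baseline FPS must be positive")
    return parallel.fps / baseline.fps

# Example: same model on 1 dongle vs. 4 dongles
single = BenchmarkResult("1 dongle", fps=12.5)
quad = BenchmarkResult("4 dongles", fps=45.0)
print(f"Speedup: {compute_speedup(single, quad):.1f}x")  # → Speedup: 3.6x
```

The resulting factor feeds the before/after comparison directly, so the UI can render "3.6x" without any model-specific logic.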
## 📈 Expected Outcomes
### For End Users
- **Simplified parallel processing**: No coding required for multi-device setup
- **Clear performance benefits**: Visual proof of speedup improvements
- **Optimized configurations**: Automatic suggestions for best performance
- **Professional monitoring**: Real-time insights into system performance
### For the Platform
- **Competitive advantage**: Unique visual approach to parallel AI inference
- **User adoption**: Lower barrier to entry for non-technical users
- **Performance optimization**: Systematic approach to hardware utilization
- **Scalability**: Foundation for advanced features and enterprise use
This consolidated summary focuses on your main goal of creating an intuitive GUI for parallel inference pipeline development with clear speedup visualization. The missing components are prioritized based on their impact on user experience and the core value proposition.


@ -1,15 +1,15 @@
# InferencePipeline # Cluster4NPU UI - Visual Pipeline Designer
A high-performance multi-stage inference pipeline system designed for Kneron NPU dongles, enabling flexible single-stage and cascaded multi-stage AI inference workflows. A visual pipeline designer for creating parallel AI inference workflows using Kneron NPU dongles. Build complex multi-stage inference pipelines through an intuitive drag-and-drop interface without coding knowledge.
<!-- ## Features ## Features
- **Single-stage inference**: Direct replacement for MultiDongle with enhanced features - **Visual Pipeline Design**: Drag-and-drop node-based interface using NodeGraphQt
- **Multi-stage cascaded pipelines**: Chain multiple AI models for complex workflows - **Multi-Stage Pipelines**: Chain multiple AI models for complex workflows
- **Flexible preprocessing/postprocessing**: Custom data transformation between stages - **Real-time Performance Monitoring**: Live FPS, latency, and throughput tracking
- **Thread-safe design**: Concurrent processing with automatic queue management - **Hardware Integration**: Automatic Kneron NPU dongle detection and management
- **Real-time performance**: Optimized for live video streams and high-throughput scenarios - **Professional UI**: Three-panel layout with integrated configuration and monitoring
- **Comprehensive statistics**: Built-in performance monitoring and metrics --> - **Pipeline Validation**: Real-time pipeline structure analysis and error detection
## Installation ## Installation
@ -29,460 +29,231 @@ uv pip install -r requirements.txt
### Requirements ### Requirements
```txt **Python Dependencies:**
"numpy>=2.2.6", - PyQt5 (GUI framework)
"opencv-python>=4.11.0.86", - NodeGraphQt (visual node editor)
``` - OpenCV (image processing)
- NumPy (array operations)
- Kneron KP SDK (NPU communication)
### Hardware Requirements **Hardware Requirements:**
- Kneron NPU dongles (KL520, KL720, KL1080)
- Kneron AI dongles (KL520, KL720, etc.) - USB 3.0 ports for device connections
- USB ports for device connections
- Compatible firmware files (`fw_scpu.bin`, `fw_ncpu.bin`) - Compatible firmware files (`fw_scpu.bin`, `fw_ncpu.bin`)
- Trained model files (`.nef` format) - Trained model files (`.nef` format)
## Quick Start ## Quick Start
### Single-Stage Pipeline ### Launching the Application
Replace your existing MultiDongle usage with InferencePipeline for enhanced features:
```python
from InferencePipeline import InferencePipeline, StageConfig
# Configure single stage
stage_config = StageConfig(
stage_id="fire_detection",
port_ids=[28, 32], # USB port IDs for your dongles
scpu_fw_path="fw_scpu.bin",
ncpu_fw_path="fw_ncpu.bin",
model_path="fire_detection_520.nef",
upload_fw=True
)
# Create and start pipeline
pipeline = InferencePipeline([stage_config], pipeline_name="FireDetection")
pipeline.initialize()
pipeline.start()
# Set up result callback
def handle_result(pipeline_data):
result = pipeline_data.stage_results.get("fire_detection", {})
print(f"🔥 Detection: {result.get('result', 'Unknown')} "
f"(Probability: {result.get('probability', 0.0):.3f})")
pipeline.set_result_callback(handle_result)
# Process frames
import cv2
cap = cv2.VideoCapture(0)
try:
while True:
ret, frame = cap.read()
if ret:
pipeline.put_data(frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
finally:
cap.release()
pipeline.stop()
```
### Multi-Stage Cascade Pipeline
Chain multiple models for complex workflows:
```python
from InferencePipeline import InferencePipeline, StageConfig
from Multidongle import PreProcessor, PostProcessor
# Custom preprocessing for second stage
def roi_extraction(frame, target_size):
"""Extract region of interest from detection results"""
# Extract center region as example
h, w = frame.shape[:2]
center_crop = frame[h//4:3*h//4, w//4:3*w//4]
return cv2.resize(center_crop, target_size)
# Custom result fusion
def combine_results(raw_output, **kwargs):
"""Combine detection + classification results"""
classification_prob = float(raw_output[0]) if raw_output.size > 0 else 0.0
detection_conf = kwargs.get('detection_conf', 0.5)
# Weighted combination
combined_score = (classification_prob * 0.7) + (detection_conf * 0.3)
return {
'combined_probability': combined_score,
'classification_prob': classification_prob,
'detection_conf': detection_conf,
'result': 'Fire Detected' if combined_score > 0.6 else 'No Fire',
'confidence': 'High' if combined_score > 0.8 else 'Low'
}
# Stage 1: Object Detection
detection_stage = StageConfig(
stage_id="object_detection",
port_ids=[28, 30],
scpu_fw_path="fw_scpu.bin",
ncpu_fw_path="fw_ncpu.bin",
model_path="object_detection_520.nef",
upload_fw=True
)
# Stage 2: Fire Classification with preprocessing
classification_stage = StageConfig(
stage_id="fire_classification",
port_ids=[32, 34],
scpu_fw_path="fw_scpu.bin",
ncpu_fw_path="fw_ncpu.bin",
model_path="fire_classification_520.nef",
upload_fw=True,
input_preprocessor=PreProcessor(resize_fn=roi_extraction),
output_postprocessor=PostProcessor(process_fn=combine_results)
)
# Create two-stage pipeline
pipeline = InferencePipeline(
[detection_stage, classification_stage],
pipeline_name="DetectionClassificationCascade"
)
# Enhanced result handler
def handle_cascade_result(pipeline_data):
detection = pipeline_data.stage_results.get("object_detection", {})
classification = pipeline_data.stage_results.get("fire_classification", {})
print(f"🎯 Detection: {detection.get('result', 'Unknown')} "
f"(Conf: {detection.get('probability', 0.0):.3f})")
print(f"🔥 Classification: {classification.get('result', 'Unknown')} "
f"(Combined: {classification.get('combined_probability', 0.0):.3f})")
print(f"⏱️ Processing Time: {pipeline_data.metadata.get('total_processing_time', 0.0):.3f}s")
print("-" * 50)
pipeline.set_result_callback(handle_cascade_result)
pipeline.initialize()
pipeline.start()
# Your processing loop here...
```
## Usage Examples
### Example 1: Real-time Webcam Processing
```python
from InferencePipeline import InferencePipeline, StageConfig
from Multidongle import WebcamSource
def run_realtime_detection():
# Configure pipeline
config = StageConfig(
stage_id="realtime_detection",
port_ids=[28, 32],
scpu_fw_path="fw_scpu.bin",
ncpu_fw_path="fw_ncpu.bin",
model_path="your_model.nef",
upload_fw=True,
max_queue_size=30 # Prevent memory buildup
)
pipeline = InferencePipeline([config])
pipeline.initialize()
pipeline.start()
# Use webcam source
source = WebcamSource(camera_id=0)
source.start()
def display_results(pipeline_data):
result = pipeline_data.stage_results["realtime_detection"]
probability = result.get('probability', 0.0)
detection = result.get('result', 'Unknown')
# Your visualization logic here
print(f"Detection: {detection} ({probability:.3f})")
pipeline.set_result_callback(display_results)
try:
while True:
frame = source.get_frame()
if frame is not None:
pipeline.put_data(frame)
time.sleep(0.033) # ~30 FPS
except KeyboardInterrupt:
print("Stopping...")
finally:
source.stop()
pipeline.stop()
if __name__ == "__main__":
run_realtime_detection()
```
### Example 2: Complex Multi-Modal Pipeline
```python
def run_multimodal_pipeline():
"""Multi-modal fire detection with RGB, edge, and thermal-like analysis"""
def edge_preprocessing(frame, target_size):
"""Extract edge features"""
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)
edges_3ch = cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)
return cv2.resize(edges_3ch, target_size)
def thermal_preprocessing(frame, target_size):
"""Simulate thermal processing"""
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
thermal_like = hsv[:, :, 2] # Value channel
thermal_3ch = cv2.cvtColor(thermal_like, cv2.COLOR_GRAY2BGR)
return cv2.resize(thermal_3ch, target_size)
def fusion_postprocessing(raw_output, **kwargs):
"""Fuse results from multiple modalities"""
if raw_output.size > 0:
current_prob = float(raw_output[0])
rgb_conf = kwargs.get('rgb_conf', 0.5)
edge_conf = kwargs.get('edge_conf', 0.5)
# Weighted fusion
fused_prob = (current_prob * 0.5) + (rgb_conf * 0.3) + (edge_conf * 0.2)
return {
'fused_probability': fused_prob,
'modality_scores': {
'thermal': current_prob,
'rgb': rgb_conf,
'edge': edge_conf
},
'result': 'Fire Detected' if fused_prob > 0.6 else 'No Fire',
'confidence': 'Very High' if fused_prob > 0.9 else 'High' if fused_prob > 0.7 else 'Medium'
}
return {'fused_probability': 0.0, 'result': 'No Fire'}
# Define stages
stages = [
StageConfig("rgb_analysis", [28, 30], "fw_scpu.bin", "fw_ncpu.bin", "rgb_model.nef", True),
StageConfig("edge_analysis", [32, 34], "fw_scpu.bin", "fw_ncpu.bin", "edge_model.nef", True,
input_preprocessor=PreProcessor(resize_fn=edge_preprocessing)),
StageConfig("thermal_analysis", [36, 38], "fw_scpu.bin", "fw_ncpu.bin", "thermal_model.nef", True,
input_preprocessor=PreProcessor(resize_fn=thermal_preprocessing)),
StageConfig("fusion", [40, 42], "fw_scpu.bin", "fw_ncpu.bin", "fusion_model.nef", True,
output_postprocessor=PostProcessor(process_fn=fusion_postprocessing))
]
pipeline = InferencePipeline(stages, pipeline_name="MultiModalFireDetection")
def handle_multimodal_result(pipeline_data):
print(f"\n🔥 Multi-Modal Fire Detection Results:")
for stage_id, result in pipeline_data.stage_results.items():
if 'probability' in result:
print(f" {stage_id}: {result['result']} ({result['probability']:.3f})")
if 'fusion' in pipeline_data.stage_results:
fusion = pipeline_data.stage_results['fusion']
print(f" 🎯 FINAL: {fusion['result']} (Fused: {fusion['fused_probability']:.3f})")
print(f" Confidence: {fusion.get('confidence', 'Unknown')}")
pipeline.set_result_callback(handle_multimodal_result)
# Start pipeline
pipeline.initialize()
pipeline.start()
# Your processing logic here...
```
### Example 3: Batch Processing
```python
def process_image_batch(image_paths):
"""Process a batch of images through pipeline"""
config = StageConfig(
stage_id="batch_processing",
port_ids=[28, 32],
scpu_fw_path="fw_scpu.bin",
ncpu_fw_path="fw_ncpu.bin",
model_path="batch_model.nef",
upload_fw=True
)
pipeline = InferencePipeline([config])
pipeline.initialize()
pipeline.start()
results = []
def collect_result(pipeline_data):
result = pipeline_data.stage_results["batch_processing"]
results.append({
'pipeline_id': pipeline_data.pipeline_id,
'result': result,
'processing_time': pipeline_data.metadata.get('total_processing_time', 0.0)
})
pipeline.set_result_callback(collect_result)
# Submit all images
for img_path in image_paths:
image = cv2.imread(img_path)
if image is not None:
pipeline.put_data(image)
# Wait for all results
import time
while len(results) < len(image_paths):
time.sleep(0.1)
pipeline.stop()
return results
```
## Configuration
### StageConfig Parameters
```python
StageConfig(
stage_id="unique_stage_name", # Required: Unique identifier
port_ids=[28, 32], # Required: USB port IDs for dongles
scpu_fw_path="fw_scpu.bin", # Required: SCPU firmware path
ncpu_fw_path="fw_ncpu.bin", # Required: NCPU firmware path
model_path="model.nef", # Required: Model file path
upload_fw=True, # Upload firmware on init
max_queue_size=50, # Queue size limit
input_preprocessor=None, # Optional: Inter-stage preprocessing
output_postprocessor=None, # Optional: Inter-stage postprocessing
stage_preprocessor=None, # Optional: MultiDongle preprocessing
stage_postprocessor=None # Optional: MultiDongle postprocessing
)
```
### Performance Tuning
```python
# For high-throughput scenarios
config = StageConfig(
stage_id="high_performance",
port_ids=[28, 30, 32, 34], # Use more dongles
max_queue_size=100, # Larger queues
# ... other params
)
# For low-latency scenarios
config = StageConfig(
stage_id="low_latency",
port_ids=[28, 32],
max_queue_size=10, # Smaller queues
# ... other params
)
```
## Statistics and Monitoring
```python
# Enable statistics reporting
def print_stats(stats):
print(f"\n📊 Pipeline Statistics:")
print(f" Input: {stats['pipeline_input_submitted']}")
print(f" Completed: {stats['pipeline_completed']}")
print(f" Success Rate: {stats['pipeline_completed']/max(stats['pipeline_input_submitted'], 1)*100:.1f}%")
for stage_stat in stats['stage_statistics']:
print(f" Stage {stage_stat['stage_id']}: "
f"Processed={stage_stat['processed_count']}, "
f"AvgTime={stage_stat['avg_processing_time']:.3f}s")
pipeline.set_stats_callback(print_stats)
pipeline.start_stats_reporting(interval=5.0) # Report every 5 seconds
```
## Running Examples
The project includes comprehensive examples in `test.py`:
```bash ```bash
# Single-stage pipeline # Activate virtual environment
uv run python test.py --example single source .venv/bin/activate # On Windows: .venv\Scripts\activate
# Two-stage cascade pipeline # Launch the visual pipeline designer
uv run python test.py --example cascade python main.py
# Complex multi-stage pipeline
uv run python test.py --example complex
``` ```
## API Reference ### Creating Your First Pipeline
### InferencePipeline 1. **Start the Application**: Launch `main.py` to open the login/project manager
2. **Create New Project**: Click "Create New Pipeline" or load an existing `.mflow` file
3. **Design Pipeline**: Use the 3-panel interface:
- **Left Panel**: Drag nodes from the template palette
- **Middle Panel**: Connect nodes to build your pipeline flow
- **Right Panel**: Configure node properties and monitor performance
Main pipeline orchestrator class. ### Basic Pipeline Structure
**Methods:** ```
- `initialize()`: Initialize all pipeline stages Input Node → Preprocess Node → Model Node → Postprocess Node → Output Node
- `start()`: Start pipeline processing threads ```
- `stop()`: Gracefully stop pipeline
- `put_data(data, timeout=1.0)`: Submit data for processing
- `get_result(timeout=0.1)`: Get processed results
- `set_result_callback(callback)`: Set success callback
- `set_error_callback(callback)`: Set error callback
- `get_pipeline_statistics()`: Get performance metrics
### StageConfig **Node Types:**
- **Input Node**: Camera, video file, or image source
- **Preprocess Node**: Data transformation (resize, normalize, format conversion)
- **Model Node**: AI inference on Kneron NPU dongles
- **Postprocess Node**: Result processing (classification, detection formatting)
- **Output Node**: Display, file output, or network streaming
Configuration for individual pipeline stages. ### Visual Pipeline Design Workflow
### PipelineData 1. **Node Placement**: Drag nodes from the left template palette
2. **Connection**: Connect nodes by dragging from output to input ports
3. **Configuration**: Select nodes and configure properties in the right panel
4. **Validation**: Real-time pipeline validation with stage counting
5. **Deployment**: Export configured pipeline for execution
Data structure flowing through pipeline stages. ## User Interface
**Attributes:** ### Three-Panel Layout
- `data`: Main data payload
- `metadata`: Processing metadata
- `stage_results`: Results from each stage
- `pipeline_id`: Unique identifier
- `timestamp`: Creation timestamp
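The attribute list above maps naturally onto a dataclass. A hedged sketch of the documented shape only; the default factories (`uuid`, `time.time`) are assumptions, not the actual implementation:

```python
import time
import uuid
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class PipelineData:
    """Container flowing through pipeline stages (shape per the API reference)."""
    data: Any                                  # main data payload (e.g. a frame)
    metadata: Dict[str, Any] = field(default_factory=dict)
    stage_results: Dict[str, dict] = field(default_factory=dict)
    pipeline_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    timestamp: float = field(default_factory=time.time)

# Each stage appends its result under its stage_id
item = PipelineData(data=b"frame-bytes")
item.stage_results["fire_detection"] = {"result": "No Fire", "probability": 0.12}
```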
## Performance Considerations The main dashboard provides an integrated development environment with three main panels:
1. **Queue Sizing**: Balance memory usage vs. throughput with `max_queue_size` **Left Panel (25% width):**
2. **Dongle Distribution**: Distribute dongles across stages for optimal performance - **Node Templates**: Drag-and-drop node palette
3. **Preprocessing**: Minimize expensive operations in preprocessors - Input Node (camera, video, image sources)
4. **Memory Management**: Monitor queue sizes and processing times - Model Node (AI inference on NPU dongles)
5. **Threading**: Pipeline uses multiple threads - ensure thread-safe operations - Preprocess Node (data transformation)
- Postprocess Node (result processing)
- Output Node (display, file, stream output)
- **Pipeline Operations**: Validation and management tools
- **Instructions**: Context-sensitive help
**Middle Panel (50% width):**
- **Visual Pipeline Editor**: NodeGraphQt-based visual editor
- **Real-time Validation**: Instant pipeline structure analysis
- **Node Connection**: Drag from output to input ports to connect nodes
- **Global Status Bar**: Shows stage count and pipeline statistics
**Right Panel (25% width):**
- **Properties Tab**: Node-specific configuration panels
- **Performance Tab**: Real-time performance monitoring and estimation
- **Dongles Tab**: Hardware device management and allocation
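The queue-sizing guidance under Performance Considerations (balancing memory against throughput with `max_queue_size`) amounts to bounded stage queues with a drop policy. A minimal sketch assuming a drop-oldest backpressure strategy; `put_with_backpressure` is illustrative, not part of the InferencePipeline API:

```python
import queue

def put_with_backpressure(q: queue.Queue, frame, timeout: float = 1.0) -> bool:
    """Drop the oldest frame instead of blocking forever when a stage queue is full."""
    try:
        q.put(frame, timeout=timeout)
        return True
    except queue.Full:
        try:
            q.get_nowait()   # discard the oldest frame to bound memory
        except queue.Empty:
            pass
        q.put_nowait(frame)
        return False

q = queue.Queue(maxsize=2)
print(put_with_backpressure(q, "f1", timeout=0.01))  # → True
print(put_with_backpressure(q, "f2", timeout=0.01))  # → True
print(put_with_backpressure(q, "f3", timeout=0.01))  # → False (oldest dropped)
```

For live video this favors latency (the newest frames survive); for batch jobs a blocking put that never drops is usually the better trade.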
### Project Management
**Login/Startup Window:**
- Recent projects list with quick access
- Create new pipeline projects
- Load existing `.mflow` pipeline files
- Project location management
### Real-time Feedback
- **Stage Counting**: Automatic detection of pipeline stages
- **Connection Analysis**: Real-time validation of node connections
- **Error Highlighting**: Visual indicators for configuration issues
- **Performance Metrics**: Live FPS, latency, and throughput display
## Architecture
### Core Components
**Pipeline Analysis Engine (`core/pipeline.py`):**
- Automatic stage detection and validation
- Connection path analysis between nodes
- Real-time pipeline structure summarization
- Configuration export for deployment
**Node System (`core/nodes/`):**
- Extensible node architecture with type-specific properties
- Business logic separation from UI presentation
- Dynamic property validation and configuration panels
**Inference Engine (`core/functions/InferencePipeline.py`):**
- Multi-stage pipeline orchestration with thread-based processing
- Real-time performance monitoring and FPS calculation
- Inter-stage data flow and result aggregation
**Hardware Abstraction (`core/functions/Multidongle.py`):**
- Kneron NPU dongle management and auto-detection
- Multi-device support with load balancing
- Async inference processing with result queuing
### Data Flow
1. **Design Phase**: Visual pipeline creation using drag-and-drop interface
2. **Validation Phase**: Real-time analysis of pipeline structure and configuration
3. **Export Phase**: Generate executable configuration from visual design
4. **Execution Phase**: Deploy pipeline to hardware with performance monitoring
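The validation phase's stage detection in `core/pipeline.py` (a model node counts as a stage only when it lies on a path from an input node to an output node) reduces to graph reachability. The adjacency-map representation below is a simplification for illustration, not the actual NodeGraphQt API:

```python
from typing import Dict, List

def has_path(graph: Dict[str, List[str]], start: str, end: str) -> bool:
    """Iterative depth-first search over a node -> downstream-nodes map."""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        if node == end:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph.get(node, []))
    return False

def count_stages(graph, inputs, outputs, model_nodes) -> int:
    """A model node is a stage only if it sits on an input -> output path."""
    return sum(
        1 for m in model_nodes
        if any(has_path(graph, i, m) for i in inputs)
        and any(has_path(graph, m, o) for o in outputs)
    )

g = {"input": ["pre"], "pre": ["model"], "model": ["post"], "post": ["output"]}
print(count_stages(g, ["input"], ["output"], ["model", "orphan_model"]))  # → 1
```

Note the disconnected `orphan_model` is not counted, matching the validation behavior described above.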
## File Formats
### Pipeline Files (`.mflow`)
JSON-based format storing:
- Node definitions and properties
- Connection relationships
- Stage configurations
- Export settings
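Since `.mflow` is JSON-based, a pipeline file can be produced with the standard `json` module. The field names below are illustrative assumptions about the schema, not the format's actual specification:

```python
import json

# Hypothetical minimal .mflow document (field names are illustrative)
pipeline = {
    "nodes": [
        {"id": "n1", "type": "InputNode", "properties": {"source": "camera:0"}},
        {"id": "n2", "type": "ModelNode", "properties": {"model_path": "fire_detection_520.nef"}},
        {"id": "n3", "type": "OutputNode", "properties": {"sink": "display"}},
    ],
    "connections": [["n1", "n2"], ["n2", "n3"]],
    "stages": [{"stage_id": "fire_detection", "port_ids": [28, 32]}],
    "export": {"version": 1},
}

with open("example.mflow", "w") as f:
    json.dump(pipeline, f, indent=2)
```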
### Hardware Configuration
- Firmware files: `fw_scpu.bin`, `fw_ncpu.bin`
- Model files: `.nef` format for Kneron NPUs
- Device mapping: USB port assignment to pipeline stages
## Performance Monitoring
### Real-time Metrics
- **FPS (Frames Per Second)**: Processing throughput
- **Latency**: End-to-end processing time
- **Stage Performance**: Per-stage processing statistics
- **Device Utilization**: NPU dongle usage monitoring
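The FPS and latency metrics above can be estimated from per-frame timestamps over a sliding window. A sketch under assumed semantics; `FpsMeter` is a hypothetical helper, not the Performance tab's actual implementation:

```python
from collections import deque

class FpsMeter:
    """Sliding-window FPS and average-latency estimator."""
    def __init__(self, window: int = 30):
        self.samples = deque(maxlen=window)  # (finish_time, latency) pairs

    def record(self, start: float, finish: float) -> None:
        self.samples.append((finish, finish - start))

    def fps(self) -> float:
        if len(self.samples) < 2:
            return 0.0
        span = self.samples[-1][0] - self.samples[0][0]
        return (len(self.samples) - 1) / span if span > 0 else 0.0

    def avg_latency(self) -> float:
        if not self.samples:
            return 0.0
        return sum(lat for _, lat in self.samples) / len(self.samples)

meter = FpsMeter()
for i in range(5):
    start = i * 0.1                     # frames arriving every 100 ms
    meter.record(start, start + 0.02)   # 20 ms processing latency each
print(f"{meter.fps():.1f} FPS, {meter.avg_latency()*1000:.0f} ms")  # → 10.0 FPS, 20 ms
```

Windowed measurement keeps the displayed numbers responsive to load changes instead of averaging over the whole run.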
### Statistics Collection
- Pipeline input/output counts
- Processing time distributions
- Error rates and failure analysis
- Resource utilization tracking
## Testing and Validation
Run the test suite to verify functionality:
```bash
# Test core pipeline functionality
python tests/test_pipeline_editor.py
# Test UI components
python tests/test_ui_fixes.py
# Test integration
python tests/test_integration.py
```
## Troubleshooting ## Troubleshooting
### Common Issues ### Common Issues
**Pipeline hangs or stops processing:** **Node creation fails:**
- Check dongle connections and firmware compatibility - Verify NodeGraphQt installation and compatibility
- Monitor queue sizes for bottlenecks - Check node template definitions in `core/nodes/`
- Verify model file paths and formats
**High memory usage:** **Pipeline validation errors:**
- Reduce `max_queue_size` parameters - Ensure all model nodes are connected between input and output
- Ensure proper cleanup in custom processors - Verify node property configurations are complete
- Monitor statistics for processing times
**Poor performance:** **Hardware detection issues:**
- Distribute dongles optimally across stages - Check USB connections and dongle power
- Profile preprocessing/postprocessing functions - Verify firmware files are accessible
- Consider batch processing for high throughput - Ensure proper Kneron SDK installation
### Debug Mode **Performance issues:**
- Monitor device utilization in Dongles tab
- Adjust queue sizes for throughput vs. latency tradeoffs
- Check for processing bottlenecks in stage statistics
Enable detailed logging for troubleshooting: ## Development
```python ### Project Structure
import logging
logging.basicConfig(level=logging.DEBUG)
# Pipeline will output detailed processing information ```
``` cluster4npu_ui/
├── main.py # Application entry point
├── config/ # Configuration and theming
├── core/ # Core processing engine
│ ├── functions/ # Inference and hardware abstraction
│ ├── nodes/ # Node type definitions
│ └── pipeline.py # Pipeline analysis and validation
├── ui/ # User interface components
│ ├── windows/ # Main windows (login, dashboard)
│ ├── components/ # Reusable UI widgets
│ └── dialogs/ # Modal dialogs
├── tests/ # Test suite
└── resources/ # Assets and styling
```
### Contributing
1. Follow the TDD workflow defined in `CLAUDE.md`
2. Run tests before committing changes
3. Maintain the three-panel UI architecture
4. Document new node types and their properties
## License
This project is part of the Cluster4NPU ecosystem for parallel AI inference on Kneron NPU hardware.


@ -1,206 +0,0 @@
# Stage Calculation and Interface Improvement Summary
## Overview
Per user requirements, three major improvements were made to the stage calculation logic and the user interface:
1. **Fixed the stage calculation logic**: a model node must be connected between an input and an output to count as a stage
2. **Reorganized the toolbar**: removed the top toolbar and consolidated its buttons into the left panel
3. **Simplified the status display**: removed duplicate stage information and moved status to the bottom status bar
## 1. Stage Calculation Logic Fix
### Problem
- Previously, any model node was counted as a stage
- There was no check that the model node was actually connected into the pipeline flow
### Solution
Modified the `analyze_pipeline_stages()` function in `core/pipeline.py`:
```python
# New connection-check logic
connected_model_nodes = []
for model_node in model_nodes:
    if is_node_connected_to_pipeline(model_node, input_nodes, output_nodes):
        connected_model_nodes.append(model_node)
```
### Key Improvements
- **Connection validation**: uses `is_node_connected_to_pipeline()` to verify a model node is connected to both an input and an output
- **Path checking**: enhanced the `has_path_between_nodes()` function to support multiple connection styles
- **Error handling**: improved exception handling during connection checks
### Impact
- ✅ Only model nodes that actually participate in the pipeline flow are counted as stages
- ✅ Isolated, unconnected model nodes no longer affect the stage count
- ✅ The count more accurately reflects the real pipeline structure
## 2. Toolbar Reorganization
### Problem
- The top toolbar duplicated buttons (Add Node, Validate, etc.)
- Interface elements were scattered instead of consolidated
### Solution
- **Removed the top toolbar**: dropped from `create_pipeline_editor_panel()`
- **Consolidated on the left**: added operation buttons in `create_node_template_panel()`
### New Left Panel Structure
```
Left panel:
├── Node Templates
│   ├── Input Node
│   ├── Model Node
│   ├── Preprocess Node
│   ├── Postprocess Node
│   └── Output Node
├── Pipeline Operations
│   ├── 🔍 Validate Pipeline
│   └── 🗑️ Clear Pipeline
└── Instructions
```
### Visual Improvements
- **Consistent design**: operation buttons use the same styling as the node templates
- **Emoji icons**: improve visual recognition
- **Hover effects**: better interaction feedback
## 3. Status Display Simplification
### Problem
- The right panel had a separate "Stages" tab that duplicated the main view
- Stage information was scattered across multiple locations
### Solution
#### Removed the duplicate tab
- Removed the "Stages" tab from the right configuration panel
- Removed the `create_stage_config_panel()` method
- Kept the Properties, Performance, and Dongles tabs
#### Added a bottom status bar
Created the `create_status_bar_widget()` method, which provides:
```
Status bar:
├── Stage Count Widget
│   ├── stage count display
│   ├── status icons (✅/⚠️/❌)
│   └── color-coded status
├── Spacer
└── Statistics Label
    ├── total node count
    └── connection count
```
#### StageCountWidget Improvements
- **Size optimization**: reduced from 200x80 to 120x25
- **Layout change**: switched from a vertical to a horizontal layout
- **Simplified styling**: transparent background suited to a status bar
- **Status icons**:
  - ✅ valid pipeline (green)
  - ⚠️ no stages (yellow)
  - ❌ error state (red)
## Implementation Details
### File Changes
#### `core/pipeline.py`
```python
# Main changes
def analyze_pipeline_stages(node_graph):
    # Added connection check
    connected_model_nodes = []
    for model_node in model_nodes:
        if is_node_connected_to_pipeline(model_node, input_nodes, output_nodes):
            connected_model_nodes.append(model_node)

def has_path_between_nodes(start_node, end_node, visited=None):
    # Enhanced error handling
    try:
        ...  # connection-check logic
    except Exception:
        pass  # safely handle connection errors
```
#### `ui/windows/dashboard.py`
```python
# Main changes
class StageCountWidget(QWidget):
    def setup_ui(self):
        layout = QHBoxLayout()      # switched to a horizontal layout
        self.setFixedSize(120, 25)  # reduced size

    def update_stage_count(self, count, valid, error):
        # Add status icons
        if not valid:
            self.stage_label.setText(f"Stages: {count} ❌")
        elif count == 0:
            self.stage_label.setText("Stages: 0 ⚠️")
        else:
            self.stage_label.setText(f"Stages: {count} ✅")

class IntegratedPipelineDashboard(QMainWindow):
    def create_status_bar_widget(self):
        # New method: create the bottom status bar,
        # containing the stage count and statistics
        ...

    def analyze_pipeline(self):
        # Update the statistics label
        self.stats_label.setText(f"Nodes: {total_nodes} | Connections: {connection_count}")
```
### Configuration Changes
- **Right panel**: removed the Stages tab, keeping 3 tabs
- **Left panel**: added a Pipeline Operations section
- **Middle panel**: added a status bar at the bottom
## Testing and Verification
### Automated Tests
Created `test_stage_improvements.py` to verify:
- ✅ Stage calculation functions exist and work correctly
- ✅ UI methods are implemented correctly
- ✅ Old functionality was removed correctly
- ✅ The new status bar works as expected
### Functional Tests
- ✅ Stage calculation counts only connected model nodes
- ✅ Toolbar buttons work correctly in the left panel
- ✅ The status bar correctly shows stage information and statistics
- ✅ The layout is clear, with no duplicated information
## User Experience Improvements
### Visual
1. **Clearer layout**: tools consolidated on the left, status information at the bottom
2. **Less visual clutter**: duplicate stage information removed
3. **Instant feedback**: the status bar provides real-time pipeline status
### Functional
1. **Accurate calculation**: the stage count truly reflects the pipeline structure
2. **Centralized controls**: all operations live in the left panel
3. **Richer information**: the status bar shows stage, node, and connection counts
## Backward Compatibility
### Preserved
- All original core functionality is retained
- The API remains unchanged
- The file format is unchanged
### Incremental Improvements
- New features are additive and do not break existing workflows
- Error handling mechanisms ensure stability
- Fallbacks handle missing dependencies
## Summary
These improvements resolved the three main issues raised by the user:
1. **🎯 Accurate stage calculation**: only model nodes actually connected in the pipeline are counted
2. **🎨 Improved layout**: tools are consolidated on the left, reducing clutter
3. **📊 Concise status display**: the bottom status bar provides all necessary information without duplication
The improved interface is more intuitive and efficient while preserving all original functionality.


@ -1,265 +0,0 @@
# Status Bar Fixes Summary
## Overview
Based on the user's screenshot feedback, two important fixes address status bar display problems:
1. **Fixed the missing stage count**: the status bar did not show the number of stages
2. **Removed the bottom-left bar icon**: cleaned up an unnecessary NodeGraphQt UI element in the bottom-left corner of the canvas
## Problem Analysis
### Issues Shown in the Screenshot
The user-provided screenshot `Screenshot 2025-07-10 at 2.13.14 AM.png` shows:
1. **Incomplete status bar display**
   - The bottom right shows "Nodes: 5 | Connections: 4"
   - But no "Stages: X" information is displayed
2. **A bar icon in the bottom-left corner**
   - NodeGraphQt displayed an unnecessary UI element in the bottom-left of the canvas
   - This hurt the interface's cleanliness
3. **Pipeline structure**
   - The screenshot shows a complete pipeline: Input → Preprocess → Model → Postprocess → Output
   - This should count as 1 stage, since there is only 1 model node
## 1. Stage Count Display Fix
### Diagnosis
The stage count widget was created but may not have been visible; we need to ensure that:
- the widget displays correctly
- the font size is appropriate
- debug information is printed
### Solution
#### 1.1 Improve StageCountWidget visibility
```python
def setup_ui(self):
    """Setup the stage count widget UI."""
    layout = QHBoxLayout()
    layout.setContentsMargins(5, 2, 5, 2)
    # Stage count label - increase font size
    self.stage_label = QLabel("Stages: 0")
    self.stage_label.setFont(QFont("Arial", 10, QFont.Bold))  # changed from 9pt to 10pt
    self.stage_label.setStyleSheet("color: #cdd6f4; font-weight: bold;")
    layout.addWidget(self.stage_label)
    self.setLayout(layout)
    # Ensure the widget is visible
    self.setVisible(True)
    self.stage_label.setVisible(True)
```
#### 1.2 Add debug output
```python
def analyze_pipeline(self):
    # Add debug output
    if self.stage_count_widget:
        print(f"🔄 Updating stage count widget: {current_stage_count} stages")
        self.stage_count_widget.update_stage_count(
            current_stage_count,
            summary['valid'],
            summary.get('error', '')
        )
```
#### 1.3 Status icon display
```python
def update_stage_count(self, count: int, valid: bool = True, error: str = ""):
    """Update the stage count display."""
    if not valid:
        self.stage_label.setText(f"Stages: {count} ❌")
        self.stage_label.setStyleSheet("color: #f38ba8; font-weight: bold;")
    else:
        if count == 0:
            self.stage_label.setText("Stages: 0 ⚠️")
            self.stage_label.setStyleSheet("color: #f9e2af; font-weight: bold;")
        else:
            self.stage_label.setText(f"Stages: {count} ✅")
            self.stage_label.setStyleSheet("color: #a6e3a1; font-weight: bold;")
```
## 2. Bottom-Left Bar Icon Removal
### Diagnosis
After initialization, NodeGraphQt may create various UI elements, including:
- logo/branding icons
- navigation toolbars
- zoom controls
- a minimap
### Solution
#### 2.1 UI configuration at initialization
```python
def setup_node_graph(self):
    try:
        self.graph = NodeGraph()
        # Configure hiding of unneeded UI elements
        viewer = self.graph.viewer()
        if viewer:
            # Hide the logo/icon
            if hasattr(viewer, 'set_logo_visible'):
                viewer.set_logo_visible(False)
            elif hasattr(viewer, 'show_logo'):
                viewer.show_logo(False)
            # Hide the navigation toolbar
            if hasattr(viewer, 'set_nav_widget_visible'):
                viewer.set_nav_widget_visible(False)
            # Hide the minimap
            if hasattr(viewer, 'set_minimap_visible'):
                viewer.set_minimap_visible(False)
            # Hide toolbar elements
            widget = viewer.widget
            if widget:
                for child in widget.findChildren(QToolBar):
                    child.setVisible(False)
```
#### 2.2 Delayed cleanup mechanism
Because some UI elements may only be created after initialization, a delayed cleanup was added:
```python
def __init__(self):
    # ... other initialization code
    # Set up the delayed-cleanup timer
    self.ui_cleanup_timer = QTimer()
    self.ui_cleanup_timer.setSingleShot(True)
    self.ui_cleanup_timer.timeout.connect(self.cleanup_node_graph_ui)
    self.ui_cleanup_timer.start(1000)  # run cleanup after 1 second
```
#### 2.3 Smart cleanup method
```python
def cleanup_node_graph_ui(self):
    """Clean up NodeGraphQt UI elements after initialization."""
    if not self.graph:
        return
    try:
        viewer = self.graph.viewer()
        if viewer:
            widget = viewer.widget
            if widget:
                print("🧹 Cleaning up NodeGraphQt UI elements...")
                # Hide small widgets in the bottom-left corner
                for child in widget.findChildren(QWidget):
                    if hasattr(child, 'geometry'):
                        geom = child.geometry()
                        parent_geom = widget.geometry()
                        # Check whether this is a small bottom-left widget
                        if (geom.height() < 100 and
                                geom.width() < 200 and
                                geom.y() > parent_geom.height() - 100 and
                                geom.x() < 200):
                            print(f"🗑️ Hiding bottom-left widget: {child.__class__.__name__}")
                            child.setVisible(False)
                # Hide specific elements via CSS
                widget.setStyleSheet(widget.styleSheet() + """
                    QWidget[objectName*="nav"] { display: none; }
                    QWidget[objectName*="toolbar"] { display: none; }
                    QWidget[objectName*="control"] { display: none; }
                    QFrame[objectName*="zoom"] { display: none; }
                """)
    except Exception as e:
        print(f"⚠️ Error cleaning up NodeGraphQt UI: {e}")
```
## Testing and Verification
### Automated Test Results
```bash
🚀 Starting status bar fixes tests...
🔍 Testing stage count widget visibility...
✅ StageCountWidget created successfully
✅ Widget is visible
✅ Stage label is visible
✅ Correct size: 120x22
✅ Font size: 10pt
🔍 Testing stage count updates...
✅ Zero stages warning display
✅ Valid stages success display
✅ Error state display
🔍 Testing UI cleanup functionality...
✅ cleanup_node_graph_ui method exists
✅ UI cleanup timer setup found
✅ Cleanup method has bottom-left widget hiding logic
📊 Test Results: 5/5 tests passed
🎉 All status bar fixes tests passed!
```
### Functional Verification
1. **Stage count display**
   - ✅ The widget is created and displayed correctly
   - ✅ Status icons display correctly (✅/⚠️/❌)
   - ✅ The font size is appropriate (10pt)
   - ✅ Debug output prints correctly
2. **UI cleanup**
   - ✅ Multi-level UI element hiding strategy
   - ✅ Delayed cleanup mechanism
   - ✅ Smart geometry detection
   - ✅ CSS-based hiding
## Expected Results
### Status Bar Display
After the fix, the status bar should show:
```
Left: Stages: 1 ✅    Right: Nodes: 5 | Connections: 4
```
### Canvas Cleanup
- The bar icon no longer appears in the bottom-left corner
- The interface is cleaner
- No redundant navigation elements
## Technical Details
### Files Changed
- **`ui/windows/dashboard.py`**: the primary modified file
  - Improved the `StageCountWidget.setup_ui()` method
  - Added the `cleanup_node_graph_ui()` method
  - Updated the `setup_node_graph()` method
  - Added the delayed cleanup mechanism
### Compatibility Considerations
- **Multi-API support**: supports different versions of the NodeGraphQt API
- **Error handling**: safe exception capture
- **Progressive cleanup**: a multi-level UI element hiding strategy
### Debugging Support
- **Debug output**: added debug information for stage count updates
- **Cleanup logs**: print which UI elements were hidden
- **Error logs**: record exceptions during the cleanup process
## Summary
These fixes resolved the two specific problems the user reported:
1. **🔢 Stage count display**: the left side of the status bar now correctly shows the stage count and status
2. **🧹 UI cleanup**: removed the unnecessary NodeGraphQt UI element in the bottom-left corner
The fixed interface should provide:
- Clear status information
- A clean canvas
- A better user experience
All fixes were thoroughly tested to ensure they work correctly and do not affect other functionality.


@ -1,255 +0,0 @@
# UI Fixes Summary
## Overview
Based on user feedback, three important fixes were made to the user interface:
1. **Fixed the connection counting logic**: resolves the connection count showing 0
2. **Removed the bottom-left canvas icon**: cleans up NodeGraphQt interface elements
3. **Global status bar**: extends the status bar across the left and right panels
## 1. Connection Counting Fix
### Problem
- The user connected input → preprocess → model → postprocess → output nodes
- The status bar still showed "Connections: 0"
- The original counting logic failed to detect the connections
### Root Cause
The original code used inconsistent API calls:
```python
# Old version - may fail to detect connections
for output in node.outputs():
    if hasattr(output, 'connected_inputs'):
        connection_count += len(output.connected_inputs())
```
### Solution
The improved counting logic supports multiple NodeGraphQt API variants:
```python
# New version - supports multiple ways to detect connections
def analyze_pipeline(self):
    connection_count = 0
    if self.graph:
        for node in self.graph.all_nodes():
            try:
                if hasattr(node, 'output_ports'):
                    for output_port in node.output_ports():
                        if hasattr(output_port, 'connected_ports'):
                            connection_count += len(output_port.connected_ports())
                elif hasattr(node, 'outputs'):
                    for output in node.outputs():
                        if hasattr(output, 'connected_ports'):
                            connection_count += len(output.connected_ports())
                        elif hasattr(output, 'connected_inputs'):
                            connection_count += len(output.connected_inputs())
            except Exception:
                continue  # Handle API differences safely
```
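The duck-typed fallback can be exercised without NodeGraphQt by counting against minimal stub objects. The stub classes below are purely illustrative; they provide only the attribute names that the counting logic probes:

```python
# Illustrative stubs standing in for NodeGraphQt nodes and ports.
class StubPort:
    def __init__(self, targets):
        self._targets = targets
    def connected_ports(self):
        return self._targets

class StubNode:
    def __init__(self, ports):
        self._ports = ports
    def output_ports(self):
        return self._ports

def count_connections(nodes):
    """Same counting strategy: prefer output_ports()/connected_ports()."""
    total = 0
    for node in nodes:
        try:
            if hasattr(node, 'output_ports'):
                for port in node.output_ports():
                    if hasattr(port, 'connected_ports'):
                        total += len(port.connected_ports())
        except Exception:
            continue
    return total

# input → preprocess → model → postprocess → output: 4 connections total,
# counted once each from the upstream node's output port.
chain = [StubNode([StubPort(["next"])]) for _ in range(4)] + [StubNode([])]
print(count_connections(chain))  # 4
```

Counting only from output ports ensures each edge is counted exactly once, rather than twice (once per endpoint).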
### Improvements
- **Multi-API support**: handles both the `output_ports()` and `outputs()` methods
- **Fault tolerance**: uses try-except to absorb API differences between versions
- **Accurate counting**: correctly totals the connections between all nodes
## 2. Removing the Bottom-Left Canvas Icon
### Problem
- NodeGraphQt displays a logo/icon in the bottom-left corner of the canvas
- It clutters the interface
### Solution
Configure NodeGraphQt in the `setup_node_graph()` method to hide the unwanted UI elements:
```python
def setup_node_graph(self):
    try:
        self.graph = NodeGraph()
        # Configure NodeGraphQt to hide unwanted UI elements
        viewer = self.graph.viewer()
        if viewer:
            # Hide the bottom-left logo/icon
            if hasattr(viewer, 'set_logo_visible'):
                viewer.set_logo_visible(False)
            elif hasattr(viewer, 'show_logo'):
                viewer.show_logo(False)
            # Optionally hide the grid
            if hasattr(viewer, 'set_grid_mode'):
                viewer.set_grid_mode(0)  # 0 = no grid
            elif hasattr(viewer, 'grid_mode'):
                viewer.grid_mode = 0
```
### Improvements
- **Logo hiding**: supports several API variants for hiding the logo
- **Grid configuration**: optional grid hiding
- **Compatibility**: handles different versions of the NodeGraphQt API
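The hasattr-cascade used throughout these fixes can be factored into a small helper. This is a hypothetical refactoring, not code from the dashboard (which inlines the checks); the stub viewer class exists only for the demonstration:

```python
def call_first_available(obj, candidates):
    """Try each (method_name, args) pair in order; call the first method
    that exists on obj and return True, or False if none matched."""
    for name, args in candidates:
        method = getattr(obj, name, None)
        if callable(method):
            method(*args)
            return True
    return False

class OldViewer:
    """Stub mimicking an older NodeGraphQt viewer API."""
    def __init__(self):
        self.logo_shown = True
    def show_logo(self, visible):  # older API spelling
        self.logo_shown = visible

viewer = OldViewer()
hidden = call_first_available(viewer, [
    ('set_logo_visible', (False,)),  # newer API, absent on this stub
    ('show_logo', (False,)),         # older API, used as fallback
])
print(hidden, viewer.logo_shown)  # True False
```

Ordering the candidates newest-first keeps the code preferring current APIs while degrading gracefully on older installations.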
## 3. Global Status Bar
### Problem
- The status bar only appeared inside the middle pipeline editor panel
- It could not extend under the left and right panels, which looked poor
### Solution
#### 3.1 Redesigned Layout Structure
Move the status bar from the pipeline editor panel into the main layout:
```python
def setup_integrated_ui(self):
    # The main layout contains the status bar
    main_layout = QVBoxLayout(central_widget)
    main_layout.setContentsMargins(0, 0, 0, 0)
    main_layout.setSpacing(0)
    # Horizontal splitter for the three panels
    main_splitter = QSplitter(Qt.Horizontal)
    # ... add the left, middle, and right panels
    # Add the splitter to the main layout
    main_layout.addWidget(main_splitter)
    # Add the global status bar at the very bottom
    self.global_status_bar = self.create_status_bar_widget()
    main_layout.addWidget(self.global_status_bar)
```
#### 3.2 Status Bar Style Update
Design a global status bar similar to the one in VSCode:
```python
def create_status_bar_widget(self):
    status_widget = QWidget()
    status_widget.setFixedHeight(28)
    status_widget.setStyleSheet("""
        QWidget {
            background-color: #1e1e2e;
            border-top: 1px solid #45475a;
            margin: 0px;
            padding: 0px;
        }
    """)
    layout = QHBoxLayout(status_widget)
    layout.setContentsMargins(15, 3, 15, 3)
    layout.setSpacing(20)
    # Left: stage count
    self.stage_count_widget = StageCountWidget()
    layout.addWidget(self.stage_count_widget)
    # Middle: flexible spacer
    layout.addStretch()
    # Right: statistics
    self.stats_label = QLabel("Nodes: 0 | Connections: 0")
    layout.addWidget(self.stats_label)
```
#### 3.3 Removing the Duplicate Status Bar
Remove the local status bar from `create_pipeline_editor_panel()`:
```python
def create_pipeline_editor_panel(self):
    # Add the graph widget directly; no local status bar is created here
    if self.graph and NODEGRAPH_AVAILABLE:
        graph_widget = self.graph.widget
        graph_widget.setMinimumHeight(400)
        layout.addWidget(graph_widget)
```
### Visual Improvements
- **Full-width display**: the status bar now spans the entire application width
- **Unified style**: consistent with the status bars of editors like VSCode
- **Clear separation**: a top border cleanly separates the content area from the status bar
## StageCountWidget Optimization
### Size Adjustment
Fit the widget to the global status bar's height:
```python
def __init__(self):
    self.setup_ui()
    self.setFixedSize(120, 22)  # Adjusted from 120x25 to 120x22
```
### Layout Optimization
Keep a horizontal layout suited to status bar display:
```python
def setup_ui(self):
    layout = QHBoxLayout()
    layout.setContentsMargins(5, 2, 5, 2)  # Compact margins
    self.stage_label = QLabel("Stages: 0")
    self.stage_label.setFont(QFont("Arial", 9, QFont.Bold))
    # A transparent background suits the status bar
    self.setStyleSheet("background-color: transparent; border: none;")
```
## Test Verification
### Automated Tests
Created `test_ui_fixes.py` for comprehensive testing:
```bash
🚀 Starting UI fixes tests...
✅ Connection counting improvements - 5/5 tests passed
✅ Canvas cleanup - logo removal logic found
✅ Global status bar - full-width styling verified
✅ StageCountWidget updates - correct sizing (120x22)
✅ Layout structure - no duplicate status bars
📊 Test Results: 5/5 tests passed
🎉 All UI fixes tests passed!
```
### Functional Verification
- **Connection counting**: correctly shows the number of connections between nodes
- **Canvas cleanup**: the bottom-left icon is successfully hidden
- **Status bar layout**: displays at full width, spanning all panels
- **Live updates**: status information updates immediately as nodes change
## Technical Details
### Files Modified
- **`ui/windows/dashboard.py`**: the main file changed
- Improved connection counting in the `analyze_pipeline()` method
- Updated `setup_node_graph()` to hide the logo
- Refactored `setup_integrated_ui()` to support the global status bar
- Optimized `StageCountWidget` for the new layout
### Compatibility Handling
- **Multi-API support**: handles different versions of the NodeGraphQt API
- **Error handling**: safe exception catching prevents crashes from API differences
- **Backward compatibility**: existing functionality is unaffected
### Performance Optimization
- **Efficient counting**: the improved connection counting logic is more accurate
- **Less duplication**: the duplicate status bar creation is removed
- **Resource management**: proper error handling avoids resource leaks
## User Experience Improvements
### Visual
1. **Accurate information**: the connection count reflects the actual state
2. **Cleaner interface**: unnecessary logos and icons are removed
3. **Consistent layout**: the global status bar provides a unified information display
### Functional
1. **Real-time feedback**: status information updates immediately
2. **Full coverage**: the status bar spans the entire application width
3. **Stability**: improved error handling increases robustness
## Summary
These UI fixes resolve the three specific issues raised by users:
1. **🔗 Connection counting fix**: the number of connections between nodes is now displayed correctly
2. **🎨 Canvas cleanup**: the bottom-left icon is removed for a cleaner interface
3. **📊 Global status bar**: the status bar extends across the left and right panels for a better visual experience
The corrected interface is more professional and user-friendly while preserving all existing functionality. Testing shows every improvement works as expected, with no new issues introduced.