Three related fixes to the QObject::connect / QTextCursor warning that
appeared when stopping inference:
1. StdoutCapture: replace signal emission with queue.Queue.put_nowait()
so non-Qt SDK threads (Kneron shutdown) never touch Qt signal machinery.
DeploymentWorker.stdout_captured signal removed; worker now accepts a
stdout_queue and passes it to StdoutCapture.
2. start_deployment: create a QTimer (100 ms) on the main thread to drain the
stdout queue via _drain_stdout_queue(). Connect worker.finished to
_on_worker_finished to stop the timer and flush any remaining output.
3. stop_deployment / wait_for_stop: the background thread was calling
QTextEdit.append() and other widget methods directly, which internally
queues QTextCursor arguments across the thread boundary; this was the real
trigger of the warning. Fixed by having wait_for_stop emit only the _stop_done signal;
all UI updates moved to _on_stop_done slot (main thread).
Also add a QTextCursor import in main.py to pre-register the type with
Qt's meta-type system as a belt-and-suspenders measure.
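A minimal stdlib sketch of the queue-based capture described in points 1 and 2. Names roughly follow the commit (StdoutCapture, the drain helper); the writer/drain split is illustrative, and in the real app a 100 ms QTimer on the main thread calls the drain method and appends the text to the log widget:

```python
import queue
import sys

class StdoutCapture:
    """Context manager that tees sys.stdout into a thread-safe queue.

    put_nowait() never blocks and touches no Qt machinery, so it is
    safe to call from non-Qt SDK threads (e.g. Kneron shutdown).
    """
    def __init__(self, stdout_queue):
        self._queue = stdout_queue
        self._original = None

    def write(self, text):
        self._original.write(text)        # keep console output
        try:
            self._queue.put_nowait(text)  # hand off to the GUI thread
        except queue.Full:
            pass                          # drop rather than block

    def flush(self):
        self._original.flush()

    def __enter__(self):
        self._original = sys.stdout
        sys.stdout = self
        return self

    def __exit__(self, *exc):
        sys.stdout = self._original
        return False

def drain_stdout_queue(stdout_queue):
    """In the real app this runs on the main thread from the QTimer."""
    chunks = []
    while True:
        try:
            chunks.append(stdout_queue.get_nowait())
        except queue.Empty:
            break
    return "".join(chunks)
```

Because only the drain side ever touches widgets, every Qt call stays on the main thread regardless of which thread printed.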
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- result_handler: add _InferenceResultEncoder to handle dataclass objects
(ObjectDetectionResult, ClassificationResult) in JSON serialization;
fixes "Object of type ObjectDetectionResult is not JSON serializable"
- deployment: replace textCursor().movePosition() with toPlainText/setPlainText
for log trimming; eliminates QTextCursor cross-thread Qt warning
- main: remove duplicate setAttribute(AA_EnableHighDpiScaling) call in
setup_application() which ran after QApplication was already created;
fixes "Attribute Qt::AA_EnableHighDpiScaling must be set before
QCoreApplication is created"
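The encoder fix can be sketched with the stdlib alone; the class name mirrors the commit and ObjectDetectionResult here is a stand-in dataclass, not the real result type:

```python
import json
from dataclasses import asdict, dataclass, is_dataclass

class InferenceResultEncoder(json.JSONEncoder):
    """Fall back to dataclasses.asdict() for dataclass result objects."""
    def default(self, o):
        if is_dataclass(o) and not isinstance(o, type):
            return asdict(o)  # recursively converts nested dataclasses
        return super().default(o)

# Stand-in for the real result types named in the commit.
@dataclass
class ObjectDetectionResult:
    label: str
    confidence: float

payload = json.dumps({"result": ObjectDetectionResult("fire", 0.93)},
                     cls=InferenceResultEncoder)
```

Passing the encoder via `cls=` keeps every other json.dumps call site unchanged.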
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Add .autoflow/ with health check, PRD, Design Doc, TDD, progress tracking
- Add tests/conftest.py with PyQt5/KP SDK stubs for unit testing
- Add pytest config to pyproject.toml (pythonpath, import-mode, test naming)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Move test scripts to tests/ directory for better organization
- Add improved YOLOv5 postprocessing with reference implementation
- Update gitignore to exclude *.mflow files and include main.spec
- Add debug capabilities and coordinate scaling improvements
- Enhance multi-series support with proper validation
- Add AGENTS.md documentation and example utilities
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add comprehensive test scripts for multi-series dongle configuration
- Add debugging tools for deployment and flow testing
- Add configuration verification and guide utilities
- Fix stdout/stderr handling in deployment dialog for PyInstaller builds
- Includes port ID configuration tests and multi-series config validation
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Implement PostProcessorOptions system with built-in postprocessing types (fire detection, YOLO v3/v5, classification, raw output)
- Add fire detection as default option maintaining backward compatibility
- Support YOLO v3/v5 object detection with bounding box visualization in live view windows
- Integrate text output with confidence scores and visual indicators for all postprocess types
- Update exact nodes postprocess_node.py to configure postprocessing through UI properties
- Add comprehensive example demonstrating all available postprocessing options and usage patterns
- Enhance WebcamInferenceRunner with dynamic visualization based on result types
Technical improvements:
- Created PostProcessType enum and PostProcessorOptions configuration class
- Built-in postprocessing eliminates external dependencies on Kneron Default examples
- Added BoundingBox, ObjectDetectionResult, and ClassificationResult data structures
- Enhanced live view with color-coded confidence bars and object detection overlays
- Integrated postprocessing options into MultiDongle constructor and exact nodes system
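A plausible shape for the configuration objects named above; the enum members follow the built-in types listed in the commit, while the option fields are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum, auto

class PostProcessType(Enum):
    FIRE_DETECTION = auto()
    YOLO_V3 = auto()
    YOLO_V5 = auto()
    CLASSIFICATION = auto()
    RAW_OUTPUT = auto()

@dataclass
class PostProcessorOptions:
    # Fire detection stays the default to preserve backward compatibility.
    post_type: PostProcessType = PostProcessType.FIRE_DETECTION
    confidence_threshold: float = 0.5   # hypothetical tuning field
    draw_boxes: bool = True             # hypothetical tuning field
```

An options instance would then be passed into the MultiDongle constructor and read by the exact nodes from UI properties.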
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Key improvements:
- Add timeout mechanism (2s) for result ordering to prevent slow devices from blocking pipeline
- Implement performance-biased load balancing with 2x penalty for low-GOPS devices (< 10 GOPS)
- Adjust KL520 GOPS from 3 to 2 for more accurate performance representation
- Remove KL540 references to focus on available hardware
- Add intelligent sequence skipping with timeout results for better throughput
This resolves the issue where multi-series mode had lower FPS than single KL720
due to KL520 devices creating bottlenecks in the result ordering queue.
Performance impact:
- Reduces KL520 task allocation from ~12.5% to ~5-8%
- Prevents pipeline stalls from slow inference results
- Maintains result ordering integrity with timeout fallback
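The timeout-based ordering idea can be sketched as follows; OrderedResultBuffer and the injectable clock are illustrative, not the real class, and the `None` placeholder stands in for the commit's "timeout results":

```python
import time

class OrderedResultBuffer:
    """Release results strictly in sequence order, but never wait more
    than `timeout` seconds for a missing sequence number, so a slow
    KL520 cannot stall results a KL720 has already produced."""
    def __init__(self, timeout=2.0, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock
        self.pending = {}      # seq -> result
        self.next_seq = 0
        self.deadline = None   # set when the head-of-line seq is missing

    def add(self, seq, result):
        self.pending[seq] = result

    def pop_ready(self):
        """Return (seq, result) pairs that may be emitted now."""
        ready = []
        while True:
            if self.next_seq in self.pending:
                ready.append((self.next_seq, self.pending.pop(self.next_seq)))
                self.next_seq += 1
                self.deadline = None
            elif self.pending:  # later results exist but the head is missing
                now = self.clock()
                if self.deadline is None:
                    self.deadline = now + self.timeout  # start waiting
                    break
                if now >= self.deadline:   # give up on the slow device
                    ready.append((self.next_seq, None))  # timeout placeholder
                    self.next_seq += 1
                    self.deadline = None
                else:
                    break
            else:
                break
        return ready
```

Skipping a stale sequence number trades one lost result for the whole pipeline's throughput, which matches the FPS fix described above.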
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Fix mflow_converter to properly handle multi-series configuration creation
- Update InferencePipeline to correctly initialize MultiDongle with multi-series config
- Add comprehensive multi-series configuration validation in mflow_converter
- Enhance deployment dialog to display multi-series configuration details
- Improve analysis and configuration tabs to show proper multi-series info
This resolves the issue where multi-series mode was falling back to single-series
during inference initialization, ensuring proper multi-series dongle support.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Replace tkinter with PyQt5 QFileDialog as primary folder selector to fix macOS crashes
- Add specialized assets_folder property handling in dashboard with validation
- Integrate improved folder dialog utility with ExactModelNode
- Provide detailed validation feedback and user-friendly tooltips
- Maintain backward compatibility with tkinter as fallback
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Disable footer section in dashboard login UI
- Clean up layout by removing footer elements
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Implement SingleInstance class using QSharedMemory and file locking
- Cross-platform support with fcntl on Unix/macOS and file creation on Windows
- Show warning dialog when user tries to launch second instance
- Automatic cleanup of resources on application exit
- Graceful handling of instance detection failures
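A Unix-only sketch of the fcntl half of the approach described above; the real class also uses QSharedMemory, and on Windows exclusive file creation replaces flock. The class and method names here are illustrative:

```python
import fcntl
import os
import tempfile

class SingleInstance:
    """Hold an exclusive advisory lock on a well-known lock file; the
    second process fails to acquire it and can show a warning dialog."""
    def __init__(self, name="cluster4npu_ui"):
        self.path = os.path.join(tempfile.gettempdir(), f"{name}.lock")
        self.fd = None

    def try_acquire(self):
        self.fd = open(self.path, "w")
        try:
            fcntl.flock(self.fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
            return True        # we are the first instance
        except OSError:
            self.fd.close()
            self.fd = None
            return False       # another instance holds the lock

    def release(self):
        """Cleanup on application exit; the OS also drops the lock if
        the process dies, which covers crashes gracefully."""
        if self.fd is not None:
            fcntl.flock(self.fd, fcntl.LOCK_UN)
            self.fd.close()
            self.fd = None
```

flock locks conflict between separate file descriptors even within one process, so the check also catches a second launch from the same user session.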
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Remove "All files (*)" option from file dialog, only allow .mflow files
- Change error handling to return to login page instead of opening empty pipeline
- Update error message to be more specific about file format requirements
- Properly clean up dashboard window when file load fails
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Update all imports to use relative imports instead of cluster4npu_ui.* prefix
- Remove export configuration functionality from dashboard menu
- Remove performance analysis action from pipeline menu
- Update dependencies in pyproject.toml to include NodeGraphQt and PyQt5
- Maintain clean import structure across all modules
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Implement MultiSeriesDongleManager for parallel inference across different dongle series
- Add GOPS-based load balancing (KL720: 1425 GOPS, KL520: 345 GOPS ratio ~4:1)
- Ensure sequential result output despite heterogeneous processing speeds
- Include comprehensive threading architecture with dispatcher, per-dongle workers, and result ordering
- Add performance statistics and monitoring capabilities
- Update project configuration and documentation
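The GOPS-based balancing can be sketched in a few lines; the ratings come from the commit message, and the dispatch heuristic (fewest in-flight tasks per GOPS) is an assumption about how the real dispatcher weighs devices:

```python
# GOPS ratings taken from the commit message; treat as illustrative.
DEVICE_GOPS = {"KL720": 1425, "KL520": 345}

def task_shares(devices):
    """Ideal work split proportional to GOPS (KL720:KL520 is ~4:1)."""
    total = sum(DEVICE_GOPS[d] for d in devices)
    return {d: DEVICE_GOPS[d] / total for d in devices}

def pick_device(devices, inflight):
    """Dispatch the next frame to the device with the most spare
    capacity, i.e. the fewest in-flight tasks per GOPS."""
    return min(devices, key=lambda d: inflight[d] / DEVICE_GOPS[d])
```

With this weighting a KL720 keeps receiving frames even while several are in flight, because its normalized load stays below an idle-ish KL520's.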
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add smart path truncation for long file paths (preserves filename and parent folder)
- Set maximum width constraints on all UI components (QPushButton, QComboBox, QSpinBox, QDoubleSpinBox, QLineEdit)
- Add tooltips showing full paths for truncated file path buttons
- Disable horizontal scrollbar and optimize right panel width (320-380px)
- Improve styling for all property widgets with consistent theme
- Add better placeholder text for input fields
Key improvements:
- File paths like "C:/Very/Long/Path/.../filename.nef" → "...Long/Path/filename.nef"
- All widgets limited to 250px max width to prevent panel expansion
- Enhanced hover and focus states for better UX
- Properties panel now fits within fixed width without horizontal scroll
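The truncation rule ("preserve filename and parent folder") can be sketched like this; the function name and the 40-character budget are assumptions, not the shipped code:

```python
def truncate_path(path, max_chars=40):
    """Elide the middle of a long path, keeping the filename and its
    parent folder so the button text stays recognizable."""
    if len(path) <= max_chars:
        return path
    parts = path.replace("\\", "/").split("/")
    if len(parts) <= 2:
        return path                      # nothing meaningful to elide
    return ".../" + "/".join(parts[-2:])
```

The full, untruncated path still goes into the button's tooltip so nothing is lost.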
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add upload_fw property with enhanced UI checkbox styling
- Connect checkbox to inference pipeline process
- Enable/disable firmware upload based on user selection
- Add visual feedback and logging for firmware upload status
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Simplified language for better readability
- Added specific performance expectations (4-5 FPS)
- Clear test scenarios for QA validation
- Direct problem-to-solution mapping
- Removed technical jargon for broader audience
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Minor improvements:
- Remove duplicate logging from inference results to reduce console noise
- Update deployment dialog UI text to remove emoji for cleaner display
- Clean up commented debug statements across multiple files
- Improve user experience with more professional terminal output
- Maintain functionality while reducing visual clutter
This commit focuses on polish and user experience improvements
without changing core functionality.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Major improvements:
- Add intelligent memory management for both input and output queues
- Implement frame dropping strategy to prevent memory overflow
- Set output queue limit to 50 results with FIFO cleanup
- Add input queue management with real-time frame dropping
- Filter async results from callbacks and display to reduce noise
- Improve system stability and prevent queue-related hangs
- Add comprehensive logging for dropped frames and results
Performance enhancements:
- Maintain real-time processing by prioritizing latest frames
- Prevent memory accumulation that previously caused system freezes
- Ensure consistent queue size reporting and FPS calculations
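The 50-result cap with FIFO cleanup can be sketched with a bounded deque; the class name and dropped-frame counter are illustrative:

```python
from collections import deque
from threading import Lock

class BoundedResultQueue:
    """Output queue capped at `maxlen` results; when full, the oldest
    result is dropped (FIFO cleanup) so memory cannot grow unbounded."""
    def __init__(self, maxlen=50):
        self._items = deque(maxlen=maxlen)  # deque evicts from the left when full
        self._lock = Lock()
        self.dropped = 0                    # for the dropped-frame logging

    def put(self, item):
        with self._lock:
            if len(self._items) == self._items.maxlen:
                self.dropped += 1           # count what we sacrifice
            self._items.append(item)

    def get_latest(self):
        with self._lock:
            return self._items[-1] if self._items else None

    def __len__(self):
        with self._lock:
            return len(self._items)
```

Prioritizing the newest result is what keeps the display real-time: a reader that always takes `get_latest()` never falls behind the camera.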
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Rebrand README from InferencePipeline to Cluster4NPU UI Visual Pipeline Designer
- Focus documentation on PyQt5-based GUI and drag-and-drop workflow
- Update PROJECT_SUMMARY with current capabilities and focused development priorities
- Streamline DEVELOPMENT_ROADMAP with 4-phase implementation plan
- Remove redundant Chinese technical summary files (STAGE_IMPROVEMENTS_SUMMARY.md, UI_FIXES_SUMMARY.md, STATUS_BAR_FIXES_SUMMARY.md)
- Align all documentation with actual three-panel UI architecture and NodeGraphQt integration
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add pipeline activity logging every 10 results to track processing
- Add queue size monitoring in InferencePipeline coordinator
- Add camera frame capture logging every 100 frames
- Add MultiDongle send/receive thread logging every 100 operations
- Add error handling for repeated callback failures in camera source
This will help identify where the pipeline stops processing:
- Camera capture stopping
- MultiDongle threads blocking
- Pipeline coordinator hanging
- Queue capacity issues
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Remove all emojis from terminal output formatting for cleaner display
- Add debug print statement to track pipeline.get_current_fps() values
- Change FPS display to "Pipeline FPS (Output Queue)" for clarity
- Simplify output formatting by removing emoji decorations
- This will help identify why FPS calculation isn't working as expected
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Remove qRegisterMetaType import that is not available in all PyQt5 versions
- Remove QTextCursor import and registration that was causing import error
- Simplify deployment dialog initialization to avoid PyQt5 compatibility issues
- The QTextCursor warning was not critical and the registration was unnecessary
This fixes the "cannot import name 'qRegisterMetaType' from 'PyQt5.QtCore'" error
that prevented deployment dialog from opening.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Remove kp.core.set_timeout() call that causes crashes when camera is connected
- Add explanatory message indicating timeout is skipped for stability
- This prevents the system crash that occurs during camera initialization
- Trade-off: Removes USB timeout but ensures stable camera operation
The timeout setting was conflicting with the camera connection process,
causing the entire system to crash during device initialization.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add qRegisterMetaType(QTextCursor) to prevent Qt threading warning
- Import QTextCursor and qRegisterMetaType from PyQt5
- Resolves "Cannot queue arguments of type 'QTextCursor'" warning
- Ensures thread-safe GUI updates for terminal display
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Re-enable kp.core.set_timeout() which is required for proper device communication
- Fix GUI terminal truncation issue by using append() instead of setPlainText()
- Remove aggressive line limiting that was causing log display to stop midway
- Implement gentler memory management (trim only after 1000+ lines)
- This should resolve pipeline timeout issues and complete log display
Disabling the USB timeout in the previous commit caused stage timeouts without inference results.
The terminal display issue was due to frequent text replacement causing display corruption.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add time-window based FPS calculation using output queue timestamps
- Replace misleading "Theoretical FPS" (based on processing time) with real "Pipeline FPS"
- Track actual inference output generation rate over 10-second sliding window
- Add thread-safe FPS calculation with proper timestamp management
- Display realistic FPS values (4-9 FPS) instead of inflated values (90+ FPS)
Key improvements:
- _record_output_timestamp(): Records when each output is generated
- get_current_fps(): Calculates FPS based on actual throughput over time window
- Thread-safe implementation with fps_lock for concurrent access
- Automatic cleanup of old timestamps outside the time window
- Integration with GUI display to show meaningful FPS metrics
This provides users with accurate inference throughput measurements that reflect
real-world performance, especially important for multi-dongle setups where
understanding actual scaling is crucial.
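The time-window calculation can be sketched as below; the class is illustrative (the real code implements `_record_output_timestamp()` / `get_current_fps()` on the pipeline itself), and the injectable clock exists only to make the sketch testable:

```python
import threading
import time
from collections import deque

class FpsWindow:
    """FPS over a sliding time window, measured from output timestamps
    rather than per-frame processing time."""
    def __init__(self, window=10.0, clock=time.monotonic):
        self.window = window
        self.clock = clock
        self._stamps = deque()
        self._lock = threading.Lock()   # fps_lock in the commit's terms

    def record_output(self):
        with self._lock:
            self._stamps.append(self.clock())

    def current_fps(self):
        now = self.clock()
        with self._lock:
            # Automatic cleanup of timestamps outside the window.
            while self._stamps and now - self._stamps[0] > self.window:
                self._stamps.popleft()
            if len(self._stamps) < 2:
                return 0.0
            span = self._stamps[-1] - self._stamps[0]
            return (len(self._stamps) - 1) / span if span > 0 else 0.0
```

Dividing intervals (count minus one) by elapsed span is what keeps the number honest: a burst of queued results cannot inflate it the way per-frame processing time did.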
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Comment out print() statements in InferencePipeline that duplicate GUI callback output
- Prevents each inference result from appearing multiple times in terminal
- Keeps logging system clean while maintaining GUI formatted display
- This was causing terminal output to show each result 2-3 times due to:
1. InferencePipeline print() statements captured by StdoutCapture
2. Same results formatted and sent via terminal_output callback
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add StdoutCapture context manager to capture all print() statements
- Connect captured output to GUI terminal display via stdout_captured signal
- Fix logging issue where pipeline initialization and operation logs were not shown in app
- Prevent infinite recursion with _emitting flag in TeeWriter
- Ensure both console and GUI receive all log messages during deployment
- Comment out USB timeout setting that was causing device timeout issues
This resolves the issue where logs would stop appearing partway through in the app,
ensuring complete visibility of MultiDongle and InferencePipeline operations.
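The recursion guard can be sketched as follows; TeeWriter and the _emitting flag are named in the commit, while the forward-callback shape is an assumption:

```python
import io

class TeeWriter:
    """Write to the original stream and forward to a GUI callback; the
    _emitting flag stops anything the callback itself writes from
    re-entering the forward path and recursing forever."""
    def __init__(self, original, forward):
        self._original = original
        self._forward = forward
        self._emitting = False

    def write(self, text):
        self._original.write(text)
        if self._emitting:
            return               # callback wrote while we were forwarding
        self._emitting = True
        try:
            self._forward(text)
        finally:
            self._emitting = False

    def flush(self):
        self._original.flush()

# Demonstrate the guard: a handler that itself writes to the tee.
buf = io.StringIO()
captured = []
def forward(text):
    captured.append(text)
    tee.write("echo")            # would recurse without the flag
tee = TeeWriter(buf, forward)
tee.write("hello")
```

Without the flag, `forward` writing through the tee would call `forward` again, looping until the recursion limit.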
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Fixed NameError where 'processed_result' was referenced but not defined.
Should use 'inference_result', which contains the actual inference output
from MultiDongle.get_latest_inference_result().
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Key fixes:
1. Remove 'block' parameter from put_input() call - not supported in standalone code
2. Remove 'timeout' parameter from get_latest_inference_result() call
3. Improve _has_inference_result() logic to properly detect real inference results
- Don't count "Processing" or "async" status as valid results
- Only count actual tuple (prob, result_str) or meaningful dict results
- Match standalone code behavior for FPS calculation
This should resolve the "unexpected keyword argument" errors and
provide accurate FPS counting like the standalone baseline.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Key changes:
1. FPS Calculation: Only count when stage receives actual inference results
- Add _has_inference_result() method to check for valid results
- Only increment processed_count when real inference result is available
- This measures "inferences per second", not "frames per second"
2. Reduced Log Spam: Remove excessive preprocessing debug logs
- Remove shape/dtype logs for every frame
- Only log successful inference results
- Keep essential error logs
3. Maintain Async Pattern: Keep non-blocking processing
- Still use timeout=0.001 for get_latest_inference_result
- Still use block=False for put_input
- No blocking while loops
Expected result: ~4 FPS (1 dongle) vs ~9 FPS (2 dongles)
matching standalone code behavior.
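The result check in point 1 can be sketched as a small predicate; the function mirrors `_has_inference_result()` as described, but the exact status strings and dict shape are assumptions:

```python
def has_inference_result(result):
    """Count only real inference outputs toward FPS, matching the
    standalone baseline: a (prob, result_str) tuple or a non-empty
    dict, but never 'Processing'/'async' status markers."""
    if isinstance(result, tuple) and len(result) == 2:
        return True                      # (prob, result_str) shape
    if isinstance(result, dict):
        status = str(result.get("status", "")).lower()
        if status in ("processing", "async"):
            return False                 # in-flight marker, not a result
        return bool(result)              # empty dict is not a result
    return False
```

Incrementing `processed_count` only when this returns True is what turns the counter from frames per second into inferences per second.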
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>