Key changes:
1. FPS Calculation: Only count when stage receives actual inference results
- Add _has_inference_result() method to check for valid results
- Only increment processed_count when real inference result is available
- This measures "inferences per second" not "frames per second"
2. Reduced Log Spam: Remove excessive preprocessing debug logs
- Remove shape/dtype logs for every frame
- Only log successful inference results
- Keep essential error logs
3. Maintain Async Pattern: Keep non-blocking processing
- Still use timeout=0.001 for get_latest_inference_result
- Still use block=False for put_input
- No blocking while loops
Expected result: ~4 FPS (1 dongle) vs ~9 FPS (2 dongles), matching the
standalone code's behavior.
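A minimal sketch of the counting change. _has_inference_result and processed_count come from this change; the surrounding class, method, and attribute names are placeholders:

```python
import time


class InferenceStage:
    """Illustrative stage that counts inferences, not frames."""

    def __init__(self):
        self.processed_count = 0
        self._window_start = time.time()

    def _has_inference_result(self, result):
        # A frame only counts once the dongle returns something usable.
        return result is not None and len(result) > 0

    def on_iteration(self, result):
        if not self._has_inference_result(result):
            return  # frame handled, but no inference completed yet
        self.processed_count += 1
        elapsed = time.time() - self._window_start
        if elapsed >= 1.0:
            print(f"inference FPS: {self.processed_count / elapsed:.1f}")
            self.processed_count = 0
            self._window_start = time.time()
```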
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Key fixes:
1. FPS Calculation: Only count actual inference results, not frame processing
- Previous: counted every frame processed (~90 FPS, incorrect)
- Now: only counts when actual inference results are received (~9 FPS, correct)
- Return None from _process_data when no inference result available
- Skip FPS counting for iterations without real results
2. Log Reduction: Significantly reduced verbose logging
- Removed excessive debug prints for preprocessing steps
- Removed "No inference result" spam messages
- Only log actual successful inference results
3. Async Processing: Maintain proper async pattern
- Still use non-blocking get_latest_inference_result(timeout=0.001)
- Still use non-blocking put_input(block=False)
- But only count real inference throughput for FPS
This should now match the standalone code's behavior: ~4 FPS (1 dongle) vs ~9 FPS (2 dongles)
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
The key issue was in InferencePipeline._process_data(), where a 5-second
while loop blocked while waiting for inference results. This completely
serialized processing and prevented multiple dongles from working in parallel.
Changes:
- Replace blocking while loop with single non-blocking call
- Use timeout=0.001 for get_latest_inference_result (async pattern)
- Use block=False for put_input to prevent queue blocking
- Increase worker queue timeout from 0.1s to 1.0s
- Handle async processing status properly
This matches the pattern from the standalone code that achieved
4.xx FPS (1 dongle) vs 9.xx FPS (2 dongles) scaling.
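A sketch of the resulting _process_data shape, assuming put_input raises queue.Full when the input queue is saturated (the real class holds more state; only the non-blocking pattern is shown):

```python
import queue


class InferencePipeline:
    """Sketch: only the non-blocking _process_data pattern from this fix."""

    def __init__(self, dongle):
        # 'dongle' is assumed to expose put_input / get_latest_inference_result.
        self.dongle = dongle

    def _process_data(self, frame):
        # Feed the frame without waiting; if the input queue is full the
        # frame is dropped so the capture loop never stalls.
        try:
            self.dongle.put_input(frame, block=False)
        except queue.Full:
            pass

        # Poll once with a near-zero timeout instead of spinning in a
        # 5-second while loop; None means "no result ready yet".
        return self.dongle.get_latest_inference_result(timeout=0.001)
```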
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Enable USB timeout (5000ms) for stable communication
- Fix send thread timeout from 0.01s to 1.0s for better blocking
- Update WebcamInferenceRunner to use async pattern (non-blocking)
- Add non-blocking put_input option to prevent frame drops
- Improve thread stopping mechanism with better cleanup
These changes follow the Kneron official example pattern and should
enable proper parallel processing across multiple dongles.
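A rough sketch of the send-thread pattern these changes aim for; the SendWorker name and its internals are illustrative, only the 1.0 s queue timeout and the non-blocking put_input option come from this change:

```python
import queue
import threading


class SendWorker:
    """Illustrative worker: block up to 1.0 s on the queue, check a stop
    flag between waits, and shut down cleanly."""

    def __init__(self, max_queue_size=32):
        self._queue = queue.Queue(maxsize=max_queue_size)
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def put_input(self, frame, block=True):
        # With block=False a full queue raises queue.Full, so the caller
        # can drop the frame instead of stalling the capture loop.
        self._queue.put(frame, block=block)

    def _run(self):
        while not self._stop.is_set():
            try:
                frame = self._queue.get(timeout=1.0)  # was 0.01 s: mostly busy-waiting
            except queue.Empty:
                continue
            self._send_to_dongle(frame)

    def _send_to_dongle(self, frame):
        pass  # placeholder for the actual USB send

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join(timeout=2.0)
```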
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Fixed DeviceDescriptorList object attribute error by properly accessing
the device_descriptor_list attribute instead of treating the result as
a direct list of devices.
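In isolation, the corrected access pattern looks like this (attribute name as described above; exact kp API details may vary by Kneron PLUS version):

```python
import kp

# scan_devices() returns a DeviceDescriptorList object, not a plain list,
# so iterate its device_descriptor_list attribute.
scan_result = kp.core.scan_devices()
for descriptor in scan_result.device_descriptor_list:
    print(descriptor)
```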
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
This commit integrates the dongle model detection logic into the device
detection method, refactoring it to:
- Handle device descriptors in list or object format.
- Extract the product_id and usb_port_id for each device.
- Use the product_id to identify the dongle model.
- Return a more detailed device information structure.
The previously deleted files were moved to another directory.
- Remove all debug print statements from deployment dialog
- Remove debug output from workflow orchestrator and inference pipeline
- Remove test signal emissions and unused imports
- Code is now clean and production-ready
- Results are successfully flowing from inference to GUI display
- Remove dependency on result_handler for setting pipeline result callback
- Always call result_callback when handle_result is triggered
- This fixes the issue where GUI callbacks weren't being called because
output type 'display' wasn't supported, causing result_handler to be None
- Add more debug output to trace callback flow
- Add debug output in InferencePipeline result callback to see if it's called
- Add debug output in WorkflowOrchestrator handle_result to trace callback flow
- This will help identify exactly where the callback chain is breaking
- Previous test showed GUI can receive signals but callbacks aren't triggered
- Add time import for test result generation
- Add test signal emissions to verify GUI connection works
- Add debug prints for signal establishment
- Test both result_updated and terminal_output signals
- This will help identify if the issue is signal connection or data flow
- Add debug prints in combined_result_callback to see received data
- Add debug prints in update_inference_results to track GUI updates
- Fix tuple order in terminal formatting to match actual (probability, result) format
- This will help identify why results show in terminal but not in GUI
- Add upload_fw property to ExactModelNode for firmware upload control
- Display all model node properties in right panel (model_path, scpu_fw_path, ncpu_fw_path, dongle_series, num_dongles, port_id, upload_fw)
- Replace console terminal output with GUI terminal display in deployment dialog
- Add Terminal Output section to deployment tab with proper formatting
- Terminal results now appear in app view instead of console for packaged apps
- Maintain backward compatibility with existing pipeline configurations
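One possible shape for the Terminal Output section, assuming PyQt5; the widget class and method names here are illustrative:

```python
from PyQt5.QtWidgets import QPlainTextEdit


class TerminalOutputWidget(QPlainTextEdit):
    """Read-only text area that receives formatted result lines."""

    def __init__(self, parent=None):
        super().__init__(parent)
        self.setReadOnly(True)
        self.setMaximumBlockCount(1000)  # cap history so the log cannot grow unbounded

    def append_line(self, text: str):
        self.appendPlainText(text)
        # Keep the newest line visible.
        bar = self.verticalScrollBar()
        bar.setValue(bar.maximum())
```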
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add result callback mechanism to WorkflowOrchestrator
- Implement result_updated signal in DeploymentWorker
- Create detailed inference results display with timestamps and formatted output
- Support both tuple and dict result formats
- Add auto-scrolling results panel with history management
- Connect pipeline results to Live View tab for real-time monitoring
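A sketch of the signal plumbing, assuming PyQt5. DeploymentWorker and result_updated are the names above; _on_pipeline_result stands in for whatever method the orchestrator's result callback points at:

```python
from PyQt5.QtCore import QObject, pyqtSignal


class DeploymentWorker(QObject):
    # 'object' keeps the signature loose so tuples, dicts, and numpy arrays
    # can all cross the thread boundary unchanged.
    result_updated = pyqtSignal(object)

    def _on_pipeline_result(self, result):
        # Called from the pipeline thread via the orchestrator's result
        # callback; re-emitted as a Qt signal so the GUI thread can update
        # the Live View tab safely.
        self.result_updated.emit(result)
```

The dialog then connects result_updated to its update_inference_results slot so every result lands on the GUI thread.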
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Fix ambiguous truth value error in get_latest_inference_result method
- Fix ambiguous truth value error in postprocess function
- Replace direct array evaluation with explicit length checks
- Use proper None checks instead of truthy evaluation on numpy arrays
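The pattern behind the fix, shown in isolation:

```python
import numpy as np

result = np.array([0.1, 0.7, 0.2])

# Ambiguous: `if result:` raises "The truth value of an array with more than
# one element is ambiguous" whenever the array has more than one element.

# Explicit checks instead:
if result is not None and len(result) > 0:
    best_class = int(np.argmax(result))
```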
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Fix remaining array comparison error in inference result validation
- Update PyQt signal signature for proper numpy array handling
- Improve DeploymentWorker to keep running after deployment
- Enhance stop button with non-blocking UI updates and better error handling
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Fix ambiguous truth value error in InferencePipeline result handling
- Add stop inference button to deployment dialog with proper UI state management
- Improve error handling for tuple vs dict result types
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add detailed TODO.md with complete project roadmap and implementation priorities
- Implement CameraSource class with multi-camera support and real-time capture
- Add VideoFileSource class with batch processing and frame control capabilities
- Create foundation for complete input/output data flow integration
- Document current auto-resize preprocessing implementation status
- Establish clear development phases and key missing components
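An illustrative shape for CameraSource, assuming OpenCV as the capture backend (the real class likely exposes more controls):

```python
import cv2


class CameraSource:
    """Minimal camera input source: open a device index, yield frames."""

    def __init__(self, camera_index: int = 0):
        self.capture = cv2.VideoCapture(camera_index)
        if not self.capture.isOpened():
            raise RuntimeError(f"Cannot open camera {camera_index}")

    def read_frame(self):
        ok, frame = self.capture.read()
        return frame if ok else None

    def release(self):
        self.capture.release()
```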
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Always store firmware paths (scpu_fw_path, ncpu_fw_path) when provided, not just when upload_fw=True
- Restore firmware upload condition to only run when upload_fw=True
- Fix 'MultiDongle' object has no attribute 'scpu_fw_path' error during pipeline initialization
- Ensure firmware paths are available for both upload and non-upload scenarios
This resolves the pipeline deployment error where firmware paths were missing
even when provided to the constructor, causing initialization failures.
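The shape of the fix, sketched; initialize and _upload_firmware are placeholder names, the attributes are the ones named above:

```python
class MultiDongle:
    """Sketch of the constructor-level change only; other parameters omitted."""

    def __init__(self, scpu_fw_path=None, ncpu_fw_path=None, upload_fw=False):
        # Always keep the paths so later pipeline stages can read them,
        # even when no upload is requested.
        self.scpu_fw_path = scpu_fw_path
        self.ncpu_fw_path = ncpu_fw_path
        self.upload_fw = upload_fw

    def initialize(self):
        # Upload only when explicitly requested; the attributes exist either way.
        if self.upload_fw and self.scpu_fw_path and self.ncpu_fw_path:
            self._upload_firmware(self.scpu_fw_path, self.ncpu_fw_path)

    def _upload_firmware(self, scpu_fw_path, ncpu_fw_path):
        pass  # placeholder for the kp firmware upload call
```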
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Device Detection Updates:
- Update device series detection to use product_id mapping (0x100 -> KL520, 0x720 -> KL720)
- Handle JSON dict format from kp.core.scan_devices() properly
- Extract usb_port_id correctly from device descriptors
- Support multiple device descriptor formats (dict, list, object)
- Enhanced debug output shows Product ID for verification
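A compact sketch of the mapping and format handling described above; field names follow this commit, the helper itself is illustrative:

```python
PRODUCT_ID_TO_SERIES = {
    0x100: "KL520",
    0x720: "KL720",
}


def device_series_and_port(descriptor):
    """Return (series, usb_port_id) for a descriptor in dict or object form."""
    if isinstance(descriptor, dict):
        product_id = descriptor.get("product_id")
        port_id = descriptor.get("usb_port_id")
    else:
        product_id = getattr(descriptor, "product_id", None)
        port_id = getattr(descriptor, "usb_port_id", None)
    return PRODUCT_ID_TO_SERIES.get(product_id, "Unknown"), port_id
```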
Pipeline Deployment Fixes:
- Remove invalid preprocessor/postprocessor parameters from MultiDongle constructor
- Add max_queue_size parameter support to MultiDongle
- Fix pipeline stage initialization to match MultiDongle constructor
- Add auto_detect parameter support for pipeline stages
- Store stage processors as instance variables for future use
Example Updates:
- Update device_detection_example.py to show Product ID in output
- Enhanced error handling and format detection
Resolves pipeline deployment error: "MultiDongle.__init__() got an unexpected keyword argument 'preprocessor'"
Now properly handles real device descriptors with correct product_id to series mapping.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Replace simulated dongle detection with actual kp.core.scan_devices()
- Display real device series (KL520, KL720, etc.) and port IDs in UI
- Add device information management methods (get_detected_devices, refresh_dongle_detection, etc.)
- Enhanced performance estimation based on actual detected devices
- Add device-specific optimization suggestions and warnings
- Fallback to simulation mode if device scanning fails
- Store detected device info for use throughout the application
The Dashboard now shows real Kneron device information when "Detect Dongles" is clicked,
displaying format: "KL520 Dongle - Port 28" with total device count.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add scan_devices() method using kp.core.scan_devices() for device discovery
- Add connect_auto_detected_devices() for automatic device connection
- Add device series detection (KL520, KL720, KL630, KL730, KL540, etc.)
- Add auto_detect parameter to MultiDongle constructor
- Add get_device_info() and print_device_info() methods to display port IDs and series
- Update connection logic to use kp.core.connect_devices() per official docs
- Add device_detection_example.py with usage examples
- Maintain backward compatibility with manual port specification
Features display dongle series and port ID as requested for better device management.
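The auto-detection flow, roughly (kp.core calls as referenced above; exact keyword arguments may vary across Kneron PLUS versions):

```python
import kp

scan_result = kp.core.scan_devices()
port_ids = [d.usb_port_id for d in scan_result.device_descriptor_list]

if port_ids:
    device_group = kp.core.connect_devices(usb_port_ids=port_ids)
    print(f"Connected {len(port_ids)} device(s) on ports {port_ids}")
```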
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add DataProcessor abstract base class with process method
- Add PostProcessor class for handling inference output data
- Fix PreProcessor inheritance from DataProcessor
- Resolves "name 'DataProcessor' is not defined" error during pipeline deployment
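The class relationship, sketched with placeholder process bodies:

```python
from abc import ABC, abstractmethod


class DataProcessor(ABC):
    """Base class both PreProcessor and PostProcessor derive from."""

    @abstractmethod
    def process(self, data):
        """Transform data and return the result."""


class PreProcessor(DataProcessor):
    def process(self, data):
        return data  # placeholder: resize / normalize the input frame


class PostProcessor(DataProcessor):
    def process(self, data):
        return data  # placeholder: interpret raw inference output
```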
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
• Comment out pipeline editor to resolve import conflicts
• Update test.mflow with new node IDs and preprocess node
• Add new deployment screenshot
• Remove old screenshot file
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Major Features:
• Complete deployment dialog system with validation and dongle management
• Enhanced dashboard with deploy button and validation checks
• Comprehensive deployment test suite and demo scripts
• Pipeline validation for model paths, firmware, and port configurations
• Real-time deployment status tracking and error handling
Technical Improvements:
• Node property validation for deployment readiness
• File existence checks for models and firmware files
• Port ID validation and format checking
• Integration between UI components and core deployment functions
• Comprehensive error messaging and user feedback
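A hypothetical validation helper illustrating the checks above; the property names mirror the model node properties listed earlier in this log:

```python
from pathlib import Path


def validate_node_for_deployment(node) -> list:
    """Collect human-readable validation errors for one model node."""
    errors = []

    for label, path in [("model", node.model_path),
                        ("SCPU firmware", node.scpu_fw_path),
                        ("NCPU firmware", node.ncpu_fw_path)]:
        if not path or not Path(path).is_file():
            errors.append(f"{label} file not found: {path!r}")

    # Port IDs are expected to be integers (or a comma-separated list of them).
    try:
        [int(p) for p in str(node.port_id).split(",") if p.strip()]
    except ValueError:
        errors.append(f"invalid port_id: {node.port_id!r}")

    return errors
```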
New Components:
• DeploymentDialog with advanced configuration options
• Pipeline deployment validation system
• Test deployment scripts with various scenarios
• Enhanced dashboard UI with deployment workflow
• Screenshot updates reflecting new deployment features
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>