Key changes:
1. FPS Calculation: Only count when stage receives actual inference results
- Add _has_inference_result() method to check for valid results
- Increment processed_count only when a real inference result is available
- This measures "inferences per second", not "frames per second"
2. Reduced Log Spam: Remove excessive preprocessing debug logs
- Remove shape/dtype logs for every frame
- Only log successful inference results
- Keep essential error logs
3. Maintain Async Pattern: Keep non-blocking processing
- Still use timeout=0.001 for get_latest_inference_result
- Still use block=False for put_input
- No blocking while loops
Expected result: ~4 FPS (1 dongle) vs ~9 FPS (2 dongles), matching the standalone code's behavior.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
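The counting rule from change 1 can be sketched as follows. The `_has_inference_result()` name comes from the commit text; the surrounding class and the "empty container means no result" convention are illustrative assumptions, not the actual implementation:

```python
import time


class StageFPSCounter:
    """Counts inferences per second: only frames that carry a real
    inference result increment processed_count."""

    def __init__(self):
        self.processed_count = 0
        self.start_time = time.monotonic()

    @staticmethod
    def _has_inference_result(result):
        # None (or an empty container) means the dongle produced no
        # real inference output for this frame.
        if result is None:
            return False
        try:
            return len(result) > 0
        except TypeError:
            return True  # non-container results count as valid

    def update(self, result):
        # Increment only when a real inference result is available.
        if self._has_inference_result(result):
            self.processed_count += 1

    def fps(self):
        elapsed = time.monotonic() - self.start_time
        return self.processed_count / elapsed if elapsed > 0 else 0.0
```

With this rule, frames that pass through without a result leave the counter untouched, so the reported number is inference throughput rather than camera frame rate.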
The key issue was in InferencePipeline._process_data(), where a 5-second
while loop blocked waiting for inference results. This completely
serialized processing and prevented multiple dongles from working in parallel.
Changes:
- Replace blocking while loop with single non-blocking call
- Use timeout=0.001 for get_latest_inference_result (async pattern)
- Use block=False for put_input to prevent queue blocking
- Increase worker queue timeout from 0.1s to 1.0s
- Handle async processing status properly
This matches the pattern from the standalone code, which achieved
4.xx FPS (1 dongle) vs 9.xx FPS (2 dongles) scaling.
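The non-blocking step described above can be sketched with plain `queue.Queue` objects. The queue names and the "return None when pending" convention are illustrative; only the `block=False` / `timeout=0.001` pattern comes from the commit:

```python
import queue


def process_data(input_queue, output_queue, frame):
    """One non-blocking pipeline step: push the frame if there is room,
    pull at most one result if one is ready, never spin in a loop."""
    try:
        # block=False: drop the frame rather than stall the pipeline
        # when the dongle's input queue is full (backpressure).
        input_queue.put(frame, block=False)
    except queue.Full:
        pass

    try:
        # timeout=0.001 keeps the call effectively non-blocking while
        # still yielding briefly to other threads.
        return output_queue.get(timeout=0.001)
    except queue.Empty:
        return None  # result not ready yet; caller treats this as pending
```

Because neither call can block for more than a millisecond, several dongles can be serviced from the same loop instead of being serialized behind one 5-second wait.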
- Remove all debug print statements from deployment dialog
- Remove debug output from workflow orchestrator and inference pipeline
- Remove test signal emissions and unused imports
- Code is now clean and production-ready
- Results are successfully flowing from inference to GUI display
- Add debug output in InferencePipeline result callback to see if it's called
- Add debug output in WorkflowOrchestrator handle_result to trace callback flow
- This will help identify exactly where the callback chain is breaking
- A previous test showed the GUI can receive signals, but the callbacks aren't triggered
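One lightweight way to trace a callback chain like this is to wrap each handler so every invocation (and any swallowed exception) is logged. This decorator is an illustrative sketch, not code from the repository:

```python
import functools
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("callback-trace")


def trace_callback(name):
    """Wrap a callback so each invocation and each raised exception is
    logged, making visible which link in the chain never fires."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            log.debug("%s called with %d positional args", name, len(args))
            try:
                return fn(*args, **kwargs)
            except Exception:
                log.exception("%s raised", name)
                raise
        return wrapper
    return deco
```

Applying it to both the InferencePipeline result callback and WorkflowOrchestrator.handle_result shows immediately whether the break is before the first callback or between the two.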
- Fix remaining array comparison error in inference result validation
- Update PyQt signal signature for proper numpy array handling
- Improve DeploymentWorker to keep running after deployment
- Enhance stop button with non-blocking UI updates and better error handling
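The array-comparison fix boils down to never using a bare `if arr:` on a numpy array. A minimal sketch (the helper name is hypothetical; the `pyqtSignal(np.ndarray)` line in the comment reflects the commit's signal-signature fix but is not verified against the actual code):

```python
import numpy as np

# In the PyQt worker the commit's signature fix would look roughly like:
#     result_ready = pyqtSignal(np.ndarray)
# so the slot receives the array payload intact instead of a converted type.


def is_valid_result(arr):
    """Explicit None/emptiness checks instead of `if arr:`, which raises
    'The truth value of an array with more than one element is ambiguous'
    for multi-element arrays."""
    return arr is not None and isinstance(arr, np.ndarray) and arr.size > 0
```

The explicit `arr.size > 0` check is what removes the ambiguous-truth-value error from the validation path.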
- Fix ambiguous truth value error in InferencePipeline result handling
- Add stop inference button to deployment dialog with proper UI state management
- Improve error handling for tuple vs dict result types
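The tuple-vs-dict handling can be sketched as a small normalizer; the key names (`frame`, `detections`) and the two-element tuple shape are assumptions for illustration:

```python
def normalize_result(result):
    """The pipeline may hand back either a (frame, detections) tuple or a
    dict; normalize both to a dict so downstream code sees one shape."""
    if isinstance(result, dict):
        return result
    if isinstance(result, tuple) and len(result) == 2:
        frame, detections = result
        return {"frame": frame, "detections": detections}
    raise TypeError(f"unexpected result type: {type(result).__name__}")
```

Branching on `isinstance` up front also avoids truthiness checks on values that might be numpy arrays.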
Device Detection Updates:
- Update device series detection to use product_id mapping (0x100 -> KL520, 0x720 -> KL720)
- Handle JSON dict format from kp.core.scan_devices() properly
- Extract usb_port_id correctly from device descriptors
- Support multiple device descriptor formats (dict, list, object)
- Enhance debug output to show the Product ID for verification
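The detection logic above can be sketched as follows. The product_id values and field names come from the commit text; the exact layout of `kp.core.scan_devices()` descriptors is an assumption:

```python
# product_id -> device series mapping from the commit
PRODUCT_ID_TO_SERIES = {
    0x100: "KL520",
    0x720: "KL720",
}


def extract_device_info(descriptor):
    """Accept dict, list/tuple, or object-style descriptors and pull out
    product_id and usb_port_id uniformly."""
    if isinstance(descriptor, dict):
        product_id = descriptor.get("product_id")
        usb_port_id = descriptor.get("usb_port_id")
    elif isinstance(descriptor, (list, tuple)):
        product_id, usb_port_id = descriptor[0], descriptor[1]
    else:
        product_id = getattr(descriptor, "product_id", None)
        usb_port_id = getattr(descriptor, "usb_port_id", None)
    return {
        "series": PRODUCT_ID_TO_SERIES.get(product_id, "unknown"),
        "product_id": product_id,
        "usb_port_id": usb_port_id,
    }
```

Unknown product IDs fall through to `"unknown"` instead of raising, which keeps a mixed fleet scannable.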
Pipeline Deployment Fixes:
- Remove invalid preprocessor/postprocessor parameters from MultiDongle constructor
- Add max_queue_size parameter support to MultiDongle
- Fix pipeline stage initialization to match MultiDongle constructor
- Add auto_detect parameter support for pipeline stages
- Store stage processors as instance variables for future use
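The constructor changes above might look roughly like the sketch below. The parameter names (`max_queue_size`, `auto_detect`) come from the commit; everything else, including `device_ids` and the placeholder bodies, is hypothetical:

```python
class MultiDongle:
    """Sketch of the corrected constructor: preprocessor/postprocessor
    are no longer accepted; max_queue_size and auto_detect are."""

    def __init__(self, device_ids=None, max_queue_size=32, auto_detect=True):
        self.device_ids = device_ids or []
        self.max_queue_size = max_queue_size
        self.auto_detect = auto_detect


class PipelineStage:
    """Stages keep their processors as instance attributes instead of
    forwarding them to MultiDongle (which would raise TypeError)."""

    def __init__(self, preprocessor=None, postprocessor=None, **dongle_kwargs):
        self.preprocessor = preprocessor    # stored for future use
        self.postprocessor = postprocessor  # stored for future use
        self.dongle = MultiDongle(**dongle_kwargs)
```

Keeping the processors at the stage level is what resolves the `unexpected keyword argument 'preprocessor'` error quoted below.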
Example Updates:
- Update device_detection_example.py to show Product ID in output
- Enhance error handling and format detection
Resolves pipeline deployment error: "MultiDongle.__init__() got an unexpected keyword argument 'preprocessor'"
Now properly handles real device descriptors with the correct product_id-to-series mapping.