Key fixes:
1. FPS Calculation: count only actual inference results, not every frame processed
- Previous: counted every processed frame (~90 FPS, incorrect)
- Now: counts only iterations that receive a real inference result (~9 FPS, correct)
- Return None from _process_data when no inference result is available
- Skip FPS counting for iterations without real results
2. Log Reduction: Significantly reduced verbose logging
- Removed excessive debug prints for preprocessing steps
- Removed "No inference result" spam messages
- Only log actual successful inference results
3. Async Processing: Maintain proper async pattern
- Still use non-blocking get_latest_inference_result(timeout=0.001)
- Still use non-blocking put_input(block=False)
- But only count real inference throughput for FPS
This should now match the standalone code's behavior: ~4 FPS (1 dongle) vs ~9 FPS (2 dongles); the corrected loop is sketched below.
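A minimal sketch of the corrected accounting, assuming the attribute names below (`self.multidongle`, `self._inference_count`, `self._fps_start_time`) — they are illustrative, not the project's actual members; only `_process_data`, `put_input`, and `get_latest_inference_result` come from the commit itself:

```python
import queue
import time

class InferencePipeline:  # sketch: only the relevant method is shown
    def _process_data(self, frame):
        try:
            # Non-blocking submit: drop the frame rather than stall the loop.
            self.multidongle.put_input(frame, block=False)
        except queue.Full:
            pass

        # Non-blocking poll for a finished inference.
        result = self.multidongle.get_latest_inference_result(timeout=0.001)
        if result is None:
            return None  # no real result this iteration: do not count it

        # Only completed inferences contribute to the reported FPS.
        self._inference_count += 1
        elapsed = time.time() - self._fps_start_time
        if elapsed > 0:
            self._fps = self._inference_count / elapsed
        return result
```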
The key issue was in InferencePipeline._process_data(), where a while loop with a 5-second deadline blocked while waiting for inference results. This completely serialized processing and prevented multiple dongles from working in parallel.
Changes:
- Replace blocking while loop with single non-blocking call
- Use timeout=0.001 for get_latest_inference_result (async pattern)
- Use block=False for put_input to prevent queue blocking
- Increase worker queue timeout from 0.1s to 1.0s
- Handle async processing status properly
This matches the pattern from the standalone code, which achieved
4.xx FPS (1 dongle) vs 9.xx FPS (2 dongles) scaling; the before/after change is sketched below.
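For illustration, a before/after sketch of the change, where `multidongle` stands in for the device wrapper and the "before" half is a reconstruction of the blocking loop, not the exact original code:

```python
import time

# Before (reconstructed): poll in a loop for up to 5 seconds, blocking
# the caller and serializing all dongles behind a single result.
deadline = time.time() + 5.0
result = None
while result is None and time.time() < deadline:
    result = multidongle.get_latest_inference_result(timeout=0.1)

# After: a single non-blocking call; None means "still pending", so the
# caller moves on and every dongle keeps receiving work in parallel.
result = multidongle.get_latest_inference_result(timeout=0.001)
```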
- Remove all debug print statements from deployment dialog
- Remove debug output from workflow orchestrator and inference pipeline
- Remove test signal emissions and unused imports
- Code is now clean and production-ready
- Results are successfully flowing from inference to GUI display
- Add debug output in the InferencePipeline result callback to verify it is invoked
- Add debug output in WorkflowOrchestrator.handle_result to trace the callback flow
- This will help identify exactly where the callback chain is breaking (instrumentation sketched below)
- Previous test showed the GUI can receive signals, but the callbacks aren't triggered
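A sketch of the kind of tracing added here; `handle_result` is named in the commit, while `_on_inference_result`, `result_callback`, and `result_signal` are hypothetical names for the hand-off points:

```python
class InferencePipeline:
    def _on_inference_result(self, result):
        # Trace whether the pipeline-level callback fires at all.
        print(f"[InferencePipeline] result callback fired, type={type(result).__name__}")
        if self.result_callback is not None:
            self.result_callback(result)
        else:
            print("[InferencePipeline] no result_callback registered")

class WorkflowOrchestrator:
    def handle_result(self, result):
        # Trace whether the orchestrator ever receives the result.
        print("[WorkflowOrchestrator] handle_result invoked")
        self.result_signal.emit(result)  # forward to the GUI (assumed signal)
```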
- Fix remaining array comparison error in inference result validation
- Update PyQt signal signature for proper numpy array handling (see the signal sketch below)
- Improve DeploymentWorker to keep running after deployment
- Enhance stop button with non-blocking UI updates and better error handling
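On the signal signature: declaring the payload type as `object` lets PyQt pass a numpy array across threads without coercing it. The class and signal names here are hypothetical; only the numpy-array requirement comes from the commit:

```python
import numpy as np
from PyQt5.QtCore import QObject, pyqtSignal

class ResultBridge(QObject):
    # `object` (rather than a concrete container type) keeps PyQt from
    # trying to convert the numpy array, preserving dtype and shape.
    result_ready = pyqtSignal(object)

bridge = ResultBridge()
bridge.result_ready.connect(lambda arr: print(arr.shape))
bridge.result_ready.emit(np.zeros((480, 640, 3), dtype=np.uint8))
```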
- Fix ambiguous truth value error in InferencePipeline result handling
- Add stop inference button to deployment dialog with proper UI state management
- Improve error handling for tuple vs dict result types (both fixes sketched below)
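The "truth value of an array is ambiguous" error comes from using a bare `if result:` on a numpy array; a sketch of the defensive handling, with the tuple and dict payload shapes assumed from the commit text:

```python
import numpy as np

def handle_result(result):
    # A bare `if result:` raises ValueError when result is a numpy array
    # with more than one element, so compare against None explicitly.
    if result is None:
        return None
    if isinstance(result, dict):   # dict payload: pull the output field
        return result.get("output")
    if isinstance(result, tuple):  # tuple payload: (output, metadata)
        output, _meta = result
        return output
    return result                  # assume a raw array or similar

print(handle_result({"output": np.ones(3)}))
print(handle_result((np.zeros(2), {"frame": 7})))
```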
Device Detection Updates:
- Update device series detection to use product_id mapping (0x100 -> KL520, 0x720 -> KL720)
- Properly handle the JSON dict format returned by kp.core.scan_devices()
- Extract usb_port_id correctly from device descriptors
- Support multiple device descriptor formats (dict, list, object)
- Enhance debug output to show the Product ID for verification (detection flow sketched below)
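A sketch of the detection flow under the formats listed above; `kp.core.scan_devices()` is the Kneron PLUS call named in the commit, but the exact descriptor layout is an assumption, so the accessors are deliberately defensive:

```python
import kp

PRODUCT_ID_TO_SERIES = {0x100: "KL520", 0x720: "KL720"}

def _field(desc, name):
    # Descriptors may be dicts or objects; read the field either way.
    return desc.get(name) if isinstance(desc, dict) else getattr(desc, name, None)

def detect_devices():
    scanned = kp.core.scan_devices()
    # The scan may return a JSON-style dict or a plain list of descriptors.
    descriptors = scanned.get("device_descriptor_list", []) if isinstance(scanned, dict) else scanned
    devices = []
    for desc in descriptors:
        product_id = _field(desc, "product_id")
        if product_id is None:
            continue
        series = PRODUCT_ID_TO_SERIES.get(product_id, "unknown")
        usb_port_id = _field(desc, "usb_port_id")
        print(f"Product ID: {product_id:#x} -> {series}, usb_port_id={usb_port_id}")
        devices.append((series, usb_port_id))
    return devices
```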
Pipeline Deployment Fixes:
- Remove invalid preprocessor/postprocessor parameters from MultiDongle constructor
- Add max_queue_size parameter support to MultiDongle
- Fix pipeline stage initialization to match MultiDongle constructor
- Add auto_detect parameter support for pipeline stages
- Store stage processors as instance variables for future use (corrected construction sketched below)
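A sketch of the corrected stage construction; `MultiDongle`, `max_queue_size`, and `auto_detect` are named in the commit, while the import path, stage class, and `model_path` parameter are illustrative:

```python
from my_project.multi_dongle import MultiDongle  # hypothetical import path

class PipelineStage:
    """Sketch of a pipeline stage that no longer forwards processors."""

    def __init__(self, model_path, preprocessor=None, postprocessor=None,
                 max_queue_size=4, auto_detect=True):
        # preprocessor/postprocessor are NOT MultiDongle arguments: keep
        # them on the stage (for future use) instead of passing them down.
        self.preprocessor = preprocessor
        self.postprocessor = postprocessor
        self.dongle = MultiDongle(
            model_path=model_path,          # illustrative parameter name
            max_queue_size=max_queue_size,  # supported per the fix above
            auto_detect=auto_detect,        # supported per the fix above
        )
```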
Example Updates:
- Update device_detection_example.py to show Product ID in output
- Enhanced error handling and format detection
Resolves pipeline deployment error: "MultiDongle.__init__() got an unexpected keyword argument 'preprocessor'"
Now properly handles real device descriptors with the correct product_id-to-series mapping.