Major improvements:
- Add intelligent memory management for both input and output queues
- Implement frame dropping strategy to prevent memory overflow
- Set output queue limit to 50 results with FIFO cleanup
- Add input queue management with real-time frame dropping
- Filter async status placeholders out of callbacks and the display to reduce noise
- Improve system stability and prevent queue-related hangs
- Add comprehensive logging for dropped frames and results
Performance enhancements:
- Maintain real-time processing by prioritizing latest frames
- Prevent memory accumulation that previously caused system freezes
- Ensure consistent queue size reporting and FPS calculations
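A minimal sketch of the drop-oldest strategy described above, assuming a standard queue.Queue; the helper name and the logging call are illustrative, not the project's actual code:

```python
import logging
import queue

logger = logging.getLogger("pipeline")

MAX_OUTPUT_RESULTS = 50  # output queue limit from the list above

def put_with_fifo_drop(q: queue.Queue, item) -> None:
    """Enqueue an item, evicting the oldest entry when the queue is full."""
    while True:
        try:
            q.put_nowait(item)
            return
        except queue.Full:
            try:
                dropped = q.get_nowait()  # FIFO cleanup: discard the oldest result
                logger.debug("Dropped stale result: %r", dropped)
            except queue.Empty:
                pass  # a consumer emptied the queue in between; retry the put

output_queue = queue.Queue(maxsize=MAX_OUTPUT_RESULTS)
put_with_fifo_drop(output_queue, {"frame_id": 1, "label": "ok"})
```

The same pattern applies to the input queue, where dropping the oldest frame keeps processing anchored to the most recent camera data.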
- Add pipeline activity logging every 10 results to track processing
- Add queue size monitoring in InferencePipeline coordinator
- Add camera frame capture logging every 100 frames
- Add MultiDongle send/receive thread logging every 100 operations
- Add error handling for repeated callback failures in camera source
This will help identify where the pipeline stops processing:
- Camera capture stopping
- MultiDongle threads blocking
- Pipeline coordinator hanging
- Queue capacity issues
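A sketch of the every-N-operations heartbeat described above; the class and names are illustrative, with the real counters living in the camera source, the MultiDongle threads, and the pipeline coordinator:

```python
import logging

logger = logging.getLogger("pipeline")

class ActivityCounter:
    """Log a heartbeat every `interval` operations to make silent stalls visible."""

    def __init__(self, name: str, interval: int = 100):
        self.name = name
        self.interval = interval
        self.count = 0

    def tick(self, extra: str = "") -> None:
        self.count += 1
        if self.count % self.interval == 0:
            logger.info("%s alive: %d operations %s", self.name, self.count, extra)

# Hypothetical usage inside the camera capture loop:
# capture_beat = ActivityCounter("camera", interval=100)
# capture_beat.tick(f"input queue={frame_queue.qsize()}")
```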
- Add time-window based FPS calculation using output queue timestamps
- Replace misleading "Theoretical FPS" (based on processing time) with real "Pipeline FPS"
- Track actual inference output generation rate over 10-second sliding window
- Add thread-safe FPS calculation with proper timestamp management
- Display realistic FPS values (4-9 FPS) instead of inflated values (90+ FPS)
Key improvements:
- _record_output_timestamp(): Records when each output is generated
- get_current_fps(): Calculates FPS based on actual throughput over time window
- Thread-safe implementation with fps_lock for concurrent access
- Automatic cleanup of old timestamps outside the time window
- Integration with GUI display to show meaningful FPS metrics
This gives users accurate inference throughput measurements that reflect
real-world performance, which is especially important for multi-dongle setups
where understanding the actual scaling is crucial.
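A self-contained sketch of this tracker using the method names listed above; the actual code lives inside InferencePipeline, so the class wrapper here is illustrative:

```python
import threading
import time
from collections import deque

class OutputFPSTracker:
    """Sliding-window throughput tracker following the design above."""

    def __init__(self, window_seconds: float = 10.0):
        self.window = window_seconds
        self.timestamps = deque()
        self.fps_lock = threading.Lock()  # guards concurrent access

    def _record_output_timestamp(self) -> None:
        """Call once for each inference output the pipeline generates."""
        with self.fps_lock:
            self.timestamps.append(time.monotonic())

    def get_current_fps(self) -> float:
        """Outputs per second over the trailing time window."""
        now = time.monotonic()
        with self.fps_lock:
            # Automatic cleanup of timestamps that fell outside the window
            while self.timestamps and now - self.timestamps[0] > self.window:
                self.timestamps.popleft()
            return len(self.timestamps) / self.window
```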
- Comment out print() statements in InferencePipeline that duplicate GUI callback output
- Prevent each inference result from appearing multiple times in the terminal
- Keep the logging system clean while preserving the GUI's formatted display
- Each result previously showed up 2-3 times in the terminal because:
  1. InferencePipeline print() statements were captured by StdoutCapture
  2. The same results were formatted and sent via the terminal_output callback
Fix NameError where 'processed_result' was referenced but never defined.
The code now uses 'inference_result', which contains the actual inference output
from MultiDongle.get_latest_inference_result().
Key fixes:
1. Remove the 'block' parameter from the put_input() call; the standalone code's API does not support it
2. Remove the 'timeout' parameter from the get_latest_inference_result() call, for the same reason
3. Improve _has_inference_result() logic to properly detect real inference results
- Don't count "Processing" or "async" status as valid results
- Only count actual tuple (prob, result_str) or meaningful dict results
- Match standalone code behavior for FPS calculation
This should resolve the "unexpected keyword argument" errors and
provide accurate FPS counting like the standalone baseline.
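A sketch of the improved _has_inference_result() check, assuming the result shapes named above: (prob, result_str) tuples, dicts, and "Processing"/"async" status markers:

```python
def _has_inference_result(result) -> bool:
    """Count only real inference outputs; status markers don't count toward FPS."""
    if result is None:
        return False
    if isinstance(result, str):
        return False  # "Processing" / "async" status strings are not results
    if isinstance(result, tuple) and len(result) == 2:
        return True   # (prob, result_str), as in the standalone code
    if isinstance(result, dict):
        # Ignore empty dicts and pure status payloads
        return bool(result) and result.get("status") not in ("Processing", "async")
    return False
```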
Key changes:
1. FPS Calculation: Only count when stage receives actual inference results
- Add _has_inference_result() method to check for valid results
- Only increment processed_count when real inference result is available
- This measures "inferences per second", not "frames per second"
2. Reduced Log Spam: Remove excessive preprocessing debug logs
- Remove shape/dtype logs for every frame
- Only log successful inference results
- Keep essential error logs
3. Maintain Async Pattern: Keep non-blocking processing
- Still use timeout=0.001 for get_latest_inference_result
- Still use block=False for put_input
- No blocking while loops
Expected result: ~4 FPS (1 dongle) vs ~9 FPS (2 dongles)
matching standalone code behavior.
Key fixes:
1. FPS Calculation: Only count actual inference results, not frame processing
- Previous: counted every frame processed (~90 FPS, incorrect)
- Now: only counts when actual inference results are received (~9 FPS, correct)
- Return None from _process_data when no inference result is available
- Skip FPS counting for iterations without real results
2. Log Reduction: Significantly reduced verbose logging
- Removed excessive debug prints for preprocessing steps
- Removed "No inference result" spam messages
- Only log actual successful inference results
3. Async Processing: Maintain proper async pattern
- Still use non-blocking get_latest_inference_result(timeout=0.001)
- Still use non-blocking put_input(block=False)
- But only count real inference throughput for FPS
This should now match standalone code behavior: ~4 FPS (1 dongle) vs ~9 FPS (2 dongles)
The key issue was in InferencePipeline._process_data(), where a 5-second
while loop blocked while waiting for inference results. This completely
serialized processing and prevented multiple dongles from working in parallel.
Changes:
- Replace blocking while loop with single non-blocking call
- Use timeout=0.001 for get_latest_inference_result (async pattern)
- Use block=False for put_input to prevent queue blocking
- Increase worker queue timeout from 0.1s to 1.0s
- Handle async processing status properly
This matches the pattern from the standalone code that achieved
4.xx FPS (1 dongle) vs 9.xx FPS (2 dongles) scaling.
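A sketch of the resulting non-blocking _process_data() body. The block=False and timeout=0.001 keywords come from this commit (a commit higher up removes them again because the standalone API does not accept them), and the assumption that put_input raises queue.Full is illustrative:

```python
import queue

class InferenceStage:
    """Stage sketch with the 5-second blocking loop removed."""

    def __init__(self, multidongle):
        self.multidongle = multidongle

    def _process_data(self, frame):
        try:
            # Hand the frame to the dongles without blocking the stage thread
            self.multidongle.put_input(frame, block=False)
        except queue.Full:
            return None  # drop this frame rather than stall the pipeline

        # One short poll replaces the old blocking while loop
        result = self.multidongle.get_latest_inference_result(timeout=0.001)
        if result is None or isinstance(result, str):
            return None  # nothing ready yet, or an async status marker
        return result
```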
- Remove all debug print statements from deployment dialog
- Remove debug output from workflow orchestrator and inference pipeline
- Remove test signal emissions and unused imports
- Code is now clean and production-ready
- Results are successfully flowing from inference to GUI display
- Add debug output in InferencePipeline result callback to see if it's called
- Add debug output in WorkflowOrchestrator handle_result to trace callback flow
- This will help identify exactly where the callback chain is breaking
- A previous test showed the GUI can receive signals, but the callbacks are never triggered
- Fix remaining array comparison error in inference result validation
- Update PyQt signal signature for proper numpy array handling
- Improve DeploymentWorker to keep running after deployment
- Enhance stop button with non-blocking UI updates and better error handling
- Fix ambiguous truth value error in InferencePipeline result handling
- Add stop inference button to deployment dialog with proper UI state management
- Improve error handling for tuple vs dict result types
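A sketch of both fixes, assuming PyQt5; the ResultBridge and handle_result names are illustrative. The signal payload is declared as object so numpy arrays cross the signal unchanged, and arrays are never truth-tested directly:

```python
import numpy as np
from PyQt5.QtCore import QObject, pyqtSignal

class ResultBridge(QObject):
    # `object` lets numpy arrays, tuples, and dicts all pass through unchanged
    result_ready = pyqtSignal(object)

def handle_result(result):
    # `if result:` raises "truth value of an array is ambiguous" for ndarrays,
    # so test identity and size explicitly instead.
    if result is None:
        return
    if isinstance(result, np.ndarray) and result.size == 0:
        return
    print("valid result:", result)
```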
Device Detection Updates:
- Update device series detection to use product_id mapping (0x100 -> KL520, 0x720 -> KL720)
- Handle JSON dict format from kp.core.scan_devices() properly
- Extract usb_port_id correctly from device descriptors
- Support multiple device descriptor formats (dict, list, object)
- Enhanced debug output shows Product ID for verification
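A sketch of the detection logic these bullets describe; the descriptor field names (product_id, usb_port_id) come from the list above, and the dict/object fallback is illustrative:

```python
PRODUCT_ID_TO_SERIES = {0x100: "KL520", 0x720: "KL720"}  # mapping from above

def device_series(descriptor) -> str:
    """Classify one descriptor from kp.core.scan_devices(), dict or object style."""
    if isinstance(descriptor, dict):  # JSON dict format
        product_id = descriptor.get("product_id")
        usb_port_id = descriptor.get("usb_port_id")
    else:  # object-style descriptor
        product_id = getattr(descriptor, "product_id", None)
        usb_port_id = getattr(descriptor, "usb_port_id", None)
    series = PRODUCT_ID_TO_SERIES.get(product_id, "unknown")
    print(f"Product ID: {product_id} -> {series} (usb_port_id={usb_port_id})")
    return series
```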
Pipeline Deployment Fixes:
- Remove invalid preprocessor/postprocessor parameters from MultiDongle constructor
- Add max_queue_size parameter support to MultiDongle
- Fix pipeline stage initialization to match MultiDongle constructor
- Add auto_detect parameter support for pipeline stages
- Store stage processors as instance variables for future use
Example Updates:
- Update device_detection_example.py to show Product ID in output
- Enhanced error handling and format detection
Resolves pipeline deployment error: "MultiDongle.__init__() got an unexpected keyword argument 'preprocessor'"
Now properly handles real device descriptors with the correct product_id-to-series mapping.
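A sketch of how a pipeline stage wires up MultiDongle after these fixes; the constructor shape is assumed from this commit, and the MultiDongle stub shows only that assumed signature:

```python
class MultiDongle:
    """Stub: only the constructor signature assumed by this commit."""

    def __init__(self, model_path, max_queue_size=50, auto_detect=True):
        self.model_path = model_path
        self.max_queue_size = max_queue_size  # new parameter
        self.auto_detect = auto_detect        # new parameter

class MultiDongleStage:
    """Stage wiring after the fix: processors stay on the stage."""

    def __init__(self, model_path, preprocessor=None, postprocessor=None,
                 max_queue_size=50, auto_detect=True):
        # Stored as instance variables for future use, not passed to MultiDongle
        self.preprocessor = preprocessor
        self.postprocessor = postprocessor
        self.multidongle = MultiDongle(model_path,
                                       max_queue_size=max_queue_size,
                                       auto_detect=auto_detect)
```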