Mason 67a1031009 fix: Remove blocking while loop that prevented multi-dongle scaling
The key issue was in InferencePipeline._process_data(), where a 5-second
while loop blocked while waiting for inference results. This fully
serialized processing and prevented multiple dongles from running in parallel.

Changes:
- Replace blocking while loop with single non-blocking call
- Use timeout=0.001 for get_latest_inference_result (async pattern)
- Use block=False for put_input to prevent queue blocking
- Increase worker queue timeout from 0.1s to 1.0s
- Handle async processing status properly

This matches the pattern from the standalone code that achieved
4.xx FPS (1 dongle) vs 9.xx FPS (2 dongles) scaling.
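A minimal sketch of the non-blocking pattern described above. The class shape and the queue-backed device stub are hypothetical; only the method names (put_input, get_latest_inference_result), the timeout=0.001 poll, and block=False come from this commit:

```python
import queue


class InferencePipeline:
    """Sketch of the async submit/poll pattern; not the actual implementation."""

    def __init__(self):
        # Hypothetical stand-ins for the dongle's input/output queues.
        self._input_q = queue.Queue(maxsize=4)
        self._result_q = queue.Queue()

    def put_input(self, frame, block=False):
        # block=False: drop the frame instead of stalling the pipeline
        # when the device queue is full.
        try:
            self._input_q.put(frame, block=block)
            return True
        except queue.Full:
            return False

    def get_latest_inference_result(self, timeout=0.001):
        # Near-zero timeout: poll once and return None if nothing is ready,
        # rather than waiting for the device.
        try:
            return self._result_q.get(timeout=timeout)
        except queue.Empty:
            return None

    def _process_data(self, frame):
        # Old behavior: a `while result is None:` loop here blocked for up
        # to 5 s per frame, serializing all dongles.
        # New behavior: one non-blocking submit plus one short poll, so the
        # caller keeps feeding frames and results arrive asynchronously.
        self.put_input(frame, block=False)
        return self.get_latest_inference_result(timeout=0.001)  # may be None
```

With this shape, a result of None simply means "still processing"; the caller moves on to the next frame instead of stalling, which is what lets a second dongle contribute throughput.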

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-24 10:52:58 +08:00