Key fixes:
1. FPS Calculation: Only count actual inference results, not frame processing
- Previous: counted every frame processed (~90 FPS, incorrect)
- Now: only counts when actual inference results are received (~9 FPS, correct)
- Return None from _process_data when no inference result is available
- Skip FPS counting for iterations without real results
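The counting rule above can be sketched as follows; the class and method names here are illustrative, not taken from the actual code:

```python
import time

class InferenceFPSCounter:
    """Counts only iterations that produced a real inference result,
    so the reported FPS reflects inference throughput, not loop speed."""

    def __init__(self):
        self.result_count = 0
        self._start = time.monotonic()

    def tick(self, result):
        if result is None:
            return  # no inference result this iteration: skip counting
        self.result_count += 1

    def fps(self):
        elapsed = time.monotonic() - self._start
        return self.result_count / elapsed if elapsed > 0 else 0.0
```

Counting every loop iteration instead (including the ones where `_process_data` returned None) is what inflated the old number to ~90 FPS.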
2. Log Reduction: Significantly reduced verbose logging
- Removed excessive debug prints for preprocessing steps
- Removed "No inference result" spam messages
- Only log actual successful inference results
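A minimal sketch of that logging rule (the helper name is hypothetical): treat a missing result as a normal condition of the polling loop and stay silent, emitting a log line only for real results.

```python
def format_result_log(result):
    """Return a log message for a real inference result, or None to
    skip logging entirely (avoids 'No inference result' spam)."""
    if result is None:
        return None  # normal polling miss: nothing to log
    return f"inference result: {result}"
```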
3. Async Processing: Maintain proper async pattern
- Still use non-blocking get_latest_inference_result(timeout=0.001)
- Still use non-blocking put_input(block=False)
- But only count real inference throughput for FPS
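The async pattern in point 3 can be sketched like this. `MockPipeline` is a stand-in written for illustration; only the two method names (`put_input`, `get_latest_inference_result`) and their non-blocking arguments come from this commit:

```python
import queue

class MockPipeline:
    """Illustrative stand-in for the real inference pipeline."""

    def __init__(self, maxsize=2):
        self._in = queue.Queue(maxsize=maxsize)
        self._out = queue.Queue()

    def put_input(self, frame, block=False):
        # Raises queue.Full when saturated instead of stalling the caller.
        self._in.put(frame, block=block)

    def get_latest_inference_result(self, timeout=0.001):
        try:
            return self._out.get(timeout=timeout)
        except queue.Empty:
            return None  # no result yet; caller skips FPS counting

def process_frames(pipeline, frames):
    """Feed frames without blocking; count only real inference results."""
    inference_count = 0
    for frame in frames:
        try:
            pipeline.put_input(frame, block=False)
        except queue.Full:
            pass  # drop the frame rather than stall the capture loop
        result = pipeline.get_latest_inference_result(timeout=0.001)
        if result is not None:
            inference_count += 1  # only real results feed the FPS counter
    return inference_count
```

The capture loop still runs as fast as the camera delivers frames; only the results counter moves at inference speed, which is why the reported FPS drops to the true throughput.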
This should now match the standalone code's behavior: ~4 FPS (1 dongle) vs ~9 FPS (2 dongles)
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>