Key changes:

1. FPS calculation: only count when the stage receives an actual inference result.
   - Add a `_has_inference_result()` method to check for valid results.
   - Only increment `processed_count` when a real inference result is available.
   - This measures "inferences per second", not "frames per second" (a sketch of the counting logic follows below).
2. Reduced log spam: remove excessive preprocessing debug logs.
   - Remove shape/dtype logs for every frame.
   - Only log successful inference results.
   - Keep essential error logs.
3. Maintain the async pattern: keep processing non-blocking (see the second sketch below).
   - Still use `timeout=0.001` for `get_latest_inference_result`.
   - Still use `block=False` for `put_input`.
   - No blocking `while` loops.

Expected result: ~4 FPS (1 dongle) vs ~9 FPS (2 dongles), matching the standalone code's behavior.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
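The counting change in item 1 can be illustrated with a minimal sketch. Only `_has_inference_result()` and `processed_count` come from the list above; the class, its validity check, and the timing window are hypothetical illustrations.

```python
import time


class InferenceFpsCounter:
    """Counts inferences per second rather than frames per second.

    Minimal sketch: only _has_inference_result() and processed_count come
    from the change description; everything else here is hypothetical."""

    def __init__(self):
        self.processed_count = 0
        self._start = time.monotonic()

    def _has_inference_result(self, result):
        # The exact validity check depends on the engine's result format;
        # here any non-None, non-empty payload is treated as a real result.
        return result is not None and result != {}

    def update(self, result):
        # Increment only when a real inference result arrived this tick.
        if self._has_inference_result(result):
            self.processed_count += 1

    def fps(self):
        elapsed = time.monotonic() - self._start
        return self.processed_count / elapsed if elapsed > 0 else 0.0
```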
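The non-blocking feed/poll pattern in item 3 might look roughly like this per-frame tick. Only the call names, `block=False`, and `timeout=0.001` come from the list above; the engine object, the `queue.Full` handling, and the assumption that the poll returns `None` on timeout are illustrative guesses.

```python
import queue


def tick(engine, counter, frame):
    """One non-blocking pipeline tick (sketch under stated assumptions)."""
    try:
        # Feed the newest frame without waiting for queue space.
        engine.put_input(frame, block=False)
    except queue.Full:
        pass  # drop the frame rather than stall the loop

    # Poll briefly for a finished result; assumed to return None if none is ready.
    result = engine.get_latest_inference_result(timeout=0.001)

    # Count only real inference results (see the counter sketch above).
    counter.update(result)
    return result
```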