67 Commits

Author SHA1 Message Date
06d7c72c93 update release note 2025-07-31 01:24:37 +08:00
a1b6af0bde feat: Optimize properties panel layout to prevent horizontal scrolling
- Add smart path truncation for long file paths (preserves filename and parent folder)
- Set maximum width constraints on all UI components (QPushButton, QComboBox, QSpinBox, QDoubleSpinBox, QLineEdit)
- Add tooltips showing full paths for truncated file path buttons
- Disable horizontal scrollbar and optimize right panel width (320-380px)
- Improve styling for all property widgets with consistent theme
- Add better placeholder text for input fields

Key improvements:
- File paths like "C:/Very/Long/Path/.../filename.nef" → "...Long/Path/filename.nef"
- All widgets limited to 250px max width to prevent panel expansion
- Enhanced hover and focus states for better UX
- Properties panel now fits within fixed width without horizontal scroll

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-31 01:23:40 +08:00
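The "smart path truncation" described in the commit above could be sketched as a small helper. This is an illustrative stand-in, not the project's actual code: the function name and the character budget (standing in for the 250px width limit) are assumptions.

```python
def truncate_path(path: str, max_chars: int = 40) -> str:
    """Shorten a long file path while keeping the filename and its parent folder."""
    if len(path) <= max_chars:
        return path
    parts = path.replace("\\", "/").split("/")
    if len(parts) <= 2:
        return path  # nothing sensible to elide
    # Keep the last two components (parent folder + filename), elide the rest.
    return ".../" + "/".join(parts[-2:])
```

Per the commit, the full untruncated path would still be available via a tooltip on the button.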
77bd8324ab feat: Add Upload Firmware checkbox to Model Node properties panel
- Add upload_fw property with enhanced UI checkbox styling
- Connect checkbox to inference pipeline process
- Enable/disable firmware upload based on user selection
- Add visual feedback and logging for firmware upload status

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-31 01:13:49 +08:00
b8e95f56cb docs: Update release notes v0.0.2 with clear user and QA guidelines
- Simplified language for better readability
- Added specific performance expectations (4-5 FPS)
- Clear test scenarios for QA validation
- Direct problem-to-solution mapping
- Removed technical jargon for broader audience

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-30 23:04:38 +08:00
7d61f1c856 refactor: Clean up console output and improve UI text
Minor improvements:
- Remove duplicate logging from inference results to reduce console noise
- Update deployment dialog UI text to remove emoji for cleaner display
- Clean up commented debug statements across multiple files
- Improve user experience with more professional terminal output
- Maintain functionality while reducing visual clutter

This commit focuses on polish and user experience improvements
without changing core functionality.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-30 22:55:14 +08:00
0a946c5aaa feat: Implement memory management and queue optimization
Major improvements:
- Add intelligent memory management for both input and output queues
- Implement frame dropping strategy to prevent memory overflow
- Set output queue limit to 50 results with FIFO cleanup
- Add input queue management with real-time frame dropping
- Filter async results from callbacks and display to reduce noise
- Improve system stability and prevent queue-related hangs
- Add comprehensive logging for dropped frames and results

Performance enhancements:
- Maintain real-time processing by prioritizing latest frames
- Prevent memory accumulation that previously caused system freezes
- Ensure consistent queue size reporting and FPS calculations

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-30 22:46:08 +08:00
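The drop-oldest strategy described above (output queue capped at 50 results with FIFO cleanup, plus logging of dropped entries) can be sketched with the standard library. The class and method names here are illustrative, not taken from the repository:

```python
from collections import deque
from threading import Lock

class BoundedResultQueue:
    """Hold at most `maxlen` results; when full, the oldest entry is evicted (FIFO cleanup)."""

    def __init__(self, maxlen: int = 50):
        self._items = deque(maxlen=maxlen)  # deque silently drops the oldest item when full
        self._lock = Lock()
        self.dropped = 0  # evictions are counted so they can be logged

    def put(self, item) -> None:
        with self._lock:
            if len(self._items) == self._items.maxlen:
                self.dropped += 1
            self._items.append(item)

    def get_latest(self):
        with self._lock:
            return self._items[-1] if self._items else None

q = BoundedResultQueue(maxlen=3)
for i in range(5):
    q.put(i)
```

Keeping only the newest results is what lets the pipeline prioritize the latest frames instead of accumulating a backlog.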
c9f294bb4c fix: Improve FPS calculation and filter async results
Key improvements:
- Switch FPS calculation to a cumulative method to avoid the unstable, inflated FPS readings seen at startup
- Filter out results with "async" or "processing" status; they are neither displayed nor counted in statistics
- Count only genuine inference results toward FPS and the processed total
- Add a _has_valid_inference_result method to validate results
- Improve MultiDongle's stop method so devices are disconnected correctly
- Remove unneeded files and update the test configuration

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-30 19:45:34 +08:00
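A validity check along the lines of the `_has_valid_inference_result` method mentioned above might look like this. The result shapes (a `(probability, result_str)` tuple, or a dict with a `status` field) are inferred from later commits in this log, so treat them as assumptions:

```python
def has_valid_inference_result(result) -> bool:
    """Return True only for genuine inference outputs, never for placeholder statuses."""
    if result is None:
        return False
    # Placeholder statuses such as "async" or "Processing" must not count toward FPS.
    if isinstance(result, str) and result.lower() in ("async", "processing"):
        return False
    if isinstance(result, dict):
        if result.get("status") in ("async", "processing"):
            return False
        return bool(result)
    # The standalone baseline returns (probability, result_str) tuples.
    if isinstance(result, tuple):
        return len(result) == 2
    return False
```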
a099c56bb5 docs: Update documentation to match current visual pipeline designer architecture
- Rebrand README from InferencePipeline to Cluster4NPU UI Visual Pipeline Designer
- Focus documentation on PyQt5-based GUI and drag-and-drop workflow
- Update PROJECT_SUMMARY with current capabilities and focused development priorities
- Streamline DEVELOPMENT_ROADMAP with 4-phase implementation plan
- Remove redundant Chinese technical summary files (STAGE_IMPROVEMENTS_SUMMARY.md, UI_FIXES_SUMMARY.md, STATUS_BAR_FIXES_SUMMARY.md)
- Align all documentation with actual three-panel UI architecture and NodeGraphQt integration

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-30 16:32:28 +08:00
cde1aac908 debug: Add comprehensive logging to diagnose pipeline hanging issue
- Add pipeline activity logging every 10 results to track processing
- Add queue size monitoring in InferencePipeline coordinator
- Add camera frame capture logging every 100 frames
- Add MultiDongle send/receive thread logging every 100 operations
- Add error handling for repeated callback failures in camera source

This will help identify where the pipeline stops processing:
- Camera capture stopping
- MultiDongle threads blocking
- Pipeline coordinator hanging
- Queue capacity issues

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-24 19:49:00 +08:00
4b8fb7fead debug: Remove emojis and add debug info for FPS calculation
- Remove all emojis from terminal output formatting for cleaner display
- Add debug print statement to track pipeline.get_current_fps() values
- Change FPS display to "Pipeline FPS (Output Queue)" for clarity
- Simplify output formatting by removing emoji decorations
- This will help identify why FPS calculation isn't working as expected

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-24 19:43:43 +08:00
7a71e77aae fix: Remove problematic qRegisterMetaType import that causes deployment failure
- Remove qRegisterMetaType import that is not available in all PyQt5 versions
- Remove QTextCursor import and registration that was causing import error
- Simplify deployment dialog initialization to avoid PyQt5 compatibility issues
- The QTextCursor warning was not critical and the registration was unnecessary

This fixes the "cannot import name 'qRegisterMetaType' from 'PyQt5.QtCore'" error
that prevented deployment dialog from opening.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-24 19:36:20 +08:00
8aafec6bfe Merge branch 'main' of github.com:HuangMason320/cluster4npu 2025-07-24 19:31:55 +08:00
ab802e60cf fix: Remove dongle USB timeout setting to prevent camera connection crashes
- Remove kp.core.set_timeout() call that causes crashes when camera is connected
- Add explanatory message indicating timeout is skipped for stability
- This prevents the system crash that occurs during camera initialization
- Trade-off: Removes USB timeout but ensures stable camera operation

The timeout setting was conflicting with camera connection process,
causing the entire system to crash during device initialization.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-24 19:29:26 +08:00
260668ceb8 fix: Register QTextCursor meta type to eliminate Qt warning
- Add qRegisterMetaType(QTextCursor) to prevent Qt threading warning
- Import QTextCursor and qRegisterMetaType from PyQt5
- Resolves "Cannot queue arguments of type 'QTextCursor'" warning
- Ensures thread-safe GUI updates for terminal display

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-24 19:26:37 +08:00
24d5726ee2 fix: Restore USB timeout setting and improve terminal display reliability
- Re-enable kp.core.set_timeout() which is required for proper device communication
- Fix GUI terminal truncation issue by using append() instead of setPlainText()
- Remove aggressive line limiting that was causing log display to stop midway
- Implement gentler memory management (trim only after 1000+ lines)
- This should resolve pipeline timeout issues and complete log display

The previous USB timeout disable was causing stage timeouts without inference results.
The terminal display issue was due to frequent text replacement causing display corruption.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-24 19:25:02 +08:00
233ddd4a01 Merge branch 'main' of github.com:HuangMason320/cluster4npu 2025-07-24 19:18:38 +08:00
f41d9ae5c8 feat: Implement output queue based FPS calculation for accurate throughput measurement
- Add time-window based FPS calculation using output queue timestamps
- Replace misleading "Theoretical FPS" (based on processing time) with real "Pipeline FPS"
- Track actual inference output generation rate over 10-second sliding window
- Add thread-safe FPS calculation with proper timestamp management
- Display realistic FPS values (4-9 FPS) instead of inflated values (90+ FPS)

Key improvements:
- _record_output_timestamp(): Records when each output is generated
- get_current_fps(): Calculates FPS based on actual throughput over time window
- Thread-safe implementation with fps_lock for concurrent access
- Automatic cleanup of old timestamps outside the time window
- Integration with GUI display to show meaningful FPS metrics

This provides users with accurate inference throughput measurements that reflect
real-world performance, especially important for multi-dongle setups where
understanding actual scaling is crucial.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-24 19:17:18 +08:00
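The time-window FPS calculation described above could be sketched as follows. The 10-second window and the lock mirror the commit (`fps_lock`, `_record_output_timestamp`, `get_current_fps`); this simplified stand-in takes timestamps as arguments for testability, where the real code would call `time.monotonic()`:

```python
import threading
from collections import deque

class OutputFPSMeter:
    """Report FPS as outputs observed inside a sliding time window, divided by the window length."""

    def __init__(self, window_s: float = 10.0):
        self.window_s = window_s
        self._timestamps = deque()
        self._lock = threading.Lock()  # cf. fps_lock for concurrent access

    def record_output(self, now: float) -> None:
        """Record when an output is generated (cf. _record_output_timestamp)."""
        with self._lock:
            self._timestamps.append(now)
            self._trim(now)

    def current_fps(self, now: float) -> float:
        with self._lock:
            self._trim(now)
            return len(self._timestamps) / self.window_s

    def _trim(self, now: float) -> None:
        # Automatic cleanup of timestamps that fell outside the window.
        while self._timestamps and now - self._timestamps[0] > self.window_s:
            self._timestamps.popleft()
```

Because it counts actual outputs over wall-clock time, this cannot report the inflated per-frame processing rates the old "Theoretical FPS" produced.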
2ba0f4ae27 fix: Remove duplicate inference result logging to prevent terminal spam
- Comment out print() statements in InferencePipeline that duplicate GUI callback output
- Prevents each inference result from appearing multiple times in terminal
- Keeps logging system clean while maintaining GUI formatted display
- This was causing terminal output to show each result 2-3 times due to:
  1. InferencePipeline print() statements captured by StdoutCapture
  2. Same results formatted and sent via terminal_output callback

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-24 19:10:37 +08:00
f9ba162c81 Merge branch 'main' of github.com:HuangMason320/cluster4npu 2025-07-24 12:53:13 +08:00
83906c87e3 fix: Implement stdout/stderr capture for complete logging in deployment UI
- Add StdoutCapture context manager to capture all print() statements
- Connect captured output to GUI terminal display via stdout_captured signal
- Fix logging issue where pipeline initialization and operation logs were not shown in app
- Prevent infinite recursion with _emitting flag in TeeWriter
- Ensure both console and GUI receive all log messages during deployment
- Comment out USB timeout setting that was causing device timeout issues

This resolves the issue where logs would stop showing partially in the app,
ensuring complete visibility of MultiDongle and InferencePipeline operations.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-24 12:52:35 +08:00
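The capture mechanism described above can be sketched like this. The PyQt5 `stdout_captured` signal is replaced by a plain callback here, and the class names are illustrative; the `_emitting` flag mirrors the commit's recursion guard:

```python
import sys
from contextlib import contextmanager

class TeeWriter:
    """Forward writes to the real stream and to a callback, guarding against recursion."""

    def __init__(self, stream, callback):
        self._stream = stream
        self._callback = callback
        self._emitting = False  # prevents infinite recursion if the callback itself prints

    def write(self, text):
        self._stream.write(text)
        if not self._emitting and text.strip():
            self._emitting = True
            try:
                self._callback(text)  # in the app this would emit the stdout_captured signal
            finally:
                self._emitting = False

    def flush(self):
        self._stream.flush()

@contextmanager
def capture_stdout(callback):
    """Context manager: every print() reaches both the console and the callback."""
    original = sys.stdout
    sys.stdout = TeeWriter(original, callback)
    try:
        yield
    finally:
        sys.stdout = original
```

Inside the `with capture_stdout(...)` block, `print()` output flows to both destinations, which is the dual console-and-GUI behavior the commit describes.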
b316c7c68a ignore timeout to prevent error 2025-07-24 12:13:57 +08:00
23d8c4ff61 fix: Replace undefined 'processed_result' with 'inference_result'
Fixed NameError where 'processed_result' was referenced but not defined.
Should use 'inference_result' which contains the actual inference output
from MultiDongle.get_latest_inference_result().

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-24 12:11:44 +08:00
18f1426cbc Merge branch 'main' of github.com:HuangMason320/cluster4npu 2025-07-24 11:56:40 +08:00
bab06b9fa4 Merge branch 'main' of github.com:HuangMason320/cluster4npu 2025-07-24 11:56:32 +08:00
f902659017 fix: Remove incompatible parameters to match standalone MultiDongle API
Key fixes:
1. Remove 'block' parameter from put_input() call - not supported in standalone code
2. Remove 'timeout' parameter from get_latest_inference_result() call
3. Improve _has_inference_result() logic to properly detect real inference results
   - Don't count "Processing" or "async" status as valid results
   - Only count actual tuple (prob, result_str) or meaningful dict results
   - Match standalone code behavior for FPS calculation

This should resolve the "unexpected keyword argument" errors and
provide accurate FPS counting like the standalone baseline.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-24 11:56:01 +08:00
80275bc774 fix: Correct FPS calculation to count actual inference results only
Key changes:
1. FPS Calculation: Only count when stage receives actual inference results
   - Add _has_inference_result() method to check for valid results
   - Only increment processed_count when real inference result is available
   - This measures "inferences per second" not "frames per second"

2. Reduced Log Spam: Remove excessive preprocessing debug logs
   - Remove shape/dtype logs for every frame
   - Only log successful inference results
   - Keep essential error logs

3. Maintain Async Pattern: Keep non-blocking processing
   - Still use timeout=0.001 for get_latest_inference_result
   - Still use block=False for put_input
   - No blocking while loops

Expected result: ~4 FPS (1 dongle) vs ~9 FPS (2 dongles)
matching standalone code behavior.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-24 11:30:13 +08:00
273ae71846 fix: Correct FPS calculation and reduce log spam
Key fixes:
1. FPS Calculation: Only count actual inference results, not frame processing
   - Previous: counted every frame processed (~90 FPS, incorrect)
   - Now: only counts when actual inference results are received (~9 FPS, correct)
   - Return None from _process_data when no inference result available
   - Skip FPS counting for iterations without real results

2. Log Reduction: Significantly reduced verbose logging
   - Removed excessive debug prints for preprocessing steps
   - Removed "No inference result" spam messages
   - Only log actual successful inference results

3. Async Processing: Maintain proper async pattern
   - Still use non-blocking get_latest_inference_result(timeout=0.001)
   - Still use non-blocking put_input(block=False)
   - But only count real inference throughput for FPS

This should now match standalone code behavior: ~4 FPS (1 dongle) vs ~9 FPS (2 dongles)

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-24 11:12:42 +08:00
67a1031009 fix: Remove blocking while loop that prevented multi-dongle scaling
The key issue was in InferencePipeline._process_data() where a 5-second
while loop was blocking waiting for inference results. This completely
serialized processing and prevented multiple dongles from working in parallel.

Changes:
- Replace blocking while loop with single non-blocking call
- Use timeout=0.001 for get_latest_inference_result (async pattern)
- Use block=False for put_input to prevent queue blocking
- Increase worker queue timeout from 0.1s to 1.0s
- Handle async processing status properly

This matches the pattern from the standalone code that achieved
4.xx FPS (1 dongle) vs 9.xx FPS (2 dongles) scaling.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-24 10:52:58 +08:00
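The change from a blocking wait to a single non-blocking poll can be illustrated with a plain `queue.Queue` standing in for the dongle result source (the real calls, per the commit, are `get_latest_inference_result` and `put_input`; the function names below are illustrative):

```python
import queue
import time

def process_blocking(q, deadline_s=5.0):
    """Old pattern: spin for up to `deadline_s` waiting for a result, serializing the pipeline."""
    end = time.monotonic() + deadline_s
    while time.monotonic() < end:
        try:
            return q.get(timeout=0.1)
        except queue.Empty:
            continue
    return None

def process_nonblocking(q):
    """New pattern: one short poll; report 'async' and let a later iteration pick up the result."""
    try:
        return q.get(timeout=0.001)
    except queue.Empty:
        return {"status": "async"}
```

With the blocking variant, every in-flight frame holds the coordinator for up to five seconds; the non-blocking variant returns immediately, so frames can be fed to multiple dongles in parallel.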
bc92761a83 fix: Optimize multi-dongle inference for proper parallel processing
- Enable USB timeout (5000ms) for stable communication
- Fix send thread timeout from 0.01s to 1.0s for better blocking
- Update WebcamInferenceRunner to use async pattern (non-blocking)
- Add non-blocking put_input option to prevent frame drops
- Improve thread stopping mechanism with better cleanup

These changes follow Kneron official example pattern and should
enable proper parallel processing across multiple dongles.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-24 10:39:20 +08:00
cb9dff10a9 fix: Correct device scanning to access device_descriptor_list properly
Fixed DeviceDescriptorList object attribute error by properly accessing
the device_descriptor_list attribute instead of treating the result as
a direct list of devices.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-24 10:13:17 +08:00
f45c56d529 update debug_deployment 2025-07-24 10:05:39 +08:00
36c710416e feat: Add .gitignore and ignore dist.zip 2025-07-24 10:02:15 +08:00
183b5659b7 feat: Integrate dongle model detection and refactor scan_devices
This commit integrates the dongle model detection logic into .
It refactors the  method to:
- Handle  in list or object format.
- Extract  and  for each device.
- Use  to identify dongle models.
- Return a more detailed device information structure.

The previously deleted files were moved to the  directory.
2025-07-24 10:01:56 +08:00
0e8d75c85c cleanup: Remove debug output after successful fix verification
- Remove all debug print statements from deployment dialog
- Remove debug output from workflow orchestrator and inference pipeline
- Remove test signal emissions and unused imports
- Code is now clean and production-ready
- Results are successfully flowing from inference to GUI display
2025-07-23 22:50:34 +08:00
18ec31738a fix: Ensure result callback is always set on pipeline regardless of result handler
- Remove dependency on result_handler for setting pipeline result callback
- Always call result_callback when handle_result is triggered
- This fixes the issue where GUI callbacks weren't being called because
  output type 'display' wasn't supported, causing result_handler to be None
- Add more debug output to trace callback flow
2025-07-23 22:43:42 +08:00
2dec66edad debug: Add callback chain debugging to InferencePipeline and WorkflowOrchestrator
- Add debug output in InferencePipeline result callback to see if it's called
- Add debug output in WorkflowOrchestrator handle_result to trace callback flow
- This will help identify exactly where the callback chain is breaking
- Previous test showed GUI can receive signals but callbacks aren't triggered
2025-07-23 22:43:06 +08:00
dc36f1436b debug: Add comprehensive debug output and test signals
- Add time import for test result generation
- Add test signal emissions to verify GUI connection works
- Add debug prints for signal establishment
- Test both result_updated and terminal_output signals
- This will help identify if the issue is signal connection or data flow
2025-07-23 22:39:57 +08:00
6245e25a33 debug: Add debug output to track result callback data flow
- Add debug prints in combined_result_callback to see received data
- Add debug prints in update_inference_results to track GUI updates
- Fix tuple order in terminal formatting to match actual (probability, result) format
- This will help identify why results show in terminal but not in GUI
2025-07-23 22:38:48 +08:00
1b3bed1f31 feat: Add upload_fw property to model nodes and GUI terminal output
- Add upload_fw property to ExactModelNode for firmware upload control
- Display all model node properties in right panel (model_path, scpu_fw_path, ncpu_fw_path, dongle_series, num_dongles, port_id, upload_fw)
- Replace console terminal output with GUI terminal display in deployment dialog
- Add Terminal Output section to deployment tab with proper formatting
- Terminal results now appear in app view instead of console for packaged apps
- Maintain backward compatibility with existing pipeline configurations

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-23 22:30:11 +08:00
07cbd146e5 update .md files 2025-07-23 22:10:03 +08:00
144089b144 add test_ui_deployment 2025-07-17 12:12:47 +08:00
be44e6214a update debug for deployment 2025-07-17 12:05:10 +08:00
45222fdd06 add debug for deployment 2025-07-17 11:46:30 +08:00
0e3295a780 feat: Add comprehensive terminal result printing for dongle deployments
- Enhanced deployment workflow to print detailed inference results to terminal in real-time
- Added rich formatting with emojis, confidence indicators, and performance metrics
- Combined GUI and terminal callbacks for dual output during module deployment
- Improved workflow orchestrator startup/shutdown feedback
- Added demonstration script showing terminal output examples
- Supports multi-stage pipelines with individual stage result display
- Includes processing time, FPS calculations, and metadata visualization

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-17 10:39:08 +08:00
e6c9817a98 feat: Add real-time inference results display to deployment UI
- Add result callback mechanism to WorkflowOrchestrator
- Implement result_updated signal in DeploymentWorker
- Create detailed inference results display with timestamps and formatted output
- Support both tuple and dict result formats
- Add auto-scrolling results panel with history management
- Connect pipeline results to Live View tab for real-time monitoring

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-17 10:22:48 +08:00
e97fd7a025 fix: Resolve remaining numpy array comparison errors in MultiDongle
- Fix ambiguous truth value error in get_latest_inference_result method
- Fix ambiguous truth value error in postprocess function
- Replace direct array evaluation with explicit length checks
- Use proper None checks instead of truthy evaluation on numpy arrays

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-17 10:11:38 +08:00
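The "ambiguous truth value" error arises because evaluating a multi-element numpy array in a boolean context raises `ValueError`. The fix described above replaces truthiness with explicit `None` and length checks, for example (illustrative function name):

```python
import numpy as np

def latest_result_is_ready(arr) -> bool:
    """Explicit checks avoid numpy's ambiguous-truth-value ValueError.

    Writing `if arr:` raises for arrays with more than one element,
    because numpy cannot decide between any() and all().
    """
    if arr is None:
        return False
    return len(arr) > 0
```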
0a70df4098 fix: Complete array comparison fix and improve stop button functionality
- Fix remaining array comparison error in inference result validation
- Update PyQt signal signature for proper numpy array handling
- Improve DeploymentWorker to keep running after deployment
- Enhance stop button with non-blocking UI updates and better error handling

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-17 10:03:59 +08:00
183300472e fix: Resolve array comparison error and add inference stop functionality
- Fix ambiguous truth value error in InferencePipeline result handling
- Add stop inference button to deployment dialog with proper UI state management
- Improve error handling for tuple vs dict result types

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-17 09:46:31 +08:00
c94eb5ee30 fix import path problem in deployment.py 2025-07-17 09:25:07 +08:00
af9adc8e82 fix: Address file path and data processing bugs, add real-time viewer 2025-07-17 09:18:27 +08:00