2025-08-18 23:25:56 - INFO - Loading model: LiquidAI/LFM2-VL-1.6B
2025-08-18 23:25:58 - INFO - We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` to a higher value to use more memory (at your own risk).
2025-08-18 23:26:26 - INFO - Model loaded in 29.54 seconds
2025-08-18 23:26:26 - INFO - GPU Memory Usage after model load: 3023.64 MB
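The load step above reserves 90% of device 0's memory for the weights and keeps 10% as an OOM buffer. A minimal sketch of how such a load might look, assuming the standard Hugging Face transformers API (`max_memory_for_device` is a hypothetical helper; the model class and dtype are assumptions, not taken from the log):

```python
def max_memory_for_device(total_bytes: int, weights_fraction: float = 0.9) -> dict:
    # Reserve 90% of device 0 for the weights, leaving 10% as an OOM buffer,
    # mirroring the "90% ... and 10% for the buffer" message in the log.
    return {0: int(total_bytes * weights_fraction)}


def load_model(model_id: str = "LiquidAI/LFM2-VL-1.6B"):
    # Requires a CUDA GPU plus the `torch` and `transformers` packages;
    # shown for illustration only, not executed here.
    import torch
    from transformers import AutoModelForImageTextToText, AutoProcessor

    total = torch.cuda.get_device_properties(0).total_memory
    model = AutoModelForImageTextToText.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,
        device_map="auto",
        max_memory=max_memory_for_device(total),  # caps device 0 at 90%
    )
    processor = AutoProcessor.from_pretrained(model_id)
    return model, processor
```

Raising the fraction passed to `max_memory` trades away the OOM buffer for more weight/KV-cache headroom, which is the "at your own risk" knob the log message refers to.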
2025-08-18 23:28:45 - INFO - [2d0a4e6b-87a3-4f80-9d2e-24b74787acdb] Received new video inference request. Prompt: 'Please describe the video.', Video: 'messi_part_001.mp4'
2025-08-18 23:28:45 - INFO - [2d0a4e6b-87a3-4f80-9d2e-24b74787acdb] Video saved to temporary file: temp_videos/2d0a4e6b-87a3-4f80-9d2e-24b74787acdb.mp4
2025-08-18 23:28:45 - INFO - [2d0a4e6b-87a3-4f80-9d2e-24b74787acdb] Extracting frames using method: uniform, rate/threshold: 30
2025-08-18 23:28:48 - INFO - [2d0a4e6b-87a3-4f80-9d2e-24b74787acdb] Extracted 30 frames successfully. Saving to temporary files...
2025-08-18 23:28:48 - INFO - [2d0a4e6b-87a3-4f80-9d2e-24b74787acdb] 30 frames saved to temp_videos/2d0a4e6b-87a3-4f80-9d2e-24b74787acdb
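The "uniform" extraction method with rate/threshold 30 yields exactly 30 frames here. One plausible sketch of that sampling step, assuming it means evenly spaced frame indices across the clip (center-of-bin sampling is a guess; the actual service may sample differently):

```python
def uniform_frame_indices(total_frames: int, num_frames: int) -> list:
    # Pick `num_frames` evenly spaced indices in [0, total_frames),
    # e.g. 30 indices for the request in the log above. Each index sits
    # at the center of its bin so frames are spread across the whole clip.
    if total_frames <= 0 or num_frames <= 0:
        return []
    num_frames = min(num_frames, total_frames)  # short clips: take every frame
    step = total_frames / num_frames
    return [min(int(i * step + step / 2), total_frames - 1) for i in range(num_frames)]
```

The selected indices would then be decoded (e.g. with `cv2.VideoCapture`) and written to the per-request temp directory before being handed to the processor.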
2025-08-18 23:28:48 - INFO - Prompt token length: 783
2025-08-18 23:28:54 - INFO - Tokens per second: 28.289322629768442, Peak GPU memory MB: 4206.375
2025-08-18 23:28:54 - INFO - [2d0a4e6b-87a3-4f80-9d2e-24b74787acdb] Inference time: 8.48 seconds, CPU usage: 22.5%, CPU core utilization: [21.5, 23.9, 21.6, 23.0]
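The metrics lines above can be reproduced with a few small helpers. A sketch, assuming `torch` for the peak-GPU figure and `psutil` for per-core CPU sampling (function names are illustrative, not the service's actual code):

```python
def tokens_per_second(num_new_tokens: int, elapsed_seconds: float) -> float:
    # The "Tokens per second" figure: generated tokens over generation wall time.
    return num_new_tokens / elapsed_seconds


def peak_gpu_memory_mb() -> float:
    # Peak CUDA allocator usage in MB, as in "Peak GPU memory MB: 4206.375".
    # Returns 0.0 when torch or a GPU is unavailable.
    try:
        import torch
        if torch.cuda.is_available():
            return torch.cuda.max_memory_allocated() / 2**20
    except ImportError:
        pass
    return 0.0


def cpu_core_utilization(interval: float = 0.1) -> list:
    # Per-core CPU percentages over a short window, as in
    # "CPU core utilization: [21.5, 23.9, 21.6, 23.0]". Requires psutil.
    import psutil
    return psutil.cpu_percent(interval=interval, percpu=True)
```

Note that the 8.48 s "Inference time" covers the whole request, so it is not necessarily the denominator behind the tokens-per-second figure, which is typically measured over the generation loop alone.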
2025-08-18 23:28:54 - INFO - [2d0a4e6b-87a3-4f80-9d2e-24b74787acdb] Cleaned up temporary file: temp_videos/2d0a4e6b-87a3-4f80-9d2e-24b74787acdb.mp4
2025-08-18 23:28:54 - INFO - [2d0a4e6b-87a3-4f80-9d2e-24b74787acdb] Cleaned up temporary frame directory: temp_videos/2d0a4e6b-87a3-4f80-9d2e-24b74787acdb