| task (string, 1 value) | source (string, 1 value) | video (string, length 37-38) | duration (int64, 120-150) | tgt (list, length 1-24) | conversations (list, length 2) |
|---|---|---|---|---|---|
| tvg | qvhighlights | qvh/videos/nqpx0jIZb34_510.0_660.0.mp4 | 150 | [[42, 58]] | [{"from": "human", "value": "<image>\nFind the video segment that corresponds to the given textual query 'Chef adds scallions to the dish' and determine its start and end seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/nqpx0jIZb34_660.0_810.0.mp4 | 150 | [[114, 134]] | [{"from": "human", "value": "<image>\nLocalize the visual content described by the given textual query 'Woman in blue sari prays at the altar' in the video, and output the start and end timestamps in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/DCscUz4RPh0_360.0_510.0.mp4 | 150 | [[56, 84]] | [{"from": "human", "value": "<image>\nDetect and report the start and end timestamps of the video segment that semantically matches the given textual query 'South Indian Boy having chat with dad on Spiderman'.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/DCscUz4RPh0_60.0_210.0.mp4 | 150 | [[0, 132]] | [{"from": "human", "value": "<image>\nLocalize the visual content described by the given textual query 'South Indian Muslim girls having chat during derive' in the video, and output the start and end timestamps in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/DCscUz4RPh0_210.0_360.0.mp4 | 150 | [[0, 150]] | [{"from": "human", "value": "<image>\nFind the video segment that corresponds to the given textual query 'Three young Indian women stand in a salon and talk' and determine its start and end seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/DCscUz4RPh0_660.0_810.0.mp4 | 150 | [[94, 108]] | [{"from": "human", "value": "<image>\nLocate the visual content mentioned in the text query 'Row of fancy cars are in a parking garage' within the video using timestamps.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/DCscUz4RPh0_510.0_660.0.mp4 | 150 | [[54, 72]] | [{"from": "human", "value": "<image>\nGiven a textual query: 'A man is browsing a selection of eyeglasses on the counter of a store', when does the described content occur in the video? Please return the timestamp in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/tpPEjvL9Svc_60.0_210.0.mp4 | 150 | [[88, 150]] | [{"from": "human", "value": "<image>\nGiven a textual query: 'Girl filming new house going to buy', when does the described content occur in the video? Please return the timestamp in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/tpPEjvL9Svc_210.0_360.0.mp4 | 150 | [[0, 28]] | [{"from": "human", "value": "<image>\nDetect and report the start and end timestamps of the video segment that semantically matches the given textual query 'The woman is walking around the kitchen of an empty house'.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/tpPEjvL9Svc_360.0_510.0.mp4 | 150 | [[0, 114]] | [{"from": "human", "value": "<image>\nDetect and report the start and end timestamps of the video segment that semantically matches the given textual query 'Woman holds her baby in the car'.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/tpPEjvL9Svc_510.0_660.0.mp4 | 150 | [[12, 150]] | [{"from": "human", "value": "<image>\nGiven a textual query: 'A lady in a white T-shirt is driving while talking about a house', when does the described content occur in the video? Please return the timestamp in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/tpPEjvL9Svc_660.0_810.0.mp4 | 150 | [[44, 84]] | [{"from": "human", "value": "<image>\nGiven a textual query: 'Masked woman walks away from a Best Buy', when does the described content occur in the video? Please return the timestamp in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/f2u3Ah9Pf-8_210.0_360.0.mp4 | 150 | [[98, 102], [122, 126], [134, 144], [146, 150]] | [{"from": "human", "value": "<image>\nDetect and report the start and end timestamps of the video segment that semantically matches the given textual query 'Some friends order scorpions on sticks to eat from a street food vendor in Thailand'.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/f2u3Ah9Pf-8_60.0_210.0.mp4 | 150 | [[0, 46], [124, 126], [128, 134]] | [{"from": "human", "value": "<image>\nDetect and report the start and end timestamps of the video segment that semantically matches the given textual query 'A woman explores the food in a Bangkok night market'.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/f2u3Ah9Pf-8_360.0_510.0.mp4 | 150 | [[100, 122]] | [{"from": "human", "value": "<image>\nDetect and report the start and end timestamps of the video segment that semantically matches the given textual query 'Showing the view from the top of a building at night'.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/lCz28sQdmBs_60.0_210.0.mp4 | 150 | [[0, 22]] | [{"from": "human", "value": "<image>\nGiven a textual query: 'An anchor with a purple shirt and white tie is gesturing and showing the path of the swirling green storm graphic', when does the described content occur in the video? Please return the timestamp in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/VXPyKbhNzKM_210.0_360.0.mp4 | 150 | [[36, 54]] | [{"from": "human", "value": "<image>\nFind the video segment that corresponds to the given textual query 'Man sits on a bench while a woman is talking' and determine its start and end seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/VXPyKbhNzKM_360.0_510.0.mp4 | 150 | [[4, 12], [14, 24]] | [{"from": "human", "value": "<image>\nDetect and report the start and end timestamps of the video segment that semantically matches the given textual query 'A couple is admiring the view on the beach near a large red suspension bridge'.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/VXPyKbhNzKM_60.0_210.0.mp4 | 150 | [[80, 94]] | [{"from": "human", "value": "<image>\nGiven a textual query: 'A woman is in the passenger seat of a car demonstrating a type of blue massage ball', when does the described content occur in the video? Please return the timestamp in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/ngxEf97MQSE_60.0_210.0.mp4 | 150 | [[46, 100], [102, 124]] | [{"from": "human", "value": "<image>\nLocate the visual content mentioned in the text query 'girl in peach she talking about how 12 year boy solve the Rubik cube in 35 second' within the video using timestamps.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/ngxEf97MQSE_210.0_360.0.mp4 | 150 | [[70, 120]] | [{"from": "human", "value": "<image>\nDetect and report the start and end timestamps of the video segment that semantically matches the given textual query 'A group is driving on a perilous narrow gravel road along the mountain'.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/ngxEf97MQSE_360.0_510.0.mp4 | 150 | [[26, 40]] | [{"from": "human", "value": "<image>\nLocalize the visual content described by the given textual query 'Woman looks out the window of a moving car' in the video, and output the start and end timestamps in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/ngxEf97MQSE_510.0_660.0.mp4 | 150 | [[0, 22]] | [{"from": "human", "value": "<image>\nFind the video segment that corresponds to the given textual query 'Sheep are together by a brick wall' and determine its start and end seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/ngxEf97MQSE_660.0_810.0.mp4 | 148 | [[54, 68]] | [{"from": "human", "value": "<image>\nThe given natural language query 'Cows are standing in the middle of the road' is semantically aligned with a video moment, please give the start time and end time of the video moment.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/dhNjXCT-ytI_60.0_210.0.mp4 | 150 | [[14, 32]] | [{"from": "human", "value": "<image>\nLocate the visual content mentioned in the text query 'The chef cooks milk in a pot' within the video using timestamps.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/dhNjXCT-ytI_210.0_360.0.mp4 | 150 | [[4, 54], [56, 64], [66, 78], [84, 150]] | [{"from": "human", "value": "<image>\nGiven a textual query: 'Chef makes bread from beginning to end', when does the described content occur in the video? Please return the timestamp in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/dhNjXCT-ytI_360.0_510.0.mp4 | 150 | [[78, 108]] | [{"from": "human", "value": "<image>\nGiven a textual query: 'Tomatoes and spices are being prepared and mixed on stainless steel cookware', when does the described content occur in the video? Please return the timestamp in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/dhNjXCT-ytI_660.0_810.0.mp4 | 150 | [[48, 74]] | [{"from": "human", "value": "<image>\nLocate the visual content mentioned in the text query 'Two boys eat on the couch together' within the video using timestamps.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/dhNjXCT-ytI_510.0_660.0.mp4 | 150 | [[50, 96]] | [{"from": "human", "value": "<image>\nThe given natural language query 'A woman with mark between her eyebrows is driving a car and talking to the camera' is semantically aligned with a video moment, please give the start time and end time of the video moment.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/Rq8-Xdj0-AE_210.0_360.0.mp4 | 150 | [[30, 58]] | [{"from": "human", "value": "<image>\nFind the video segment that corresponds to the given textual query 'A man holding a cup coffee walking through a parking lot and getting inside his car' and determine its start and end seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/Rq8-Xdj0-AE_360.0_510.0.mp4 | 150 | [[14, 36]] | [{"from": "human", "value": "<image>\nFind the video segment that corresponds to the given textual query 'Man in red sets up the photo shoot for the racer' and determine its start and end seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/Rq8-Xdj0-AE_60.0_210.0.mp4 | 150 | [[10, 84], [90, 150]] | [{"from": "human", "value": "<image>\nFind the video segment that corresponds to the given textual query 'Man with baseball cap drives a car' and determine its start and end seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/8hB_Av9Exoc_210.0_360.0.mp4 | 150 | [[90, 114]] | [{"from": "human", "value": "<image>\nDetect and report the start and end timestamps of the video segment that semantically matches the given textual query 'People are watching a woman in blue gather well water'.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/8hB_Av9Exoc_60.0_210.0.mp4 | 150 | [[84, 96], [100, 104], [108, 112], [116, 120], [138, 142], [146, 150]] | [{"from": "human", "value": "<image>\nLocate the visual content mentioned in the text query 'Young kids go for a hike' within the video using timestamps.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/8hB_Av9Exoc_360.0_510.0.mp4 | 150 | [[50, 150]] | [{"from": "human", "value": "<image>\nLocalize the visual content described by the given textual query 'Little girls trailing with there father' in the video, and output the start and end timestamps in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/SnwhJrnpEiE_210.0_360.0.mp4 | 150 | [[38, 54], [56, 74]] | [{"from": "human", "value": "<image>\nFind the video segment that corresponds to the given textual query 'Young girl sits next to a plastic bottle while being filmed' and determine its start and end seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/SnwhJrnpEiE_360.0_510.0.mp4 | 150 | [[84, 108]] | [{"from": "human", "value": "<image>\nFind the video segment that corresponds to the given textual query 'A man in a backpack crawling off of his bed' and determine its start and end seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/SnwhJrnpEiE_60.0_210.0.mp4 | 150 | [[104, 120]] | [{"from": "human", "value": "<image>\nThe given natural language query 'A man is doing stretches in a grassy field during sunset' is semantically aligned with a video moment, please give the start time and end time of the video moment.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/SnwhJrnpEiE_510.0_660.0.mp4 | 126 | [[112, 126]] | [{"from": "human", "value": "<image>\nLocate the visual content mentioned in the text query 'Black background indicating the end of the video' within the video using timestamps.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/EDwPWnOmotU_360.0_510.0.mp4 | 150 | [[66, 100]] | [{"from": "human", "value": "<image>\nThe given natural language query 'A woman is walking through an Egyptian tomb on a tour' is semantically aligned with a video moment, please give the start time and end time of the video moment.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/EDwPWnOmotU_60.0_210.0.mp4 | 150 | [[58, 84]] | [{"from": "human", "value": "<image>\nFind the video segment that corresponds to the given textual query 'Man holds up a plaque with Egyptian hieroglyphs' and determine its start and end seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/EDwPWnOmotU_210.0_360.0.mp4 | 150 | [[112, 130]] | [{"from": "human", "value": "<image>\nLocate the visual content mentioned in the text query 'The Great Sphinx is approached' within the video using timestamps.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/T5B4I5TzNWI_210.0_360.0.mp4 | 150 | [[124, 138]] | [{"from": "human", "value": "<image>\nThe given natural language query 'Two friends sit in a restaurant eating noodles and soup together and one is wearing a very neon yellow jacket' is semantically aligned with a video moment, please give the start time and end time of the video moment.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/T5B4I5TzNWI_360.0_510.0.mp4 | 150 | [[6, 60]] | [{"from": "human", "value": "<image>\nFind the video segment that corresponds to the given textual query 'A man in a bright lime jacket talks with his friend and makes many gestures' and determine its start and end seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/T5B4I5TzNWI_60.0_210.0.mp4 | 150 | [[0, 18]] | [{"from": "human", "value": "<image>\nGiven a textual query: 'Woman with white hair looks at her phone', when does the described content occur in the video? Please return the timestamp in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/T5B4I5TzNWI_510.0_660.0.mp4 | 132 | [[64, 122]] | [{"from": "human", "value": "<image>\nLocalize the visual content described by the given textual query 'A behind-the-scenes camera captures EXO-SC members preparing for and shooting a scene' in the video, and output the start and end timestamps in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/7i0QMnz4ExY_60.0_210.0.mp4 | 150 | [[18, 42]] | [{"from": "human", "value": "<image>\nLocate the visual content mentioned in the text query 'Workers sort old clothes into separate coloured piles' within the video using timestamps.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/a-egGH4_hPE_60.0_210.0.mp4 | 150 | [[0, 12], [14, 20]] | [{"from": "human", "value": "<image>\nThe given natural language query 'Woman holds up a slide then looks through a microscope' is semantically aligned with a video moment, please give the start time and end time of the video moment.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/INr5f7EWUag_60.0_210.0.mp4 | 150 | [[0, 38]] | [{"from": "human", "value": "<image>\nLocalize the visual content described by the given textual query 'Teen girl vlogs next to pixelated person' in the video, and output the start and end timestamps in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/INr5f7EWUag_210.0_360.0.mp4 | 150 | [[0, 72]] | [{"from": "human", "value": "<image>\nThe given natural language query 'Two women one with red jacket on the left and one with black top on the right describing their previous day on the camera' is semantically aligned with a video moment, please give the start time and end time of the video moment.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/INr5f7EWUag_360.0_510.0.mp4 | 150 | [[84, 96], [104, 112]] | [{"from": "human", "value": "<image>\nLocate the visual content mentioned in the text query 'Woman in red top eats a burger' within the video using timestamps.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/INr5f7EWUag_510.0_660.0.mp4 | 150 | [[34, 40], [52, 54], [62, 66], [74, 80], [94, 98]] | [{"from": "human", "value": "<image>\nDetect and report the start and end timestamps of the video segment that semantically matches the given textual query 'Woman points to different areas of a map'.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/2EFIFTHnjvI_60.0_210.0.mp4 | 150 | [[2, 48]] | [{"from": "human", "value": "<image>\nLocate the visual content mentioned in the text query 'A woman gives a small tour of a shop and explains some of the items she likes' within the video using timestamps.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/2EFIFTHnjvI_210.0_360.0.mp4 | 150 | [[0, 150]] | [{"from": "human", "value": "<image>\nThe given natural language query 'Girl sharing her eating habits' is semantically aligned with a video moment, please give the start time and end time of the video moment.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/2EFIFTHnjvI_360.0_510.0.mp4 | 150 | [[0, 144]] | [{"from": "human", "value": "<image>\nGiven a textual query: 'A woman in a white vest and white cap is talking about her meal hours', when does the described content occur in the video? Please return the timestamp in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/-jLUEGASVeE_210.0_360.0.mp4 | 150 | [[28, 58], [96, 142]] | [{"from": "human", "value": "<image>\nThe given natural language query 'A woman with her mask pulled down over her chin speaks into the camera' is semantically aligned with a video moment, please give the start time and end time of the video moment.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/-jLUEGASVeE_60.0_210.0.mp4 | 150 | [[0, 62]] | [{"from": "human", "value": "<image>\nThe given natural language query 'A woman's doing her make-up' is semantically aligned with a video moment, please give the start time and end time of the video moment.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/-jLUEGASVeE_360.0_510.0.mp4 | 150 | [[16, 54]] | [{"from": "human", "value": "<image>\nFind the video segment that corresponds to the given textual query 'A camera view of a tall apartment' and determine its start and end seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/-jLUEGASVeE_510.0_660.0.mp4 | 150 | [[116, 134]] | [{"from": "human", "value": "<image>\nLocalize the visual content described by the given textual query 'Black food tray is empty with just soy sauce and chopsticks left behind' in the video, and output the start and end timestamps in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/-jLUEGASVeE_660.0_810.0.mp4 | 150 | [[102, 118]] | [{"from": "human", "value": "<image>\nFind the video segment that corresponds to the given textual query 'A woman in a mask is holding up a yellow squash in the store' and determine its start and end seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/d_-6Qz0621o_60.0_210.0.mp4 | 150 | [[94, 132], [140, 150]] | [{"from": "human", "value": "<image>\nFind the video segment that corresponds to the given textual query 'A young woman with long black hair talks to the camera with a blue patterned mask on her face' and determine its start and end seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/d_-6Qz0621o_360.0_510.0.mp4 | 150 | [[0, 24], [40, 48], [52, 58], [60, 68], [124, 150]] | [{"from": "human", "value": "<image>\nLocalize the visual content described by the given textual query 'A woman and a man are talking to each other in front of a camera while walking on the street' in the video, and output the start and end timestamps in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/d_-6Qz0621o_210.0_360.0.mp4 | 150 | [[64, 110]] | [{"from": "human", "value": "<image>\nLocalize the visual content described by the given textual query 'Shopping day 2: Sarojni and Banjara Market' in the video, and output the start and end timestamps in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/Tzob7cj1-1Q_360.0_510.0.mp4 | 150 | [[132, 150]] | [{"from": "human", "value": "<image>\nFind the video segment that corresponds to the given textual query 'People are riding ATV's in the desert' and determine its start and end seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/Tzob7cj1-1Q_210.0_360.0.mp4 | 150 | [[20, 68]] | [{"from": "human", "value": "<image>\nFind the video segment that corresponds to the given textual query 'A group of friends are passing by a store filled with countless candles' and determine its start and end seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/Tzob7cj1-1Q_60.0_210.0.mp4 | 150 | [[34, 76]] | [{"from": "human", "value": "<image>\nFind the video segment that corresponds to the given textual query 'A couple sit on a roof top bird watching and talking' and determine its start and end seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/oSJ_rKL1auI_60.0_210.0.mp4 | 150 | [[94, 122]] | [{"from": "human", "value": "<image>\nLocalize the visual content described by the given textual query 'Hand moves the wheel at Shoe Carnival' in the video, and output the start and end timestamps in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/oSJ_rKL1auI_210.0_360.0.mp4 | 150 | [[0, 150]] | [{"from": "human", "value": "<image>\nDetect and report the start and end timestamps of the video segment that semantically matches the given textual query 'Two little girls exploring different collections of shoes'.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/oSJ_rKL1auI_360.0_510.0.mp4 | 150 | [[20, 52]] | [{"from": "human", "value": "<image>\nFind the video segment that corresponds to the given textual query 'A young girl is trying on white shoes at a store' and determine its start and end seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/oSJ_rKL1auI_510.0_660.0.mp4 | 150 | [[0, 36]] | [{"from": "human", "value": "<image>\nFind the video segment that corresponds to the given textual query 'Woman looks at different bags at the store' and determine its start and end seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/B39Cz-4_Z5M_60.0_210.0.mp4 | 150 | [[62, 86]] | [{"from": "human", "value": "<image>\nFind the video segment that corresponds to the given textual query 'Woman holds up a mirror while doing her makeup' and determine its start and end seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/B39Cz-4_Z5M_60.0_210.0.mp4 | 150 | [[98, 112]] | [{"from": "human", "value": "<image>\nLocalize the visual content described by the given textual query 'A woman holds up her phone mount to the mirror while doing her make up' in the video, and output the start and end timestamps in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/B39Cz-4_Z5M_210.0_360.0.mp4 | 150 | [[0, 8], [20, 22], [32, 42], [50, 58]] | [{"from": "human", "value": "<image>\nFind the video segment that corresponds to the given textual query 'Black woman tries to adjust her weave when she's wearing black and white' and determine its start and end seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/B39Cz-4_Z5M_210.0_360.0.mp4 | 150 | [[56, 76], [86, 90]] | [{"from": "human", "value": "<image>\nLocate the visual content mentioned in the text query 'Girl in blue jean jacket touches her hair as she sits in a car as it drives' within the video using timestamps.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/B39Cz-4_Z5M_360.0_510.0.mp4 | 150 | [[102, 134]] | [{"from": "human", "value": "<image>\nGiven a textual query: 'A black woman in a black jacket talks in the rear of a parked car', when does the described content occur in the video? Please return the timestamp in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/B39Cz-4_Z5M_360.0_510.0.mp4 | 150 | [[102, 134]] | [{"from": "human", "value": "<image>\nLocate the visual content mentioned in the text query 'A woman speaks while sitting in a car' within the video using timestamps.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/B39Cz-4_Z5M_510.0_660.0.mp4 | 150 | [[134, 148]] | [{"from": "human", "value": "<image>\nLocalize the visual content described by the given textual query 'A woman is approaching some electric scooters for rent' in the video, and output the start and end timestamps in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/B39Cz-4_Z5M_660.0_810.0.mp4 | 150 | [[0, 28]] | [{"from": "human", "value": "<image>\nGiven a textual query: 'A woman in LA finds many things are closed and malls have few people', when does the described content occur in the video? Please return the timestamp in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/tinynG75yfg_210.0_360.0.mp4 | 150 | [[60, 82]] | [{"from": "human", "value": "<image>\nLocalize the visual content described by the given textual query 'A lady cuddling a little cat in outdoors' in the video, and output the start and end timestamps in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/tinynG75yfg_60.0_210.0.mp4 | 150 | [[68, 90], [136, 150]] | [{"from": "human", "value": "<image>\nLocalize the visual content described by the given textual query 'Woman wears a mask around her chin in the car' in the video, and output the start and end timestamps in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/tinynG75yfg_360.0_510.0.mp4 | 150 | [[74, 96]] | [{"from": "human", "value": "<image>\nThe given natural language query 'A woman picks up an object from a table and dances with it' is semantically aligned with a video moment, please give the start time and end time of the video moment.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/oqkZ22RsvYQ_360.0_510.0.mp4 | 150 | [[0, 30]] | [{"from": "human", "value": "<image>\nLocate the visual content mentioned in the text query 'Stunning view of a waterfall from a camera and mobile view' within the video using timestamps.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/oqkZ22RsvYQ_210.0_360.0.mp4 | 150 | [[54, 88]] | [{"from": "human", "value": "<image>\nLocate the visual content mentioned in the text query 'A happy couple visit Moran Falls and then see the sunset' within the video using timestamps.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/oqkZ22RsvYQ_60.0_210.0.mp4 | 150 | [[24, 52]] | [{"from": "human", "value": "<image>\nLocalize the visual content described by the given textual query 'A bearded man pets a bird and it jumps on his shoulder' in the video, and output the start and end timestamps in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/QmK4UKKsROY_210.0_360.0.mp4 | 150 | [[0, 32], [52, 60]] | [{"from": "human", "value": "<image>\nFind the video segment that corresponds to the given textual query 'A group of protesters marches down the street with green lights around them' and determine its start and end seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/QmK4UKKsROY_60.0_210.0.mp4 | 150 | [[4, 150]] | [{"from": "human", "value": "<image>\nThe given natural language query 'A group of people are gathered in an highway then start walking while chanting' is semantically aligned with a video moment, please give the start time and end time of the video moment.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/QmK4UKKsROY_360.0_510.0.mp4 | 150 | [[80, 102]] | [{"from": "human", "value": "<image>\nThe given natural language query 'Police lights are flashing on patrol cars in the street near where police are gathered around a person laying on their back on the pavement' is semantically aligned with a video moment, please give the start time and end time of the video moment.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/2eLqrZ5jReQ_60.0_210.0.mp4 | 150 | [[28, 48], [62, 72]] | [{"from": "human", "value": "<image>\nGiven a textual query: 'Toddler is playing with a paper santa hat', when does the described content occur in the video? Please return the timestamp in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/2eLqrZ5jReQ_360.0_510.0.mp4 | 150 | [[118, 144]] | [{"from": "human", "value": "<image>\nLocalize the visual content described by the given textual query 'A woman wearing a santa hat is setting up a small Christmas tree' in the video, and output the start and end timestamps in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/2eLqrZ5jReQ_210.0_360.0.mp4 | 150 | [[136, 150]] | [{"from": "human", "value": "<image>\nLocate the visual content mentioned in the text query 'Family holds hands and turns in the circle' within the video using timestamps.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/2eLqrZ5jReQ_510.0_660.0.mp4 | 150 | [[0, 22]] | [{"from": "human", "value": "<image>\nGiven a textual query: 'A little boy with star pyjamas is laying on the floor to check on the Christmas tree', when does the described content occur in the video? Please return the timestamp in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/MbkinIfE6mg_60.0_210.0.mp4 | 150 | [[44, 72]] | [{"from": "human", "value": "<image>\nGiven a textual query: 'Man gets plane ticket then walks down glass hallway', when does the described content occur in the video? Please return the timestamp in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/MbkinIfE6mg_210.0_360.0.mp4 | 150 | [[50, 72], [76, 90], [104, 150]] | [{"from": "human", "value": "<image>\nLocalize the visual content described by the given textual query 'A man in grey t-shirt is talking in front of the camera' in the video, and output the start and end timestamps in seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/MbkinIfE6mg_360.0_510.0.mp4 | 150 | [[26, 28], [54, 58], [62, 80]] | [{"from": "human", "value": "<image>\nLocate the visual content mentioned in the text query 'Man in white top walks around a mall' within the video using timestamps.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/65mGAYu7zBQ_210.0_360.0.mp4 | 150 | [[0, 42]] | [{"from": "human", "value": "<image>\nThe given natural language query 'Young girl having breakfast and sharing menu' is semantically aligned with a video moment, please give the start time and end time of the video moment.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/65mGAYu7zBQ_60.0_210.0.mp4 | 150 | [[30, 150]] | [{"from": "human", "value": "<image>\nThe given natural language query 'A girl talking and planning to prepare her breakfast' is semantically aligned with a video moment, please give the start time and end time of the video moment.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/65mGAYu7zBQ_360.0_510.0.mp4 | 150 | [[20, 34], [42, 56], [68, 72], [82, 92]] | [{"from": "human", "value": "<image>\nThe given natural language query 'A young blonde woman applying beauty products on her face' is semantically aligned with a video moment, please give the start time and end time of the video moment.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/65mGAYu7zBQ_510.0_660.0.mp4 | 138 | [[18, 58]] | [{"from": "human", "value": "<image>\nFind the video segment that corresponds to the given textual query 'A girl with blue hoodie is making the meal for herself and eat it' and determine its start and end seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/eEqwHSll6Rg_210.0_360.0.mp4 | 150 | [[12, 48], [60, 90]] | [{"from": "human", "value": "<image>\nFind the video segment that corresponds to the given textual query 'Girl having some snacks during filming and working' and determine its start and end seconds.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
| tvg | qvhighlights | qvh/videos/eEqwHSll6Rg_60.0_210.0.mp4 | 150 | [[48, 64]] | [{"from": "human", "value": "<image>\nDetect and report the start and end timestamps of the video segment that semantically matches the given textual query 'Woman makes herself some cereal'.The format of your response should be: 'The event happens in <start time> - <end time>''."}, {"from": "gpt", "value": "The event happens in <vid> - <vid>."}] |
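
For anyone who wants to consume rows like the ones above programmatically, here is a minimal sketch of how a single record could be unpacked: it pulls the natural-language query out of the human turn and fills the two `<vid>` placeholders in the gpt turn with a `tgt` span. The field names (`task`, `source`, `video`, `duration`, `tgt`, `conversations`) come straight from the table; the helper functions and the assumption that the placeholders map to a span's start and end seconds are illustrative, not something the preview itself specifies.

```python
import re

# One record, copied from the first row of the preview above.
record = {
    "task": "tvg",
    "source": "qvhighlights",
    "video": "qvh/videos/nqpx0jIZb34_510.0_660.0.mp4",
    "duration": 150,
    "tgt": [[42, 58]],
    "conversations": [
        {"from": "human",
         "value": "<image>\nFind the video segment that corresponds to the given textual query "
                  "'Chef adds scallions to the dish' and determine its start and end seconds."
                  "The format of your response should be: "
                  "'The event happens in <start time> - <end time>''."},
        {"from": "gpt", "value": "The event happens in <vid> - <vid>."},
    ],
}


def extract_query(human_text: str) -> str:
    """Pull the first single-quoted substring out of the human turn (the textual query)."""
    match = re.search(r"'(.+?)'", human_text)
    return match.group(1) if match else human_text


def fill_answer(template: str, span: list) -> str:
    """Assumed mapping: the two <vid> placeholders are a span's start and end seconds."""
    start, end = span
    return template.replace("<vid>", str(start), 1).replace("<vid>", str(end), 1)


human_turn = record["conversations"][0]["value"]
gpt_template = record["conversations"][1]["value"]

print("video   :", record["video"])
print("query   :", extract_query(human_turn))
for span in record["tgt"]:
    print("grounded:", fill_answer(gpt_template, span))
```

On the first row this should print the query 'Chef adds scallions to the dish' and the grounded answer 'The event happens in 42 - 58.'; rows with several `tgt` spans would yield one grounded sentence per span.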