Commit d6ba0ed · Parent: 53f5292 · Update README.md

README.md CHANGED

@@ -21,7 +21,7 @@ For quick start please have a look this [demo](https://github.com/ahmedssabir/Te
 to ensure the quality of the dataset (1) Threshold: to filter out predictions where the object classifier
 is not confident enough, and (2) semantic alignment with semantic similarity to remove duplicated objects.
 (3) semantic relatedness score as soft-label: to guarantee the visual context and caption have a strong
-relation. In particular, we use Sentence-RoBERTa via cosine similarity to give a soft score, and then
+relation. In particular, we use Sentence-RoBERTa-sts via cosine similarity to give a soft score, and then
 we use a threshold to annotate the final label (if th ≥ 0.2, 0.3, 0.4 then 1,0). Finally, to take advantage
 of the visual overlap between caption and visual context, and to extract global information, we use BERT followed by a shallow CNN (<a href="https://arxiv.org/abs/1408.5882">Kim, 2014</a>)
 to estimate the visual relatedness score.
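
The added line swaps plain Sentence-RoBERTa for Sentence-RoBERTa-sts as the scorer behind the soft labels. As a rough illustration of that step only, here is a minimal sketch (not the repository's code) using the sentence-transformers library; the checkpoint name stsb-roberta-large and the example strings are assumptions, and th follows the 0.2/0.3/0.4 thresholds mentioned in the README text above.

```python
# Minimal sketch, NOT the repo's implementation: soft-label a (caption, visual
# context) pair with a Sentence-RoBERTa STS model via cosine similarity, then
# binarize with a threshold th in {0.2, 0.3, 0.4} as the README describes.
# The checkpoint name "stsb-roberta-large" is an assumption.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("stsb-roberta-large")

def soft_label(caption: str, visual_context: str, th: float = 0.2):
    """Return (cosine-similarity soft score, hard 1/0 label at threshold th)."""
    emb = model.encode([caption, visual_context], convert_to_tensor=True)
    score = util.cos_sim(emb[0], emb[1]).item()
    return score, int(score >= th)

# Hypothetical example pair, purely for illustration.
score, label = soft_label("a man riding a horse on a beach", "horse")
print(f"soft score={score:.3f}, label={label}")
```

Returning the raw cosine score alongside the hard label keeps the soft signal available, which matches the README's framing of the relatedness score as a soft label that is only binarized at the chosen threshold.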
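For the final step, the README says a visual relatedness score is estimated by BERT followed by a shallow CNN (Kim, 2014). The sketch below is one plausible reading of that pipeline under stated assumptions, not the authors' implementation: the class name BertKimCNN, the bert-base-uncased checkpoint, the filter widths/counts, and the sigmoid head are all assumptions; only "BERT token features into a Kim-style CNN" comes from the text.

```python
# Hedged sketch (assumptions throughout): BERT token embeddings for the pair
# "[CLS] caption [SEP] visual context [SEP]" fed to a shallow Kim (2014)-style
# CNN -- parallel Conv1d filters of widths 3/4/5, max-over-time pooled, then a
# sigmoid head producing a relatedness score in [0, 1].
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertKimCNN(nn.Module):  # hypothetical class name
    def __init__(self, widths=(3, 4, 5), n_filters=100):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        hidden = self.bert.config.hidden_size  # 768 for bert-base
        self.convs = nn.ModuleList(
            nn.Conv1d(hidden, n_filters, kernel_size=w) for w in widths
        )
        self.head = nn.Linear(n_filters * len(widths), 1)

    def forward(self, input_ids, attention_mask):
        # (batch, seq, hidden) -> (batch, hidden, seq) for Conv1d
        tokens = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        x = tokens.transpose(1, 2)
        # Max-over-time pooling per filter width, as in Kim (2014)
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return torch.sigmoid(self.head(torch.cat(pooled, dim=1))).squeeze(-1)

tok = BertTokenizer.from_pretrained("bert-base-uncased")
batch = tok("a man riding a horse on a beach", "horse", return_tensors="pt")
model = BertKimCNN()
with torch.no_grad():
    print(model(batch["input_ids"], batch["attention_mask"]))  # relatedness score
```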