Commit 2a06c48 · Sujit Pal committed (parent: 70aaa1b)

fix: add link to model card
Files changed:
- dashboard_featurefinder.py (+3 -2)
- dashboard_image2image.py (+3 -2)
- dashboard_text2image.py (+3 -2)
dashboard_featurefinder.py  CHANGED

@@ -68,8 +68,9 @@ def app():
     st.markdown("""
     The CLIP model from OpenAI is trained in a self-supervised manner using
     contrastive learning to project images and caption text onto a common
-    embedding space. We have fine-tuned the model
-    (10k images and ~50k captions from the remote
+    embedding space. We have fine-tuned the model (see [Model card](https://huggingface.co/flax-community/clip-rsicd-v2))
+    using the RSICD dataset (10k images and ~50k captions from the remote
+    sensing domain).
 
     This demo shows the ability of the model to find specific features
     (specified as text queries) in the image. As an example, say you wish to
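For reference, the [Model card](https://huggingface.co/flax-community/clip-rsicd-v2) link added above points to the fine-tuned checkpoint. A minimal sketch of the feature-finding idea this dashboard describes is shown below; it assumes the checkpoint loads through the Hugging Face CLIPModel/CLIPProcessor classes, and the tiling helper, tile size, input file name, and example query are illustrative rather than the app's actual code.

```python
# Illustrative sketch (not the app's code): score tiles of an image against a
# text query using the fine-tuned CLIP checkpoint linked in the model card.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

MODEL_ID = "flax-community/clip-rsicd-v2"  # checkpoint from the model card link
model = CLIPModel.from_pretrained(MODEL_ID)
processor = CLIPProcessor.from_pretrained(MODEL_ID)

def tile_image(image: Image.Image, tile: int = 224):
    """Split an image into non-overlapping square tiles (tiling scheme assumed)."""
    w, h = image.size
    return [image.crop((x, y, x + tile, y + tile))
            for y in range(0, h - tile + 1, tile)
            for x in range(0, w - tile + 1, tile)]

image = Image.open("example.jpg").convert("RGB")   # hypothetical input image
tiles = tile_image(image)
inputs = processor(text=["a baseball field"],      # hypothetical feature query
                   images=tiles, return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)

# logits_per_text has shape (num_queries, num_tiles): one score per tile.
scores = out.logits_per_text.squeeze(0)
best = int(scores.argmax())
print(f"best matching tile: {best} (score {scores[best]:.3f})")
```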
dashboard_image2image.py  CHANGED

@@ -48,8 +48,9 @@ def app():
     st.markdown("""
     The CLIP model from OpenAI is trained in a self-supervised manner using
     contrastive learning to project images and caption text onto a common
-    embedding space. We have fine-tuned the model
-    (10k images and ~50k captions from the remote
+    embedding space. We have fine-tuned the model (see [Model card](https://huggingface.co/flax-community/clip-rsicd-v2))
+    using the RSICD dataset (10k images and ~50k captions from the remote
+    sensing domain).
 
     This demo shows the image to image retrieval capabilities of this model, i.e.,
     given an image file name as a query, we use our fine-tuned CLIP model
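The image-to-image flow this dashboard describes (embed a query image, then rank corpus images by similarity in the shared embedding space) might look roughly like the sketch below; the corpus file names, the brute-force cosine search, and the use of get_image_features are assumptions for illustration, not the dashboard's actual code.

```python
# Illustrative sketch (not the app's code): image-to-image retrieval by
# embedding images into the shared CLIP space and ranking by cosine similarity.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

MODEL_ID = "flax-community/clip-rsicd-v2"
model = CLIPModel.from_pretrained(MODEL_ID)
processor = CLIPProcessor.from_pretrained(MODEL_ID)

def embed_images(paths):
    """Project images into the shared embedding space and unit-normalize."""
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

corpus_paths = ["img_001.jpg", "img_002.jpg", "img_003.jpg"]  # hypothetical corpus
corpus_embeddings = embed_images(corpus_paths)  # in practice precomputed and cached
query_embedding = embed_images(["query.jpg"])   # the query image file name

# Dot product of unit vectors == cosine similarity.
scores = (query_embedding @ corpus_embeddings.T).squeeze(0)
for rank, idx in enumerate(scores.argsort(descending=True)[:3], start=1):
    print(f"{rank}. {corpus_paths[idx]} (score {scores[idx]:.3f})")
```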
dashboard_text2image.py  CHANGED

@@ -28,8 +28,9 @@ def app():
     st.markdown("""
     The CLIP model from OpenAI is trained in a self-supervised manner using
     contrastive learning to project images and caption text onto a common
-    embedding space. We have fine-tuned the model
-    (10k images and ~50k captions from the remote
+    embedding space. We have fine-tuned the model (see [Model card](https://huggingface.co/flax-community/clip-rsicd-v2))
+    using the RSICD dataset (10k images and ~50k captions from the remote
+    sensing domain).
 
     This demo shows the image to text retrieval capabilities of this model, i.e.,
     given a text query, we use our fine-tuned CLIP model to project the text query
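Likewise, the text-query flow this dashboard describes (project the text query into the shared space and rank images against it) can be sketched as follows; the cached image embeddings, example query, and get_text_features usage are illustrative assumptions rather than the app's actual implementation.

```python
# Illustrative sketch (not the app's code): rank images against a text query in
# the shared CLIP embedding space.
import torch
from transformers import CLIPModel, CLIPProcessor

MODEL_ID = "flax-community/clip-rsicd-v2"
model = CLIPModel.from_pretrained(MODEL_ID)
processor = CLIPProcessor.from_pretrained(MODEL_ID)

def embed_text(query: str) -> torch.Tensor:
    """Project a text query into the shared embedding space and unit-normalize."""
    inputs = processor(text=[query], return_tensors="pt", padding=True)
    with torch.no_grad():
        feats = model.get_text_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

# Image embeddings would normally come from get_image_features over the corpus
# (precomputed and unit-normalized); random placeholders stand in here.
image_names = ["img_001.jpg", "img_002.jpg", "img_003.jpg"]  # hypothetical
image_embeddings = torch.nn.functional.normalize(
    torch.randn(len(image_names), model.config.projection_dim), dim=-1)

query_vec = embed_text("two storage tanks near a road")      # example query
scores = (query_vec @ image_embeddings.T).squeeze(0)
for idx in scores.argsort(descending=True):
    print(f"{image_names[idx]}: {scores[idx]:.3f}")
```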