library_name: keras
---
## Model description

The original idea comes from the Keras example [Monocular depth estimation](https://keras.io/examples/vision/depth_estimation/) by [Victor Basu](https://www.linkedin.com/in/victor-basu-520958147/).

Full credits go to [Vu Minh Chien](https://www.linkedin.com/in/vumichien/).

Depth estimation is a crucial step towards inferring scene geometry from 2D images. The goal of monocular depth estimation is to predict the depth value of each pixel, that is, to infer depth information, given only a single RGB image as input.
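To make the intended usage concrete, here is a minimal inference sketch, assuming the checkpoint loads with `huggingface_hub.from_pretrained_keras` and that the network takes fixed-size RGB input and returns a per-pixel depth map; the repo id, input resolution, and scaling below are illustrative assumptions, not values taken from this card.

```python
# Minimal sketch, not the exact pipeline used to train or export this model.
# Assumptions: the checkpoint loads as a Keras model via from_pretrained_keras,
# expects 256x256 RGB input scaled to [0, 1], and outputs a per-pixel depth map.
import numpy as np
import tensorflow as tf
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("keras-io/monocular-depth-estimation")  # hypothetical repo id

# Read one RGB image and resize it to the assumed network input size.
image = tf.io.decode_image(tf.io.read_file("indoor_scene.jpg"), channels=3)
image = tf.image.resize(image, (256, 256))
image = tf.cast(image, tf.float32) / 255.0

# Predict the depth map (a batch dimension is added for the single image).
depth = model.predict(np.expand_dims(image.numpy(), axis=0))[0]
print(depth.shape)  # e.g. (256, 256, 1): one predicted depth value per pixel
```

Visualising the predicted map with a perceptual colormap is the usual sanity check; the exact input size and normalisation should be taken from the files in this repository rather than from the sketch above.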
## Dataset

[NYU Depth Dataset V2](https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html) comprises video sequences from a variety of indoor scenes, recorded by both the RGB and depth cameras of the Microsoft Kinect.
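For a quick look at the data, the sketch below assumes the TensorFlow Datasets mirror of the dataset (registered as `nyu_depth_v2`, with `image` and `depth` features); the split name and keys come from that catalog and may differ from the preprocessing actually used to train this model.

```python
# Sketch only: assumes the TFDS mirror of NYU Depth V2 ("nyu_depth_v2");
# the official .mat release from the NYU page would need separate handling,
# and the full download is large.
import tensorflow_datasets as tfds

ds = tfds.load("nyu_depth_v2", split="train")

for example in ds.take(1):
    rgb = example["image"]    # Kinect RGB frame, uint8, (480, 640, 3)
    depth = example["depth"]  # aligned depth map in metres, (480, 640)
    print(rgb.shape, depth.shape)
```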