
Post processing code for depth and stencil #13

Open
JiamingSuen opened this issue Oct 20, 2017 · 26 comments

Comments

@JiamingSuen

Hi @barcharcraz , thanks for this amazing work!
I'm having trouble processing the captured TIFF images. Would you be willing to post the post-processing code for depth and stencil? It's still a mountain of work figuring out the exact logarithmic encoding the game developers used in the depth buffer, and for the stencil, the object ID and "some flags" (according to the paper) are still unclear.
I started with this article on decoding the depth buffer, but I'm not sure whether z = log(C*w + 1) / log(C*Far + 1) * w (DirectX, depth range 0..1) is the exact way they encode depth. Having your post-processing code would save a lot of effort for me and the community.
Thanks for your time!
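For reference, the encoding quoted above (from the linked article, an Outerra-style logarithmic depth; not confirmed to be what GTA V actually uses) can be inverted analytically. A minimal sketch, where C is the article's tuning constant and the buffer stores z/w = log(C*w + 1) / log(C*Far + 1):

```python
import math

def log_depth_encode(w, C, far):
    # Value written to the depth buffer per the linked article:
    # w is the view-space depth, C a tuning constant, far the far plane.
    return math.log(C * w + 1) / math.log(C * far + 1)

def log_depth_decode(z, C, far):
    # Invert the encoding to recover view-space depth w in scene units.
    return (math.exp(z * math.log(C * far + 1)) - 1) / C
```

(As it turned out later in this thread, GTA V's buffer is already linearized, so this decoding is not needed; it is kept here only to document the formula under discussion.)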

@racinmat
Contributor

Hopefully this article series could help a little bit.

@barcharcraz
Contributor

barcharcraz commented Nov 14, 2017 via email

@racinmat
Contributor

Thanks for the reply. I'm struggling to read the data, since every page of this multi-page TIFF uses different flags.
I'm also struggling with the 2D bounding boxes from the PostgreSQL database. Are they up to date?
Since all 4 points are in the range [0, 1], I thought it would be sufficient to multiply the X coordinates by the width and the Y coordinates by the height, but that does not look right: the bounding boxes are not positioned correctly when displayed over a screenshot.

@barcharcraz
Contributor

barcharcraz commented Nov 14, 2017 via email

@racinmat
Contributor

@barcharcraz I tried that, but unfortunately it contains semantic segmentation only for cars, not for other objects. And if I am not mistaken, it completely lacks depth data.
I wanted to use the ImageViewer you have as part of the solution with the managed plugin, but it does not seem to be working.

@barcharcraz
Contributor

barcharcraz commented Nov 14, 2017 via email

@racinmat
Contributor

The postprocessing code would be really great.
I checked the bounding box: it is stored as box (a native Postgres type), not box2d (from the PostGIS extension). And in the query building here, the raw coordinates are put there, not offset and extent, if I understand it correctly.

@barcharcraz
Contributor

barcharcraz commented Nov 14, 2017 via email

@racinmat
Contributor

Oh, my bad with the bounding boxes.
I was confused because the C# code has new NpgsqlBox(detection.BBox.Max.Y, detection.BBox.Max.X, detection.BBox.Min.Y, detection.BBox.Min.X), but C# persists it to PostgreSQL in the form (MaxX, MaxY, MinX, MinY), which confused me. Now I can display them correctly.

But the method you propose in your paper uses much better bounding-box refinement. I was a little disappointed that I could not find this post-processing code, because you did a really good job refining the data from both the depth and stencil buffers.
You were right about the coarseness of these native bounding boxes. I'm really looking forward to it if you decide to upload the post-processing code. We want to use this repository at our university to replicate your research, and to prepare our own dataset for some other machine-learning tasks.
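Based on the (MaxX, MaxY, MinX, MinY) column order described above, scaling the normalized boxes to pixels might look like the sketch below (the function name and return order are my own, not from the repo):

```python
def bbox_to_pixels(max_x, max_y, min_x, min_y, width, height):
    """Scale a normalized [0, 1] bounding box, stored in the database as
    (MaxX, MaxY, MinX, MinY), to pixel coordinates (left, top, right, bottom)
    for drawing over a screenshot of the given size."""
    return (min_x * width, min_y * height, max_x * width, max_y * height)
```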

@barcharcraz
Contributor

check out https://github.com/umautobots/gta-postprocessing for postprocessing code.

@TommyAnqi

@JiamingSuen Hi, have you figured out how to decode the true depth value from the depth buffer?

@JiamingSuen
Author

@TommyAnqi As the author mentioned, the depth is already linearized. So no decoding is needed.

@wujiyoung

@JiamingSuen I want to access the real depth value in a specific metric unit, such as meters. What should I do?

@racinmat
Contributor

@wujiyoung The depth is in NDC, so you need to recalculate it using the inverse of the projection matrix.
I describe it in my master's thesis, where I inspected the GTA V visualization pipeline: https://dspace.cvut.cz/bitstream/handle/10467/76430/F3-DP-2018-Racinsky-Matej-diplomka.pdf?sequence=-1&isAllowed=y
See section 3.6.3, where I describe the relation between NDC and camera space. Camera space is in meters, so after transforming from NDC to camera space you will have it in meters.
It is described in more detail in section 5.1, where I demonstrate the projection of points from meters to NDC and back.
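The inverse-projection step described above can be sketched like this (assuming you already have the 4x4 projection matrix; the function name is mine):

```python
import numpy as np

def ndc_to_camera(ndc_point, proj_matrix):
    """Map a point from NDC back to camera space via the inverse projection.
    ndc_point: (x, y, z) in NDC; proj_matrix: 4x4 projection matrix.
    Returns camera-space coordinates (in meters for GTA V)."""
    p = np.append(np.asarray(ndc_point, dtype=float), 1.0)  # homogeneous coords
    cam = np.linalg.inv(proj_matrix) @ p
    return cam[:3] / cam[3]  # divide by w to leave homogeneous coordinates
```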

@barcharcraz
Contributor

Well, it's in "meters". Do be a little careful: while things like cars and people should be the right size, things like road lengths and building distances may be a little distorted, just because it's a game.

@wujiyoung

@racinmat thank you so much, your master's thesis is very useful to me. I have two more questions.

  1. We can get an entity's position (x, y, z) via the native call ENTITY::GET_ENTITY_COORDS, but how can we get the value of w, which is needed in the transformation from world coordinates to camera coordinates in section 3.6.2?
  2. How can we get the distance values l, r, t, b in section 5.1?

@racinmat
Contributor

@wujiyoung You won't get a w value, since it is the extra coordinate in homogeneous coordinates. The usual way to treat points in homogeneous coordinates is to set w to 1, do all your calculations, and then divide each point by its w value, which normalizes it from homogeneous coordinates back to 3D.
I did not calculate l, r, t, b directly, since I need them only as fractions in the projection matrix, but they can be calculated from the field of view and the height/width ratio. The exact construction of the projection matrix from the field of view, width, height, and near clip is in this function: https://github.com/racinmat/GTAVisionExport-postprocessing/blob/master/gta_math.py#L159
It is part of my repo, where I perform various postprocessing of data gathered from GTA.
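A simplified sketch of such a construction is below. It is not a verbatim copy of the linked gta_math.py; the reversed-depth layout (near plane mapping to 1, far/sky to 0) and the far-clip value are assumptions consistent with observations elsewhere in this thread, and the camera looks down negative Z:

```python
import numpy as np

def construct_proj_matrix(H, W, fov, near_clip, far_clip=10003.814):
    """Build a D3D-style perspective projection matrix from the vertical
    field of view (degrees), image size, and clip planes. Reversed depth:
    z_ndc = 1 at the near plane, 0 at the far plane."""
    y_scale = 1.0 / np.tan(np.radians(fov) / 2.0)  # cot(fov / 2)
    x_scale = y_scale * H / W                      # correct for aspect ratio
    n, f = near_clip, far_clip
    return np.array([
        [x_scale, 0.0,     0.0,          0.0],
        [0.0,     y_scale, 0.0,          0.0],
        [0.0,     0.0,     n / (f - n),  f * n / (f - n)],
        [0.0,     0.0,     -1.0,         0.0],
    ])
```

With these conventions, a point on the near plane (camera-space z = -near_clip) projects to NDC depth 1 and a point at the far plane projects to 0.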

@bachw433

@racinmat I used your code to transform NDC into meters, but I don't think the result is correct.
I just adjusted the input size to W=1024 and H=768:
construct_proj_matrix(H=768, W=1024, fov=50.0, near_clip=1.5)
Are there other things I have to change?
(screenshots p1, p2, p3 attached)

@racinmat
Contributor

Are you sure the near_clip is correct? You need to obtain it from the camera parameters; the default near_clip is 0.15. I left 1.5 here because I was trying some things at the time of writing the code. An incorrectly set near_clip messes things up a lot. The same goes for fov (the GTA camera field of view), but I think 50 is the default value.

@xiaofeng94

Hi @racinmat, thanks for sharing. According to your thesis, it seems that the transformation from camera coordinates to NDC is a perspective projection. So, does it matter whether P_{3,2} (in the projection matrix) is 1 or -1? If not, may I use a standard perspective projection matrix (like the one provided by DirectX) to get the depth values in meters?

@racinmat
Contributor

It matters, because in RAGE (the engine used by GTA V), in the camera-view coordinate system the camera points in the direction of negative Z, so positive Z values are behind the camera and negative values are in front of it. AFAIK the -1 handles this negative Z coordinate. But that is just orientation, so if you use 1 instead of -1, it should still work if you care only about depth in meters and not the whole transformation into camera view.
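The orientation point above can be checked numerically; this toy snippet looks only at the last row of the projection matrix, which produces w:

```python
import numpy as np

# With P[3,2] = -1, a point in front of the camera (negative Z in RAGE's
# camera space) yields a positive w, so the perspective divide is well-defined;
# with +1 the same point would give w < 0 and flip the projected coordinates.
last_row = np.array([0.0, 0.0, -1.0, 0.0])
cam_point = np.array([0.0, 0.0, -10.0, 1.0])  # 10 m in front of the camera
w = last_row @ cam_point
assert w > 0  # w == 10.0
```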

@bachw433

bachw433 commented Aug 30, 2018

Thanks, @racinmat.
I misunderstood: 0.15 is another default number for the near clip, just like your magic far clip. XD

Besides, I found an interesting thing.
From the function
var data = GTAData.DumpData(Game.GameTime + ".tiff", new List(wantedWeather));
you can get the projection matrix directly by calling data.ProjectionMatrix,
and with your code, NDC can be transformed into meters perfectly.

(data.ProjectionMatrix differs depending on whether there is sky (infinite depth) in the frame or not,
but only the matrix with sky can be used perfectly for the depth transformation.)

@racinmat
Contributor

racinmat commented Aug 31, 2018

Yes, you can use the projection matrix directly, but it's inaccurate.
If you look at how that projection matrix is calculated, the model_view_projection matrix and the model_view matrix are obtained, and the projection matrix is obtained by multiplying the former by the inverse of the latter. Because of the inversion of the model_view matrix and the matrix multiplication, you face numerical instabilities, and the resulting projection matrix is inaccurate.
Constructing the projection matrix from parameters avoids these numerical-stability issues.
If you compare them, the constructed matrix is slightly more precise than the one obtained from code.
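A synthetic illustration of that round trip (the matrices here are stand-ins, not GTA's actual ones):

```python
import numpy as np

# The game dumps MVP and MV, so P must be recovered as MVP @ inv(MV);
# the inversion plus the multiplication reintroduce floating-point error.
P = np.array([[1.2, 0.0,  0.0,    0.0],
              [0.0, 1.6,  0.0,    0.0],
              [0.0, 0.0,  0.0015, 0.0150],  # reversed-depth-style third row
              [0.0, 0.0, -1.0,    0.0]])
MV = np.eye(4)
MV[:3, 3] = [12.3, -4.5, 678.9]             # some camera translation
MVP = P @ MV
P_recovered = MVP @ np.linalg.inv(MV)       # what postprocessing would compute
err = np.abs(P_recovered - P).max()         # small, but nonzero in general
```

Constructing P directly from fov, width, height, and the clip planes sidesteps this round trip entirely.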

@Yannnnnnnnnnnn

@racinmat Thanks for your work on depth post-processing, but it is hard to understand for people who have no background in rendering.
To simplify: we only have to use the following formula to convert the depth value gathered in the game to real depth in meters,
where f = 10003.814, n = 0.15 (the default), d_game is the z-buffer value, and d_real is the real depth value in meters.
(formula image)
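A sketch of that conversion, under the assumption that the buffer stores reversed hyperbolic depth (near plane → 1, sky/far → 0), consistent with the projection-matrix discussion above:

```python
def gta_depth_to_meters(d_game, n=0.15, f=10003.814):
    """Convert a raw depth-buffer value to meters, assuming reversed
    hyperbolic depth: d_game = 1 at the near plane, 0 at the far plane.
    n: near clip (GTA V default), f: far clip reported in this thread."""
    return n * f / (n + d_game * (f - n))
```

Sanity checks: d_game = 1 gives the near-clip distance (0.15 m) and d_game = 0 gives the far-clip distance.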

@vivasvan1

vivasvan1 commented Apr 19, 2021

@Yannnnnnnnnnnn Thank you for helping dummies like me 👍! I appreciate your comment! And thanks to the original genius @racinmat. <3
@racinmat I hope one day I will be able to understand your work in depth & in depth.

@GehaoZhang6

@Yannnnnnnnnnnn I have collected some raw files, but the values converted according to the formula are incorrect. Why is this happening?
