add low NHI HCDs #17
@londumas, this must be the last version produced by Thomas, with N_HI_min = 17.2. Thomas saw that setting N_HI_min = 17.2 reduces the number of DLAs with N_HI > 20 by a factor 5. This seems compatible with having a correct distribution when N_HI_min = 20. If that is the case, we can focus on understanding what happens when we reduce N_HI_min.
@londumas, in the code we directly call … I am confused about what's going on 😕, though, because if we are using …
@fjaviersanchez, it looks like the --nmin parameter of DLA_Saclay.py only goes into dNdz(). I believe it should also go into add_DLA_table_to_object_Saclay() so that it reaches get_N().
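The plumbing being described can be sketched as follows. The function names mirror those quoted in the thread (dNdz, get_N, add_DLA_table_to_object_Saclay), but the bodies are placeholders, not the real DLA_Saclay.py code; the point is only that the --nmin value must be forwarded to both dNdz() and get_N().

```python
import argparse

# Hypothetical sketch of the --nmin plumbing; function names follow the
# thread, bodies are toy placeholders, not the real DLA_Saclay.py logic.

def dNdz(z, Nmin=20.0):
    # placeholder incidence rate of absorbers with log10(N_HI) > Nmin;
    # lowering Nmin strongly increases the count, as in the discussion
    return 0.2 * (1.0 + z) ** 1.5 * 10.0 ** (20.0 - Nmin)

def get_N(n_absorbers, Nmin=20.0):
    # placeholder: draw column densities starting at the threshold
    return [Nmin + 0.1 * (i % 10) for i in range(n_absorbers)]

def add_DLA_table_to_object_Saclay(obj, z, Nmin=20.0):
    # the fix under discussion: forward Nmin so get_N() sees it too
    n = int(round(dNdz(z, Nmin=Nmin)))
    return get_N(n, Nmin=Nmin)

parser = argparse.ArgumentParser()
parser.add_argument("--nmin", type=float, default=20.0)
args = parser.parse_args([])  # e.g. ["--nmin", "17.2"] on the command line
columns = add_DLA_table_to_object_Saclay(None, z=2.5, Nmin=args.nmin)
```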
Ok, so it's likely that there was some problem in the way N_HI_min was reduced.
I did use the --nmin parameter in both the dNdz() and get_N() functions (I added the use of --nmin in add_DLA_table_to_object_Saclay()).
I did some tests of pyigm and DLA_Saclay.py
This code gives the plot below and shows that get_N() produces the proper distribution of N_HI up to an ad hoc normalisation (the orange curve is 5.64E12 * exp(f(N_HI))). Then I tested dNdz():
In [42]: dNdz(z, Nmin=20) / dNdz(z, Nmin=17.2) … This does not seem consistent with the plot above. Is it that fN_default.calculate_lox() is not doing what we believe? I went to github/pyigm and did not find much documentation, but maybe I did not look at the proper place.
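The consistency argument above can be checked numerically: the ratio dNdz(z, Nmin=20) / dNdz(z, Nmin=17.2) should equal the fraction of the column-density distribution lying above log10(N_HI) = 20. The power-law slope used below for f(N_HI) is an illustrative assumption, not the pyigm fit.

```python
import numpy as np

# Toy cross-check: compare the integral of an assumed f(N_HI) above two
# thresholds. The slope beta is illustrative, not the pyigm model.
beta = -1.5
logN = np.linspace(17.2, 22.5, 100_000)
fN = 10.0 ** (beta * (logN - 17.2))   # toy column-density distribution

# uniform grid, so a plain sum approximates the integral ratio
frac_above_20 = fN[logN > 20.0].sum() / fN.sum()
print("expected dNdz(20)/dNdz(17.2) for this toy f(N):", frac_above_20)
```

For this toy slope the fraction is tiny (of order 10^-4), which illustrates how sensitive the absolute number of DLAs is to the threshold.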
By the way, @fjaviersanchez from igmhub/LyaCoLoRe#32 |
@jmarclegoff thanks a lot for the plots and for following up on this. I think that you are right and …
I tested this with the data that you provided above and it seems to be working. What I did is the following:
As for the London mocks: yes, I am waiting for James to produce the new mocks but I don't know exactly @andreufont and @jfarr03's plans. In any case, I suspect that they'd need to change … Please feel free to test this fix and let me know if there are any other issues. Thanks again!
@fjaviersanchez Thanks for the suggestion. I tested what you posted (I just copied/pasted the new dNdz function into the DLA_Saclay.py code). I then did 2 other runs, with a factor 20000 in front of dNdz:
I forgot:
thanks for testing this so quickly @TEtourneau. Maybe I understood it incorrectly, but it sounds like there's a normalization problem. Do you think that using a factor of …
@fjaviersanchez, it will indeed produce DLAs with the proper N_HI distribution and about 0.2 DLAs per spectrum. So in the short term it will solve the problem with an ad hoc factor. But I think it would be nice to understand where this factor comes from.
My guess (maybe wrong) is that this factor is related to the volume of the simulation or the cell, but I still have to confirm it. |
@fjaviersanchez we have put in just a constant ad hoc factor, but this factor probably depends on z, so at some point we should try to understand the factor to get its z dependence.
The distribution of n_DLA / n_QSO vs z used to be flat around 0.2 in reasonable agreement with the data. With the constant ad hoc factor @TEtourneau got a ratio that is decreasing steeply with z. |
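The z dependence being discussed can be sketched as follows: the mean number of DLAs per quasar sightline is the integral of dn/dz from the start of the forest to the quasar redshift, so n_DLA / n_QSO is flat in z only if dn/dz has the right redshift dependence. Both the dn/dz shape and the forest limits below are illustrative assumptions, not the values used in the mocks.

```python
import numpy as np

# Sketch under assumed forms: integrate a toy dn/dz along the forest to
# get the expected DLAs per quasar as a function of quasar redshift.

def dndz(z):
    return 0.1 * (1.0 + z) ** 1.5   # assumed DLA incidence per unit z

def expected_dla_per_qso(z_qso, z_forest_min=1.8, nstep=1000):
    # simple Riemann sum of dn/dz over the forest range
    z = np.linspace(z_forest_min, z_qso, nstep)
    dz = (z_qso - z_forest_min) / nstep
    return float(np.sum(dndz(z)) * dz)

for zq in (2.2, 2.8, 3.4):
    print(zq, expected_dla_per_qso(zq))
```

With a constant multiplicative correction, any residual z dependence of the mock dn/dz relative to the true one shows up directly as a tilt in this ratio, which is consistent with the steep decrease reported above.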
@jmarclegoff, looks very good. Indeed @fjaviersanchez, I think it seems to be linked to the simulation volume or something of the sort. It would be nice to find that out at some point instead of using ad hoc corrections. But what @TEtourneau and @jmarclegoff did is very good for tests.
@TEtourneau, can I have a look at your new catalog? Could you send me the path on NERSC?
@londumas, this is not a big catalogue, it was just for a test.
Here is a catalog with the "new" dNdz: This one includes the new dNdz function, the extra factor 20000*6.4 and a linear redshift compensation.
@TEtourneau, looks good. Could you tell me which is the catalog of quasars?
The master.fits file is here: |
@TEtourneau, looks very good. Do you have a full mock ready somewhere with all the improvements? I'd be happy to do a full check of the DLAs once that is the case.
Actually there are 4 realisations in .../saclay/v4.4/ |
@TEtourneau, thanks. I see three in |
you can use the realisations v4.4.7, 4.4.8 and 4.4.9. |
@TEtourneau, there is still a hard cut at NHI=20: … gives …
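A quick, self-contained way to spot a hard cut like this is to look at the minimum of log10(N_HI) and count catalogue entries below 20. The synthetic column below stands in for the NHI column of the master_DLA catalogue (the real column name and file access are not shown here).

```python
import numpy as np

# Toy stand-in for the NHI column of a DLA catalogue; in practice this
# array would be read from the master_DLA FITS table.
rng = np.random.default_rng(0)
nhi = 17.2 + rng.exponential(scale=0.7, size=100_000)

counts, edges = np.histogram(nhi, bins=np.arange(17.0, 22.6, 0.1))
n_below_20 = int((nhi < 20.0).sum())
print("min log10(N_HI):", nhi.min())
print("entries below 20:", n_below_20)
# n_below_20 == 0 would mean the lowered nhi_min option did not propagate
```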
I'll have a look. It is very likely that I forgot to set the new options (like nhi_min=17.2).
Looks good now. The raw distribution is fine; however, there are still some discrepancies in the n(z) plot. But that plot is not from me but from @alxogm, and I don't know if the code still works at NHI<20 or if I am running it correctly.
@londumas, I guess the factor 10 in the amplitude between mock and data in the first plot is meaningless, as can be expected from the third plot.
Hi, I’m just coming back from holidays. I can check tomorrow, Wednesday, whether there is any issue in plot two on my side.
@alxogm, thank you for looking into that. Instead of updating the QA, can you send me the updated code so that I can do it myself, in order not to lose all the other changes to other cells? Also, could you tell us why you had to split into three parts?
Hi @alxogm - I don't follow the reasoning. Why would the distance change when you change the limit of N_HI considered? |
We should aim at having a good fit of f(z,N) at a fixed z, covering the full range of N, without having to tweak anything. |
@andreufont yes, we shouldn't tweak anything. This shows the normalization I was using was wrong; I'm fixing it. The plot above only shows, I think, that the mocks are ok, but it is not the final plot. @londumas yes, I'll send the update of the code once I've tested that it works OK with no splitting into regions, but normalizing correctly.
fixed |
Sorry that I'm reopening this issue after so long. I checked my small piece of code to compute the fNHI against the London v9 mocks, and it works fine as is; there is no need to split into regions as I was doing above (and I never really improved on it). It also gives a very similar result to an independent measurement from James on those London mocks. As seen from James's plots of dn/dz for different NHI ranges, the issue might be precisely here. For the computation of fNHI this matters because if you have more DLAs in some NHI bin, their redshifts will contribute to the total absorption length, which influences the full fNHI shape. When I was computing the fNHI by regions, each region was only scaled by the absorption length in that NHI range, not the total one, making it more "similar" to the expected one, but it was incorrect.
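The normalisation point above can be sketched as follows: f(N_HI, X) is the number of absorbers per linear column-density bin per unit total absorption length dX summed over all sightlines, so every N_HI bin must be divided by the same total dX, never by a per-region one. The flat-ΛCDM dX/dz kernel, binning, and toy catalogue below are illustrative assumptions, not the actual analysis code.

```python
import numpy as np

def dX_dz(z, Om=0.3147):
    # absorption-distance kernel dX/dz = (1+z)^2 * H0 / H(z), flat LCDM
    return (1.0 + z) ** 2 / np.sqrt(Om * (1.0 + z) ** 3 + 1.0 - Om)

def f_NHI(logN_abs, z_qso, zmin=1.8, bins=np.arange(17.2, 22.3, 0.2)):
    # total absorption length summed over all sightlines: the SAME
    # normalisation is applied to every N_HI bin
    dX_total = 0.0
    for zq in z_qso:
        zgrid = np.linspace(zmin, zq, 200)
        dX_total += dX_dz(zgrid).sum() * (zq - zmin) / zgrid.size
    counts, edges = np.histogram(logN_abs, bins=bins)
    dN = 10.0 ** edges[1:] - 10.0 ** edges[:-1]  # linear-N bin widths
    return counts / (dN * dX_total), edges

# toy catalogue standing in for the mock DLA table
rng = np.random.default_rng(1)
logN = 17.2 + rng.exponential(scale=0.6, size=50_000)
zq = rng.uniform(2.1, 3.5, size=1000)
f, edges = f_NHI(logN, zq)
```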
I managed to use the dn/dz and f(N_HI) distribution from pyigm (using the functions from LyaCoLoRe: https://github.com/igmhub/LyaCoLoRe/blob/master/py/DLA.py) Everything looks good: we recover the input from pyigm, and the other distributions agree with the mocks from LyaCoLoRe. The ratio dn_dla / dn_qso doesn't agree with the ratio measured in data. To do a proper comparison, we should look at this ratio for the DLA reconstructed with the DLA finder algorithm. |
@fjaviersanchez, @TEtourneau tested adding the low NHI HCDs and he does not recover the proper number density of NHI~20 HCDs. Do you know what is happening? It is good at ~17 and ~22. Maybe you have a power law instead of the pyigm model.
using /global/cscratch1/sd/tetourne/DesiMocks/v4.3.0/mock_0/output/master_DLA_3.fits
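The power-law guess can be illustrated numerically: a single power law for f(N_HI) anchored to a curved (pyigm-like) log f(N) at ~17 and ~22 will match at both ends but miss the density around N_HI ~ 20. The curved model below is a toy stand-in for the pyigm f(N), not the real fit.

```python
import numpy as np

# Toy concave log10 f(N_HI) versus a power law through its endpoints.
logN = np.linspace(17.2, 22.0, 481)
x = logN - 17.2
curved = -1.2 * x - 0.15 * x ** 2                  # toy log10 f(N), concave
slope = (curved[-1] - curved[0]) / (x[-1] - x[0])  # power law through endpoints
powerlaw = curved[0] + slope * x

i20 = np.argmin(np.abs(logN - 20.0))
mismatch = powerlaw[i20] - curved[i20]             # dex deficit at N_HI ~ 20
print("mismatch at log N_HI = 20 (dex):", mismatch)
```

For this toy shape the power law undershoots by close to an order of magnitude at log N_HI = 20 while agreeing at the endpoints, which matches the reported pattern of "good at ~17 and ~22, wrong at ~20".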