Configuration for DUNE-FD-HD full 10kt with hydra option #108

Open · wants to merge 6 commits into base: develop

Conversation

@HaiwangYu (Contributor) commented Jun 29, 2024

@absolution1 (Contributor) left a comment:

Hi @HaiwangYu,
Thanks for sending this over. I've attached comments.

Some other items:

  • a CMakeLists.txt file is missing in dune10kt-hd
  • a line needs to be added to the CMakeLists.txt in DUNEWireCell so the dune10kt-hd folder is included in the compilation

Finally, I've run a quick test with a simple particle-gun muon in the 10kt geometry. The muon passes through 5 of the TPC volumes, and I'm seeing a peak memory usage of 6.4 GB. Is that an expected amount?

@@ -1797,4 +1797,48 @@ protodunehd_nfspimg : {
}
}



dune10kt_hd_sim_nfsp : {
Inline review comment (Contributor):
Could this name be changed to say "dune10kt_horizdrift_sim_nfsp"? That would keep it in line with the rest of the naming.

@tomjunk (Member) commented Aug 23, 2024

still under review?

@HaiwangYu (Contributor, Author) commented Aug 23, 2024

@tomjunk I will make some changes to address @absolution1's comments this weekend. Somehow, I missed those earlier.

@HaiwangYu (Contributor, Author) commented Aug 26, 2024

@absolution1, I pushed an update to address your comments. Could you double-check?

> Finally, I've run a quick test with a simple particle-gun muon in the 10kt geometry. The muon passes through 5 of the TPC volumes, and I'm seeing a peak memory usage of 6.4 GB. Is that an expected amount?

I saw a similar number; see pages 8 and 9 of this talk:
2024-05-13_hydra3-full-hd.pdf

You can see from page 8 that 6 APAs of HD also use about 6 GB of resident memory.

Do you think more optimizations are needed?

@absolution1 (Contributor) commented

Hi @HaiwangYu and @tomjunk (apologies for the delay, I was on holiday)

This is OK to merge.

I do think we need to explore further memory optimisations, but that shouldn't stop this PR going in.

@aolivier23 (Contributor) commented

Looks like we have @absolution1's approval, but we need to hit the "review" button. I'm going to review this and try to merge it Friday. Please leave a message here if you want me to wait.

@absolution1 (Contributor) left a comment:

Thanks for checking in on this, @aolivier23.
I never actually took a look before; I have now. I did have a couple of small questions. Once they're addressed, this can be merged.
@HaiwangYu, could you take a quick look?

cathode: centerline - cpa_plane,
}
],
} for n in std.range(0,11)], // twelve anodes
Inline review comment (Contributor):
This sounds like a setup for the 1x2x6. Is this line important?
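
For context, here is a minimal Jsonnet sketch of the pattern being questioned, with hypothetical field names and a placeholder pitch value rather than the actual dune10kt-hd parameters. The point is that the upper bound passed to std.range fixes how many anode volumes are described: std.range(0, 11) yields twelve entries, which is why the line reads like a 1x2x6-style (twelve-APA) setup.

```jsonnet
// Illustrative only: the shape of the "[ { ... } for n in std.range(0, 11) ]"
// comprehension.  Field names and the pitch value are placeholders.
local wc = { m: 1.0 };            // stand-in for the wirecell units helper

{
  local apa_pitch = 2.3 * wc.m,   // hypothetical APA-to-APA spacing

  // One object per anode.  std.range(0, 11) produces twelve entries, so this
  // bound is what ties the block to a twelve-anode layout; a full 10kt
  // description would need a different count here.
  volumes: [
    {
      name: 'apa%d' % n,
      centerline: n * apa_pitch,
    }
    for n in std.range(0, 11)
  ],
}
```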

},

daq: super.daq {
nticks: 6000,
Inline review comment (Contributor):
Does this get overwritten later on? If we change the number of ticks in LArSoft, will this line supersede the LArSoft value?
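
For illustration only, one way to keep the LArSoft-side tick count authoritative would be to read it from a Jsonnet external variable instead of hard-coding 6000, so the value supplied by the job configuration wins. This is a sketch of the idea, not the actual dunereco/larwirecell wiring, and the extVar name "nticks" is an assumption.

```jsonnet
// Sketch only: take nticks from an external variable supplied by the caller,
// e.g. `jsonnet --ext-str nticks=6000 simparams.jsonnet`, so the jsonnet does
// not carry its own copy of the value.  The variable name is hypothetical.
local nticks = std.parseInt(std.extVar('nticks'));

{
  daq: {
    nticks: nticks,
  },
}
```

Whether something like this is needed here depends on how the parameters are actually passed from LArSoft into the jsonnet, which is exactly the question above.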

Reply (Member):
Is this still under review? I think we have a general issue with LArSoft plugins that do not share common configurations. There is a manual step needed to align the configurations across domains owned by different software components. We had a discussion about this in the future frameworks requirements workshop, but I don't think we had a good solution.

Reply (Contributor):
I've pinged Haiwang to take a look.
We've had some success in combating this in the FD by linking the params between configs with @local. That obviously only works when both configs are fcl-based.

@HaiwangYu (Contributor, Author) commented
Hi @absolution1, I updated my branch for this PR. Could you take a look? Thanks!

@absolution1 (Contributor) left a comment:

Thanks, @HaiwangYu. The changes look good. Before approving, can I just confirm where the far detector dimensions were taken from? The GDML?

@HaiwangYu (Contributor, Author) commented

Hi @absolution1, very good question.

First, the x (drift) positions are configured in the jsonnet, while the y/z positions are set by the wire boundaries from the GDML. So, for the jsonnet configuration, we only need to consider the x-axis dimensions.

For those, I think the current dimensions are derived from hd1x2x6 and adjusted to match the GDML. Comparing these two files, I think the only parameter adjusted is apa_cpa:

  • 1x2x6: 3.63075*wc.m
  • fullhd: 3.635265*wc.m

https://github.com/DUNE/dunereco/blob/cf74225fa944ac60a56e41679ded241a0c636daf/dunereco/DUNEWireCell/dune10kt-hd/simparams.jsonnet
https://github.com/DUNE/dunereco/blob/develop/dunereco/DUNEWireCell/dune10kt-1x2x6/params.jsonnet

After this adjustment, the wire positions are now synchronized to the GDML; see pages 3-6 of this talk: 2024-05-13_hydra3-full-hd.pdf
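
To make the comparison concrete, here is a minimal Jsonnet sketch of the drift-axis numbers quoted above; the surrounding structure is illustrative, not a copy of the linked params files.

```jsonnet
// Values are the ones quoted in this thread; everything else is illustrative.
local wc = { m: 1.0 };  // stand-in for the wirecell units import

{
  apa_cpa_1x2x6: 3.63075 * wc.m,    // dune10kt-1x2x6/params.jsonnet
  apa_cpa_fullhd: 3.635265 * wc.m,  // dune10kt-hd/simparams.jsonnet

  // The full-HD value is the 1x2x6 one nudged to match the GDML:
  // a shift of about 4.5 mm along the drift (x) axis.
  shift: self.apa_cpa_fullhd - self.apa_cpa_1x2x6,
}
```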
