
Use config flag for locality-aware mpi #941

Open
wants to merge 1 commit into master
Conversation

@geraldc-unm commented Jun 28, 2023

Add configure option for locality-aware MPI.

When creating the Comm Pkg, comm_pkg->use_neighbor must be set to 1 (it defaults to 0).
Since we observed that node-aware MPI performs worse at the finest levels, I added a level-threshold setting that controls the switchover to node-aware MPI. By default, all levels use node-aware MPI.

@geraldc-unm (Author):

I can remove the print statements; I was using them for debugging.

@victorapm (Contributor):

@geraldc-unm, this looks nice!

Which path should I pass to --with-node-aware-mpi-include to configure hypre with node-aware MPI support? i.e., which is the dependency library here?

Thanks!

@@ -75,6 +86,11 @@ typedef struct _hypre_ParCSRCommPkg
HYPRE_Int num_recvs;
HYPRE_Int *recv_procs;
HYPRE_Int *recv_vec_starts;
HYPRE_Int use_neighbor;
#ifdef HYPRE_USING_NODE_AWARE_MPI
long *global_send_indices;
Contributor:

Maybe it's better to use hypre_uint64, see

typedef uint64_t hypre_uint64;
or hypre_longint here

{
arg_index++;
hypre_HandleNodeAwareSwitchoverThreshold(hypre_handle()) = atoi(argv[arg_index++]);
if (hypre_HandleNodeAwareSwitchoverThreshold(hypre_handle()) > 0) {
Contributor:

Should hypre_HandleNodeAwareSwitchoverThreshold be an option internal to BoomerAMG instead of a global variable?

Author:

This was a first approach to making the value accessible. I'm not sure how we could make it part of BoomerAMG, but yes, it should only be used for BoomerAMG.

Contributor:

Thank you! @liruipeng, do you support the idea of moving this switch threshold inside BoomerAMG?

Contributor:

Hi @victorapm, I totally agree with you. It would be much better if we could move it into the AMG struct. But a technical issue is that we need to access this value in some functions involved in the solve phase, and the AMG struct is not passed along to those functions. So we ended up with a global variable for its simplicity. Any suggestions? Thanks.

Contributor:

@liruipeng thanks for providing the context. Now I can see it better! See my follow-up comments at #941 (comment) and #941 (comment)
Sorry to bug you with this, but I think it's better if we have fewer global variables.

@geraldc-unm (Author):

> @geraldc-unm, this looks nice!
>
> Which path should I pass to --with-node-aware-mpi-include to configure hypre with node-aware MPI support? i.e., which is the dependency library here?
>
> Thanks!

As of right now, MPI Advance must be cloned and built separately. I run:

CC=mpicc CXX=mpicxx ./configure --enable-persistent --with-MPI-include=<path to mpi advance>/src --with-MPI-lib-dirs=<path to mpi advance>/build/src/ --with-MPI-libs=mpi_advance --with-node-aware-mpi

And I build MPI Advance with:

mkdir build; cd build; CC=mpicc CXX=mpicxx cmake ..; make

@victorapm (Contributor):

Thanks @geraldc-unm!

Where can I find the MPI advance library?

@geraldc-unm (Author) commented Sep 14, 2023:

> Thanks @geraldc-unm!
>
> Where can I find the MPI advance library?

https://github.com/mpi-advance/locality_aware

Thanks so much for the review

@@ -286,6 +286,15 @@ hypre_BoomerAMGCycle( void *amg_vdata,
hypre_GpuProfilingPushRange(nvtx_name);
while (Not_Finished)
{
#ifdef HYPRE_USING_NODE_AWARE_MPI
if (level >= hypre_HandleNodeAwareSwitchoverThreshold(hypre_handle()))
Contributor:

@liruipeng This switch threshold could be moved inside the AMG struct

Author:

For this one, I think it could live in the AMG struct, as long as it can still be set when parsing command-line args, which is where the value comes from.

Contributor:

OK. We have access to hypre_ParAMGData in this function. Do we need hypre_HandleNodeAwareSwitchoverThreshold in other functions that don't have that access?

Contributor:

I can only see it being used at par_cycle.c

Author:

So I didn't check these in as permanent changes to the ij driver, but to properly use node-aware MPI you must set hypre_HandleNodeAwareSwitchoverThreshold; otherwise it will use node-aware MPI for every AMG level, which is suboptimal.

This brings up 2 things:

  1. NodeAwareSwitchoverThreshold must be settable from outside, e.g., from the ij.c driver. It's quite nice to have it as a command-line arg to test different thresholds without recompiling.

  2. We probably want to document this and how to connect to MPI Advance. I can compile the steps, and maybe we could include them somewhere in the wiki? But I don't know how to contribute to the wiki.

Contributor:

  1. Yes, there should be a Set routine for that, e.g., HYPRE_BoomerAMGSetNodeAwareSwitchLevel
  2. Great idea. I'm not sure about the access, but if you prepare something in markdown format, we can add it to the wiki.

@liruipeng (Contributor), Sep 20, 2023:

Documenting it in the Wiki is a good idea.

#ifdef HYPRE_USING_NODE_AWARE_MPI
if (level >= hypre_HandleNodeAwareSwitchoverThreshold(hypre_handle()))
{
hypre_HandleUsingNodeAwareMPI(hypre_handle()) = 1;
@victorapm (Contributor), Sep 20, 2023:

@liruipeng however, this other global variable (which turns off/on node aware MPI) cannot. One idea is to move it to the comm_pkg struct.

PS: we could have it defaulted to zero, and implement something like hypre_CommPkgSetNodeAwareMPI to turn comm_pkg->node_aware_mpi on/off

Author:

I think to do that, comm_pkg would need to be accessible within hypre_BoomerAMGCycle, which I don't think it is, unless I'm mistaken. I don't see it in hypre_ParAMGData. Maybe it could be added as a field there, but it would need to be properly set and maintained.

@victorapm (Contributor), Sep 20, 2023:

You can use

#define hypre_ParCSRMatrixCommPkg(matrix) ((matrix) -> comm_pkg)

e.g., hypre_ParCSRMatrixCommPkg(A_array[lvl])

Contributor:

I could be wrong... but at what granularity do we want to control hypre_HandleUsingNodeAwareMPI? If we put it in the comm_pkg, it would be per comm_pkg/ParCSRMatrix. I thought it was a more global thing, like the whole AMG solver. Again, putting it into the AMG struct is ideal, if we can find a clean way to do it.

Contributor:

As far as I can see, the node aware option has been implemented only for the IJ interface. If we want it on the other interfaces in the future, we could extend their respective comm_pkg structs to handle this option.

Note I'm suggesting to put using_node_aware_mpi under the comm_pkg struct here, i.e., not under the AMG struct as for the other case :)

Author:

This may still work, but it's my understanding that we have multiple matrices per level, each with its own comm pkg. The using_node_aware toggle logically applies to a whole level with multiple matrices. So I think we would need to get the comm pkgs for both A and P and set them to use node-aware MPI, but I think that would work. What do y'all think?

@victorapm (Contributor), Sep 20, 2023:

Thanks @geraldc-unm! I spoke to Rui Peng and shared that this is a better approach in my opinion. Let's think a bit more about moving the variable inside comm_pkg...
