
Final charm backend #644

Open

wants to merge 19 commits into base: devel
Conversation

epmikida

Here is the PR for merging the Charm++ backend into FleCSI.

To build and test the Charm++ backend:

  1. Clone Charm++ from https://github.com/UIUC-PPL/charm
  2. Build Charm++ by running `./build charm++ mpi-linux-x86_64 --build-shared` from within the top-level charm directory
  3. Build all other FleCSI dependencies as if you were building the Legion backend (except that Legion itself is not needed)
  4. Set the following CMake variables when configuring FleCSI:

FLECSI_RUNTIME_MODEL charm
CMAKE_CXX_COMPILER <path_to_charm/bin/charmc>
CMAKE_CXX_FLAGS -c++-option --std=gnu++17
CMAKE_C_COMPILER <path_to_charm/bin/charmc>
CMAKE_C_FLAGS -c++-option --std=gnu++17
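The steps above can be sketched as a single configure command. This is only a sketch: `<path_to_charm>` and the `-S`/`-B` directories are placeholders, and the locations of your other dependencies will differ.

```shell
# Configure FleCSI with the Charm++ backend (paths are placeholders).
cmake -S flecsi -B build \
  -DFLECSI_RUNTIME_MODEL=charm \
  -DCMAKE_CXX_COMPILER=<path_to_charm>/bin/charmc \
  -DCMAKE_CXX_FLAGS="-c++-option --std=gnu++17" \
  -DCMAKE_C_COMPILER=<path_to_charm>/bin/charmc \
  -DCMAKE_C_FLAGS="-c++-option --std=gnu++17"
cmake --build build
```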

Contributor

@opensdh left a comment

I did one pass; there may be more to say after certain simplifications.

flecsi/data/backend.hh (outdated, resolved)
the Charm++ backend.
*/

struct bind_accessors_t : public util::tuple_walker<bind_accessors_t> {

Let's please not add usage of util::tuple_walker; now that we use C++17, it's very easy to provide something like

template<class T>
void operator()(T && tup) {
  std::apply([this](auto &... tt) { (visit(tt), ...); }, std::forward<T>(tup));
}

Also, new classes shouldn't use _t (which signifies POSIX types and trait aliases).

*/
template<typename DATA_TYPE, size_t PRIVILEGES>
void visit(data::accessor<data::dense, DATA_TYPE, PRIVILEGES> & accessor) {
flog_assert(buf_.size() % sizeof(DATA_TYPE) == 0, "Bad buffer size\n");

This is the one use of buf_, and (as per the comment above) it doesn't seem appropriate (what if there were more than one accessor in a call?).

// is_constructible_v<float&,double&> instead.
template<class... PP, class... AA>
auto
serial_arguments(std::tuple<PP...> * /* to deduce PP */, AA &&... aa) {

If all this machinery (including all the serialization in task_wrapper.hh) is needed for multiple backends, it should be factored out somewhere rather than duplicated.

template<class, class>
struct tuple_prepend;
template<class T, class... TT>
struct tuple_prepend<T, std::tuple<TT...>> {

Fair warning: this is probably going to go away (it turned out that having "MPI tasks" have an assumed number of point tasks was unnecessarily restrictive). (Sorry that devel is such a moving target in places.)

flecsi/run/charm/context.cc (outdated, resolved)
flecsi/run/charm/context.hh (outdated, resolved)
flecsi/run/charm/context.hh (outdated, resolved)
void visit(data::accessor<data::dense, DATA_TYPE, PRIVILEGES> & accessor) {
flog_assert(buf_.size() % sizeof(DATA_TYPE) == 0, "Bad buffer size\n");
auto & flecsi_context = run::context::instance();
DATA_TYPE* d = (DATA_TYPE*)flecsi_context.getField(accessor.identifier());

Please use reinterpret_cast to make it obvious what's happening here.

flecsi/run/charm/context.hh (outdated, resolved)
CharmLibExit calls MPI_Finalize if Charm++ is built on the MPI layer, so this was causing crashes. If Charm++ is not built on the MPI layer, then MPI is never used or initialized in the first place.
@ollielo
Contributor

ollielo commented Apr 25, 2022

Closed due to lack of (discontinued) funding for this line of work.

3 participants