Add high-level API Scatter for splitting 1D array #816

Open

ykkan wants to merge 1 commit into master
Conversation


@ykkan commented Feb 4, 2024

Following the idea of Gather, this PR provides a high-level API Scatter for Scatter!. With this API, users can easily scatter a 1D array without handling VBuffer or UBuffer directly.
For example:

using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nprocs = MPI.Comm_size(comm)


arr = nothing
if rank == 0
  arr = [1:10...]
end

local_arr = MPI.Scatter(arr, comm)

print("Hello world, I am rank $(rank) of $(nprocs), $(local_arr)\n")

Output (nprocs=3):

Hello world, I am rank 0 of 3, [1, 2, 3, 4]
Hello world, I am rank 2 of 3, [8, 9, 10]
Hello world, I am rank 1 of 3, [5, 6, 7]
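
For reference, here is a minimal sketch of how such a wrapper could be built from the existing low-level routines. It is illustrative only, not the code in this PR; the name scatter_1d, the even-split policy, and the keyword-root Scatterv! call are assumptions.

function scatter_1d(arr, comm::MPI.Comm; root::Integer=0)
    rank   = MPI.Comm_rank(comm)
    nprocs = MPI.Comm_size(comm)

    # Metadata only the root knows; broadcast it so every rank can size its receive buffer.
    arr_len = rank == root ? length(arr) : 0
    elm_t   = rank == root ? eltype(arr) : nothing
    arr_len = MPI.Bcast(arr_len, root, comm)   # Int is isbits -> Bcast
    elm_t   = MPI.bcast(elm_t, root, comm)     # a Type is not isbits -> bcast

    # Split as evenly as possible; earlier ranks take one extra element each,
    # matching the 4/3/3 split in the output above.
    counts  = [arr_len ÷ nprocs + (r < arr_len % nprocs) for r in 0:nprocs-1]
    recvbuf = Vector{elm_t}(undef, counts[rank + 1])

    sendbuf = rank == root ? MPI.VBuffer(arr, counts) : nothing
    MPI.Scatterv!(sendbuf, recvbuf, comm; root=root)
    return recvbuf
end

With such a helper, the example above reduces to local_arr = scatter_1d(arr, comm).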

Comment on lines +188 to +189
arr_len = MPI.Bcast(arr_len, root, comm)
elm_t = MPI.bcast(elm_t, root, comm)
Member
Why is one of these Bcast and the other bcast?

Author
@ykkan Jun 24, 2024

If I remember correctly, elm_t is not an isbits type and can only be broadcast with bcast. My idea was to use Bcast whenever possible, since its overhead should be lower than that of bcast(?). Of course, using bcast for both elm_t and arr_len should also work; I think this is mostly a matter of style. What do you think?
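
In other words, which routine applies depends on isbitstype (comments are illustrative, not part of the diff):

isbitstype(typeof(arr_len))   # true  -> MPI.Bcast can ship the raw bits
isbitstype(typeof(elm_t))     # false -> MPI.bcast serializes the object first
arr_len = MPI.Bcast(arr_len, root, comm)
elm_t   = MPI.bcast(elm_t, root, comm)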
