
Improve performance of large array transfers when using MPI transport. #8

Open
amitmurthy opened this issue Mar 10, 2015 · 2 comments


@amitmurthy
Contributor

Need to implement the equivalent of JuliaLang/julia#6768 and JuliaLang/julia#10073 when using MPI for transport.
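For context, a minimal sketch (not the package's actual transport code) of the buffered path this issue is about: the array is serialized into a temporary byte buffer, the bytes are shipped over MPI, and the receiver deserializes into a freshly allocated array. The calls assume a recent MPI.jl keyword-style API; exact signatures vary across MPI.jl versions.

```julia
using MPI, Serialization

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

if rank == 0
    A = rand(Float64, 10^6)
    io = IOBuffer()
    serialize(io, A)                          # copy 1: array -> serialization buffer
    bytes = take!(io)
    MPI.Send([length(bytes)], comm; dest=1)   # tell the receiver how many bytes follow
    MPI.Send(bytes, comm; dest=1)             # ships the serialized bytes, not the array
elseif rank == 1
    nbytes = Vector{Int}(undef, 1)
    MPI.Recv!(nbytes, comm; source=0)
    bytes = Vector{UInt8}(undef, nbytes[1])
    MPI.Recv!(bytes, comm; source=0)          # copy 2: network -> receive buffer
    B = deserialize(IOBuffer(bytes))          # copy 3: receive buffer -> new array
end

MPI.Finalize()
```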

@eschnett
Contributor

If you refer to the respective routines in the MPI package:

  • Regarding #6768: The serialization buffer is currently allocated anew each time, so it never needs to be shrunk. However, the converse may make sense: keeping a small buffer around instead of allocating a new one every time.
  • Regarding #10073: Transmitting large arrays via serialization will be slow if it requires copying the data on either the sender or the receiver side. In addition to "copying the array directly to the socket" (i.e. calling MPI_Isend directly), one also needs to implement the converse, i.e. receiving the array (via MPI_Irecv) directly into the destination array instead of using a receive buffer (see the sketch at the end of this comment).

However, I question whether the second item will actually be useful in practice. In many cases (i.e. in many existing MPI codes), the receiver will want to reuse an existing array, and this is not possible with Julia's serialization API. I assume that people will instead call MPI routines directly.
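A minimal sketch of the direct path described in the second item, again assuming a recent MPI.jl keyword-style API (signatures vary across versions): the raw array goes straight to MPI.Isend and is received with MPI.Irecv! into a preallocated destination, with no serialization buffer on either side.

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

N = 10^6                                   # both sides must agree on size and eltype
if rank == 0
    A = rand(Float64, N)
    req = MPI.Isend(A, comm; dest=1)       # no intermediate serialization buffer
    MPI.Wait(req)
elseif rank == 1
    B = Vector{Float64}(undef, N)          # preallocated / reused destination array
    req = MPI.Irecv!(B, comm; source=0)    # received directly into B
    MPI.Wait(req)
end

MPI.Finalize()
```

As noted above, this only works when the receiver already knows the size and element type (or learns them out of band), i.e. the reuse-an-existing-array pattern that Julia's serialization API does not expose.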

@simonbyrne transferred this issue from JuliaParallel/MPI.jl Aug 6, 2019
@ViralBShah
Member

ViralBShah commented Apr 21, 2020

I'll note that both of the PRs listed above have been merged. Can we close this one? I suspect what we need to do is get the MPI transport running again, set up some new benchmarks, and file new issues as we encounter them.
