In terms of the corresponding routines in the MPI package:
Regarding #6768: The serialization buffer is currently allocated anew for each message, so it never needs to be shrunk. However, the converse may make sense: keeping a small buffer around and reusing it instead of allocating a new one each time.
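A minimal sketch of that reuse pattern, assuming a hypothetical per-connection scratch `IOBuffer` (the names are illustrative, and `buf.data`/`buf.size` are `IOBuffer` internals rather than public API):

```julia
using Serialization

# Hypothetical scratch buffer, kept around and reused across messages
# instead of being allocated anew for each one.
const SEND_BUF = IOBuffer()

function serialize_reusing_buffer(obj)
    truncate(SEND_BUF, 0)   # reset the length but keep the backing array
    seekstart(SEND_BUF)
    serialize(SEND_BUF, obj)
    # View of the serialized bytes; valid only until the next reuse.
    return view(SEND_BUF.data, 1:SEND_BUF.size)
end
```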
Regarding #10073: Transmitting large arrays via serialization will be slow if it requires copying the data on either the sender or the receiver side. In addition to "copying the array directly to the socket" (i.e. calling MPI_Isend directly), one also needs to implement the converse: receiving the array (via MPI_Irecv) directly into the destination array instead of going through a receive buffer.
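Something like the following sketch, assuming the MPI.jl keyword-argument API (exact signatures vary across MPI.jl versions):

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

n = 10^6
if rank == 0
    a = rand(Float64, n)
    req = MPI.Isend(a, comm; dest=1, tag=0)        # send straight from the array
    MPI.Wait(req)
elseif rank == 1
    dest = Vector{Float64}(undef, n)               # preallocated destination
    req = MPI.Irecv!(dest, comm; source=0, tag=0)  # receive in place, no staging buffer
    MPI.Wait(req)
end
MPI.Finalize()
```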
However, I question whether the second item will actually be useful in practice. In many existing MPI codes, the receiver will want to reuse an existing array, and this is not possible with Julia's serialization API. I assume that people will instead call MPI routines directly.
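A minimal illustration of that limitation: `deserialize` always returns a freshly allocated array, so filling an existing one costs an extra copy.

```julia
using Serialization

io = IOBuffer()
serialize(io, rand(Float64, 4))
seekstart(io)

dest = Vector{Float64}(undef, 4)  # existing array we would like to fill
b = deserialize(io)               # allocates a new array; cannot write into dest
copyto!(dest, b)                  # reuse requires this extra copy
```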
I'll note that both of the PRs listed above have been merged. Can we close this one? I suspect the next step is to get the MPI transport running again, set up new benchmarks, and file new issues as we encounter them.
Need to implement the equivalent of JuliaLang/julia#6768 and JuliaLang/julia#10073 when using MPI for transport.