
[Bug] There is no InfiniBand or NVLink in my 4-GPU machine; how can I use mscclpp to communicate? #397

Open · Maphsge4 opened this issue Dec 2, 2024 · 2 comments

Comments


Maphsge4 commented Dec 2, 2024

I get the following error:

IndexError: IB transport out of range: 0 >= 0

Can I avoid using IB to communicate? Thanks for answering my question!

@chenhongyu2048

For reference: use cudaipc

mscclpp::Transport transport = mscclpp::Transport::CudaIpc;
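
To put that one-liner in context, here is a minimal sketch of connecting all local GPUs over CUDA IPC only, with no IB device involved. It is not code from this issue: the helper function and the `ipPortPair` parameter are illustrative, and the API names (`TcpBootstrap`, `Communicator::connectOnSetup`, `setup`, `NonblockingFuture`) follow the mscclpp examples from around this time and may differ between mscclpp versions.

```cpp
// Illustrative sketch (not from this issue): connect every pair of local GPUs
// over the CUDA IPC transport only. API names follow older mscclpp examples
// and may differ between versions.
#include <mscclpp/core.hpp>

#include <memory>
#include <string>
#include <vector>

void connectAllPeersOverCudaIpc(int rank, int worldSize, const std::string& ipPortPair) {
  // TCP bootstrap: every rank passes the same "ip:port" of rank 0.
  auto bootstrap = std::make_shared<mscclpp::TcpBootstrap>(rank, worldSize);
  bootstrap->initialize(ipPortPair);
  mscclpp::Communicator comm(bootstrap);

  // Use the CUDA IPC transport for every peer instead of an IB transport.
  const mscclpp::Transport transport = mscclpp::Transport::CudaIpc;

  std::vector<mscclpp::NonblockingFuture<std::shared_ptr<mscclpp::Connection>>> futures;
  for (int r = 0; r < worldSize; ++r) {
    if (r == rank) continue;
    futures.push_back(comm.connectOnSetup(r, /*tag=*/0, transport));
  }
  comm.setup();  // completes all pending connection requests

  std::vector<std::shared_ptr<mscclpp::Connection>> connections;
  for (auto& f : futures) connections.push_back(f.get());
}
```

The point is simply that the transport passed when creating each connection is CudaIpc rather than an IB transport, so no IB hardware needs to be present.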

chhwang (Contributor) commented Jan 2, 2025

> For reference: use cudaipc
>
> mscclpp::Transport transport = mscclpp::Transport::CudaIpc;

@Maphsge4 Did this help you? If you are working on PCIe, you should also make sure that your GPUs are peer-to-peer accessible; otherwise the CudaIpc transport won't work either.
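
Whether the GPUs are peer-to-peer accessible can be checked with the plain CUDA runtime API, independently of mscclpp. A small sketch:

```cpp
// Plain CUDA runtime check (independent of mscclpp): print which GPU pairs
// can access each other peer-to-peer. The CudaIpc transport relies on this.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
  int deviceCount = 0;
  cudaGetDeviceCount(&deviceCount);
  for (int i = 0; i < deviceCount; ++i) {
    for (int j = 0; j < deviceCount; ++j) {
      if (i == j) continue;
      int canAccess = 0;
      cudaDeviceCanAccessPeer(&canAccess, i, j);
      printf("GPU %d -> GPU %d: peer access %s\n", i, j, canAccess ? "supported" : "NOT supported");
    }
  }
  return 0;
}
```

Running `nvidia-smi topo -m` also shows how the GPUs are linked (PCIe switch, host bridge, or NVLink), which helps explain why peer access may be unavailable for some pairs.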
