-
Hi @chenwanqq, I think that you are correct. Rust supports shadowing, and shadowing does prevent the shadowed value from being dropped early; it stays alive until the end of its scope. See this playground or the example below:

```rust
struct Tensor(u32);

impl Tensor {
    fn new(id: u32) -> Self {
        Self(id)
    }
}

impl Drop for Tensor {
    fn drop(&mut self) {
        println!("Dropping {:?}", self.0);
    }
}

fn main() {
    let x = Tensor::new(0);
    println!("Created 0");
    let x = Tensor::new(1);
    println!("Created 1");
}
```

What we want/expect is for `Dropping 0` to be printed before `Created 1`, indicating that the shadowed tensor is freed as soon as the binding is reused. However, this actually gives the below output:
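```
Created 0
Created 1
Dropping 1
Dropping 0
```

Both drops only happen when `main` returns, in reverse declaration order.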
This means that the shadowed tensor will remain in scope until the end of the function. There is a clippy lint for this which may be helpful to find/fix these occurrences.
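For anyone hitting this, a minimal sketch of one way to get the expected behaviour (this is only an illustration using the toy `Tensor` above, not how the model code currently handles it): drop the old value explicitly before rebinding, so the memory is released where the shadowing suggests it should be.

```rust
struct Tensor(u32);

impl Tensor {
    fn new(id: u32) -> Self {
        Self(id)
    }
}

impl Drop for Tensor {
    fn drop(&mut self) {
        println!("Dropping {:?}", self.0);
    }
}

fn main() {
    let x = Tensor::new(0);
    println!("Created 0");
    drop(x); // "Dropping 0" is printed here, before Tensor 1 is created
    let x = Tensor::new(1);
    println!("Created 1");
    // "Dropping 1" is printed when `main` returns
}
```

As for the lint: Clippy's shadowing lints (`clippy::shadow_same`, `clippy::shadow_reuse`, `clippy::shadow_unrelated`) are in the `restriction` group, so they are off by default and have to be enabled explicitly, e.g. with `#![warn(clippy::shadow_reuse)]` at the crate root or `cargo clippy -- -W clippy::shadow_reuse`.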
-
Considering the following facts:
Many current model implementations are 'suspicious' regarding memory usage.
Compare the following two pseudo code patterns:
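Below is a simplified sketch of "Code 1" and "Code 2", using a stand-in `Tensor` type and a hypothetical `forward_step` op; assume, as is typical for tensor libraries, that ops take their inputs by reference and return new tensors:

```rust
// Stand-in tensor that owns a large buffer, so the effect of late drops is visible.
struct Tensor {
    data: Vec<f32>,
}

impl Tensor {
    fn zeros(len: usize) -> Self {
        Self { data: vec![0.0; len] }
    }
}

// Hypothetical op: reads its input by reference and returns a new tensor.
fn forward_step(input: &Tensor) -> Tensor {
    Tensor::zeros(input.data.len())
}

// Code 1: reassigning a mutable binding drops the previous tensor at the
// point of assignment, so at most two buffers are alive at any time.
fn forward_code1(mut x: Tensor) -> Tensor {
    x = forward_step(&x); // the old buffer is freed right here
    x = forward_step(&x); // and again here
    forward_step(&x)
}

// Code 2: every `let x = ...` shadows the previous binding, so the input and
// every intermediate buffer stay alive until the function returns.
fn forward_code2(input: Tensor) -> Tensor {
    let x = forward_step(&input);
    let x = forward_step(&x); // the previous x is NOT freed here
    forward_step(&x)
    // input and both intermediates are only dropped when we return
}

fn main() {
    let out1 = forward_code1(Tensor::zeros(1 << 20));
    let out2 = forward_code2(Tensor::zeros(1 << 20));
    println!("{} {}", out1.data.len(), out2.data.len());
}
```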
If my understanding is correct (although I haven't measured the actual memory usage), Code 2 will keep both the original x and the new x in memory because the original x is not dropped until the end of the scope. It would be even worse if x is a trainable variable in a training scenario. In essence, variable shadowing could potentially lead to a significant increase in memory usage.
Models like LLaMA, Mistral, and LLaVA (which I have implemented) often rely on variable shadowing. If this issue is indeed valid, I believe numerous code segments should be revised, and a warning about this should be issued.