fix bug when using cuda in ubuntu #83

Closed
wants to merge 1 commit

Conversation

@WorldHellooo

The Variable class's __init__() doesn't return a CUDA Variable, because the 'data' would be released later. This bug results in a free(): invalid pointer error.
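
(For context, this presumably refers to the Variable wrapper pattern from the old DQN tutorial; a minimal sketch of that pattern, with the .cuda() conversion attempted inside __init__ — the class body below is an assumption, not quoted from the PR:)

```python
import torch
from torch import autograd

USE_CUDA = torch.cuda.is_available()

class Variable(autograd.Variable):
    # Attempted wrapper: move `data` to the GPU before constructing the Variable.
    def __init__(self, data, *args, **kwargs):
        if USE_CUDA:
            data = data.cuda()
        super(Variable, self).__init__(data, *args, **kwargs)
```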
@apaszke (Contributor) commented May 15, 2017

How is that different from the previous solution (except that it always produces volatile Variables, which is incorrect)?
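
(For context on the volatile remark: in pre-0.4 PyTorch, volatility propagates through operations and disables autograd entirely, so unconditionally returning volatile Variables would break training. A minimal illustration, assuming the old Variable API:)

```python
import torch
from torch.autograd import Variable

v = Variable(torch.ones(2), volatile=True)
y = v * 2
print(y.volatile)        # True: volatility propagates to results
print(y.requires_grad)   # False: no gradients will be tracked
```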

@WorldHellooo (Author)

The previous solution returns a CPU Variable even if USE_CUDA is true. As a result, the data is on the CPU while the net model is on the GPU, causing an invalid pointer error.
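
(A hedged repro of the mismatch being described, using the pre-0.4 torch.autograd.Variable API; the model and tensor shapes are illustrative:)

```python
import torch
from torch import nn
from torch.autograd import Variable

model = nn.Linear(3, 2).cuda()       # model parameters live on the GPU
x = Variable(torch.randn(4, 3))      # data stays on the CPU
out = model(x)  # CPU data meeting GPU weights fails; in the report this crashed with free(): invalid pointer
```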

@apaszke (Contributor) commented May 15, 2017

No, it doesn't. The data is converted to CUDA.

@WorldHellooo (Author)

Could you tell me what's wrong with my experiment?
[screenshot of the experiment output]

@apaszke (Contributor) commented May 15, 2017

Ah, I now see what's wrong: Variables ignore __init__. In fact, I'd now consider modifying these classes bad practice, and the example should be modified. However, your PR doesn't fix the problem.
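
(To illustrate the point, a minimal check against the wrapper subclass sketched earlier; the printed result is the assumed behavior being described, since the C-level constructor has already stored `data` before the Python __init__ runs:)

```python
v = Variable(torch.zeros(3))  # the wrapper subclass from the sketch above
print(v.data.is_cuda)         # False even when USE_CUDA is True: the .cuda() call in __init__ never takes effect
```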

@WorldHellooo (Author)

OK, it doesn't matter.

@chsasank (Contributor)

Hi,

@WorldHellooo, does this PR fix #66?
@apaszke, what changes do you recommend? I'll make them.

@chsasank reopened this May 17, 2017
@WorldHellooo (Author)

In my test, it did fix #66.

@apaszke (Contributor) commented May 17, 2017

It might fix it, but it's not a correct patch. We shouldn't use this hack; we should call .type() instead.

@chsasank (Contributor)

Cool, I'll create a PR like this:

use_cuda = torch.cuda.is_available()
dtype = torch.cuda.FloatTensor if use_cuda else torch.FloatTensor
...
var = Variable(data).type(dtype)
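
(Expanded into a self-contained sketch of the recommended pattern; the Linear model and shapes are illustrative, not from the PR:)

```python
import torch
from torch import nn
from torch.autograd import Variable

use_cuda = torch.cuda.is_available()
dtype = torch.cuda.FloatTensor if use_cuda else torch.FloatTensor

model = nn.Linear(3, 2)
if use_cuda:
    model.cuda()  # keep the parameters on the same device as the data

data = torch.randn(4, 3)
var = Variable(data).type(dtype)  # .type() converts/moves the underlying tensor
out = model(var)
```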

@chsasank mentioned this pull request May 23, 2017
@chsasank closed this May 29, 2017