Ensure correct encoding for non-contiguous WF #666
@@ -1267,6 +1267,39 @@ def test_encode_to_tensor_long_output(self):

        torch.testing.assert_close(self.decode(encoded_tensor), samples)

    def test_contiguity(self):
Review comment (on def test_contiguity): Can you provide some narration in the test about what exactly we're testing? I'm afraid I'm not familiar enough with tensor manipulation to fully understand everything. For example, I'm surprised [...]

Reply: Done. The strides describe how the tensor is laid out in memory, but they are not what uniquely determines the values of a tensor. E.g. a 2D tensor can be stored row by row (row-major) or column by column (column-major): the two layouts have different strides, but equal values. See the short sketch after the diff below.
        # Ensure that 2 waveforms with the same values are encoded in the same
        # way, regardless of their memory layout. Here we encode 2 equal
        # waveforms, one is row-aligned while the other is column-aligned.

        num_samples = 10_000  # per channel
        contiguous_samples = torch.rand(2, num_samples).contiguous()
        assert contiguous_samples.stride() == (num_samples, 1)

        encoded_from_contiguous = encode_audio_to_tensor(
            wf=contiguous_samples,
            sample_rate=16_000,
            format="flac",
            bit_rate=44_000,
        )
        non_contiguous_samples = contiguous_samples.T.contiguous().T
        assert non_contiguous_samples.stride() == (1, 2)

        torch.testing.assert_close(
            contiguous_samples, non_contiguous_samples, rtol=0, atol=0
        )

        encoded_from_non_contiguous = encode_audio_to_tensor(
            wf=non_contiguous_samples,
            sample_rate=16_000,
            format="flac",
            bit_rate=44_000,
        )

        torch.testing.assert_close(
            encoded_from_contiguous, encoded_from_non_contiguous, rtol=0, atol=0
        )


if __name__ == "__main__":
    pytest.main()
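To make the stride discussion in the review thread concrete, here is a minimal, illustrative sketch (not part of the PR) of two tensors that hold identical values but use different memory layouts, and therefore report different strides:

import torch

x = torch.arange(6).reshape(2, 3)  # row-major layout, stride (3, 1)
y = x.T.contiguous().T             # column-major layout, stride (1, 2)

assert x.stride() == (3, 1)
assert y.stride() == (1, 2)
assert torch.equal(x, y)  # same values despite different strides

This is the same trick the test uses: transposing, making the transpose contiguous, and transposing back keeps the values intact while changing the underlying layout.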
Review comment: Confirming that this is load-bearing: the newly added test fails if we just return wf without calling contiguous().
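For context, a hedged sketch of the kind of change this comment refers to; the PR's actual implementation isn't shown in this diff, and the helper name below is hypothetical:

import torch

def _as_contiguous_waveform(wf: torch.Tensor) -> torch.Tensor:
    # .contiguous() returns the tensor unchanged if it is already contiguous,
    # and otherwise copies the data into a standard row-major layout, so the
    # encoder always sees the same memory layout for equal values.
    return wf.contiguous()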