Unbreak benchmark_decoders.py #371

Merged · 2 commits into pytorch:main · Nov 13, 2024

Conversation

ahmadsharif1 (Contributor)

benchmark_decoders.py was broken by #359

This PR unbreaks it. Longer term we could add a stronger check, but this is good enough for now (and better than the non-throughput checks, which don't even check the length of the returned list or tensor).
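As a minimal sketch of what a "stronger" check could look like, the helper below (hypothetical, not part of this PR) verifies the number of decoded frames rather than only that the call succeeded:

```python
def check_decoded_frames(frames, expected_num_frames):
    # frames: sequence of decoded frames (e.g. a list of tensors).
    # A fuller check might also validate shape/dtype, but even a length
    # check catches a decoder that silently returns an empty result.
    assert len(frames) == expected_num_frames, (
        f"expected {expected_num_frames} frames, got {len(frames)}"
    )

# Example: a fake result with 10 frame placeholders passes the check.
check_decoded_frames([object() for _ in range(10)], 10)
```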

Tested using:

python benchmarks/decoders/benchmark_decoders.py --bm_video_speed_min_run_seconds 1 --decoders=torchcodec_core_batch,torchcodec_core
video_files_paths=['/home/ahmads/personal/torchcodec/benchmarks/decoders/../../test/resources/nasa_13013.mp4']
video=/home/ahmads/personal/torchcodec/benchmarks/decoders/../../test/resources/nasa_13013.mp4, decoder=TorchCodecCore
video=/home/ahmads/personal/torchcodec/benchmarks/decoders/../../test/resources/nasa_13013.mp4, decoder=TorchCodecCoreBatch
[------------------------------------------------------------------ video=/home/ahmads/personal/torchcodec/benchmarks/decoders/../../test/resources/nasa_13013.mp4 h264 480x270, 13.013s 29.97002997002997fps ------------------------------------------------------------------]
                           |  uniform 10 seek()+next()  |  batch uniform 10 seek()+next()  |  random 10 seek()+next()  |  batch random 10 seek()+next()  |  1 next()  |  batch 1 next()  |  10 next()  |  batch 10 next()  |  100 next()  |  batch 100 next()  |  create()+next()
1 threads: ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      TorchCodecCore       |            77.6            |              1001.7              |            53.7           |              722.6              |    17.5    |      246.0       |     22.6    |       228.0       |     67.2     |       777.4        |        20.4     
      TorchCodecCoreBatch  |            67.9            |               978.2              |            66.9           |              813.5              |    24.5    |      279.9       |     26.1    |       255.8       |     76.3     |       765.7        |                 

Times are in milliseconds (ms).

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Meta Open Source bot. label Nov 13, 2024
@scotts
Contributor

scotts commented Nov 13, 2024

The check seems fine to me - I don't think it's too important what the check is. We just have to call future.result() in order to make exceptions propagate. Putting it in an assertion seems the most reasonable thing to me, as if we call it in a void context, we'll probably get yelled at by the linter.
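The point about `future.result()` can be illustrated with a small sketch (hypothetical names, assuming the benchmark submits decode work to a `ThreadPoolExecutor`): exceptions raised in a worker thread stay hidden until `result()` is called, and wrapping that call in an assertion both propagates them and avoids a discarded-value lint warning.

```python
from concurrent.futures import ThreadPoolExecutor

def decode_chunk(n):
    # Stand-in for a decoder call submitted to the pool (hypothetical).
    return list(range(n))

with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(decode_chunk, 10) for _ in range(4)]
    for f in futures:
        # future.result() re-raises any exception from the worker thread;
        # the assert doubles as a basic sanity check on the result length.
        assert len(f.result()) == 10
```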

@ahmadsharif1 ahmadsharif1 merged commit ec6662c into pytorch:main Nov 13, 2024
5 checks passed
3 participants