Commit 8b1ed83

lvoursl and svekars authored
added fixes to semi-structured sparse tutorial (#2616)
Co-authored-by: Svetlana Karslioglu <svekars@meta.com>
1 parent b51f2b8 commit 8b1ed83

File tree

1 file changed: +4 −2 lines


prototype_source/semi_structured_sparse.rst

Lines changed: 4 additions & 2 deletions
@@ -315,6 +315,7 @@ Now that those are defined, we just need one additional helper function, which w
 We will get started by loading our model and tokenizer, and then setting up our dataset.
 
 .. code:: python
+
     # load model
     model_name = "bert-base-cased"
     tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
@@ -344,6 +345,7 @@ Running the following code gives me an F1 score of 86.9. This is quite close to
     training_args = transformers.TrainingArguments(
         "trainer",
         num_train_epochs=1,
+        lr_scheduler_type="constant",
         per_device_train_batch_size=64,
         per_device_eval_batch_size=512,
     )
@@ -446,7 +448,7 @@ We will also evaluate the model to show the accuracy degradation of zero-shot pr
     with torch.inference_mode():
         predictions = trainer.predict(tokenized_squad_dataset["validation"])
     pruned = compute_metrics(
-        *predictions.predictions
+        *predictions.predictions,
         tokenized_squad_dataset["validation"],
         squad_dataset["validation"],
     )
@@ -498,7 +500,7 @@ Now that we have a model in this format, we can accelerate it for inference just
     print("sparse eval metrics: ", metrics_sparse)
     sparse_perf = measure_execution_time(
         model,
-        batch_sizes_perf_cuda,
+        batch_sizes,
         tokenized_squad_dataset["validation"],
     )
     print("sparse perf metrics: ", sparse_perf)
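The comma added after `*predictions.predictions` in the third hunk is a syntax fix: when an iterable is unpacked with `*` inside a call, a comma must separate it from any further positional arguments. A minimal, self-contained sketch of the pattern (the names below are hypothetical stand-ins, not the tutorial's actual helpers):

```python
# Stand-in for the tutorial's compute_metrics: takes the two unpacked logit
# arrays plus two dataset splits as positional arguments.
def compute_metrics(start_logits, end_logits, eval_dataset, raw_dataset):
    # A real implementation would post-process logits into answer spans;
    # here we just report the split sizes.
    return {"eval_rows": len(eval_dataset), "raw_rows": len(raw_dataset)}

# Stand-in for predictions.predictions, a (start_logits, end_logits) pair.
predictions = ([0.1, 0.9], [0.2, 0.8])

metrics = compute_metrics(
    *predictions,       # unpacks into start_logits, end_logits
    ["ex1", "ex2"],     # stand-in for the tokenized validation split
    ["ex1"],            # stand-in for the raw validation split
)
# Without the comma after *predictions, this call would be a SyntaxError.
```

The second hunk's `batch_sizes` change is a similar correctness fix: it replaces a reference to an undefined variable (`batch_sizes_perf_cuda`) with the list actually defined earlier in the tutorial.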

0 commit comments
