
Add pyspelling config and workflow #2274

Merged: 5 commits, Mar 29, 2023
20 changes: 20 additions & 0 deletions .github/workflows/spelling.yml
@@ -0,0 +1,20 @@
name: Check spelling

on:
  pull_request:
  push:
    branches:
      - main
jobs:
  pyspelling:
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: '3.9'
          cache: 'pip'
      - run: pip install pyspelling
      - run: sudo apt-get install aspell aspell-en
      - run: pyspelling

25 changes: 25 additions & 0 deletions .pyspelling.yml
@@ -0,0 +1,25 @@
spellchecker: aspell
matrix:
- name: beginner
  sources:
  - beginner_source/data_loading_tutorial.py
  dictionary:
    wordlists:
    - tutorials-wordlist.txt
  pipeline:
  - pyspelling.filters.python:
      group_comments: true
  - pyspelling.filters.context:
      context_visible_first: true
      delimiters:
      # Exclude figure rST tags
      - open: '\.\.\s+(figure|literalinclude|)::'
        close: '\n'
      # Exclude Python coding directives
      - open: '-\*- coding:'
        close: '\n'
  - pyspelling.filters.markdown:
  - pyspelling.filters.html:
      ignores:
      - code
      - pre
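The context-filter `delimiters` above are ordinary regular expressions. As a quick sanity check (a sketch for illustration, not part of the PR), the `open` pattern for rST directives can be exercised directly; the sample directive strings below are hypothetical:

```python
import re

# The `open` regex from the context filter above; it should match rST
# figure and literalinclude directives so their arguments are skipped
# by the spell checker.
directive = re.compile(r'\.\.\s+(figure|literalinclude|)::')

# Matches directive lines (example inputs, not from the PR):
assert directive.search('.. figure:: /_static/img/sample_figure.png')
assert directive.search('.. literalinclude:: example.py')
# Does not match ordinary prose or comments:
assert not directive.search('# an ordinary comment line')
```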
42 changes: 21 additions & 21 deletions beginner_source/data_loading_tutorial.py
@@ -124,7 +124,7 @@ class FaceLandmarksDataset(Dataset):

     def __init__(self, csv_file, root_dir, transform=None):
         """
-        Args:
+        Arguments:
             csv_file (string): Path to the csv file with annotations.
             root_dir (string): Directory with all the images.
             transform (callable, optional): Optional transform to be applied
@@ -197,7 +197,7 @@ def __getitem__(self, idx):
 # swap axes).
 #
 # We will write them as callable classes instead of simple functions so
-# that parameters of the transform need not be passed everytime it's
+# that parameters of the transform need not be passed every time it's
 # called. For this, we just need to implement ``__call__`` method and
 # if required, ``__init__`` method. We can then use a transform like this:
 #
@@ -291,12 +291,12 @@ def __call__(self, sample):
         image = image.transpose((2, 0, 1))
         return {'image': torch.from_numpy(image),
                 'landmarks': torch.from_numpy(landmarks)}

 ######################################################################
 # .. note::
 #    In the example above, `RandomCrop` uses an external library's random number generator
-#    (in this case, Numpy's `np.random.int`). This can result in unexpected behavior with `DataLoader`
-#    (see https://pytorch.org/docs/stable/notes/faq.html#my-data-loader-workers-return-identical-random-numbers).
+#    (in this case, Numpy's `np.random.int`). This can result in unexpected behavior with `DataLoader`
+#    (see `here <https://pytorch.org/docs/stable/notes/faq.html#my-data-loader-workers-return-identical-random-numbers>`_).
 #    In practice, it is safer to stick to PyTorch's random number generator, e.g. by using `torch.randint` instead.

 ######################################################################
@@ -404,7 +404,7 @@ def show_landmarks_batch(sample_batched):
     plt.title('Batch from dataloader')

 # if you are using Windows, uncomment the next line and indent the for loop.
-# you might need to go back and change "num_workers" to 0.
+# you might need to go back and change ``num_workers`` to 0.

 # if __name__ == '__main__':
 for i_batch, sample_batched in enumerate(dataloader):
@@ -444,21 +444,21 @@ def show_landmarks_batch(sample_batched):
 # which operate on ``PIL.Image`` like ``RandomHorizontalFlip``, ``Scale``,
 # are also available. You can use these to write a dataloader like this: ::
 #
-# import torch
-# from torchvision import transforms, datasets
-#
-# data_transform = transforms.Compose([
-#         transforms.RandomSizedCrop(224),
-#         transforms.RandomHorizontalFlip(),
-#         transforms.ToTensor(),
-#         transforms.Normalize(mean=[0.485, 0.456, 0.406],
-#                              std=[0.229, 0.224, 0.225])
-#     ])
-# hymenoptera_dataset = datasets.ImageFolder(root='hymenoptera_data/train',
-#                                            transform=data_transform)
-# dataset_loader = torch.utils.data.DataLoader(hymenoptera_dataset,
-#                                              batch_size=4, shuffle=True,
-#                                              num_workers=4)
+#     import torch
+#     from torchvision import transforms, datasets
+#
+#     data_transform = transforms.Compose([
+#             transforms.RandomSizedCrop(224),
+#             transforms.RandomHorizontalFlip(),
+#             transforms.ToTensor(),
+#             transforms.Normalize(mean=[0.485, 0.456, 0.406],
+#                                  std=[0.229, 0.224, 0.225])
+#         ])
+#     hymenoptera_dataset = datasets.ImageFolder(root='hymenoptera_data/train',
+#                                                transform=data_transform)
+#     dataset_loader = torch.utils.data.DataLoader(hymenoptera_dataset,
+#                                                  batch_size=4, shuffle=True,
+#                                                  num_workers=4)
 #
 # For an example with training code, please see
 # :doc:`transfer_learning_tutorial`.
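The note in the diff above recommends PyTorch's own random number generator over NumPy's. A minimal sketch of what that swap looks like inside a `RandomCrop`-style `__call__` (the image and crop sizes here are hypothetical, not taken from the PR):

```python
import torch

# Hypothetical sizes for illustration; the tutorial's RandomCrop computes
# these per sample inside __call__.
h, w = 240, 320          # assumed input image size
new_h, new_w = 224, 224  # assumed crop size

# torch.randint draws from PyTorch's generator, which DataLoader seeds
# differently in each worker, so workers no longer return identical crops
# the way they can with np.random.
top = torch.randint(0, h - new_h + 1, size=(1,)).item()
left = torch.randint(0, w - new_w + 1, size=(1,)).item()

# The crop offsets always stay inside the image.
assert 0 <= top <= h - new_h
assert 0 <= left <= w - new_w
```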
23 changes: 23 additions & 0 deletions tutorials-wordlist.txt
@@ -0,0 +1,23 @@
csv
DataLoaders
dataloader
dataset
datasets
dir
imagenet
io
jpg
ndarrays
Numpy's
numpy
preprocess
preprocessing
pytorch
rescale
runtime
th
subclasses
submodule
tanh
torchvision
uncomment