From a20c3c41ab7292139abe1915993b524f65f33bae Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?H=C3=A5kon=20H=C3=A6gland?= Date: Sun, 31 May 2020 16:57:54 +0200 Subject: [PATCH] Fixed some typos. Corrected some typos and spelling errors. --- source/tutorial-english.html | 96 ++++++++++++++++++------------------ 1 file changed, 48 insertions(+), 48 deletions(-) diff --git a/source/tutorial-english.html b/source/tutorial-english.html index 2d4f6d0..523631e 100644 --- a/source/tutorial-english.html +++ b/source/tutorial-english.html @@ -303,7 +303,7 @@

step 1) an online repository on github

step 2) a new module with module-starter

-Open a shell to your script's location and run the program module-starter that comes from Module::Starter. It wants an e-mail address, the author name, and obviously the module name: +Open a shell to your script's location and run the program module-starter that comes from Module::Starter. It wants an e-mail address, the author name, and obviously the module name:
 shell> module-starter --module Range::Validator --author MyName --email MyName@cpan.org
@@ -499,7 +499,7 @@ 

day two: some changes and tests

step 1) POD documentation

-Well first of all some cleaning: open you local copy of the module /path/to/Range-Validator/lib/Range/Validator.pm in your text editor or IDE. Personally I like the POD documentation to be all together after the __DATA__ token rather than interleaved with the code. Inside the code I only like to have comments. POD documentation is for the user, comments are for you! After a week or month you'll never remember what your code is doing: comment it explaining what is happening. +Well, first of all, some cleaning: open your local copy of the module /path/to/Range-Validator/lib/Range/Validator.pm in your text editor or IDE. Personally I like the POD documentation to be all together after the __DATA__ token rather than interleaved with the code. Inside the code I only like to have comments. POD documentation is for the user, comments are for you! After a week or a month you'll never remember what your code is doing: comment it, explaining what is happening. So go to the end of the module where the line is the final 1; ( remember all modules must return a true value in their last statement) and add in a new line the __DATA__ token. Move all POD after this token. Also remove the POD and the code of function2 @@ -620,13 +620,13 @@
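To illustrate the layout the tutorial describes, here is a minimal sketch of how the end of the module file would look after the move; the sub bodies and POD text are placeholders, not the tutorial's actual content:

```perl
package Range::Validator;

use strict;
use warnings;
use Carp;

# subs live here; comments are for you, the maintainer

1; # modules must return a true value in their last statement

__DATA__

=head1 NAME

Range::Validator - validate ranges passed as strings or lists

=cut
```

POD placed after `__DATA__` is still found by perldoc, while perl stops compiling at the token, so code and documentation stay cleanly separated.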

step 2) first test

Wow! We ran our first test! ..Yes, but in the wrong way. Well not exactly the wrong way but not the way tests are run during installation. -Test are run through a TAP harness (TAP stands for Test Anything Protocol and is present in perl since forever: perl was born the right way ;). +Tests are run through a TAP harness (TAP stands for Test Anything Protocol and is present in perl since forever: perl was born the right way ;). With your perl distribution you have the prove command (see its documentation) that runs tests through a TAP harness. So we can use it. We can call prove the very same way we called perl: prove -I ./lib ./t/00-load.t but we are lazy and we spot prove -l which has the same effect as prove -I ./lib i.e. include ./lib in @INC -Run the very same test through prove instead of perl and you will see slightly different output: +Run the very same test through prove instead of perl and you will see a slightly different output:
 shell> prove -l ./t/00-load.t
@@ -887,19 +887,19 @@ 

step 3) add dependencies in Makefile.PL

);
-This file is run on the target system trying to install your module. It's vaste matter and you can find many, many useful informations in the core documentation of ExtUtils::MakeMaker and in the ExtUtils::MakeMaker::Tutorial and, as always in perl, there many ways to do it. +This file is run on the target system trying to install your module. It's a vast subject and you can find much useful information in the core documentation of ExtUtils::MakeMaker and in the ExtUtils::MakeMaker::Tutorial and, as always in perl, there are many ways to do it. -In our simple case we only need to know few facts about BUILD_REQUIRES and PREREQ_PM fields. +In our simple case we only need to know a few facts about the BUILD_REQUIRES and PREREQ_PM fields. -The first one lists into a hash all modules and their version needed to build up our module, building includes testing, so if you need some module during tests it is the place where insert dependencies. The module-starter program added 'Test::More' => '0' entry for us. This is the right place to state that we intend to use Test::Exception CPAN module during tests. +The first one lists, in a hash, all modules and the versions needed to build our module; building includes testing, so if you need some module during tests this is the place to insert the dependency. The module-starter program added the 'Test::More' => '0' entry for us. This is the right place to state that we intend to use the Test::Exception CPAN module during tests. -By other hand PREREQ_PM lists modules and their minimal versions needed to run your module. As you can see it's a different thing: to run Range::Validator you never need Test::Exception but, for example you 'll need Carp +On the other hand, PREREQ_PM lists the modules and minimal versions needed to run your module. 
As you can see it's a different thing: to run Range::Validator you never need Test::Exception but, for example you 'll need Carp Even if Carp it's a core module is a good practice to include it into PREREQ_PM Read a very good post about dependencies Re: How to specify tests dependencies with Makefile.PL? -Cleaning example lines and given all the above, we will modify Makefile.PL as follow: +Removing the example lines, and given all the above, we will modify Makefile.PL as follows:
 use 5.006;
@@ -929,13 +929,13 @@ 

step 3) add dependencies in Makefile.PL

clean => { FILES => 'Range-Validator-*' }, );
-So the moral is: when you add a dependency needed to run your module or to test it remember to update Makefile.PL correspondent part. +So the moral is: when you add a dependency needed to run or test your module, remember to update the corresponding part of Makefile.PL.

step 4) run the new test

-Ok, is the above test ok? It returns all we expect? Try it using prove -l but specifying also -v to be verbose and the filename of our new test (now we dont want all test run, just the one we are working on): +Ok, is the above test ok? Does it return all we expect? Try it using prove -l, but also specifying -v to be verbose and the filename of our new test (now we don't want to run all tests, just the one we are working on):
 shell> prove -l -v ./t/01-validate.t
@@ -957,7 +957,7 @@ 

step 4) run the new test

step 5) commit, add new files and push with git

-What we need more from our first day of coding? To check our status and to synchronize our online repository (pay attention to the following commands because we have a new, untracked file!): +What more do we need from our first day of coding? To check our status and to synchronize our online repository (pay attention to the following commands because we have a new, untracked file!):
 git-client> git status
@@ -1031,7 +1031,7 @@ 

step 5) commit, add new files and push with git

To https://github.com/YourGithubLogin/Range-Validator 49a0690..5083ec3 master -> master
-What a day! We added six lines of code and an entire test file! Are we programming too much? Probably no but we are doing it in a robust way and we discovered it can be hard work. In perl hard work is justified only by (future) laziness and we are doing all these work because we are lazy and we do not want to waste our time when, in a month or a year, we need to take this code base again to enhance it or to debug it. So now it's time for the bed and for deserved colorful dreams. +What a day! We added six lines of code and an entire test file! Are we programming too much? Probably not but we are doing it in a robust way and we discovered it can be hard work. In perl hard work is justified only by (future) laziness and we are doing all these work because we are lazy and we do not want to waste our time when, in a month or a year, we need to take this code base again to enhance it or to debug it. So now it's time for the bed and for deserved colorful dreams. @@ -1049,7 +1049,7 @@

step 1) the educated documentation

The same is for your module users: they hope and expect to find a good documentation and to write it is our duty. Dot. -Documentation content, in my little experience, can be impacted a lot for even small changes in the code or interface so, generally I write the larger part of the docs when the implementation or interface is well shaped. But, by other hand, a good approach is to put in the docs every little statement that will be true since the very beginning of your module development. At the moment we can state our validate sub accepts both strings and ranges and always returns an array. +Documentation content, in my limited experience, can be impacted a lot by even small changes in the code or interface so, generally, I write the larger part of the docs when the implementation or interface is well shaped. But, on the other hand, a good approach is to put in the docs every little statement that will be true from the very beginning of your module development. At the moment we can state that our validate sub accepts both strings and ranges and always returns an array. At the moment the relevant part of the POD documentation is: @@ -1086,7 +1086,7 @@

step 1) the educated documentation

step 2) git status again and commit again

-Since we are now very fast with git commands, let's commit this little change; the push to the remote repository can be left for the end of work session. So status (check it frequently!) and commit +Since we are now very fast with git commands, let's commit this little change; the push to the remote repository can be left for the end of the work session. So run status (check it frequently!) and commit
 git-client> git status
@@ -1113,7 +1113,7 @@ 

step 2) git status again and commit again

step 3) more code...

-Now it's time to add more checks for the incoming string: we do not accept a lone dot between non dots, nor even more than two dots consecutively: +Now it's time to add more checks for the incoming string: we do not accept a lone dot between non-dot characters, nor do we accept more than two consecutive dots:
 # not allowed a lone .
@@ -1122,7 +1122,7 @@ 

step 3) more code...

croak "invalid range [$range] (more than 2 .)!" if $range =~ /[^.]+\.{3}/;
-The whole sub now look like: +The whole sub now looks like:
 sub validate{
@@ -1189,7 +1189,7 @@ 

step 4) ...means more and more tests

Result: PASS
-Fine! But.. to much repetitions in the test code. Are not we expected to be DRY (Dont Repeat Yourself)? Yes we are and since we have been so lazy to put use Test::More qw(no_plan) we can add a good loop of tests (replace the last two dies_ok with the following code in the test file): +Fine! But.. to much repetitions in the test code. Are we not expected to be DRY (Dont Repeat Yourself)? Yes we are and since we have been so lazy to put use Test::More qw(no_plan) we can add a good loop of tests (replace the last two dies_ok with the following code in the test file):
 foreach my $string ( '1.2', '0..2,5.6,8', '1,2,.,3', '.' ){
@@ -1406,7 +1406,7 @@ 

step 5) git: a push for two commits

-Today we committed twice, do you remember? first time just the POD we added for the sub and second time just few moments ago. +Today we committed twice, do you remember? The first time just the POD we added for the sub, and the second time just a few moments ago. We pushed just one time. What's really now in the online repository? Go to the online repository, Insights, Network: the last two dots on the line segment are our two commits, pushed together in a single push. Handy, no? Click on the second-last dot and you will see the detail of the commit concerning the POD, with lines we removed in red and lines we added in green. Commits are free: committing small changes frequently is better than commit a lot of changes all together. @@ -1493,7 +1493,7 @@

step 1) more validation in the code

}
-New features are worth to be pushed on the online repository: you know how can be done. Do it. +New features are worth pushing to the online repository: you know how it can be done. Do it. @@ -1501,7 +1501,7 @@

step 1) more validation in the code

step 2) a git excursus

-Did you follow my small advices about git committing and meaningful messages? If so it's time to see why is better to be diligent: with git log (which man page is probably longer than this guide..) you can review a lot about previous activities: +Did you follow my small pieces of advice about git committing and meaningful messages? If so, it's time to see why it is better to be diligent: with git log (whose man page is probably longer than this guide...) you can review a lot about previous activities:
 git-client> git log HEAD --oneline
@@ -1518,7 +1518,7 @@ 

step 2) a git excursus

This is definitevely handy. HEAD is where your activity is focused in this moment. Try to remove the --oneline switch to see also all users and dates of each commit. -As you can understand git is a vaste world: explore it to suit your needs. This is not a git guide ;) +As you can understand git is a vast world: explore it to suit your needs. This is not a git guide ;) @@ -1707,7 +1707,7 @@

step 5) overall check and release

Is our change safe? Well we have a test suit: prove -l -v will tell you if the change impacts the test suit (if tests are poor you can never be sure). -Now our module is ready for production. It just lacks of some good documentation. Not a big deal, but is our duty to document what the sub does effectively. +Now our module is ready for production. It just lacks some good documentation. Not a big deal, but it is our duty to document what the sub effectively does. Add to the POD of our sub: @@ -1797,13 +1797,13 @@

step 1) the problem of empty lists

Right? The module goes in production and 98% of errors from the foreign part of the code base disappeared. Only 98%? Yes.. -Miss A of department Z call your boss in a berserk state: not all their errors are gone away. They use the list form but Miss A and the developer B are sure no empty lists are passed to your validate sub. You call the developer B, a good fellow, who explain you that list are generated from a database field that cannot be empty (NOT NULL constraint in the database): +Miss A of department Z calls your boss in a berserk state: not all their errors have gone away. They use the list form but Miss A and the developer B are sure no empty lists are passed to your validate sub. You call the developer B, a good fellow, who explains to you that the lists are generated from a database field that cannot be empty (a NOT NULL constraint in the database): You - Listen B, if I emit a warning you'll be able to trap which list generated from the database provoked it? B - Sure! Can you add this? -You - Yes, for sure. I can use a variable in Range::Validator namespace, let's name it warnings and you'll set it to a true value and only you, and not the rest of the company, will see errors on STDERR. Ok? +You - Yes, for sure. I can use a variable in the Range::Validator namespace, let's name it warnings, and you'll set it to a true value, and only you, and not the rest of the company, will see errors on STDERR. Ok? B - Fantastic! I'll add the variable as soon as you tell me. @@ -1830,7 +1830,7 @@
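The agreement above can be sketched in two small fragments; this is only an illustration of the package-variable idea, not the tutorial's exact code:

```perl
package Range::Validator;
use strict;
use warnings;
use Carp;

# off by default: only callers who opt in will see the warning
our $WARNINGS = 0;

sub validate {
    carp "Empty list passed in! We assume all elements will be processed."
        if $WARNINGS and @_ == 0;
    # ... rest of the validation ...
}

1;
```

and on developer B's side, switching it on is a one-liner:

```perl
use Range::Validator;
$Range::Validator::WARNINGS = 1;   # warnings now show up on STDERR
```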

step 2) adding a Carp to the lake

# STRING PART ... } elsif ( $WARNINGS == 1 and @_ == 0 ){ - carp "Empty list passed in! We assume all element will be processed."; + carp "Empty list passed in! We assume all elements will be processed."; } # otherwise we received a list else{ @@ -1848,9 +1848,9 @@

step 2) adding a Carp to the lake

step 3) prepare the fishing road: add a dependency for our test

-To grab STDERR in test we have to add a dependency to Capture::Tiny module which is able, with its method capture to catch STDOUT STDERR and results emitted by an external command or a chunk of perl code. Handy and tiny module. +To grab STDERR in the test we have to add a dependency on the Capture::Tiny module, which is able, with its capture method, to catch STDOUT, STDERR and the results emitted by an external command or a chunk of perl code. A handy and tiny module. -Do you remeber the place to specify a dependency? Bravo! Is in Makefile.PL and we did the same in "day three step 3" when we added two modules to the BUILD_REQUIRES hash. Now we add Capture::Tiny to this part (remeber to specify module name in a quoted string): +Do you remember the place to specify a dependency? Bravo! It is in Makefile.PL and we did the same in "day three step 3" when we added two modules to the BUILD_REQUIRES hash. Now we add Capture::Tiny to this part (remember to specify the module name in a quoted string):
 BUILD_REQUIRES => {
@@ -1956,7 +1956,7 @@ 
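A test using Capture::Tiny as described above could be sketched like this; the test filename, the message matched, and the use of the warnings variable are taken from the surrounding text, but the exact assertions are illustrative:

```perl
use strict;
use warnings;
use Test::More;
use Capture::Tiny qw(capture);

use Range::Validator;

# enable the developer-only warning, then call validate with an empty list
local $Range::Validator::WARNINGS = 1;
my ( $stdout, $stderr, @result ) = capture {
    Range::Validator::validate();
};
like $stderr, qr/Empty list passed in!/, 'warning emitted on empty list';

done_testing();
```

capture returns STDOUT, STDERR and the return values of the block, so a single call checks both the side effect and the result.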

step 3) another kind of test: MANIFEST

In the the "day one - preparing the ground" we used module-starter to create the base of our module. Under the /t folder the program put three test we did not seen until now: manifest.t pod-coverage.t and pod.t -These three tests are here for us and they will help us to check our module distribution is complete. Let's start from the first +These three tests are here for us and they will help us to check that our module distribution is complete. Let's start with the first one:
 shell> prove ./t/manifest.t
@@ -1966,7 +1966,7 @@ 

step 3) another kind of test: MANIFEST

Result: NOTESTS
-Ok, no test run, just skipped. Go to view what is inside the test: it skips all actions unless RELEASE_TESTING environment variable is set. It also will complain unless a minimal version of Test::CheckManifest is installed. So set this variable in the shell (how to do this depends on your operating system: linux users probably need export RELEASE_TESTING=1 while windows ones will use set RELEASE_TESTING=1) and use your CPAN client to install the required module (normally cpan Test::CheckManifest is all you need) and rerun the test again: +Ok, no test was run, just skipped. Go and look at what is inside the test: it skips all actions unless the RELEASE_TESTING environment variable is set. It will also complain unless a minimal version of Test::CheckManifest is installed. So set this variable in the shell (how to do this depends on your operating system: linux users probably need export RELEASE_TESTING=1 while windows ones will use set RELEASE_TESTING=1), use your CPAN client to install the required module (normally cpan Test::CheckManifest is all you need) and rerun the test:
 shell> prove ./t/manifest.t
@@ -1980,10 +1980,10 @@ 

step 3) another kind of test: MANIFEST

Omg?! What is all that output? The test complains about a lot of files that are present in the filesystem, in our module folder but are not specified in the MANIFEST file. This file contains a list (one per line) of files contained within the tarball of your module. -In the above output we have seen a lot, if not all, files under the .git directory. Obviously we do not want them included in our module distribution. How can we skip them? Using MANIFEST.SKIP file that basically contains regular expressions describing which files should be excluded from the distribution. +In the above output we have seen many, if not all, of the files under the .git directory. Obviously we do not want them included in our module distribution. How can we skip them? Using the MANIFEST.SKIP file, which basically contains regular expressions describing which files should be excluded from the distribution. So go create this file in the main folder of the module and add inside it a line with a regex saying we do not want the .git directory: ^\.git\/ and add this file with git add MANIFEST.SKIP and commit this important change. Rerun the test (added some newlines for readability): @@ -2079,7 +2079,7 @@
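As a reference, a MANIFEST.SKIP usually accumulates a few more patterns than the single .git line as the project grows; the extra entries below are common conventions, not something this tutorial requires:

```
^\.git\/
^MANIFEST\.bak$
^Makefile$
^blib\/
^pm_to_blib$
~$
```

Each line is a regular expression matched against the file paths that would otherwise land in the distribution tarball.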

step 4) another kind of test: POD and POD coverage

step 5) some README and final review of the work

-The README must contain some general information about the module. Users can read this file via cpan client so put a minimal description in it. Gihub website use it as default page, so it is useful have some meningful text. Someone generates the text from the POD section of the module. Put a short description, maybe the sysnopsis and commit the change. Push it online. +The README must contain some general information about the module. Users can read this file via a cpan client, so put a minimal description in it. The GitHub website uses it as the default page, so it is useful to have some meaningful text there. Some people generate the text from the POD section of the module. Put in a short description, maybe the synopsis, and commit the change. Push it online. Now we can proudly look at our commits history in a --reverse order: @@ -2112,7 +2112,7 @@

step 5) some README and final review of the work

3c0da4f (HEAD -> master, YourGithubLogin/master) modified README
-A good glance of two dozens of commits! We have done a good job, even if with some errors: committing multiple changes in different part of the project (like in our third commit) is not wise: better atomical commits. We have also some typo in commit messages.. +A good overview of two dozen commits! We have done a good job, even if with some errors: committing multiple changes in different parts of the project (like in our third commit) is not wise: it is better to make atomic commits. We also have some typos in the commit messages... @@ -2122,7 +2122,7 @@

step 5) some README and final review of the work

step 6) try a real CPAN client installation

-It's now time to see if our module can be installaed by a cpan client. Nothing easier: if you are in the distribution folder extracted from a tarball created by make dist, just run cpan . and enjoy the output (note that this command will modify the content of the directory!). +It's now time to see if our module can be installed by a CPAN client. Nothing is easier: if you are in the distribution folder extracted from a tarball created by make dist, just run cpan . and enjoy the output (note that this command will modify the content of the directory!). @@ -2138,7 +2138,7 @@

day eight: other module techniques

option one - the bare bone module

-This is option we choosed for the above example and, even if it is the less favorable one, we used this form for the extreme easy. The module is just a container of subs and all subs are available in the program that uses our module but only using their fully qualified name, ie including the name space where they are defined: Range::Validator::validate was the syntax we used all over the tutorial. +This is the option we chose for the above example and, even if it is the least favorable one, we used this form since it is extremely easy. The module is just a container of subs and all subs are available in the program that uses our module, but only via their fully qualified name, i.e. including the namespace where they are defined: Range::Validator::validate was the syntax we used all over the tutorial. Nothing bad if the above behaviour is all you need. @@ -2150,11 +2150,11 @@

option two - the Exporter module

If you need more control over what to be available to the end user of your module Exporter CORE module will be a better approach. -Go read the module documentation to have an idea of its usage. +Go and read the module documentation to get an idea of its usage. -You can leverage what to export into the program using your module, so no more fully qualified package name will be needed. I suggest you to no export nothing by default (ie leaving @EXPORT empty) using instead @EXPORT_OK to let the end user to import sub from your module on, explicit, request. +You can control what is exported into the program using your module, so no fully qualified package names will be needed anymore. I suggest you export nothing by default (i.e. leaving @EXPORT empty), using instead @EXPORT_OK to let the end user import subs from your module on explicit request. -With the use of Exporter you can also export variables into the program using your module, not only subs. It's up to you to decide if this is the right thing to do. Pay attention with names you use and the risk of name collision: what will happen if two module export two function with the same name? +With the use of Exporter you can also export variables into the program using your module, not only subs. It's up to you to decide if this is the right thing to do. Pay attention to the names you use and to the risk of name collisions: what will happen if two modules export two functions with the same name? Perl is not restrictive in any meaning of the word: nothing will prevent the end user of your module to call Your::Module::not_exported_at_all_sub() and access its functionality. A fully qualified name will be always available. The end user is breaking the API you provide, API where not_exported_at_all_sub is not even mentioned. @@ -2163,16 +2163,16 @@
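The @EXPORT_OK approach described above can be sketched in a few lines; the sub name comes from the tutorial, the rest is a minimal illustration:

```perl
package Range::Validator;

use strict;
use warnings;
use Exporter qw(import);

# nothing exported by default; users must ask explicitly
our @EXPORT_OK = qw(validate);

sub validate {
    # ... validation logic as in the tutorial ...
}

1;
```

A program can then opt in at use time, and only then call the sub unqualified:

```perl
use Range::Validator qw(validate);
my @range = validate('0..2');
```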

option two - the Exporter module

option three - the OO module

-Preferred by many is the Object Oriented (OO) way. OO it's not better nor worst: it's a matter of aptitude or a matter of needs. See the relevant section on the core documentation: To-OO-or-not-OO? about the choice. +Preferred by many is the Object Oriented (OO) way. OO is neither better nor worse: it's a matter of aptitude or a matter of needs. See the relevant section of the core documentation, To-OO-or-not-OO?, about the choice. An object is just a little data structure that knows the class (the package) it belongs to. Nothing more complex than this. The data structure is generally a hash and its consciouness of its class (package) is provided by the bless core function. -Your API will just provide a constructor (conventionally new) and a serie of methods this object can use. +Your API will just provide a constructor (conventionally new) and a series of methods this object can use. -Again: nothing prevents end user to call one of your function by its fully qualified name as in Your::Module::_not_exported_at_all_sub() +Again: nothing prevents the end user from calling one of your functions by its fully qualified name, as in Your::Module::_not_exported_at_all_sub() it's just matter of being polite. -The core documentation include some tutorial about objects: +The core documentation includes some tutorials about objects: perlobj @@ -2190,7 +2190,7 @@
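The constructor-plus-methods shape described above can be sketched as follows; this is a generic illustration of bless, not an OO rewrite the tutorial actually performs:

```perl
package Range::Validator;

use strict;
use warnings;

# constructor: a hash that is blessed into the class so it knows its package
sub new {
    my ( $class, %args ) = @_;
    my $self = { range => $args{range} };
    return bless $self, $class;
}

# a method: receives the object as its first argument
sub validate {
    my $self = shift;
    # ... validate $self->{range} ...
}

1;
```

Usage would then read naturally as method calls:

```perl
my $validator = Range::Validator->new( range => '0..4' );
$validator->validate;
```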

advanced Makefile.PL usage

Until now we modified the BUILD_REQUIRES to specify dependencies needed while testing our module and PREREQ_PM to include modules needed by our module to effectively run. -The file format is described in the documentation of ExtUtils::MakeMaker where is stated that, since version 6.64 it is available another field: TEST_REQUIRES defined as: "A hash of modules that are needed to test your module but not run or build it". This is exactly what we need, but this force us to specify, also in Makefile.PL that we need 'ExtUtils::MakeMaker' => '6.64' in the CONFIGURE_REQUIRES hash. +The file format is described in the documentation of ExtUtils::MakeMaker, where it is stated that since version 6.64 another field is available: TEST_REQUIRES, defined as: "A hash of modules that are needed to test your module but not run or build it". This is exactly what we need, but it forces us to also specify, in Makefile.PL, that we need 'ExtUtils::MakeMaker' => '6.64' in the CONFIGURE_REQUIRES hash. The 6.64 version of ExtUtils::MakeMaker was released in 2012 but you cannot be sure end users have some modern perl, so we can safely use BUILD_REQUIRES as always or use some logic to fallback to "older" functionality if ExtUtils::MakeMaker is too old. You can use WriteMakefile1 sub used in the Makefile.PL of App::EUMM::Upgrade @@ -2199,15 +2199,15 @@
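Combining the two fields mentioned above, the relevant part of a modern Makefile.PL would be a sketch like this (module names are the ones used in this tutorial; the layout is illustrative):

```perl
use ExtUtils::MakeMaker;

WriteMakefile(
    NAME               => 'Range::Validator',
    # TEST_REQUIRES exists only from ExtUtils::MakeMaker 6.64 on,
    # so we must require that version at configure time
    CONFIGURE_REQUIRES => {
        'ExtUtils::MakeMaker' => '6.64',
    },
    # needed only to run the test suite
    TEST_REQUIRES => {
        'Test::More'      => '0',
        'Test::Exception' => '0',
        'Capture::Tiny'   => '0',
    },
    # needed to actually run the module
    PREREQ_PM => {
        'Carp' => '0',
    },
);
```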

other testing modules

In the current tutorial we used Test::Exception to test failures: consider also Test::Fatal -Overkill for simple test case but useful for complex one is the module Test::Class +Overkill for a simple test case, but useful for complex ones, is the module Test::Class -Other modules worth to see are in the Task::Kensho list. +Other modules worth checking out are in the Task::Kensho list.

advanced testing code

-If in your tests you have the risk of code repetition (against the DRY - Dont Repeat Yourself principle) you can find handy to have a module only used in your tests, a module under the /t folder. +If in your tests you run the risk of code repetition (against the DRY - Don't Repeat Yourself - principle) you may find it handy to have a module used only in your tests, a module under the /t folder. You need some precautions, though. @@ -2345,7 +2345,7 @@
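A common arrangement for such a test-only module is a package under t/lib plus a use lib line in each test file; all names below are illustrative, not something the tutorial ships:

```perl
# t/lib/TestHelper.pm -- shared helpers for the test suite only,
# never installed with the module itself
package TestHelper;

use strict;
use warnings;
use Exporter qw(import);

our @EXPORT_OK = qw(bad_strings);

# the invalid range strings every test wants to reject
sub bad_strings { return ( '1.2', '0..2,5.6,8', '1,2,.,3', '.' ) }

1;
```

Each test then pulls the helpers in explicitly (this assumes prove is run from the distribution root, so the relative path resolves):

```perl
use lib 't/lib';
use TestHelper qw(bad_strings);
```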

further readings about testing

acknowledgements

As all my works the present tutorial would be not possible without the help of the perlmonks.org community. -Not being an exhaustive list I want to thanks: Corion, choroba, Tux, 1nickt, marto, hippo, haukex, Eily, eyepopslikeamosquito, davido and kschwab (to be updated ;) +This is not an exhaustive list, but I want to thank: Corion, choroba, Tux, 1nickt, marto, hippo, haukex, Eily, eyepopslikeamosquito, davido and kschwab (to be updated ;)