The lint task above runs JSHint, which will look for a .jshintrc file and, if present, a .jshintignore file.
Use this sample .jshintrc file as a starting point, and modify it to change the value of "node", under the Environments section, to true.
Use this .jshintignore file to prevent JsHint from linting your dependencies.
Do the same thing in .gitignore, this time to ensure that node_modules does not get checked in.
Note that the node_modules directory is ignored by default, but this is just the starting point, and we will be building up from here.
Now run the lint command in the terminal:
npm run lint
You should see the expected output: no errors when there are none, or errors printed to the console when they are present. Also check the process exit code: it should be zero when there are no errors, and non-zero when there are errors. In a bash terminal, you can do this using the following command:
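In bash, the exit code of the most recently executed command is available in the special `$?` variable. A minimal sketch (the npm command is shown as a comment, since running it assumes this project's setup):

```shell
# After running the lint task:
#   npm run lint
# print the exit code of the last command via the special $? variable:
true; echo "$?"    # a successful command exits with 0
false; echo "$?"   # a failing command exits with 1
```

A zero exit code is what Travis uses to decide whether a build step passed.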
At this point, commit and push these changes to the GitHub remote, and Travis should automatically queue a build. The intent is for the build to fail if there are any JSHint errors.
That is all we need to do to make a project use continuous integration with linting.
We edit .npmignore to be the same as .gitignore, except that we exclude the folder containing the tests. Other projects depending on this project should not have to run its tests, only this project should need to run its own tests.
```
/node_modules
/.travis.yml
/test
```
Unlike linting, which works out of the box using only configuration files, testing requires a lot of additional work: writing the tests.
Unfortunately, writing tests is out of the scope of this article. Refer to the following documentation:
Now, once you have written some tests, place the test files into the test directory, and you should be able to run npm run test.
You should see the test results. As with linting, introduce a bug in your source code that is intended to fail the tests, and run the tests again, to make sure a failure does indeed get captured. Also test that the process exits with the expected exit codes.
After verifying these, it is time to add these tests to Travis. Simply add npm run test to .travis.yml, like so:
```
language: node_js
node_js:
  - "0.10"
  - "0.11"
  - "0.12"
  - "iojs"
  - "iojs-v1.0.4"
script:
  - npm run lint
  - npm run test
```
As before, commit and push your changes, with and without intentional test failures, to check that Travis fails the continuous build, and then passes it again.
Pat yourself on the back, because adding tests is the hardest task in this process. The remainder are relatively easy to do.
The cover command is a little more complex than the lint command and the test command. Here, we are running the istanbul tool, and passing it the path of the script that runs jasmine-node, along with all of its parameters. istanbul is smart enough to augment jasmine-node's test runner for its code coverage purposes, without jasmine-node even knowing what is happening to it. Think of istanbul as the test runner runner.
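The corresponding scripts entry in package.json might look like the following sketch. The exact path to the jasmine-node runner script and the name of the test directory are assumptions; adjust them to match your project:

```json
{
  "scripts": {
    "cover": "istanbul cover node_modules/jasmine-node/bin/jasmine-node -- test"
  }
}
```

Everything after the `--` is passed through to jasmine-node itself, which is how istanbul wraps the test runner without it noticing.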
To test that it works, run npm run cover. You should see the same output as there was previously when running npm run test, followed by some output showing test coverage, which should look similar to this.
The output merely shows the headline statistics; the detailed output lies in the reports, which can be found in the coverage folder.
To see the HTML report, run this command (any browser of your choice will do):
firefox coverage/lcov-report/index.html &
This is the human-readable report.
There is another report however, which is output to coverage/lcov.info, and this is the file which other code coverage tools, such as Coveralls, will need to analyse.
To verify that the coverage runs correctly, simply add an empty function that does not do anything, and check that the coverage percentage decreases from its previous value. Next, add many such useless functions, to the point that istanbul fails the build. Then remove all the excess functions and run the command again to verify that istanbul passes the build once more, and that the code coverage percentages go back up. While doing this, ensure that the process exit codes match accordingly.
Let us install coveralls by entering the following command in the terminal:
Coveralls does not actually do its own builds, so Travis has to do the build, and selected coverage-related artefacts from the build need to be sent to Coveralls. This is what the coveralls module is responsible for: reading in a coverage report in lcov format, and sending that report to Coveralls. The coveralls command which we just defined runs the cover command, and then pipes the lcov file output by istanbul to coveralls.
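Assuming coveralls has been installed as a dev dependency (npm install --save-dev coveralls), the coveralls script in package.json might be sketched as follows; the report path shown is istanbul's default output location:

```json
{
  "scripts": {
    "coveralls": "npm run cover && cat coverage/lcov.info | coveralls"
  }
}
```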
We are now almost ready for Coveralls, but first we need to tell Travis to run the cover and coveralls commands, instead of test as it was doing previously. It is not necessary to run both test and coveralls, because that would mean the tests run twice!
Edit .travis.yml to add npm run coveralls:
```
language: node_js
node_js:
  - "0.10"
  - "0.11"
  - "0.12"
  - "iojs"
  - "iojs-v1.0.4"
script:
  - npm run lint
  - npm run cover
after_success:
  - npm run coveralls
```
Lastly, we have to do some housekeeping, ensuring that the coverage-related files are ignored.
Edit .gitignore to ignore coverage output and .coveralls.yml. The Coveralls repository ID is supposed to be kept private.
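The entries to add might look like this; the paths assume istanbul's default coverage directory and the default location of the Coveralls configuration file:

```
/coverage
/.coveralls.yml
```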
Now, when you commit and push these changes, you should see Travis build the project. In the meantime, you should see Coveralls idle. Once the Travis build succeeds, the Coveralls build should trigger, and you should see a code coverage report.
Documentation requires the most CI work, because it is not just about running a script in a fire-and-forget style. It involves the additional step of collecting the build artefacts - the generated documentation - and publishing them. We have already done this previously: after generating the code coverage reports, we exported them to coveralls.io. That was fairly easy to do because there was an npm package that took care of these steps. Documentation, however, is more work, because we have to write the steps ourselves.
We can run the generatedocs command locally, and then access the documentation by opening it in the browser, for example, using firefox documentation/index.html. However, this is not very useful from a continuous integration perspective, as any build artefacts produced are discarded after the build has completed. In Travis, the only artefacts preserved automatically are the output logs from the build. It would be much better if Travis were to copy the documentation produced during the build to a web server, which users of this project could use as a reference.
We will be publishing our documentation on GitHub Pages, which hosts static files belonging to GitHub projects for free.
Firstly, set up Travis with permissions to write to your GitHub repository, using the instructions from the general page. You should have GH_TOKEN encrypted and stored in env.global.secure in .travis.yml.
We now add additional build steps to .travis.yml:
```
after_success:
  - npm run autodocs
  - npm run coveralls
env:
  global:
    - secure: YOUR_ENCRYPTED_GITHUB_ACCESS_TOKEN
```
In env.global, we define a list of environment variables that should be set during builds. The first one should have already been set (secure), and any other configuration settings for autodocs should be set here if required.
By default, autodocs will inspect the environment variables set by Travis, and determine whether:

- the build is not for a pull request
- the build is on the "master" branch
- the current job number is the first within the current build
If all of these are true, it will run npm run generatedocs, and then publish the generated documentation to GitHub Pages.
You can configure it to use a different branch, and even to build on releases (when a tag is pushed, instead of a branch). Refer to the autodocs API documentation for the full set of options that may be configured.
We then add the autodocs command to the scripts section in package.json:
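A sketch of the relevant package.json entry, assuming the autodocs package exposes an autodocs CLI command (check the package's own documentation for the exact invocation):

```json
{
  "scripts": {
    "autodocs": "autodocs"
  }
}
```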