Things
- Library vs framework
- Automated tools
- Gotchas
- Mocks
- Time warps
- Watch
- Config in files
- Snapshots
- Resources
- Is Jest worth it?
Library vs framework
- Mocha: Library
- Import chai for assertions
- Import sinon & lolex for mocks and time manipulation
- Import istanbul for code coverage
- Jest: Framework
- Automated: Jest Codemods
- Semi-automated: sed
Automated
yarn global add jest-codemods
- Converts the test runner parts only
- Jest uses Jasmine under the hood
Semi-automated
- Use sed or some other regex replacement tool
- Gist
- Requires a manual step to "clean up" after
Gotchas
- Serial vs parallel
- File level sandbox
- done(err)
- stdout / stderr truncation
- null or undefined assertions
Serial vs parallel
- Mocha default: Serial
- Jest default: Parallel
File level sandbox (1)
- In Mocha, side effects from one test can leak into another
- In Jest, there is a file-level sandbox
- All in-memory effects are undone
- But not any persisted effects (e.g. DB)
File level sandbox (2)
- Can also be problematic
- Extra steps to ensure that unwanted stuff is cleared
- After each file
- e.g. Drain database connection pool
File level sandbox (3)
afterAll((done) => {
  require('./path/to/server.js')._testTearDown((err) => {
    if (err) {
      console.error('tearDownApis fail', err);
    }
    // done(); // NOTE: normally `done()` would be called here
  });
  done();
});
- The teardown does not need to complete before Jest advances to the next test file
- Calling done() before the teardown is complete: ↓ the total run time of the suite
File level sandbox (4)
- Jest can run a configuration script per test file
- beforeAll() & afterAll() defined there will execute for every file
- ∴ define a test-specific teardown if necessary
if (typeof process.env.TEST_TYPE === 'string') {
  server._testTearDown = () => { /* test-specific tear down */ };
}
server.on('exit', () => { /* regular tear down */ });

{
  "setupTestFrameworkScriptFile": "./jest-per-file.js"
}
done(err)
- When the 1st parameter is set on the done() callback ⇒
- Mocha: Test fails
- Jest: Test passes (same as Jasmine)
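- A minimal sketch of the difference (doSomethingAsync is an assumed stand-in for any errback-style call):
it('reports the async error', (done) => {
  doSomethingAsync((err) => {
    done(err); // err set ⇒ Mocha fails the test; Jest/Jasmine pass it
  });
});
- In Jest, assert on the error explicitly to keep the failure, e.g. expect(err).toBeFalsy(); before calling done()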
stdout / stderr truncation
- Jest proxies all test output and application output
- However, the per-test output from Jest is truncated by string length
- Minimise output!
null or undefined assertions
- How to assert that something is null or undefined?
- The chai way: shown below
- Jest has no obvious equivalent; the closest option is also shown below
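- A sketch of both (value is a placeholder; the Jest form is my best approximation of "closest", not a quote from the talk):
// chai: passes when value is null or undefined
expect(value).to.not.exist;

// Jest: no single matcher covers both; loose equality is the closest
expect(value == null).toBe(true);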
Mocks
- Basic mocks
- Mocking Chained APIs
- Spies
- Module level mocks
Basic mocks (1)
myModule.foo = jest.fn().mockImplementation(myTestImpl);
Basic mocks (2)
expect(myModule.foo.mock.calls[0]).toEqual(['bar', 123]);
- Asserting invocation of a mock function
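- Putting the two slides together, a minimal sketch (the module path and implementation are placeholders):
const myModule = require('./my-module'); // hypothetical module under test
const myTestImpl = () => 'stubbed';      // assumed stand-in implementation

myModule.foo = jest.fn().mockImplementation(myTestImpl);

myModule.foo('bar', 123);
expect(myModule.foo.mock.calls[0]).toEqual(['bar', 123]);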
Spies
- Jest spies are limited
- Cannot access the .mock.calls array as you can with mocks
- Use mocks instead of spies, for the moment
Module level mocks (1)
- Jest allows you to mock entire modules
- Not limited to individual functions
- In practice, this does not work so well
Module level mocks (2)
- Easier option: Manually create a module level mock
- In a beforeAll(), require() the original
mockMyModule = Object.assign({}, originalMyModule, {
  myFunc: jest.fn().mockImplementation(() => {}),
});
- Works because Jest hoists mocks prior to the first require()
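- One way to express the same idea with a jest.mock() factory, as an alternative to the beforeAll() approach (jest.requireActual exists in newer Jest versions; ./my-module and myFunc are placeholders):
// hoisted by Jest above the require() calls in this file
jest.mock('./my-module', () => {
  const originalMyModule = jest.requireActual('./my-module');
  return Object.assign({}, originalMyModule, {
    myFunc: jest.fn().mockImplementation(() => {}),
  });
});

const myModule = require('./my-module'); // resolves to the partial mock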
Time warps
- Time manipulation basics
- Date
- Macro & micro tasks
Time manipulation basics
- Application uses the system time as input
- This is a rather common occurrence in tests
- ∴ tests need to fake the time in order to get repeatable tests
- In JS, also involves the event loop queue
Date
- Jest does not provide a means to mock Date
- In Mocha, the default approach would be to use lolex from sinon
- However, lolex doesn't play well with Jest
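- One possible workaround (my sketch, not from the talk), assuming the code under test reads the clock via Date.now():
const realDateNow = Date.now;

beforeAll(() => {
  // pin "now" to a fixed instant so results are repeatable
  Date.now = jest.fn(() => new Date('2017-01-01T00:00:00Z').getTime());
});

afterAll(() => {
  Date.now = realDateNow; // restore the real clock
});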
Macro & micro tasks (1)
- Set up: Use jest.useFakeTimers();
- Macro tasks: setImmediate(), setTimeout(), setInterval()
  - Use jest.runTimersToTime(ms)
- Micro tasks: new Promise(), process.nextTick()
  - Use jest.runAllTicks();
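- A minimal sketch of the flow above (it assumes the fake timers also stub process.nextTick, as the slide describes):
jest.useFakeTimers();

test('macro and micro tasks run only when told to', () => {
  const macro = jest.fn();
  const micro = jest.fn();

  setTimeout(macro, 1000); // macro task
  process.nextTick(micro); // micro task

  jest.runTimersToTime(1000); // run macro tasks due within 1000ms
  expect(macro).toHaveBeenCalled();

  jest.runAllTicks(); // flush the tick queue
  expect(micro).toHaveBeenCalled();
});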
Macro & micro tasks (2)
- In lolex, when you do clock.tick(ms):
  - Date gets advanced
  - Micro tasks get executed
  - Macro tasks (up to that ms) get executed
- In Jest: You control exactly which ones you wish to execute
Watch
- Filter to changes in git diff
- Filter to a regex
- With coverage
Filter to changes in git diff
- Run jest --watch
- This is the (very handy) default behaviour
- Rerun: Save either a test or application file
Filter to a regex
- While in --watch mode, hit p (for pattern)
- Type a regular expression to match a test file name
- Only these test files will run
With coverage
- Run jest --watch --coverage
- Generates code coverage reports via istanbul
- Out of the box - no config necessary!
Configuring Jest
- CLI config
- File based config
CLI config
- Use CLI flags such as --watch
- Once you figure out the right flags, put them in package.json as a run script
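- For example (the script names here are my own choice):
{
  "scripts": {
    "test": "jest --coverage",
    "test:watch": "jest --watch"
  }
}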
File based config
- Use the --config CLI flag
- Create a JSON file within the project
- Can be quite powerful for running:
- Tests in different environments
- Different sub-sets of tests
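- A sketch of one such file, say jest.integration.json, run via jest --config ./jest.integration.json (the keys are standard Jest options; the values are placeholders):
{
  "testEnvironment": "node",
  "testRegex": "\\.integration\\.test\\.js$",
  "setupTestFrameworkScriptFile": "./jest-per-file.js"
}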
Snapshots
- Basic snapshots
- Robust Snapshots
- DRY vs DAMP
- Write your own snapshot serialiser
- Test-driven development
- Pitfall
Basic snapshots (1)
- Mechanism by which the test runner
- creates expectations from results
- serialises them
- assembles them into a collection + saves it to disk
Basic snapshots (2)
- When a test is run for which a snapshot already exists:
- When the update flag is set, overwrite expectations with actual results
- Otherwise, diff actual against expectations
Basic snapshots (3)
expect(result).toMatchSnapshot();
- When running foo.test.js, snapshots are saved into __snapshots__/foo.test.js.snap
- In --watch mode, hit u to update the filtered set of snapshots
- In regular test mode, use --update to update all snapshots
Robust Snapshots
- Don't use a result object directly in a snapshot
- Instead, transform the result object into a filtered set of properties
- Also, add any additional meta-data to the result object
- "What would I want to
console.log()
here when debugging?"
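- A sketch of the idea (the result shape, callUnderTest, and testInput are assumptions):
const result = callUnderTest(); // hypothetical function under test

// snapshot a hand-picked view, not the raw result object
expect({
  status: result.status,
  name: result.body.name,
  input: testInput, // meta-data: the input that produced this result
}).toMatchSnapshot();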
DRY vs DAMP
- General software engineering principle: DRY
- "Don't repeat yourself" is valid in application code
- However, in test code, DAMP ("descriptive and meaningful phrases") is considered best practice
- Test cases: Fully self-descriptive
- Snapshots provide a means to remove some of the repetition in test cases
Pitfall
- Can be liberating not to have to write assertions by hand
- ∵ you are not writing them, it is easy to ignore them
- Need to take extra care to inspect the snapshots by hand
Write your own snapshot serialiser
"snapshotSerializers": [
"jest-object/serialise-js-object.js"
]
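- A minimal sketch of such a module; { test, print } is the shape Jest expects from a serialiser, while the matching logic and output format here are assumptions:
// serialise-js-object.js
module.exports = {
  // decide which values this serialiser handles
  test(val) {
    return val !== null && typeof val === 'object' && !Array.isArray(val);
  },
  // return the string that is written into the .snap file
  print(val) {
    return JSON.stringify(val, null, 2);
  },
};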
TDD (1)
- Use expect(result).toMatchSnapshot()
- Use the --watch CLI flag
- 1st run: Generate incorrect snapshots
- 2nd and subsequent runs:
- Either: Hand edit the snapshots to the real expected result
- Or: Change the impl. such that actual matches expected, then hit u
TDD (2)
- This is not "pure" test-driven development
- But it is pretty close
- Also: Closest I have ever gotten to it myself!
TDD (3)
- Snapshots are especially useful in parametric tests
- e.g. Hitting the same API endpoint repeatedly
- Vary the input each iteration
- Vary the (fake) time each iteration
- 1st: Write the test with a loop, but only one iteration
- Next: Make more iterations
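- A sketch of such a parametric test (hitEndpoint and the inputs are placeholders):
const inputs = ['alpha', 'beta', 'gamma'];

inputs.forEach((input) => {
  test(`endpoint result for ${input}`, () => {
    const result = hitEndpoint(input); // hypothetical call, input varies per iteration
    expect(result).toMatchSnapshot();
  });
});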
TDD (4)
Re-emphasis:
- Inspect generated snapshots with a fine-toothed comb
- Consider writing (parts of) snapshots by hand
Is Jest worth it?
- Starting new projects
- Existing simple projects
- Existing complex projects
Fin
- Migrating from Mocha to Jest is pretty difficult
- Jest's "batteries included" approach saves a lot of time
- Snapshots combined with --watch are a killer combo