Integration Tests are generated during development (e.g. via TDD) and are applied to units of code that have dependencies on other units. Unit Tests are small, highly focused tests which are applied to a unit of code in isolation.
The precise boundary of when a developer-generated test is considered unit vs integration is squishy at best, and depending on who you ask or where you look, you will find different answers (see M. Fowler). Even with this distinction between the test types, both may be implemented with the same framework (e.g. NUnit). Ultimately, test early and test often.
Using the TheoryAttribute allows testing a single unit whose inputs are a bit more complex than simple parameters, and whose behavior under test may have a broader range of edge cases. Integration tests can be used to exercise a method for ‘typical’ scenarios, and I’ve often found that trying to cover every possible scenario can be unrealistic. From the NUnit documentation:
A Theory is a special type of test, used to verify a general statement about the system under development. Normal tests are example-based.
(*) as of this post, this statement is applicable to NUnit version 2.5 through the 3.0 beta.
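Before getting to the project code, here is a minimal illustration of the Theory/Datapoints mechanics in the NUnit documentation's spirit (the fixture and member names below are mine, not from the project): the assertion must hold for every datapoint, and Assume.That() filters out inputs the theory does not cover.

```csharp
using System;
using NUnit.Framework;

[TestFixture]
public class SquareRootTheory
{
    // Datapoints supply candidate inputs for any Theory parameter
    // of a matching type (double, in this case).
    [Datapoints]
    public double[] Values = { 0.0, 1.0, 4.0, -1.0 };

    [Theory]
    public void SquareRootTimesItselfGivesOriginal(double value)
    {
        // The theory only applies to non-negative inputs;
        // datapoints that fail the assumption are skipped, not failed.
        Assume.That(value >= 0.0);

        double sqrt = Math.Sqrt(value);
        Assert.That(sqrt * sqrt, Is.EqualTo(value).Within(1e-10));
    }
}
```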
The code samples below are part of a project to generate Scenario objects that contain the input data for testing the functionality of a graph library (yes, I’m testing a library for testing). The Scenarios start out as YAML files and are parsed to an object. Data (of varying types) can be associated with each scenario and is read in from separate YAML data files. All of the parsing and hard work is handled by YamlDotNet by Antoine Aubry. You do not need to worry about YAML or graph functionality in the code samples below, this is just a little background.
YamlScenarioReader: This class reads a scenario data file and returns a Scenario object.
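A sketch of the reader, assuming a Scenario class with simple properties (the member names here are illustrative, not the project's actual shape); YamlDotNet's Deserializer does the real work:

```csharp
using System.IO;
using YamlDotNet.Serialization;

public class Scenario
{
    // Placeholder properties -- the real Scenario carries
    // the input data for exercising the graph library.
    public string Name { get; set; }
    public string Description { get; set; }
}

public class YamlScenarioReader
{
    // Parses a scenario YAML file into a Scenario object.
    public Scenario ReadFile(string path)
    {
        using (var reader = new StreamReader(path))
        {
            var deserializer = new Deserializer();
            return deserializer.Deserialize<Scenario>(reader);
        }
    }
}
```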
Even though YamlDotNet will be doing the heavy lifting, I still want to run tests that read in every file as a cursory inspection of the file formats, since the test scenario files tend to change often. These are an example of what I consider integration tests.
For reading the data, there is a corresponding reader with a T type parameter indicating the type of the underlying data to be read in. Again, YamlDotNet is doing the hard work; all we need to do is provide a way of validating the files.
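A sketch of the generic reader; the class name YamlNodeDataReader and the dictionary key type are assumptions inferred from the fixture naming and the Returns_Dictionary() test described below:

```csharp
using System.Collections.Generic;
using System.IO;
using YamlDotNet.Serialization;

public class YamlNodeDataReader<T>
{
    // Reads a YAML data file and returns the node data keyed by name;
    // T is the type of the underlying data values.
    public Dictionary<string, T> ReadFile(string path)
    {
        using (var reader = new StreamReader(path))
        {
            var deserializer = new Deserializer();
            return deserializer.Deserialize<Dictionary<string, T>>(reader);
        }
    }
}
```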
The test fixture is as follows:
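A condensed sketch of the fixture, assuming the structure described below (file paths, the DataFileWrapper members, and the DummyItem shape are placeholders; the Theory/Datapoints/generic-TestFixture wiring follows NUnit 2.5+ conventions):

```csharp
using System.Collections.Generic;
using NUnit.Framework;

public class DummyItem
{
    // Stand-in for more complex data in the data files.
    public string Name { get; set; }
    public int Value { get; set; }
}

public class DataFileWrapper<T>
{
    public DataFileWrapper(string path) { Path = path; }
    public string Path { get; private set; }

    // Returns the reader matching the wrapped data type T
    // (reader class name assumed).
    public YamlNodeDataReader<T> CreateReader()
    {
        return new YamlNodeDataReader<T>();
    }
}

public class YamlNodeDataReaderFixture
{
    // One generic nested fixture per method under test; NUnit
    // instantiates it once per TestFixture type argument.
    [TestFixture(typeof(double))]
    [TestFixture(typeof(DummyItem))]
    public class ReadFile<T>
    {
        [Datapoints]
        public DataFileWrapper<double>[] DataFilesOfDouble
        {
            get
            {
                return new[] { new DataFileWrapper<double>(@"Data\Doubles.yaml") };
            }
        }

        [Datapoints]
        public DataFileWrapper<DummyItem>[] DataFilesOfDummyItem
        {
            get
            {
                return new[] { new DataFileWrapper<DummyItem>(@"Data\DummyItems.yaml") };
            }
        }

        [Theory]
        public void Returns_Dictionary(DataFileWrapper<T> dataFile)
        {
            // Only the Datapoints property whose element type matches
            // DataFileWrapper<T> feeds this Theory.
            Dictionary<string, T> result =
                dataFile.CreateReader().ReadFile(dataFile.Path);

            Assert.That(result, Is.Not.Null);
            Assert.That(result.Count, Is.GreaterThan(0));
        }
    }
}
```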
The test method annotated with the Theory attribute uses the data properties annotated with Datapoints to provide the input data. Each Datapoints property should correspond to one of the types indicated in the TestFixtureAttributes atop the ReadFile class.
YamlNodeDataReaderFixture has “fixture” in its name, but it does not have a unit test attribute. I use the convention of a class for each class under test, then subclasses for each method under test, see Structuring Unit Tests by Phil Haack.
DataFilesOfDouble is a property which returns an array of DataFileWrapper&lt;double&gt;
DataFilesOfDummyItem is a property which returns an array of DataFileWrapper&lt;DummyItem&gt;
Returns_Dictionary() is the test method; the input data it receives is applied to the ReadFile() method under test.
DummyItem is simply a class used to represent more complex data in the data files.
DataFileWrapper is a helper class needed in the example to provide a single generic input parameter to the Returns_Dictionary() test method; it also provides methods to return which type of reader is under test, based on the type of the data. While not absolutely necessary, it makes the tests cleaner and more maintainable.
Here is an example of the output (in ReSharper) showing how the nested test classes for each method show up:
One annoyance I have with parameterized Theory tests is that there is no way to distinguish between the individual tests by their respective parameter values. This can cause a little grief when trying to identify which parameters failed and isolate them, but it is a minor inconvenience for the ability to beat on your code with automated tests.