Test-Driven Development (TDD) is a software development process introduced by Kent Beck as part of the Extreme Programming methodology. In TDD, unit tests are written first, before any of the implementation is developed.
The requirements are captured as test cases. As new requirements are added to the software, so too are new test cases. As the system is developed, we build up a bigger picture of the requirements and increase the coverage of the unit tests.
The process recommended by TDD is as follows:

1. Add a test that captures the new requirement.
2. Run all of the tests and confirm that the new test fails.
3. Write the simplest code that makes the test pass.
4. Run all of the tests again and confirm that they all now pass.
5. Refactor the code, re-running the tests to ensure that nothing has been broken.
A key point here is that the tests are added before any development is done to implement the functionality. The simplest solution that allows the test to pass is written first; this can then be refined by refactoring. The tests ensure that any refactoring has not introduced defects.
Test-driven development usually produces better quality code and reduces the scope for defects. It encourages the use of small iterations when developing software.
A useful side-effect is that the tests serve as a form of documentation for the requirements of the software.
Code developed under TDD can lead to flexible and extensible code. This is because developers often have to think in terms of smaller units that can be tested in isolation and integrated together. Since only the minimum amount of code is added for the tests to pass, the tests themselves are more likely to cover all of the code paths in the software.
When used in conjunction with a source control system, in the event of failing tests the code can be rolled back to the last version that had a full set of tests that passed.
Whilst it is easy to use TDD with new projects, it is difficult to apply to existing or legacy systems. If new code is added to legacy software, then clearly these new parts can be developed with TDD, but adding tests to legacy code can be difficult and time-consuming.
Some parts of a system are difficult to test in this way, such as user-interface code or database connectivity.
A large number of passing unit tests can offer a false sense of security, leading to other testing activities, such as integration and acceptance testing, receiving insufficient attention.
Developing and maintaining a large number of tests can be time-consuming.
Test-driven development can be easily accomplished using DUnitX, the unit testing framework which is provided with Delphi. Using the wizard built into Delphi, it is easy to create a DUnitX testing project, complete with a test unit and some sample tests.
Writing a failing test can be as simple as writing an empty test which does not call any functionality:
unit Tests.VATCalculator;

interface

uses
  DUnitX.TestFramework;

type
  [TestFixture]
  TTestVATCalculator = class
  public
    [Test]
    [TestCase('100_Pounds', '100,20')]
    procedure TestGetVATAmount(const NetAmount: Currency;
      const ExpectedAmount: Currency);
  end;

implementation

procedure TTestVATCalculator.TestGetVATAmount(const NetAmount: Currency;
  const ExpectedAmount: Currency);
var
  ReturnValue: Currency;
begin
  ReturnValue := 0; // No implementation is called yet, so the test will fail
  Assert.AreEqual(ExpectedAmount, ReturnValue);
end;

initialization
  TDUnitX.RegisterTestFixture(TTestVATCalculator);

end.
Running this test will result in the test failing. Implement the functionality for the test with the simplest code possible:
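A minimal sketch of such an implementation might look like the following; the unit name VATCalculator, the function GetVATAmount and the hard-coded 20% rate are illustrative assumptions rather than code generated by the wizard:

```delphi
unit VATCalculator;

interface

// Returns the VAT due on a net amount
function GetVATAmount(const NetAmount: Currency): Currency;

implementation

function GetVATAmount(const NetAmount: Currency): Currency;
begin
  // Simplest code that satisfies the '100,20' test case:
  // VAT at 20% of the net amount (rate is an illustrative assumption)
  Result := NetAmount * 0.2;
end;

end.
```

The test unit then adds VATCalculator to its uses clause and assigns `ReturnValue := GetVATAmount(NetAmount);` before the assertion, so the test compares the calculated VAT against the expected value.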
The tests will now pass and the process can be continued to add more requirements and to refactor as necessary.
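For example, a further requirement can often be captured simply by adding another TestCase attribute to the existing test method; each attribute runs the same test with a different set of comma-separated parameter values (the values below are illustrative):

```delphi
[Test]
[TestCase('100_Pounds', '100,20')]
[TestCase('50_Pounds', '50,10')]  // new requirement: VAT on 50 pounds
[TestCase('0_Pounds', '0,0')]     // edge case: no VAT on a zero amount
procedure TestGetVATAmount(const NetAmount: Currency;
  const ExpectedAmount: Currency);
```

Running the test suite again shows immediately whether the current implementation satisfies the new cases or whether further work is needed.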
Test-driven development is a powerful tool to capture requirements and ensure that tests are developed at the same time as the software. It is best used for new projects or for adding new modules to existing projects. Although it can be time-consuming, the tests provide a valuable mechanism to ensure that any new additions to the software do not break any existing functionality. It remains a very useful tool in a software developer’s toolbox.