Introduction
Within a Quality Assurance process, we mainly have two kinds of tests:
- Unit tests: a set of verifications we apply to each logical unit in our system. Each test checks the unit's behavior in isolation, without taking its collaborations with other units into account.
- System tests (or integration tests): tests that check the behavior of the system as a whole, emphasizing how units collaborate.
We're going to talk about unit testing and how to apply it to a C/C++ project using the CPPUnit unit testing framework.
I'll assume you already know what unit testing is and why it is so important in the software development process. If you want to read more about the basics of unit testing, you can check the JUnit web site.
Unit test design
Think about a typical scenario in a development team: a programmer is testing his or her code with the debugger. With this tool, you can check the value of each variable at any point in the program. Running step by step, you can verify whether a variable has the expected value. This is powerful, but also slow and error-prone: few programmers can stay focused through a deep, hard and long debugging session, and after one or two hours the programmer's brain is close to breaking down. All these repetitive and tedious verifications can be done automatically, with a few programming instructions and the proper tools.
The tools I'm going to talk about are called "unit testing frameworks". With them, you can write small modules which help you test the modules (classes, functions or libraries) of your applications.
Let's see an example: we're programming a small module whose only responsibility is to add two numbers. As we're coding in plain C, this module is represented by a C function:
int addition(int a, int b)
{
    return (a + b);
}
Our testing unit should be coded as another module, that is, another C function. This function checks a representative set of addition cases, and returns TRUE or FALSE, denoting whether the module passes the test:
BOOL additionTest()
{
    if ( addition(1, 2) != 3 )
        return (FALSE);
    if ( addition(0, 0) != 0 )
        return (FALSE);
    if ( addition(10, 0) != 10 )
        return (FALSE);
    if ( addition(-8, 0) != -8 )
        return (FALSE);
    if ( addition(5, -5) != 0 )
        return (FALSE);
    if ( addition(-5, 2) != -3 )
        return (FALSE);
    if ( addition(-4, -1) != -5 )
        return (FALSE);
    return (TRUE);
}
As we can see, we've covered all the sign combinations of the operands:
- Positive + Positive
- Zero + Zero
- Positive + Zero
- Negative + Zero
- Positive + Negative
- Negative + Positive
- Negative + Negative
Each test compares the addition result with the expected value, and returns FALSE if the result differs from the expected one. If the execution path reaches the last line, we consider that all checks have passed, and the function returns TRUE.
This small module (or function) is called a Test Case, and it groups a set of checks we perform over a single unit. Every verification should relate to a single unit scenario. In this case, we check how the "addition operation" behaves with respect to the operands' signs. We can write other Test Cases to check other scenarios. For example, we can code another Test Case to check our module's behavior against typical addition properties:
int additionPropertiesTest()
{
    if ( addition(1, 2) != addition(2, 1) )
        return (FALSE);
    if ( addition(1, addition(2, 3)) != addition(addition(1, 2), 3) )
        return (FALSE);
    if ( addition(10, 0) != 10 )
        return (FALSE);
    if ( addition(10, -10) != 0 )
        return (FALSE);
    return (TRUE);
}
In this example, we've checked some mathematical properties of addition (commutativity, associativity, identity and inverse). These two Test Cases build a Test Suite, that is, a collection of Test Cases which test the same unit.
All these Test Cases and Test Suites should be developed while we're coding the units, and every time a unit changes, the corresponding unit test should reflect the change, modifying an existing Test Case or adding a new one.
For instance, if we improve our "addition" module so it can add decimal numbers, we have to extend our tests, for example by adding a new addDecimalNumbersTest Test Case.
Extreme Programming recommends that you code all these unit tests before you code the target unit. The main reason is simple: during development, you're in a permanent research stage, thinking about how a unit should behave, what public interface it should publish, what parameters its methods should take, and other concrete aspects of its external access and internal behavior. By coding the unit tests before the unit itself, you gather all this knowledge up front, and when you code the main unit, you'll develop faster and better than the other way around.
Each time a team wishes to deploy a new release, it should run the complete unit test battery. If all units pass their tests, the team can release the new version. If at least one unit doesn't pass all its tests, we've found a bug. In that case, we must code another test (even adding a new Test Case if necessary) that reproduces the conditions of the bug. Once the new test reproduces the bug reliably, we can fix the code and run the test again. If the unit now passes, we consider the bug resolved and we can release the new version.
Adding a new test case for each bug found is very important, because a bug can reappear, and we need a test that detects it when it comes back. In this way, our test battery keeps growing, covering more and more possible errors and all historic bugs.
Testing tools
Some time ago, Kent Beck and Erich Gamma wrote a set of Java classes to make unit testing as automatic as possible. They called it JUnit, and it became a great hit in the unit testing world. Other developers ported the code to other languages, building a big collection of products called xUnit frameworks. Among them, we can find frameworks for C/C++ (CUnit and CPPUnit), Delphi (DUnit), Visual Basic (VBUnit), .NET (NUnit), and many others.
All these frameworks follow similar rules, so if you've used one of them, you can probably use any other, with a few language-dependent exceptions.
Now, we're going to explain how you can use CPPUnit to write your own unit tests and improve your units' quality.
CPPUnit uses object-oriented programming, so we're going to work with concepts like inheritance, encapsulation and polymorphism. CPPUnit also relies on standard C++ exception handling, so you should understand the concept of an "exception" and constructs like throw, try and catch.
CPPUnit
Each Test Case should be coded inside a class derived from TestCase. This class gives us all the basic functionality to run a test, register it inside a Test Suite, and so on.
For instance, suppose we've written a small module which stores some data on disk. This module (coded as a class called DiskData) has two main responsibilities: loading and storing data in a file. Let's take a look:
typedef struct _DATA
{
    int  number;
    char string[256];
} DATA, *LPDATA;

class DiskData
{
public:
    DiskData();
    ~DiskData();

    LPDATA getData();
    void   setData(LPDATA value);

    bool load(char *filename);
    bool store(char *filename);

private:
    DATA m_data;
};
For now, it isn't important how these methods are implemented; the important thing is to make sure this class does everything it must do, that is, load and store data correctly into a file.
In order to do this verification, we're going to create a new Test Suite with two test cases: one for loading data and another for storing it.
Using CPPUnit
You can get the latest CPPUnit version from the project's download page, where you can find all the libraries, documentation, examples and other interesting stuff. (I've downloaded 1.8.0 and it works fine.)
In the Win32 world, you can use CPPUnit under Visual C++ (6 and later), and since CPPUnit is written in ANSI C++, there are also a few ports to other environments like C++Builder.
All the steps and information about building the libraries can be found in the INSTALL-WIN32.txt file inside the CPPUnit distribution. Once all the binaries are built, you can write your own Test Suites.
In order to write your own unit test applications, under Visual C++, you must follow these steps:
- Create a new dialog-based MFC application (or a doc-view one).
- Enable RTTI: Project Settings - C++ - C++ Language.
- Add the CPPUnit\include folder to the include directories: Tools - Options - Directories - Include.
- Link your application with cppunitd.lib (for static linking) or cppunitd_dll.lib (for dynamic linking), and testrunnerd.lib. If you're compiling under the "Release" configuration, you should link with the same libraries, but without the "d" suffix.
- Copy testrunnerd.dll to your executable folder, or any other folder in your path, and also cppunitd_dll.dll if you linked dynamically (or testrunner.dll and cppunit_dll.dll if you're building under Release).
Once your project is ready, we can code our first unit test class.
We're going to test our DiskData class, which mainly performs two operations: loading and storing data in a disk file. Our Test Suite should cover these two operations with two Test Cases: one for loading and the other for storing the data.
Let's take a look at the unit test class definition:
#if !defined(DISKDATA_TESTCASE_H_INCLUDED)
#define DISKDATA_TESTCASE_H_INCLUDED

#if _MSC_VER > 1000
#pragma once
#endif // _MSC_VER > 1000

#include <cppunit/TestCase.h>
#include <cppunit/extensions/HelperMacros.h>
#include "DiskData.h"

class DiskDataTestCase : public CppUnit::TestCase
{
    CPPUNIT_TEST_SUITE(DiskDataTestCase);
        CPPUNIT_TEST(loadTest);
        CPPUNIT_TEST(storeTest);
    CPPUNIT_TEST_SUITE_END();

public:
    void setUp();
    void tearDown();

protected:
    void loadTest();
    void storeTest();

private:
    DiskData *fixture;
};

#endif
First of all, we must include TestCase.h and HelperMacros.h. The first one lets us derive our new class from the TestCase base class. The second one provides some macros to define unit tests faster, like CPPUNIT_TEST_SUITE (to start a Test Suite definition), CPPUNIT_TEST (to define a Test Case) or CPPUNIT_TEST_SUITE_END (to end the Test Suite definition).
Our class (called DiskDataTestCase) overrides two methods called setUp() and tearDown(). These methods are called automatically, and are executed when each Test Case starts and ends, respectively.
The protected methods implement our test logic, one for each Test Case. A few lines below, we'll explain how you can code your test logic.
And finally, we define an attribute called fixture. This pointer will hold the target object of our tests. We create this object inside the setUp() method, which is called before each Test Case. The Test Case code is then executed against our fixture object, and finally we destroy the object inside tearDown(), after each Test Case execution. This way, we get a fresh object every time we execute a Test Case.
Our test sequence should be something like this:
- Start the test application.
- Click the "Run" button.
- Call the setUp() method: create our fixture object.
- Call the first test case method.
- Call the tearDown() method: free the fixture object.
- Call the setUp() method: create our fixture object.
- Call the second test case method.
- Call the tearDown() method: free the fixture object.
- ...
The initial implementation of our test class looks like this:
#include "DiskDataTestCase.h"

CPPUNIT_TEST_SUITE_REGISTRATION(DiskDataTestCase);

void DiskDataTestCase::setUp()
{
    fixture = new DiskData();
}

void DiskDataTestCase::tearDown()
{
    delete fixture;
    fixture = NULL;
}

void DiskDataTestCase::loadTest()
{
}

void DiskDataTestCase::storeTest()
{
}
The implementation is very simple for now: the setUp and tearDown methods create and free the fixture object, respectively. The test case methods are still empty; we're going to fill them in next.
Test case programming
Once we know what aspects we should test, we must program it. We can perform any operation we need: base library calls, 3rd party library calls, Win32 API calls, or simply use internal attributes with C/C++ operators and instructions.
Sometimes we'll need external help, like an auxiliary file or a database table which stores known-correct data. In our test case, we then compare the internal data with the external data to check that they're the same.
Each time we find an error (for instance, if we detect that the internal data isn't the same as the external correct data), we should raise a specific exception. You can do this with the CPPUNIT_FAIL(message) helper macro, which raises an exception carrying the message parameter.
There is another way to check a condition and raise an exception if it's false, all in a single step: assertions. Assertions are macros that check a condition and raise the proper exception if the condition is false.
These are some of the assertion macros:
- CPPUNIT_ASSERT(condition): checks the condition and throws an exception if it's false.
- CPPUNIT_ASSERT_MESSAGE(message, condition): checks the condition and throws an exception showing the specified message if it's false.
- CPPUNIT_ASSERT_EQUAL(expected, current): checks whether expected is the same as current, and raises an exception showing the expected and current values if not.
- CPPUNIT_ASSERT_EQUAL_MESSAGE(message, expected, current): like the previous one, but also showing the specified message.
- CPPUNIT_ASSERT_DOUBLES_EQUAL(expected, current, delta): checks whether the difference between expected and current is smaller than delta. If it fails, the expected and current values are shown.
Continuing with our example, we should now code our loadTest method. We'll follow this algorithm: we need an auxiliary file which stores one correct DATA structure. How this auxiliary file is created isn't important, but it is very important that the file is correctly created and that the DATA structure is correctly stored in it. To check our load method's behavior, we call it with our auxiliary file, and then check whether the loaded data is the same data we know is stored in the file. We can code it like this:
#define AUX_FILENAME "ok_data.dat"
#define FILE_NUMBER  19
#define FILE_STRING  "this is correct text stored in auxiliar file"

void DiskDataTestCase::loadTest()
{
    TCHAR absoluteFilename[MAX_PATH];
    DWORD size = MAX_PATH;

    // RelativeToAbsolutePath is a helper of the sample project,
    // not part of CPPUnit.
    strcpy(absoluteFilename, AUX_FILENAME);
    CPPUNIT_ASSERT( RelativeToAbsolutePath(absoluteFilename, &size) );

    // load() must succeed and fill the internal DATA structure.
    CPPUNIT_ASSERT( fixture->load(absoluteFilename) );

    LPDATA loadedData = fixture->getData();
    CPPUNIT_ASSERT(loadedData != NULL);

    CPPUNIT_ASSERT_EQUAL(FILE_NUMBER, loadedData->number);
    CPPUNIT_ASSERT( 0 == strcmp(FILE_STRING, fixture->getData()->string) );
}
With a single test case, we're checking four possible errors:
- the load method's return value
- the getData method's return value
- the number structure member's value
- the string structure member's value
In our second test case, we'll follow a similar scheme, but things get a little harder. We're going to fill our fixture with known data, store it in a new temporary disk file, and then open both files (the new one and the auxiliary one), read them and compare their contents. Both files should be identical, because the store method must generate the same file structure:
void DiskDataTestCase::storeTest()
{
    DATA  d;
    DWORD tmpSize, auxSize;
    BYTE  *tmpBuff, *auxBuff;
    TCHAR absoluteFilename[MAX_PATH];
    DWORD size = MAX_PATH;

    // Fill the fixture with known data and store it in a temporary file.
    d.number = FILE_NUMBER;
    strcpy(d.string, FILE_STRING);
    strcpy(absoluteFilename, AUX_FILENAME);
    CPPUNIT_ASSERT( RelativeToAbsolutePath(absoluteFilename, &size) );

    fixture->setData(&d);
    CPPUNIT_ASSERT( fixture->store("data.tmp") );

    // Read both files back and compare them byte by byte.
    tmpSize = ReadAllFileInMemory("data.tmp", tmpBuff);
    auxSize = ReadAllFileInMemory(absoluteFilename, auxBuff);

    CPPUNIT_ASSERT_MESSAGE("New file doesn't exist?", tmpSize > 0);
    CPPUNIT_ASSERT_MESSAGE("Aux file doesn't exist?", auxSize > 0);
    CPPUNIT_ASSERT(tmpSize != 0xFFFFFFFF);
    CPPUNIT_ASSERT(auxSize != 0xFFFFFFFF);
    CPPUNIT_ASSERT(tmpBuff != NULL);
    CPPUNIT_ASSERT(auxBuff != NULL);

    CPPUNIT_ASSERT_EQUAL((DWORD) sizeof(DATA), tmpSize);
    CPPUNIT_ASSERT_EQUAL(auxSize, tmpSize);
    CPPUNIT_ASSERT( 0 == memcmp(tmpBuff, auxBuff, sizeof(DATA)) );

    delete [] tmpBuff;
    delete [] auxBuff;
    ::DeleteFile("data.tmp");
}
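The ReadAllFileInMemory helper used above belongs to the sample project, not to CPPUnit. As an assumption about its intended behavior (returning the file size, or 0xFFFFFFFF on error, as the checks above expect), a portable sketch could look like this:

```cpp
#include <cstdio>
#include <cstring>

typedef unsigned char BYTE;
typedef unsigned long DWORD;

// Reads a whole file into a newly allocated buffer.
// Returns the file size, or 0xFFFFFFFF if the file can't be read.
// The caller owns the returned buffer and must delete[] it.
DWORD ReadAllFileInMemory(const char *filename, BYTE *&buffer)
{
    buffer = NULL;
    std::FILE *f = std::fopen(filename, "rb");
    if (f == NULL)
        return 0xFFFFFFFF;

    // Find the file size by seeking to the end.
    std::fseek(f, 0, SEEK_END);
    long fileSize = std::ftell(f);
    std::fseek(f, 0, SEEK_SET);

    buffer = new BYTE[fileSize];
    DWORD bytesRead = (DWORD)std::fread(buffer, 1, fileSize, f);
    std::fclose(f);

    if (bytesRead != (DWORD)fileSize) {
        delete [] buffer;
        buffer = NULL;
        return 0xFFFFFFFF;
    }
    return bytesRead;
}
```

Reading both files fully into memory keeps the comparison in the test case down to a single memcmp call.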
As we can see, we've filled a DATA structure with known data and stored it using our fixture object. Then we read the resulting file (data.tmp) and compare it with our pattern file. We make all kinds of verifications: buffer and file sizes, and the buffers' contents. If both buffers are identical, our store method works fine.
Launching user interface
And finally, we're going to see how to show the MFC-based user interface dialog compiled inside the TestRunner.dll library.
We should open our application class implementation file (ProjectNameApp.cpp) and add these lines to the InitInstance method:
#include <cppunit/ui/mfc/TestRunner.h>
#include <cppunit/extensions/TestFactoryRegistry.h>

BOOL CMy_TestsApp::InitInstance()
{
    ....

    CppUnit::MfcUi::TestRunner runner;
    runner.addTest( CppUnit::TestFactoryRegistry::getRegistry().makeTest() );
    runner.run();

    return TRUE;
}
Simple, isn't it? Just define a runner instance and add all the registered tests. Tests are registered through the CPPUNIT_TEST_SUITE_REGISTRATION macro call inside our CPP file. Once the tests are registered and added to the runner, we can show the dialog with the run method.
Now we're ready to run our test cases. Just compile the new project and run it from Visual Studio. You'll see the MFC-based runner dialog. Click "Browse" and a test browser dialog appears: select a single test (a green node), or select the parent blue node to run all the registered tests.