
Log Parameters in CUnit Tests

Dr. Uwe Doetzkies
created: 2015/04/12
last change: 2015/04/12 (Uwe Doetzkies)

I know that my English is not the best. Please do not hesitate to send me comments and corrections on this text. Thanks. U.D.

Result (Abstract)

The log of a CUnit test suite contains only the executed tests and the failed assertions. It does not seem possible to log the parameters of a test, e.g. when a test is executed more than once. This paper shows a way to log parameters in CUnit tests without modifying the CUnit framework.

Introduction

Some weeks ago I started to use the CUnit framework for testing. Because most tests were almost identical except for some parameters, I tried to implement the test only once and execute it in a loop like:

    void test (void) {
        for (int i = 0; i < 7000; i++) {
            CU_ASSERT_EQUAL (foo(i), foo_expected(i));
        }
    }
Protocol
    1. Example.c:3  - CU_ASSERT_EQUAL (foo(i), foo_expected(i))
    2. Example.c:3  - CU_ASSERT_EQUAL (foo(i), foo_expected(i))
    3. Example.c:3  - CU_ASSERT_EQUAL (foo(i), foo_expected(i))
    4. Example.c:3  - CU_ASSERT_EQUAL (foo(i), foo_expected(i))

But when you look into the log you can see that there are failures, yet there is no way to see the current value of the loop variable and so to identify which tests failed.
Of course, you could create a test suite with 7000 identical tests - but who wants to do that?
So I asked on the CUnit discussion list for a solution to this problem. Because CUnit has been in use for many years, I could not imagine that this problem has no solution.
Until now there has been no answer...

Solution Requirements

So I decided to solve this problem. There were some requirements for the solution:

Solution Architecture

I created a new level of tests, called test steps or sub tests. The function that starts a step can log all parameters of the test. Additionally, I use the fact that the finalization function of a test (basic_test_complete_message_handler in Basic.c for basic tests) can be called more than once during a test, because it does not affect the state of the test. So I can use it to report the result of a test step.
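
The intended usage pattern, as a minimal sketch: initStep and exitStep are the functions defined in the Solution section below, foo and foo_expected are the hypothetical functions from the introduction.

    void test (void) {
        for (int i = 0; i < 7000; i++) {
            // log the parameter of this test step
            CU_pTeststep step = initStep (" ==> test (i=%d) ...", i);
            CU_ASSERT_EQUAL (foo(i), foo_expected(i));
            // report the verdict of this test step
            exitStep (step);
        }
    }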

Solution


#include <stdarg.h>
#include <stdio.h>
#include "CUnit/Basic.h"

static void basic_teststep_msg (const char* pFmt, va_list args) {
    // adapted from basic_test_start_message_handler (Basic.c)
    // TODO: Write similar functions for the other CUnit test modes
    CU_BasicRunMode mode = CU_basic_get_mode();
    if (CU_BRM_SILENT != mode) {
        fprintf(stdout, "\n");
        if (NULL != pFmt) {
            vfprintf(stdout, pFmt, args);
        }
    }
}

// a test step is identified by the last failure record that existed
// when the step was started
typedef CU_pFailureRecord CU_pTeststep;

static void (*pNewStepMsg)(const char*, va_list) = basic_teststep_msg;
// TODO: Write a getter/setter

CU_pTeststep initStep(const char* fmt, ...) {
    va_list args;
    CU_pFailureRecord pLast;
    va_start (args, fmt);
    if (pNewStepMsg != NULL) {
        pNewStepMsg (fmt, args);
    }
    va_end (args);
    // remember the last failure record recorded so far;
    // it may be NULL if no failure has been recorded yet
    pLast = CU_get_failure_list();
    if (pLast != NULL) {
        while (pLast->pNext != NULL) pLast = pLast->pNext;
    }
    return pLast;
}

void exitStep(CU_pTeststep pTs) {
    if (NULL == CU_get_test_complete_handler ()) return;
    // report only the failures recorded since the matching initStep();
    // if pTs is NULL, the whole failure list belongs to this step
    CU_get_test_complete_handler () (
        CU_get_current_test(),
        CU_get_current_suite(),
        (pTs != NULL) ? pTs->pNext : CU_get_failure_list());
}

void continueTest () {
    initStep ("%s continued ...", CU_get_current_test()->pName);
}
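
The TODO above asks for a getter/setter for the step message handler. A possible sketch; the typedef and the function names CU_get_teststep_msg_handler / CU_set_teststep_msg_handler are my own suggestions, not part of CUnit:

typedef void (*CU_TeststepMessageHandler)(const char* pFmt, va_list args);

CU_TeststepMessageHandler CU_get_teststep_msg_handler (void) {
    return pNewStepMsg;
}

void CU_set_teststep_msg_handler (CU_TeststepMessageHandler pHandler) {
    // a NULL handler silences the step messages (initStep checks for NULL)
    pNewStepMsg = pHandler;
}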


Comments on Solution


Example Implementation


static void testUseSteps(void)
{
    int nr = 0;
    char buffer[100];
    CU_pTeststep step;
    const char* test = CU_get_current_test()->pName;
    CU_FAIL ("Failure before the first test step");

    for (nr = 0; nr < 15; nr++) {
        step = initStep (" ==> %s (nr=%2d) ...", test, nr);
        CU_ASSERT_NOT_EQUAL(nr, 11);
        CU_ASSERT_NOT_EQUAL(0, nr);
        CU_ASSERT_NOT_EQUAL(-12, nr);
        CU_ASSERT_NOT_EQUAL(10, nr);
        exitStep(step);
    }
    continueTest();
    CU_FAIL ("Failure after the first test step set");

    for (nr = 0; nr < 15; nr++) {
        step = initStep (" ==> %s (2nd set: nr=%2d) ...", test, nr);
        CU_ASSERT_NOT_EQUAL(nr, 8);
        CU_ASSERT_NOT_EQUAL(3, nr);
        CU_ASSERT_NOT_EQUAL(-4, nr);
        CU_ASSERT_NOT_EQUAL(-nr, nr);
        exitStep(step);
    }
    continueTest();
    CU_FAIL ("Failure after the last test step");
}
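
To produce the protocol shown below, the test has to be registered and run as usual. A minimal sketch using the standard CUnit basic interface (suite and test names chosen to match the output, error handling kept short):

int main (void) {
    CU_pSuite pSuite;

    if (CUE_SUCCESS != CU_initialize_registry()) return CU_get_error();
    pSuite = CU_add_suite("TestUseSteps", NULL, NULL);
    if (NULL == pSuite || NULL == CU_add_test(pSuite, "testUseSteps", testUseSteps)) {
        CU_cleanup_registry();
        return CU_get_error();
    }
    CU_basic_set_mode(CU_BRM_VERBOSE);
    CU_basic_run_tests();
    CU_cleanup_registry();
    return CU_get_error();
}
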
Suite: TestUseSteps
Test: testUseSteps ...
==> testUseSteps (nr= 0) ...FAILED
1. ExampleTests.c:304 - CU_ASSERT_NOT_EQUAL(0,nr)
==> testUseSteps (nr= 1) ...passed
==> testUseSteps (nr= 2) ...passed
==> testUseSteps (nr= 3) ...passed
==> testUseSteps (nr= 4) ...passed
==> testUseSteps (nr= 5) ...passed
==> testUseSteps (nr= 6) ...passed
==> testUseSteps (nr= 7) ...passed
==> testUseSteps (nr= 8) ...passed
==> testUseSteps (nr= 9) ...passed
==> testUseSteps (nr=10) ...FAILED
1. ExampleTests.c:306 - CU_ASSERT_NOT_EQUAL(10,nr)
==> testUseSteps (nr=11) ...FAILED
1. ExampleTests.c:303 - CU_ASSERT_NOT_EQUAL(nr,11)
==> testUseSteps (nr=12) ...passed
==> testUseSteps (nr=13) ...passed
==> testUseSteps (nr=14) ...passed
testUseSteps continued ...
==> testUseSteps (2nd set: nr= 0) ...FAILED
1. ExampleTests.c:317 - CU_ASSERT_NOT_EQUAL(-nr,nr)
==> testUseSteps (2nd set: nr= 1) ...passed
==> testUseSteps (2nd set: nr= 2) ...passed
==> testUseSteps (2nd set: nr= 3) ...FAILED
1. ExampleTests.c:315 - CU_ASSERT_NOT_EQUAL(3,nr)
==> testUseSteps (2nd set: nr= 4) ...passed
==> testUseSteps (2nd set: nr= 5) ...passed
==> testUseSteps (2nd set: nr= 6) ...passed
==> testUseSteps (2nd set: nr= 7) ...passed
==> testUseSteps (2nd set: nr= 8) ...FAILED
1. ExampleTests.c:314 - CU_ASSERT_NOT_EQUAL(nr,8)
==> testUseSteps (2nd set: nr= 9) ...passed
==> testUseSteps (2nd set: nr=10) ...passed
==> testUseSteps (2nd set: nr=11) ...passed
==> testUseSteps (2nd set: nr=12) ...passed
==> testUseSteps (2nd set: nr=13) ...passed
==> testUseSteps (2nd set: nr=14) ...passed
testUseSteps continued ...FAILED
1. ExampleTests.c:299 - CU_FAIL("Failure before the first test step")
2. ExampleTests.c:304 - CU_ASSERT_NOT_EQUAL(0,nr)
3. ExampleTests.c:306 - CU_ASSERT_NOT_EQUAL(10,nr)
4. ExampleTests.c:303 - CU_ASSERT_NOT_EQUAL(nr,11)
5. ExampleTests.c:310 - CU_FAIL("Failure after the first test step set")
6. ExampleTests.c:317 - CU_ASSERT_NOT_EQUAL(-nr,nr)
7. ExampleTests.c:315 - CU_ASSERT_NOT_EQUAL(3,nr)
8. ExampleTests.c:314 - CU_ASSERT_NOT_EQUAL(nr,8)
9. ExampleTests.c:321 - CU_FAIL("Failure after the last test step")


In the log you can see which test steps (or sub tests) passed and which failed. And after the last continue message you will find the normal CUnit protocol output with the overall verdict of the test and all failures (of course including the failures outside of a test step).

Result

In my projects I am now able to log the parameters of test steps or sub tests (it makes no difference to me what you call them). It works for me, and I hope you can use it in your own projects too. To integrate it into the CUnit package, some tasks remain to be done: