Delivering Reliable Software: Automated Unit Testing Applied to .NET. Part 1
by Dmytro Lapshyn | Published 07/17/2002 | Windows Development
Dmytro Lapshyn

Dmytro Lapshyn works as CTO of Validio Ukraine, an official partner of Validio Software, LLC. He previously worked as a programmer in "Programmist", a volunteer student scientific and production group at Kharkov Technical University of Radio-Electronics.

In a programming career spanning more than seven years, Dmytro has developed a wide range of applications, including desktop, client-server and Internet software. He has been working with Microsoft technologies since 1998 and with Microsoft .NET since the Beta 2 release in 2001. His primary areas of expertise are Visual Basic, ASP, COM+ and .NET.

Dmytro is 27 years old and lives in Kharkov, Ukraine. He holds Bachelor's and Master's degrees in Computer Systems Security from Kharkov Technical University of Radio-Electronics.

Company Profile

Founded in 1992, Alis Software operated successfully in the software development market for five years. In 1997 the company was acquired by Miik Ltd. and became its Information Technologies Division. Three years of dynamic growth made the division the main focus of the company's business, and since 2000 Miik Ltd. has concentrated almost entirely on producing software, computer graphics, and Web applications in partnership with foreign and national companies, for clients both in Ukraine and abroad.

In the summer of 2005, the Information Technologies Division of MIIK Ltd. was reorganized into Validio Ukraine.

Validio Software provides outsourced software development services to high-tech companies and businesses that rely on technology. Based in Seattle, Washington, Validio's services include design, management, and implementation of complete projects using experienced development teams, as well as providing skilled development resources for customer-driven projects. By maintaining a staff of qualified software developers and experienced project managers in both the U.S. and Ukraine, Validio offers its clients technical expertise that is both scalable and cost effective.

 

Delivering Reliable Software: Automated Unit Testing Applied to .NET. Part 1

An Introduction To The Theory of Unit Testing

Extreme Programming (http://www.xprogramming.com/), also known as XP, is becoming more and more popular for projects with unstable requirements, tight deadlines and small teams. Its promise of better satisfying customer needs by releasing software that is more reliable and more valuable to the customer's business is very appealing, and it creates strong motivation for adopting this software development discipline in today's tough economic conditions.

Testing is one of the basic rules and practices of Extreme Programming (the detailed list of rules and practices may be found at http://www.extremeprogramming.org/rules.html). In XP, the testing process differs completely from the traditional approach where code is developed first and then tested. Unit testing requires just the opposite: tests are written before the class being tested. More than that, the source code for test classes is stored together with the source code of the system being developed, and it is not allowed to release source code without the appropriate unit test, or if the appropriate unit test fails. I will shed more light on this in the next section.

You may be curious what the benefits are for those who strictly follow these rules. There is no secret: first of all, you get more reliable code. Second, you facilitate future changes - with a test on hand, you are always able to check whether the system is still operational after your recent changes. The third bonus is actually a consequence of the second, but that does not make it less important: a well-established unit testing process and framework serve as a foundation for another important Extreme Programming practice - Refactoring (http://www.refactoring.com), an excellent technique for constantly improving the quality of your code and for breaking free of the well-known, but rarely achievable, waterfall-like "make the complete design, and write code only after the design is ready" sequence.

Good Practices for Unit Testing and Continuous Integration

Obviously, it is not enough just to know about a technique to use it effectively. In other words, you cannot learn to swim just by reading a guidebook. You need practice, and a lot of it, I'd say. The situation with unit testing is the same - despite its apparent simplicity, you have to gain some personal experience to obtain the maximum advantage it can offer. On the other hand, the glorious times of brave explorers seem to be far behind us, and people following XP methodologies have developed several good practices that have proved their effectiveness and are rather easy to adopt:

  • A unit test framework (which is the main topic of this article) should be created or adopted;
  • All classes in the system should be tested;
  • Tests should be created first, before the code;
  • A change to the system code must not cause any test to fail; if some test does fail, either the change has introduced incorrect behavior, or the test itself has bugs;
  • The system should undergo continuous integration.

Many developers who are just beginning to learn XP are reluctant to accept the "tests first, code second" principle, thinking that tests can be written later without losing effectiveness. Nothing could be farther from the truth, and I would like to warn the reader away from this dangerous path from the start.

Another important issue here is continuous integration. These "magic" words mean nothing more than continuous builds followed by comprehensive testing. Ideally, this process should be fully automated with minimal manual intervention (usually required only when things go wrong), which means that build automation tools have to cooperate with the testing framework or, even better, become a part of it. I will postpone further explanation to a later section so as not to overload you right from the beginning.

Architecting The Unit Testing Framework

First things first, so let me begin with the terms I will use throughout the article. Let's start with the test case, a piece of code that tests a particular feature of some class. Next is the test suite, a set of test cases covering all features of the class. And the last one is the fixture, a set of test data used by all tests in a test suite. Personally, I don't like the word "fixture", but it seems to have become a de facto standard term, so I will just follow the trend.

Having finished with the concepts, let's try to find the relationships between them. Obviously, there is an aggregation relationship between a test suite and the test cases composing it. Another relationship that lies on the surface is the association between a test suite and the appropriate fixture. Continuing our search for relationships, we see that all test suites are aggregated by the abstract concept of the "system".

Now that the relationships are specified, it is time to take a more detailed look at the concepts and determine their important characteristics. We will assume that the following properties are of interest:

Concept       Properties
Test Case     Test Case Name, Developer Name
Test Suite    Test Suite Description
Fixture       (none)
System        (none)

See Figure 1 for the complete sketch of the testing framework structure.

"Enough theory, let's make something useful", I hear you moan. Well, this is the right time to turn our abstract knowledge into a source code. The first move you would probably make is turning the described concepts right into the classes. This is what object-oriented design techniques teach us to do, and there is nothing wrong with such an approach, at least from the object-oriented point of view.

But let's think for a moment about whether we really need a class for every concept. Obviously, there is no reason to make a class out of the system, because it has neither properties nor responsibilities that would be of interest. The second question - should we represent test cases by classes? - is far more intriguing. At first sight it is natural to do so, preparing the fixture in a constructor, exposing properties and publishing test methods. Looks appealing, doesn't it? Not quite. Remember the definition of a test case - its responsibility is to test a small piece of class functionality. This means that we will most likely have a single method in most test case classes, and the construction/destruction process, along with polymorphic calls to property get/set methods, would be real overkill. So we need a more acceptable solution. But before we continue, let's take a closer look at the test suite.

A test suite is the best candidate to become a class. First of all, there will be a convenient correspondence between a development class and its test suite. Secondly, it will be easy to store the development class and the test suite code together in a single source file without making it much bigger. And, last but not least, we can easily eliminate the test code from release builds by using conditional compilation. Thus, we have the first piece of source code - the declaration of the TestSuite class:

public abstract class TestSuite
{
    private string _description;

    public TestSuite(string description)
    {
        _description = description;
    }

    public string Description
    {
        get
        {
            return _description;
        }
    }

    // Override these to prepare and clean up the fixture for the suite.
    public virtual void Initialize() {}
    public virtual void Finalize() {}
}
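
As a side note, here is how the conditional compilation mentioned above might look. This is only a sketch - the DEBUG symbol is the one Visual Studio .NET defines for debug builds, and the suite class name is purely hypothetical:

#if DEBUG
public class StringHelperTests: TestSuite
{
    public StringHelperTests(): base("StringHelper test suite")
    {
    }

    // Test case methods go here.
}
#endif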

Still, what should we do with test cases? What if we make them methods of a test suite? Sounds good, right? The one obstacle here is test case properties - generally, class members cannot have custom properties. Anywhere but in .NET, that is, where His Majesty the Attribute steps in. I am pretty sure you are familiar with many of the attributes that exist within the .NET Framework, and you may even have heard that it is possible to create custom attributes. Yes, it really is possible, and this is what our custom attribute looks like:

[AttributeUsage(AttributeTargets.Method, AllowMultiple = false)]
public class TestCaseAttribute: Attribute
{
    protected string _testName;
    protected string _developerName;

    public TestCaseAttribute(string testName, string developerName)
    {
        _testName = testName;
        _developerName = developerName;
    }

    public string TestName
    {
        get
        {
            return _testName;
        }
        set
        {
            _testName = value;
        }
    }

    public string DeveloperName
    {
        get
        {
            return _developerName;
        }
        set
        {
            _developerName = value;
        }
    }
}

With this attribute, we can mark any method within a test suite as a test case having its own properties.

The fixture is the most debatable point. You could make a class out of it, but I personally don't see any reason to do so. After all, a fixture may be absolutely anything - a simple data file, a set of records in a database, a complex data structure, and so on - the only thing they have in common is that they are all fixtures. On the other hand, you are free to make your own decision (or mail me your arguments if you find them convincing).

Since tests are meaningless unless one can find out whether a test has passed or failed, we will need a unified mechanism that allows test cases to report any problems that occur. We really do need a unified one, since it is always a good idea to separate data processing from input/output interfaces and, in addition, to eliminate duplicate code. The most interesting thing here is that we already have such a mechanism available - good old exceptions. It is enough to create our own exception class, possibly inheriting it from an appropriate .NET Framework exception, and we're done. I will not give the complete code, since it is simple and obvious, and will just tell you the name - AssertionFailedException. Again, to prevent code duplication, I will add a protected Assert method to the TestSuite class:

protected void Assert(bool condition, string message)
{
    if (!condition)
    {
        throw new AssertionFailedException(message);
    }
}
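
For completeness, a minimal sketch of the exception class itself might look as follows; the choice of base class is left open above, so deriving from ApplicationException is just one reasonable assumption:

using System;

// Thrown by TestSuite.Assert when an assertion fails.
public class AssertionFailedException: ApplicationException
{
    public AssertionFailedException(string message): base(message)
    {
    }
}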

Now all the pieces of the puzzle are in place, so let's try to implement a very simple test suite:

public class SampleSuite: TestSuite
{
    private Calculator _calc;

    // The test execution engine described below creates suites through a
    // parameterless constructor, so the suite description is supplied here.
    public SampleSuite(): base("Calculator test suite")
    {
    }

    public override void Initialize()
    {
        _calc = new Calculator();
    }

    public override void Finalize()
    {
        _calc.Dispose();
        _calc = null;
    }

    [TestCase("Add Test", "Lapshin")]
    public void AddTest()
    {
        double result = _calc.Add(2, 2);
        Assert(4 == result, "Unbelievable!");
    }
}

There is one question left unanswered - how do we make our architecture available to the applications being developed? Again, with .NET this is a very simple task - it is enough to package all our code into a separate assembly. This assembly, given that it has a strong name, may be placed into the Global Assembly Cache if you don't wish to have multiple copies of the same file scattered across your hard drive.
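
As a hedged illustration for .NET 1.x, a strong name can be given to the assembly by pointing the AssemblyKeyFile attribute at a key pair generated with the sn.exe utility (the key file name below is only a placeholder); the signed assembly can then be installed into the Global Assembly Cache with gacutil.exe:

// In the AssemblyInfo.cs of the testing framework assembly (sketch).
// The key pair is assumed to have been created with "sn.exe -k XUnitTesting.snk".
[assembly: System.Reflection.AssemblyKeyFile(@"XUnitTesting.snk")]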

Automating The Test Execution

The architecture we have designed is just one part of a successful unit testing solution. The second part is the test execution engine. Unfortunately, this is just an article, and it is impossible to describe the complete solution within its scope, so I will give you only the key ideas, leaving the implementation details to the interested reader.

The first thing we have to define is the execution sequence. I propose a very simple one. Any system developed for the Microsoft .NET Framework consists of several assemblies. Each assembly, in turn, consists of one or more classes, including test suites. So our task is to establish three loops - one iterating over assemblies, one iterating over test suites within the assembly, and the last one iterating over all test cases within a test suite. Within the innermost loop, every test case is executed and the execution result is logged.

It is easy to iterate over assemblies - they are just files, listed manually or extracted from the solution configuration. Iterating over the classes contained in an assembly is much more challenging, and that's where the powerful magic of .NET reflection comes into play. So, to establish our second loop, we need to obtain the list of classes exported by the assembly and determine which of them are test suites. You may be surprised, but this is achieved in just a few lines of code:

// Requires a using directive for System.Reflection.
Assembly assemblyInstance = Assembly.LoadFrom(filename);

foreach (System.Type type in assemblyInstance.GetExportedTypes())
{
    // A concrete class deriving (directly or indirectly) from TestSuite is a test suite.
    if (type.IsClass && !type.IsAbstract && type.IsSubclassOf(typeof(TestSuite)))
    {
        // We have found a test suite.
    }
}

The third loop, iterating over test cases, is as simple as the previous one:

foreach (MethodInfo info in type.GetMethods())
{
    if (info.GetCustomAttributes(typeof(TestCaseAttribute), false).Length > 0)
    {
        // We have found a test case.
    }
}

And now, the trickiest part: how to invoke the test cases we have found. Thanks to the Great Gods of Reflection, there is a magic spell named ... right, InvokeMember, which can be cast on any System.Type instance. Another helper spell we will need is the one that creates instances of a given type. Fortunately, it is also available. Let's have a look at how the trick is implemented:

// 'type' is the test suite type found above; 'methodName' is the name of a
// test case method, taken from MethodInfo.Name in the previous loop.
ConstructorInfo constructor = type.GetConstructor(Type.EmptyTypes);
object instance = constructor.Invoke(new object[] {});

type.InvokeMember(methodName,
    BindingFlags.Instance | BindingFlags.Public | BindingFlags.InvokeMethod,
    null,
    instance,
    new object[] {});

But beware of the TargetInvocationException demon when casting the InvokeMember spell. It can be raised under different conditions, one of them being when the invoked member throws any exception - and remember that our assertion mechanism does exactly that when an assertion fails. So, to be safe, always catch TargetInvocationException and analyze its InnerException property. If the invoked method threw an exception, it will be stored right there. In other cases this property can be null, and you'd better be prepared for that.
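
As a sketch of that advice, reusing the variables from the previous snippet, the invocation could be wrapped like this:

try
{
    type.InvokeMember(methodName,
        BindingFlags.Instance | BindingFlags.Public | BindingFlags.InvokeMethod,
        null,
        instance,
        new object[] {});
    // Reaching this point means the test case has passed.
}
catch (TargetInvocationException e)
{
    if (e.InnerException is AssertionFailedException)
    {
        // An assertion inside the test case failed; log e.InnerException.Message.
    }
    else if (e.InnerException != null)
    {
        // The test case threw some other exception; log it as an error.
    }
    else
    {
        // No inner exception is available; log the TargetInvocationException itself.
    }
}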

The rest is rather simple - you run each test case, analyze and output its execution result, and repeat this sequence for every test case found. Of course, you should invoke the Initialize member of the test suite before running the contained test cases, and the Finalize method after all test cases from the suite have been executed. The second loop is the most appropriate place to do so.
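
Putting the pieces together, a rough sketch of the second loop might look like this (result logging and the exception analysis shown above are omitted for brevity; Initialize and Finalize are invoked through InvokeMember, consistently with the rest of the engine):

// For each test suite type found in the assembly:
ConstructorInfo constructor = type.GetConstructor(Type.EmptyTypes);
object suite = constructor.Invoke(new object[] {});
BindingFlags callFlags =
    BindingFlags.Instance | BindingFlags.Public | BindingFlags.InvokeMethod;

type.InvokeMember("Initialize", callFlags, null, suite, new object[] {});
try
{
    foreach (MethodInfo info in type.GetMethods())
    {
        if (info.GetCustomAttributes(typeof(TestCaseAttribute), false).Length > 0)
        {
            // Run the test case; in the complete engine, this call is wrapped
            // in the try/catch shown above and its outcome is logged.
            type.InvokeMember(info.Name, callFlags, null, suite, new object[] {});
        }
    }
}
finally
{
    type.InvokeMember("Finalize", callFlags, null, suite, new object[] {});
}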

As I promised, I now return to the problem of continuous integration. We are ready to solve it, since the core components - the test execution engine and the testing framework - are now available.

Our first task is to implement automated builds. Fortunately, this won't require as much coding as you might expect, since Microsoft Visual Studio .NET has built-in batch mode features. It is enough to invoke it with specific command line arguments to get the job done and then parse the output file to determine whether the build was successful. You may find a detailed description of the command line arguments here: http://msdn.microsoft.com/library/en-us/vsintro7/html/vxgrfCommandLineSwitches.asp.
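
A hedged sketch of such an invocation from our own code follows; the solution and log file names are placeholders, and devenv.exe is assumed to be reachable through the PATH:

// Requires a using directive for System.Diagnostics.
// Run Visual Studio .NET in batch mode and wait for the build to complete;
// the /rebuild and /out switches are described at the MSDN link above.
ProcessStartInfo startInfo = new ProcessStartInfo(
    "devenv.exe", "MySolution.sln /rebuild Release /out build.log");
startInfo.UseShellExecute = false;

Process build = Process.Start(startInfo);
build.WaitForExit();

// A non-zero exit code (or error lines in build.log) indicates a failed build.
bool buildSucceeded = (build.ExitCode == 0);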

The second task is to obtain the latest version of the source code from the version control system. Since .NET technology is very young, hardly any version control system supports native .NET interfaces, but at least Microsoft Visual SourceSafe and StarTeam expose COM interfaces. How can this help us, you ask? Thanks to Microsoft, the .NET Framework can interact with COM almost seamlessly by using Interop Services. The rest depends on the particular version control system - for Visual SourceSafe, for example, it is enough to write a few lines of code, while other systems may require more programming.

Here is how a piece of code for Microsoft Visual SourceSafe might look:

// Requires a reference to an interop assembly generated from the SourceSafe COM type library.
VSSDatabaseClass database = new VSSDatabaseClass();
database.Open(_iniFile, _userName, _password);

int flags = (int)VSSFlags.VSSFLAG_FORCEDIRNO;
if (recursive)
{
    flags |= (int)VSSFlags.VSSFLAG_RECURSYES;
}

VSSItem project = database.get_VSSItem(projectPath, false);
project.Get(ref localPath, flags);

After the project has been built, find all the resulting assemblies and execute all the tests available. Remember that the integration process should not require any user interaction - therefore, when some tests fail you should, for example, send an e-mail instead of displaying messages on the screen. You should also think about logging, at least at the debugging stage. You may use plain text files, the Windows Event Log or something more exotic, depending on your needs.
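
As a hedged example of the logging part, writing a failure to the Windows Event Log takes only a few lines; the event source name below is a placeholder:

// Requires a using directive for System.Diagnostics.
// Registering an event source needs administrative rights and is done only once.
if (!EventLog.SourceExists("UnitTestRunner"))
{
    EventLog.CreateEventSource("UnitTestRunner", "Application");
}

EventLog.WriteEntry("UnitTestRunner",
    "Test case 'Add Test' failed: Unbelievable!",
    EventLogEntryType.Error);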

Well, now that the key ideas have been explained, you might already be eager to implement your own testing framework. And if you want to gain broader .NET programming experience, you definitely should try to do so - it's rather exciting.

Summary

Of course, this is just the beginning, the first step in our long journey into the world of reliable software. And if you liked it - and I really hope you did - please go on and visit the home page of X-Unity, a unit testing framework built with Microsoft .NET and for Microsoft .NET.

In Part 2 of this article, I will give an overview of the X-Unity framework as well as a number of practical examples of its usage in real-world development, so stay tuned.
