Windchill, the foundation of both PDMLink™ and ProjectLink™, is written in Java, and most extensions and customizations are also written in Java. Writing high-quality code is clearly important, so it is equally important to test that code effectively.

  • Introduction
  • White box JUnit testing
  • Testing Workflow robots
  • UI Testing
  • Performance/Stress Testing
  • How much testing do I need to do?


Generally speaking, there are two forms of testing:

  • Verification – Are we building the product right?
  • Validation – Are we building the right product?

Broadly, verification is the developer testing whether the code does what they expect, using techniques such as inspection, command-line scripts, debuggers, trace points and JUnit tests; these are often referred to as white box testing. Validation should be done independently of the code development and tests the functionality with techniques such as test plans, UI testing and data validation; these are often referred to as black box testing. There is clearly some overlap between the techniques used.
It is generally accepted that the earlier a bug is found the cheaper it is to fix, so good testing makes good economic sense. However, in a complex enterprise system like Windchill™ it is not always clear how much testing, and in what form, is needed for our customizations.

White box testing

Command Line Testing

Very simply, it means adding a main method to a class so that the class can be executed directly. With Windchill this works even if the code is designed to run server-side, such as a workflow robot.
The following example shows a typical case, where the developer wishes to test a single method that takes a Windchill object as an argument.

public class WComMyClass2Test {

    private static final String VERBOSEID = "WComMyClass2Test: ";

//--  Operation section --
    boolean method2Test(WTPart myWTPart) throws WTException {
        // ... logic under test elided ...
        return true;
    }

//--  Test section --
    public static void main(String[] args) {

        try {

            System.out.println(VERBOSEID + "=> Start Test ");

            // Login with code to prevent asking the user each time

            // Get a Windchill object using a command-line oid
            // e.g. wt.part.WTPart:324324
            WTPart myWTPart = (WTPart) WComDataHelper.getObject(args[0]);

            // Test a method
            WComMyClass2Test thisWComMyClass2Test = new WComMyClass2Test();
            boolean result = thisWComMyClass2Test.method2Test(myWTPart);

            // Check result
            if (!result)
                throw new Exception("Test failed");

            System.out.println(VERBOSEID + "=> Test passed");

        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Note the automatic login and that the command takes the string OID as an argument.
This is a very effective form of testing, and we recommend that the main methods are left in the code so that they can be used for testing, even on the final system if necessary. However, problems arise when the developer wishes to execute many tests on many methods in the same class; this is when JUnit testing should be considered, otherwise the main method becomes too complex.
Also, this is not formal testing, as it does not ensure repeatability, but it has the advantage of being quick and not requiring a formal test data set to be created. The user can choose any suitable object they have available.
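The same pattern can be illustrated outside Windchill. Below is a minimal, self-contained sketch in plain Java, with a hypothetical PartNameChecker class standing in for the Windchill class and a plain string argument standing in for the OID:

```java
// Hypothetical stand-in for a class that would normally operate on a WTPart.
public class PartNameChecker {

    // The method under test: accepts part numbers like "PRT-001".
    boolean hasValidName(String partName) {
        return partName != null && partName.matches("PRT-\\d+");
    }

    // Test section: run one check against an argument from the command line.
    public static void main(String[] args) {
        String name = args.length > 0 ? args[0] : "PRT-001";
        System.out.println("=> Start Test");

        boolean result = new PartNameChecker().hasValidName(name);
        if (!result) {
            throw new RuntimeException("Test failed for: " + name);
        }
        System.out.println("=> Test passed for: " + name);
    }
}
```

Run it with, for example, `java PartNameChecker PRT-324324`; a failure throws an exception, so the exit code can also be checked by a script.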

JUnit Testing

A JUnit test is a class developed specifically to test the methods of the target class in isolation. It can be automated to run all the tests in sequence, which provides excellent regression testing. However, unlike a simple command-line test, it requires the test class to set up and destroy the data it uses. In Windchill this is often not trivial, so we use a special library to assist us that can create various types of Windchill object and the relationships between them.

Example JUnit test class

import static org.junit.Assert.*;
import org.junit.BeforeClass;
import org.junit.Test;

public class ExampleJUnitTest {

//--  Operation section --
    @BeforeClass
    public static void setUpClass() throws Exception {
        // One-time setup, e.g. automatic login
    }

    @Test
    public void testmethod2Test() throws Exception {
        String testPartName = "Test Part";
        try {

            // Use the wcom library to create a test part
            WTPart myWTPart = WComTestSupportUtils.createPart(testPartName, null);

            // Test the method
            WComMyClass2Test thisWComMyClass2Test = new WComMyClass2Test();
            boolean result = thisWComMyClass2Test.method2Test(myWTPart);

            // Check result
            assertTrue("Result should be true", result);

        } finally {
            // Destroy the test data created above
        }
    } // testmethod2Test

    /** More tests.... */
}

Note the use of WComTestSupportUtils from our test library to create the WTPart.

JUnit is a very effective form of testing, but we have found that the code required, even with our test library API, is twice the size of the original code. The effort of writing and maintaining the tests should not be underestimated.
Developers should be familiar with the techniques of test-driven development to take full advantage of JUnit testing.

Testing Workflow robots

Testing workflow robots is a specific case where JUnit testing is essential. Workflows are very difficult to run in an easily repeatable way, so if the workflow robot (i.e. the Java code) sits at the end, or even in the middle, of the execution, it takes significant work to push the workflow to the correct point.
For this reason all robots need to be very well tested with a good data set. Frequently the robots are related to a change process, so the supporting workflow test library will need to be able to easily create all the types of change object and their relationships to other objects such as WTDocument and WTPart.
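One way to make robots testable without executing the workflow is to keep the robot's decision logic in a method that can be called directly. The sketch below is a hypothetical, self-contained illustration of that idea; the class name, method, and states are invented, and in a real robot the argument would be a Windchill change object rather than a string:

```java
// Hypothetical sketch: the robot's logic is reduced to one method that
// can be exercised directly, without ever starting a workflow process.
public class ApprovalRobot {

    // Decide the routing event from the change object's lifecycle state.
    public String route(String lifecycleState) {
        if ("APPROVED".equals(lifecycleState)) {
            return "complete";
        }
        if ("REJECTED".equals(lifecycleState)) {
            return "rework";
        }
        return "review";
    }

    // Command-line or JUnit tests can now cover every branch cheaply.
    public static void main(String[] args) {
        ApprovalRobot robot = new ApprovalRobot();
        if (!"complete".equals(robot.route("APPROVED"))
                || !"rework".equals(robot.route("REJECTED"))
                || !"review".equals(robot.route("DRAFT"))) {
            throw new RuntimeException("Robot routing test failed");
        }
        System.out.println("=> All routing tests passed");
    }
}
```

Because the method never touches the workflow engine, every routing branch can be covered in milliseconds instead of pushing a live workflow to the robot's node.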

UI Testing

Although automated tools do exist, we have not seen any used extensively with Windchill customizations. We recommend a well-written test plan with screenshots to guide the tester through the steps needed to exercise the UI effectively.

Performance/Stress Testing

This normally relates to two separate system behaviours: user interface stress testing and database performance.
There are many commercial tools available. It is sometimes sufficient to use a basic tool to check the performance of a specific URL, but this is used more to check the performance of a specific customization. It is also possible to attach a profiler to the MethodServer, which can be very useful for detecting bottlenecks, although strictly speaking this is a developer tool rather than testing.
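For a quick, informal check of a single customization, even a simple timing harness can be enough. The sketch below is plain, self-contained Java, with a hypothetical workload() method standing in for the call being measured (e.g. fetching a URL or invoking a server-side method):

```java
public class SimpleTimingHarness {

    // Hypothetical workload standing in for the customization call
    // being measured.
    static void workload() {
        long sum = 0;
        for (int i = 0; i < 100_000; i++) {
            sum += i;
        }
        if (sum < 0) throw new IllegalStateException("unreachable");
    }

    // Returns the average elapsed time per call, in milliseconds.
    static double averageMillis(int iterations) {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            workload();
        }
        long elapsed = System.nanoTime() - start;
        return (elapsed / 1_000_000.0) / iterations;
    }

    public static void main(String[] args) {
        System.out.printf("average: %.3f ms per call%n", averageMillis(50));
    }
}
```

Averaging over many iterations smooths out JIT warm-up and scheduling noise; for anything more serious, a profiler or a dedicated load-testing tool is more appropriate.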

How much testing do I need to do?

The answer, much to the dismay of many project managers, is: it depends. You need to prioritize and understand that testing is very resource intensive, so you will never get to 100% tested (in fact this is impossible). So what is good enough? 90%? 50%? 10%? To answer this question we must understand the relative importance of the code and the ROI (return on investment) of the testing effort. Suppose we have some code that is vital to all business activity at the client, and in the same package some administrative utilities; clearly they do not require the same level of testing.

Code coverage

One basic way to measure how well code is tested is to measure how much of it is executed by our tests. To do this we need tools that trace the code covered by the tests, such as EMMA, and we also need repeatable testing using JUnit or very complete test plans. At Wincom we provide test coverage results, but only where we consider the ROI good, which is typically the critical parts of the code.
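Conceptually, a coverage tool instruments the code so that each branch records whether it has been executed. The toy sketch below illustrates that idea by hand in plain, self-contained Java (the class, method, and branch names are invented; real tools such as EMMA do this automatically at the bytecode level):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ToyCoverage {

    // Which instrumented branches have been hit so far.
    static final Map<String, Boolean> HITS = new LinkedHashMap<>();

    static void mark(String branch) {
        HITS.put(branch, true);
    }

    // The method "under test", instrumented by hand with mark() calls.
    static String classify(int qty) {
        if (qty <= 0) {
            mark("qty<=0");
            return "invalid";
        }
        if (qty > 100) {
            mark("qty>100");
            return "bulk";
        }
        mark("normal");
        return "normal";
    }

    public static void main(String[] args) {
        // Register all branches as not yet covered.
        HITS.put("qty<=0", false);
        HITS.put("qty>100", false);
        HITS.put("normal", false);

        // A test run that never exercises the "bulk" branch.
        classify(-5);
        classify(10);

        long covered = HITS.values().stream().filter(b -> b).count();
        System.out.printf("coverage: %d of %d branches%n", covered, HITS.size());
    }
}
```

Running main prints `coverage: 2 of 3 branches`, immediately showing that the "bulk" branch was never tested; this is exactly the kind of gap a coverage report makes visible.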