Create Test Objectives
Role
Test manager or a team lead within the testing team.
Purpose
Break down your testing activities into a series of goals or objectives.
It is up to your organization to define what a test objective means. The intent is to structure objectives around the way you want to report test results for a given project. This is often a different dimension from the way your test suites are authored and organized, particularly as you reuse test suites over time.
Action
Select Item > Create > Test Objective or File > New > Item > Test Objective.
Define the objective of testing and add planned tests to the objective.
Planned tests are added to the test objective using the Tests relationship field. Tests can be dragged onto this field from other views within Integrity Lifecycle Manager. Alternatively, you can edit the test objective and click the add interactor (+) beside the relationship field.
The add interactor (+) opens the Finder dialog box, which lets you navigate documents by related project.
In this dialog box, select the project in the left pane and locate the appropriate test suite or group document in the right pane. You can add the entire test suite to the test objective, or expand the test suite and add only explicitly selected test cases.
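If it helps to picture what ends up in the Tests relationship field, the following sketch models the two ways of adding planned tests described above. The classes, names, and fields are illustrative stand-ins only, not the Integrity Lifecycle Manager API.

```python
# Illustrative stand-ins only -- not the Integrity Lifecycle Manager API.
from dataclasses import dataclass, field
from typing import List


@dataclass
class TestCase:
    case_id: str
    summary: str
    category: str = "Test Case"   # the Category field discussed under Metrics


@dataclass
class TestSuite:
    suite_id: str
    test_cases: List[TestCase] = field(default_factory=list)


@dataclass
class TestObjective:
    objective_id: str
    tests: List[TestCase] = field(default_factory=list)  # models the Tests relationship field

    def add_suite(self, suite: TestSuite) -> None:
        """Add every test case in the suite, like selecting the whole suite in the Finder."""
        self.tests.extend(suite.test_cases)

    def add_selected(self, suite: TestSuite, case_ids: List[str]) -> None:
        """Add only explicitly selected test cases, like expanding the suite in the Finder."""
        self.tests.extend(tc for tc in suite.test_cases if tc.case_id in case_ids)


# Example: add only one test case from a suite of two.
suite = TestSuite("TS-100", [TestCase("TC-1", "Log in"), TestCase("TC-2", "Log out")])
objective = TestObjective("TO-1")
objective.add_selected(suite, ["TC-2"])
```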
Metrics
The default metrics defined on the test objective include:
The planned number of tests. This computed field identifies the total number of test cases in the Tests relationship field.
Metrics fields count only meaningful test cases. Whether a test case is meaningful is determined by the value of its Category field, as defined by your administrator. Not all values of that field need to have the meaningful attribute associated with them; Category is unique in that it is the only field that can associate this attribute with individual values. Normally the Heading and Comment categories are considered non-meaningful and therefore do not show up in the metrics. Your administrator could have added other non-meaningful categories as well (see the sketch after this list).
The actual pass and fail results, which are computed based on the related completed test sessions.
By default, pass and fail metrics are rolled up using the Completed Test Sessions field. Test sessions move into this field after they are related to the test objective and transition to the Completed state in their workflow.
The sum of the pass and fail counts should never exceed the total number of planned tests. The number of planned tests can grow over time; as it grows, the passed, failed, and not run counts should increase proportionally. You can then use these fields to trend results over time.
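The following sketch shows one way the default metrics could be derived from the data described above. It is a conceptual illustration, not product code: the test case and session structures are hypothetical, and the non-meaningful categories shown are only the typical Heading and Comment values mentioned earlier.

```python
# Conceptual illustration only -- not product code.
# Test cases and completed test sessions are modeled as plain dictionaries.
NON_MEANINGFUL_CATEGORIES = {"Heading", "Comment"}   # typical; your administrator may add more


def planned_tests(test_cases):
    """Count only meaningful test cases in the Tests relationship field."""
    return sum(1 for tc in test_cases if tc["category"] not in NON_MEANINGFUL_CATEGORIES)


def pass_fail_counts(completed_sessions):
    """Roll up results from sessions in the Completed Test Sessions field."""
    passed = failed = 0
    for session in completed_sessions:
        for verdict in session["results"]:       # hypothetical per-test-case verdicts
            if verdict == "Passed":
                passed += 1
            elif verdict == "Failed":
                failed += 1
    return passed, failed


tests = [
    {"category": "Heading", "summary": "Login scenarios"},   # not counted
    {"category": "Test Case", "summary": "Valid login"},
    {"category": "Test Case", "summary": "Invalid password"},
]
sessions = [{"results": ["Passed", "Failed"]}]

planned = planned_tests(tests)                   # 2
passed, failed = pass_fail_counts(sessions)      # 1, 1
not_run = planned - passed - failed              # 0; pass + fail never exceeds planned
```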
Relationships
The relationships relevant to a test objective are:
Test Plan: The test plan to which the objective belongs.
Tests: Planned tests defined in the test objective.
Active Test Sessions: The test sessions related to this objective that are in the Submit or Testing state.
Completed Test Sessions: The test sessions related to this objective that are in the Completed state.
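As a rough illustration of how the two session fields partition related test sessions by workflow state, here is a short sketch. It assumes sessions carry a simple state value; it is not product code.

```python
# Sketch only: partition related test sessions by workflow state,
# mirroring the Active Test Sessions and Completed Test Sessions fields.
ACTIVE_STATES = {"Submit", "Testing"}


def split_sessions(related_sessions):
    """Return (active, completed) lists for a test objective's related sessions."""
    active = [s for s in related_sessions if s["state"] in ACTIVE_STATES]
    completed = [s for s in related_sessions if s["state"] == "Completed"]
    return active, completed
```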