
Measure Software Performance

Dr. Uwe Doetzkies
created: 2013/04/21
last change: 2014/09/26 

This translation of my original paper was created by the Google translator and subsequently revised by me. Please tell me about possible mistakes and improvements. - Thank you. U.D.

Result (Abstract)

For each use case, a value is calculated, indicating the cost of a single execution of the use case, expressed in "Action Points". The sum of costs of all use cases executed in an observed time divided by the length of this time is the (average) performance during this time, measured in "Action Points per Second" (aps). SI-unit prefixes (kaps, Maps) are possible as well as a definition related to other time units (per minute, per hour, per day).

Introduction

Yet another performance measure for software? Aren't there already enough: instructions per cycle, instructions per second, FLOPS, throughput, response time, processing speed, latency, etc. (see Wikipedia)?
All these indicators are easy to measure; there are methods and tools to measure, monitor, report, and evaluate them. But they still say little about what the system actually does.
Why is that? These metrics are generally defined so that they can be determined in every system under consideration; otherwise the values would not be comparable. Only this comparability makes it possible to evaluate different systems at different times. On the other hand, comparability
restricts us to those parameters that we can measure in every system.
But the user doesn't care how many floating-point operations are performed in a second; he is interested in how many of his tasks are solved in a second.

User-oriented performance

The user is interested in how much production data is stored, how many orders are processed by the system, how many status inquiries are answered. In short, the performance of his system depends on which use cases are carried out, how many of them, and under what conditions.
Should "use cases per minute" be the measure, then? Certainly not, because even a user with no software background knows that there are more and less complex use cases.
In physics, power is the quotient of the energy converted in a process and the time the process takes. Similarly, in computer science, performance can be considered the quotient of the "energy" that a use case, an operation, or a sequence of instructions requires and the time it takes.
The only problem is that no one has ever seen this "energy", and that terms like "complexity", "abstraction", "scope", ... may have something to do with it. Each of them, however, merely replaces one arbitrary definition with another, equally arbitrary one.

Function Points

Has this really never been done? Has no one ever tried to assign an amount of energy to an information process? Well, there is one approach:
Function Point Analysis attempts to assign a metric value to an implementation process using various rules and experiences. This value represents the required effort, and effort is a form of energy. The measure is expressed in "Function Points": a process has more or fewer function points than another one, and accordingly the cost of its implementation is greater or less than that of the other.
It is difficult to picture anything concrete behind a function point. A developer works on a function and one day it is done; it simply required an effort of 10 Function Points. Another function, on which he worked much longer but which required no more lines of code than the first, may have 25 Function Points. Exactly this effect is what the function points describe. But if you look for these points in the code, you will not find them.
Even though the method is not widely applied in practice, where it was applied it almost always gave good estimates for the project.
For more information about the function points refer to the usual sources.

Action Points

To determine the "energy of a use case" I took inspiration from the determination of function points. I therefore like to call these values "Action Points". Like function points, action points should primarily depend on the complexity of the input data, the type of processing, and the complexity of the output data. Initially we do not need the correction factors of Function Point Analysis for special requirements (real-time, security, etc.); we deal with nothing more than the processing of inputs and the generation of outputs. Nevertheless, the Action Point concept does not exclude correction factors. They would be conceivable, for example, for special requirements on the use case such as traceability or interruptibility.
However, the action points should be based solely on the user-related (!) analysis of input, output, and processing. Therefore action points can be determined early, very early in the system development: once a use case has been defined, its action points can be calculated. A measurable performance definition thus already becomes part of the requirements analysis.
Assume that saving the state of a workpiece requires 12 action points; then the requirement to store 500 workpiece states per minute translates into a system performance of 6,000 appm (action points per minute).
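As a worked version of this requirement, the following minimal sketch (variable names are mine, only the numbers come from the example) computes the demanded performance in appm and converts it to aps:

```python
# Worked example of the workpiece-state requirement from the text.
ap_per_execution = 12       # action points for saving one workpiece state
executions_per_minute = 500  # required throughput of the use case

required_appm = ap_per_execution * executions_per_minute
required_aps = required_appm / 60  # the same requirement per second

print(required_appm, required_aps)  # 6000 100.0
```

The conversion between time bases is pure arithmetic, just as with SI-prefixed units.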
If this were the only function of the system, the measure would tell us little; it would be easier to issue a large number of requests and count the successful ones. But real systems are complex and have many functions. The action point method allows us to rate different test scenarios and to quantify the current performance of a productive system, all from the user's perspective rather than the developer's.

Determination of Action Points

How do we determine the number of action points of a given use case? The use case itself is usually given either as a use case diagram, as a verbal description, or as a hybrid of both. It is particularly important that the use cases are described from the user's perspective, not from the modeler's or the programmer's point of view. At this level, the most common use cases should be described as completely as possible.
Now we describe the nature of a given use case and what information it handles. The types of use cases are inputs, outputs, queries, and processing, as listed in the table below.
In addition, the complexity of each use case is assessed as it appears from the user's viewpoint. Only three levels are distinguished: trivial (easy), when the information can essentially be passed through unchecked; normal, when simple common tests and manipulations must be performed on the information (usually involving two pieces of information); and complex, when the tests and/or manipulations must consider more than two pieces of information.

Action Points for   trivial   normal   complex
Inputs                 3         4         6
Outputs                4         5         7
Queries                3         4         6
Processing             7        10        15

Normally we can clearly specify the information a given use case (e.g. "Enter Order") operates on (here: "Order") and its nature ("enter" = input). Particularly among the processing use cases, some may operate on more than one piece of information: "Dispose Order" works with the current job list and the current storage status. Such use cases are often characterized by the fact that the user already has an idea of how the system should handle them. (Note: one need not argue about whether such use cases must be broken into elementary use cases; doing so or not should have no significant impact on the overall result.) For these use cases, of course, all pieces of information have to be considered.
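Scoring against the table can be sketched in a few lines. This is a hypothetical reading, not part of the method's definition: the dictionary layout and function name are my own, and I assume here that a processing use case involving several pieces of information is rated once per piece.

```python
# Action-point values from the table above, keyed by use-case type
# and complexity level.
ACTION_POINTS = {
    "input":      {"trivial": 3, "normal": 4,  "complex": 6},
    "output":     {"trivial": 4, "normal": 5,  "complex": 7},
    "query":      {"trivial": 3, "normal": 4,  "complex": 6},
    "processing": {"trivial": 7, "normal": 10, "complex": 15},
}

def score_use_case(parts):
    """Sum the action points over all (type, complexity) pairs of a
    use case; a multi-information use case contributes one pair per
    piece of information (one possible reading of the text)."""
    return sum(ACTION_POINTS[kind][level] for kind, level in parts)

# "Enter Order": a single input of normal complexity
print(score_use_case([("input", "normal")]))           # 4

# "Dispose Order": processing the job list and the storage status
print(score_use_case([("processing", "normal"),
                      ("processing", "complex")]))     # 25
```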

Result

For each use case, a value is calculated, indicating the cost of a single execution of the use case, expressed in "Action Points". The sum of costs of all use cases executed in an observed time divided by the length of this time is the (average) performance during this time, measured in "Action Points per Second" (aps). SI-unit prefixes (kaps, Maps) are possible as well as a definition related to other time units (per minute, per hour, per day).
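The definition above can be restated as a small calculation. The workload figures below are invented for illustration, and the identifiers are mine:

```python
# Average performance over an observed interval, per the definition
# above: sum of the action-point costs of all executed use cases,
# divided by the length of the interval.
executed = [
    (12, 500),   # (action points per execution, number of executions)
    (25, 40),
]
observed_seconds = 60.0

total_ap = sum(ap * count for ap, count in executed)
aps = total_ap / observed_seconds   # average performance in aps

print(f"{total_ap} AP in {observed_seconds:.0f} s -> {aps:.1f} aps")
```

Multiplying `aps` by 60 gives the same performance in appm, matching the unit conversions mentioned in the abstract.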

Preview

In another paper, I will show how these measurements can be defined in practice and how to measure the metric values of software systems on the fly.
