
Performance Testing

· 3 min read
AnimusNull

As we focus on features and on building a product that passes a given requirement set, we often forget to validate the features we're delivering. This is where DevOps, SRE, Product, and Software intersect.

Delivering the feature is only one part, but how do we gauge success? This delves further into value stream analytics, but setting that aside: we've delivered the feature. How is the customer experience? Customers are not always external; a customer could also be an internal team using your service. There are a number of criteria that can gauge the value stream of a service: latency, requests per second, or something more business-focused like the number of purchases.

Performance testing is a means to validate that a given feature set meets the expected requirements. Usually the requirements will be defined by the product and/or project team, who specify the baseline needed to meet customer needs.

To validate the results you will need either a framework for performance testing, or metrics to assess the performance. If you focus on custom metrics, there is additional cardinality you can add, such as a customer ID or other metadata, to discern which customer is being impacted.
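As a minimal sketch of that metrics approach, using the Prometheus Go client with a hypothetical `app_request_duration_seconds` histogram labelled by customer ID (the metric name and helper are assumptions, not from any specific codebase):

```go
package metrics

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// requestDuration is a hypothetical custom metric: request latency labelled
// by customer ID so you can discern which customer is being impacted.
var requestDuration = promauto.NewHistogramVec(
	prometheus.HistogramOpts{
		Name: "app_request_duration_seconds",
		Help: "Request latency per customer.",
	},
	[]string{"customer_id"},
)

// ObserveRequest records how long a request took for a given customer.
func ObserveRequest(customerID string, start time.Time) {
	requestDuration.WithLabelValues(customerID).Observe(time.Since(start).Seconds())
}
```

Keep in mind that high-cardinality labels such as customer IDs can be expensive in some metrics backends, so weigh that against how much per-customer visibility you actually need.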

There are two broad approaches to performance testing. I've broken them down as follows:

  • Internal: the equivalent of unit testing.
  • External: the equivalent of integration testing.

Internal Performance Testing

An internal performance test generally does not require any additional infrastructure. It can call out to an external data source, but that is not the common use case. It is meant to test the performance of methods inside the program: a means to compare several potential approaches, or libraries, and their respective performance.

As an example, you may want to test the performance of a serialization library. In a sample test case you would write a method to exercise several serialization approaches, getting a baseline of how the different approaches perform.
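A minimal sketch of this in Go, assuming a hypothetical `Payload` struct, uses the standard `testing` benchmark harness to compare two standard-library serialization approaches:

```go
package serialization_test

import (
	"bytes"
	"encoding/gob"
	"encoding/json"
	"testing"
)

// Payload is a hypothetical message used only for this benchmark.
type Payload struct {
	ID    int64
	Name  string
	Tags  []string
	Score float64
}

var sample = Payload{ID: 42, Name: "example", Tags: []string{"a", "b", "c"}, Score: 9.5}

// BenchmarkJSONMarshal measures encoding/json serialization of the sample payload.
func BenchmarkJSONMarshal(b *testing.B) {
	for i := 0; i < b.N; i++ {
		if _, err := json.Marshal(sample); err != nil {
			b.Fatal(err)
		}
	}
}

// BenchmarkGobEncode measures encoding/gob serialization of the same payload.
func BenchmarkGobEncode(b *testing.B) {
	for i := 0; i < b.N; i++ {
		var buf bytes.Buffer
		if err := gob.NewEncoder(&buf).Encode(sample); err != nil {
			b.Fatal(err)
		}
	}
}
```

Running `go test -bench=. -benchmem` reports nanoseconds per operation and allocations for each approach, which gives you the baseline comparison.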

A case of testing an external library would be comparing different approaches to getting data out of a database, assessing the query and parsing time.
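A sketch of what that could look like with `database/sql`, assuming a locally running Postgres instance, a `users` table, and the pgx driver (the DSN, table, and columns are placeholders for illustration only):

```go
package dbquery_test

import (
	"database/sql"
	"testing"

	_ "github.com/jackc/pgx/v5/stdlib" // registers the "pgx" driver
)

// BenchmarkUserQuery times a single query plus row scanning against an
// assumed local database; adjust the DSN and query to your environment.
func BenchmarkUserQuery(b *testing.B) {
	db, err := sql.Open("pgx", "postgres://localhost:5432/testdb")
	if err != nil {
		b.Fatal(err)
	}
	defer db.Close()

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		rows, err := db.Query("SELECT id, name FROM users LIMIT 100")
		if err != nil {
			b.Fatal(err)
		}
		for rows.Next() {
			var (
				id   int64
				name string
			)
			if err := rows.Scan(&id, &name); err != nil {
				b.Fatal(err)
			}
		}
		rows.Close()
	}
}
```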

Note: Concurrency and threading are not well suited to this use case; the time to spin up a thread pool usually skews the resulting data set.

External Performance Testing

External testing acts as an external caller into the code as a deployed service. This assumes an instance of the code is running as a service that can be queried, gathering average response times, potential throughput, and other metrics.

This usually follows the internal performance testing, to validate the performance of the entire application. It is meant to gauge the final customer experience and the maximum throughput of the application.

The test can focus on stress testing, or on a more general run that gauges an average. In either case the intent is to gauge how the application performs as a deployed entity.
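As a rough sketch, an external test can be as simple as a concurrent HTTP client hammering a deployed endpoint and reporting average latency (the URL and `/healthz` path below are assumptions for illustration):

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
	"time"
)

func main() {
	// Hypothetical endpoint of the deployed service under test.
	const target = "http://localhost:8080/healthz"
	const workers = 10
	const requestsPerWorker = 100

	var (
		mu        sync.Mutex
		total     time.Duration
		completed int
	)

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			client := &http.Client{Timeout: 5 * time.Second}
			for i := 0; i < requestsPerWorker; i++ {
				start := time.Now()
				resp, err := client.Get(target)
				if err != nil {
					continue // count only successful requests
				}
				resp.Body.Close()
				elapsed := time.Since(start)

				mu.Lock()
				total += elapsed
				completed++
				mu.Unlock()
			}
		}()
	}
	wg.Wait()

	if completed > 0 {
		fmt.Printf("requests: %d, average latency: %s\n", completed, total/time.Duration(completed))
	}
}
```

Dedicated load-testing tools such as k6, Locust, or JMeter apply the same idea with richer reporting, ramp-up profiles, and percentile breakdowns.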

Note: It's important to ensure that you don't have a bottleneck in an exterior layer (database, message queue, etc.).