
2 posts tagged with "ops"


AnimusNull · 2 min read

Java Microbenchmark Harness, herein referred to as JMH, is a micro-benchmarking tool meant for internal performance testing. It looks similar to JUnit and a respective unit test, but its intent is to benchmark the performance of a method, generally without calling out to an external service.

import org.openjdk.jmh.annotations.Benchmark

// JMH generates subclasses of benchmark classes, so Kotlin classes must be open.
open class BenchmarkLists {
    @Benchmark
    fun benchmark_comprehension(): List<Int> {
        return (1..2).map { it + 5 }
    }
}

The @Benchmark annotation is the equivalent of a @Test method. It executes as a benchmark, and how it runs depends on the configuration.
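As a sketch of what that configuration can look like (the values here are arbitrary examples, not recommendations), JMH's annotations control the benchmark mode, time unit, warmup, measurement iterations, and forking:

import java.util.concurrent.TimeUnit
import org.openjdk.jmh.annotations.*

// Arbitrary example values: measure average time per call in microseconds,
// with 3 warmup and 5 measurement iterations in a single forked JVM.
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@Warmup(iterations = 3)
@Measurement(iterations = 5)
@Fork(1)
open class ConfiguredBenchmark {
    @Benchmark
    fun benchmark_comprehension(): List<Int> = (1..2).map { it + 5 }
}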

Setup & Teardown

Like JUnit with @Before, there are methods to set up and configure a benchmark run. The common use case for this is to instantiate core data connections, load a file, or do some other configuration.

As an example, when testing a serializer, rather than performing file IO in each respective benchmark method, the file should be loaded before each run. Code called during @Setup will not be counted in the final benchmark results.

import java.io.File
import org.openjdk.jmh.annotations.*

// Benchmark classes that hold state need @State, and Kotlin classes must be
// open so JMH can generate its subclasses.
@State(Scope.Benchmark)
open class BenchmarkLists {
    lateinit var rawJson: String

    @Setup
    fun setup() {
        rawJson = File("/tmp/data.json").readText()
    }

    @TearDown
    fun teardown() {
        // Close file handles or connections.
    }

    @Benchmark
    fun benchmark_comprehension(): List<Int> {
        return (1..2).map { it + 5 }
    }
}

With Kotlin you can declare properties as lateinit var and have them initialised in the @Setup method.

Parameters

Each JMH benchmark supports a set of parameters. These parameters allow benchmarking, or testing performance, across several different variables. A use case would be to test several files of varying size.
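A minimal sketch of that idea using @Param (the benchmark and sizes here are hypothetical). JMH runs the benchmark once for each listed value:

import org.openjdk.jmh.annotations.*

@State(Scope.Benchmark)
open class ParameterisedBenchmark {
    // JMH injects each listed value into the field and reruns the benchmark per value.
    // @JvmField exposes a plain public field for JMH to set.
    @Param("10", "1000", "100000")
    @JvmField
    var size: Int = 0

    @Benchmark
    fun benchmark_map(): List<Int> = (1..size).map { it + 5 }
}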

Gradle Plugins

Kotlinx

I've not had much luck with this plugin, and have opted for the one below.

JMH Gradle Plugin
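A rough build.gradle.kts sketch for wiring up the JMH Gradle Plugin; the plugin id and version numbers are assumptions on my part, so check the plugin's README for current ones:

// build.gradle.kts
plugins {
    kotlin("jvm") version "1.9.24"           // assumed Kotlin version
    id("me.champeau.jmh") version "0.7.2"    // assumed plugin id/version, verify against the README
}

repositories {
    mavenCentral()
}

jmh {
    // Mirrors the annotation-level configuration shown earlier.
    warmupIterations.set(3)
    iterations.set(5)
    fork.set(1)
}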

Flame charts

AnimusNull · 3 min read

As we focus on features, and on making a product that passes a given requirement set, we often forget to validate the feature we're delivering. There is an intersection here between DevOps, SRE, Product, and Software.

Delivering the feature is only one part, but how do we gauge success? This further delves into value stream analytics, but skating over that: we've delivered the feature, so how is the customer experience? Customers are not always external; it could also be an internal team using your service. There are a number of criteria that can gauge the value stream of a service. It could be latency, requests per second, or something more business focused like the number of purchases.

Performance testing is a means to validate that a given feature set meets the expected requirements. Usually the requirements will be defined by the product and/or project team, where they specify the baseline required to meet customer needs.

To validate the results you will need either a framework to do performance testing, or metrics to assess the performance. If focusing on custom metrics, there is additional cardinality data you can include, such as the customer id or other metadata, to discern which customer is being impacted.
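If you go the metrics route, the sketch below assumes a Micrometer-style API (the metric names and handler function are hypothetical); the point is tagging measurements with the customer id so they can be broken down per customer later. Be aware that per-customer tags raise metric cardinality, so this fits best where the customer count is bounded.

import io.micrometer.core.instrument.Metrics
import java.util.concurrent.TimeUnit

// Hypothetical request hook: record throughput and latency tagged by customer id.
fun recordRequest(customerId: String, durationMs: Long) {
    Metrics.counter("orders.requests", "customer_id", customerId).increment()
    Metrics.timer("orders.latency", "customer_id", customerId)
        .record(durationMs, TimeUnit.MILLISECONDS)
}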

There are two approaches to performance testing. I've broken them down as follows:

  • Internal: the equivalent of unit testing.
  • External: the equivalent of integration testing.

Internal Performance Testing

An internal performance test generally does not require any additional infrastructure. It can call out to an external data source, but that is not the common use case. It is meant to test the performance of methods inside the program, and is a means to compare several potential approaches or libraries and their respective performance.

As an example, you may want to test the performance of a serialization library. In a sample test case you would write a method for each serialization approach, getting a baseline of how the different approaches perform.
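A minimal sketch of that kind of comparison; the data class is made up, and jackson-databind plus gson are simply two stand-in libraries assumed to be on the classpath:

import com.fasterxml.jackson.databind.ObjectMapper
import com.google.gson.Gson
import org.openjdk.jmh.annotations.*

data class Order(val id: Int, val items: List<String>)

@State(Scope.Benchmark)
open class SerializationBenchmark {
    private val jackson = ObjectMapper()
    private val gson = Gson()
    private val order = Order(1, listOf("book", "pen", "mug"))

    // One benchmark per approach gives a side-by-side baseline.
    @Benchmark
    fun jackson_serialize(): String = jackson.writeValueAsString(order)

    @Benchmark
    fun gson_serialize(): String = gson.toJson(order)
}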

A case of testing an external library would be testing different approaches to getting data out of a database, assessing the query and parsing time.

Note: Concurrency and threading are not a good fit for this use case. The time to spin up a thread pool usually skews the resulting data set.

External Performance Testing

External testing acts as an external caller into the code as a deployed service. This assumes an instance of the code is running as a service that can be queried, gathering average response times, potential throughput, and other metrics.

This usually follows the internal performance testing, to validate the performance of the entire application. It is meant to gauge the final customer experience and the maximum throughput of the application.

The test can focus on stress testing, or be a general test that gauges more of an average. In either case the intent is to gauge how the application performs as a deployed entity.
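As a very rough sketch of the idea (a real run would use a dedicated load-testing tool; the URL and request count here are placeholders):

import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    val client = HttpClient.newHttpClient()
    // Placeholder endpoint for the deployed service under test.
    val request = HttpRequest.newBuilder(URI.create("http://localhost:8080/orders")).GET().build()

    // Fire a fixed number of requests and record the wall-clock time of each.
    val timingsMs = (1..100).map {
        val start = System.nanoTime()
        client.send(request, HttpResponse.BodyHandlers.ofString())
        (System.nanoTime() - start) / 1_000_000.0
    }

    println("average response time: ${timingsMs.average()} ms")
    println("max response time: ${timingsMs.maxOrNull()} ms")
}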

Note: It's important to ensure that you don't have a bottleneck in an exterior layer (database, message queue, etc.).