
February 19, 2013 | Software Consultancy

Withstanding the test of time – Part 2

How to create robust tests for Spring based applications

This blog post continues from Part 1, which discussed types of tests and how to create robust tests. Part 2 examines techniques to help whip a test suite into shape and resolve common issues that slow everything down. The approaches in this post focus on Spring-based applications, but the concepts can be applied to other frameworks too.

WRITTEN BY

Gawain Hammond


Performant Tests

A test suite can easily rocket in execution time. So how do we keep tests lean and fast? Well, ask yourself, what slows down tests? The obvious culprit is that some data access is slower than others. Memory is faster than hard disk, which in turn is faster than network access.

We then see this pattern repeated as unit tests are typically faster than component tests, which are faster than tests run against a production like environment.

A production environment will have a fully configured application context and require use of external data sources and messaging systems. So what can we do to speed up our tests?

Fast Running Unit Tests

Unit tests (by definition) are executed with all needed data in memory and are typically very short running. They tend to have a small memory footprint and will be the fastest executing tests you write. But we can't confidently test that a system is ready for production deployment with just unit tests, can we? Well, why not? If your application has been thoughtfully designed, with requirements and appropriate class responsibilities in mind, there's little reason why unit tests alone cannot give you confidence that your application will deliver value.

In many cases there's no need to run tests in a production-like configuration, because the code under test is already production code. If the majority of your acceptance tests can't be executed as fast as unit tests, that is potentially a design smell and will cost any team a huge hit in productivity, especially larger teams.

Atomic Tests

A test suite is going to finish running a lot sooner when tests can be executed in parallel. Unfortunately, this is not something often considered when the first tests of a project are being written. Even if developers do initially consider parallel execution, the intention can quickly be forgotten in large teams that communicate poorly due to geographical distribution or siloed structures.

An atomic test is one which is self-contained and does not share resources with other tests. Unit tests, for example, will always be atomic and never affect other tests. In contrast, component/integration tests typically interact with shared parts of the system, such as databases or messaging systems, so care has to be taken to ensure the data they create or alter does not affect other running tests. Reading data is typically safe, as long as the data is not being changed by other tests running in parallel.

Common areas of contention between tests are:

Databases

If two tests share data, either by using existing data or by creating data with the same identity, then they cannot be run in parallel. This is generally not going to be an issue if your tables use auto-generated keys, but that alone doesn't guarantee protection from the problem.

Ensure pre-populated data is read-only, or ensure tests create all the data they need without sharing records with the same primary keys, thereby guaranteeing atomicity. This also applies to any other persistent or stateful resource, for example files, stateful singleton classes, messaging systems, web services, etc.
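As a rough illustration of keeping database tests atomic, the sketch below creates the only row it asserts against, keyed by a random UUID. It assumes spring-jdbc and an embedded H2 driver are available on the test classpath; the customer table and its columns are made up for the example.

import java.util.UUID;

import org.junit.Test;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabase;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseBuilder;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType;

import static org.junit.Assert.assertEquals;

public class AtomicCustomerDataTest {

    @Test
    public void readsBackOnlyTheRowThisTestCreated() {
        // An in-memory database stands in for the shared one; the atomicity idea is the same
        EmbeddedDatabase db = new EmbeddedDatabaseBuilder()
                .setType(EmbeddedDatabaseType.H2)
                .build();
        JdbcTemplate jdbc = new JdbcTemplate(db);
        jdbc.execute("create table customer (id identity primary key, email varchar(255))");

        // Unique identity per test run - no collision with pre-populated data or other tests' rows
        String uniqueEmail = "customer-" + UUID.randomUUID() + "@example.com";
        jdbc.update("insert into customer (email) values (?)", uniqueEmail);

        int matches = jdbc.queryForObject(
                "select count(*) from customer where email = ?", Integer.class, uniqueEmail);
        assertEquals(1, matches);

        db.shutdown();
    }
}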

Spring application contexts

Using @ContextConfiguration in your tests is a simple and clean way to run integration-style tests that harness a Spring application context. Handily, the SpringJUnit4ClassRunner caches each application context it creates, using the string parameter in the annotation as a unique key. If each test uses a different string to load different context files, a new application context will be created every time.

Doing this increases memory usage and adds execution time while each new application context bootstraps, which becomes especially noticeable when numerous application contexts are used in a large test suite. A simple way to avoid this is to import a single “test-context.xml” file in all Spring-based tests.

The other benefit of this approach is that all tests can in turn import a real production context from the test-context.xml, which means tests will be running in a production-like configuration.
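A minimal sketch of a test wired up this way is shown below. The test-context.xml location is an assumption: the idea is that it imports the real production context (plus any test-only overrides), and because every test references the same string, the Spring test framework creates and caches just one application context for the whole suite.

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

import static org.junit.Assert.assertNotNull;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:test-context.xml") // same string in every test, so the context is cached once
public class SharedContextTest {

    @Autowired
    private ApplicationContext context; // reused across all tests that declare the same configuration

    @Test
    public void sharedApplicationContextLoads() {
        assertNotNull(context);
    }
}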

Another consideration with Spring-based tests is that some tests need to modify the application context at runtime (with BeanPostProcessors, BeanFactoryPostProcessors and bean profiles, for example). This can be done for many reasons, such as to use a test datasource or web service client. Substituting beans at runtime can be confusing for other developers if not managed consistently and explicitly, and it can also break other tests that run against the same context.

Ideally, keep all application context changes in a single configuration location so it's easy for other team members to see how changes may affect their tests. It's also a good idea to make the changes obvious in the system logs by outputting prominent statements about them so other developers can easily spot what has been altered. Another possibility is to perform runtime assertions as the application context loads, to ensure it is configured as expected and fails fast if not (for example, throw an exception unless a test datasource is being used), as in the sketch below.
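One way to express that fail-fast check, sketched under the assumption that the shared test context wires in a Spring SimpleDriverDataSource (substitute whatever type your tests actually use), is a small guard bean declared in the test context:

import javax.sql.DataSource;

import org.springframework.beans.factory.InitializingBean;
import org.springframework.jdbc.datasource.SimpleDriverDataSource;

public class TestDataSourceGuard implements InitializingBean {

    private final DataSource dataSource;

    public TestDataSourceGuard(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public void afterPropertiesSet() {
        // The type allowed here is only an example - adapt it to the DataSource your test context declares
        if (!(dataSource instanceof SimpleDriverDataSource)) {
            throw new IllegalStateException("Expected a test DataSource but found "
                    + dataSource.getClass().getName() + " - refusing to start the test context");
        }
    }
}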


Messaging Systems

Tests that involve asynchronous messaging should be very carefully designed with strategic constraints to ensure consistent results. When sending and receiving messages, tests should ensure each message contains unique data to guarantee it can be filtered to the desired endpoint. I've seen cases where a team created a set of tests that send messages and expect to receive replies.

Unfortunately, due to asynchronous behaviour in the system, a message never arrived within the timeout period and a test failed. The next test then received the previously lost message, which caused it to fail because it was never designed to handle an unexpected duplicate message. Using messages in tests is fine when only one test expects a message with a given payload, but there's a risk that another developer will write a test expecting a similar message (such as a 'success' status reply rather than a 'failure'). Ideally, asynchronous tests avoid these situations either by adding unique headers to aid filtering or by using unique payloads in each test, such as a client-generated UUID.
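The sketch below shows the unique-payload idea without any messaging framework: the test tags its request with a client-generated UUID and, when reading replies, discards anything that does not carry that UUID, so a stray reply left over from an earlier timed-out test cannot make it fail. The BlockingQueue simply stands in for whatever reply channel or queue the real system uses.

import java.util.UUID;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

import org.junit.Test;
import static org.junit.Assert.assertNotNull;

public class UniqueReplyFilteringTest {

    private final BlockingQueue<String> replies = new LinkedBlockingQueue<String>();

    @Test
    public void ignoresRepliesMeantForOtherTests() throws InterruptedException {
        String correlationId = UUID.randomUUID().toString();

        // Simulate a late reply left over from another test, plus the reply this test expects
        replies.offer("stale-reply-from-another-test");
        replies.offer("reply:" + correlationId);

        String mine = pollForReplyWith(correlationId, 2, TimeUnit.SECONDS);
        assertNotNull("Expected a reply correlated to this test's UUID", mine);
    }

    private String pollForReplyWith(String correlationId, long timeout, TimeUnit unit)
            throws InterruptedException {
        long deadline = System.nanoTime() + unit.toNanos(timeout);
        while (System.nanoTime() < deadline) {
            String candidate = replies.poll(100, TimeUnit.MILLISECONDS);
            if (candidate != null && candidate.contains(correlationId)) {
                return candidate; // the reply addressed to this test
            }
            // anything else belongs to a different test and is simply discarded
        }
        return null;
    }
}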

Testing asynchronous behaviour is hard, so frameworks that help (such as Spring Integration) usually avoid the issue by constraining and encouraging you to isolate the code under test and exercise it synchronously.


Sleeping and Asynchronous Systems

Another common performance killer is Thread.sleep(XXXXX) in tests while waiting for something asynchronous to happen. What's the solution? Quite simply, the best thing you can do is isolate the behaviour under test and test it synchronously, or substitute asynchronous modules in the system with synchronous ones at runtime in tests. If you must test asynchronously occurring behaviour, then you have a few basic options:

Polling

Polling isn't always ideal, as it can wait longer than needed and each poll can be process-intensive, especially on a shared build machine. This scenario can occur, for example, when you need to assert that an asynchronous event has, at some point in the future, inserted a record into a database. There may be no way to know when this will happen, so you need some kind of polling strategy. Make sure the polling code is reusable and lives in your test support module so it's not written in an ad hoc, untested manner.
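A rough sketch of such a reusable helper is below; the class and method names are made up, the point being that the test proceeds as soon as the condition holds rather than sleeping for the worst-case duration every time.

import java.util.concurrent.Callable;
import java.util.concurrent.TimeUnit;

public final class Poller {

    private Poller() {
    }

    /** Repeatedly evaluates the condition until it returns true or the timeout elapses. */
    public static void waitUntil(Callable<Boolean> condition, long timeout, TimeUnit unit)
            throws Exception {
        long deadline = System.nanoTime() + unit.toNanos(timeout);
        while (System.nanoTime() < deadline) {
            if (Boolean.TRUE.equals(condition.call())) {
                return; // condition met - stop waiting immediately
            }
            Thread.sleep(100); // poll interval; tune for your build environment
        }
        throw new AssertionError("Condition not met within " + timeout + " " + unit);
    }
}

// Example usage from a test (orderRepository is hypothetical):
//   Poller.waitUntil(new Callable<Boolean>() {
//       public Boolean call() { return orderRepository.countShippedOrders() == 1; }
//   }, 5, TimeUnit.SECONDS);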

CountDownLatch

A CountDownLatch allows your test code to wait until either a specified number of 'countdown' events have occurred or a timeout has elapsed. A latch doesn't give you much more than a binary yes/no condition for counting down, so it is not a sophisticated solution (keep it simple if you can!). If you need to distinguish between event types in a system where lots of messages pass through a single point, it may be better to use another approach, such as filtering messages back to your test.
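A minimal sketch of the latch approach: the test registers a callback (simulated here with a plain Thread) that counts the latch down when the asynchronous work completes, and the test thread blocks only until that happens or the timeout expires.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class AsyncCompletionTest {

    @Test
    public void waitsForTheAsyncWorkToFinish() throws InterruptedException {
        final CountDownLatch done = new CountDownLatch(1);

        // Stand-in for the asynchronous behaviour under test; in a real test this would be
        // a listener or callback registered on the component doing the work
        new Thread(new Runnable() {
            @Override
            public void run() {
                // ... asynchronous work happens here ...
                done.countDown(); // signal completion to the waiting test thread
            }
        }).start();

        // Returns true as soon as countDown() is called, false if the timeout expires first
        assertTrue("Async work did not complete in time", done.await(2, TimeUnit.SECONDS));
    }
}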

Event Driven Assertions

This can be something simple such as the observer pattern, or a more sophisticated approach like message interceptors (configured as beans in a test-only profile) to ensure that expected messages/events have passed through some channel or endpoint. An interceptor can also verify the content of the message; in this case you may need matchers (like Hamcrest), or a test-context-only mechanism that sends an event back to your test to let it know the message arrived. This can all be a bit over-engineered and is usually the result of a design that hasn't been broken down into simpler (more synchronous) modules.
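As a simple illustration of the observer flavour of this idea (all names here are hypothetical), a test-only listener can be registered with the component under test and record the events it sees, letting the test block until something arrives and then assert on the content:

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class RecordingEventListener {

    private final List<String> received = new CopyOnWriteArrayList<String>();
    private final CountDownLatch firstEvent = new CountDownLatch(1);

    /** Called by the code under test (or a test-only interceptor) whenever an event occurs. */
    public void onEvent(String event) {
        received.add(event);
        firstEvent.countDown();
    }

    /** Blocks until at least one event has arrived, then returns everything seen so far. */
    public List<String> awaitEvents(long timeout, TimeUnit unit) throws InterruptedException {
        if (!firstEvent.await(timeout, unit)) {
            throw new AssertionError("No event received within " + timeout + " " + unit);
        }
        return received;
    }
}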

Stub Remote Calls

Remote calls are the big productivity killer of tests. The more network based calls that occur the slower the tests will be. A simple way to avoid this is to stub out endpoints that communicate with remote systems using bean profiles or BeanPostProcessors. This will ensure the tests run entirely in memory and are thus much faster to execute.

This is easy to do if all tests use a single test context which replaces DAOs/EntityManagers/MessageAdapters with stubs that return dummy data. The exception to this rule is, of course, integration-style tests for DAO/Repository classes that need to ensure the system can successfully read and write entities to a database. In this case it makes sense to make the remote call and absorb the cost by keeping these types of test to a minimum.
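A minimal sketch of the profile approach is below. The ExchangeRateClient interface and the "test" profile name are assumptions: the production context would register the real remote implementation, while tests annotated with @ActiveProfiles("test") pick up the in-memory stub and never leave the JVM.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

interface ExchangeRateClient {
    double rateFor(String currencyCode);
}

@Configuration
@Profile("test")
class StubRemoteCallsTestConfig {

    @Bean
    public ExchangeRateClient exchangeRateClient() {
        // In-memory stub returning canned data instead of making a network call
        return new ExchangeRateClient() {
            @Override
            public double rateFor(String currencyCode) {
                return 1.25; // dummy rate, good enough for service-level tests
            }
        };
    }
}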


Wrap Up

That brings us to the end of Part 2 (Part 1 can be found here). The next post (Part 3) will cover software design strategies that can also help improve performance and clarity of tests and code.

 

