20 Aug 2013

Performance testing at Postmark

As Postmark grows, fast and reliable email delivery becomes a heavier task. At the same time, performance testing becomes more fun. The more emails we send, the more details we need to watch to preserve the same quality of service.

We have been performance testing Postmark for a long time. Postmark is growing fast, and as a result our performance tests are becoming more frequent and more complex.

I would like to share a small insight into how we performance test Postmark.


05 May 2013

Automated Testing at Postmark

With Postmark, a day doesn’t go by that I don’t think to myself how simple it is to use and how fast you can start sending emails.

Simple, easy, fast - three words which are our top-priority goals. Achieving all three, especially sending emails fast, puts a fourth at risk: reliability. From day one, our goal has been to deliver your email reliably. Losing emails, or emails not reaching your inbox, has never been an option.

We have been working very hard to maintain this. My main goal today is to share with you what I do, as a tester, to make sure Postmark is doing its job.

The tools we use

Before I go into further detail, let me share with you which tools we use to test Postmark:

  • Selenium (Selenium Grid, Selenium WebDriver + RSpec)
  • Jenkins
  • Statsd/Librato

Writing the first tests

I have been writing automated tests for Postmark for a while now. The first tests I wrote are the ones which we run most frequently.

When creating the test suite, I decided to work first on the most important test cases - the ones from which the team and I would benefit the most.

For Postmark, this was a no-brainer. The most important thing to test is sending. First I wrote the tests covering sending email via SMTP and the API. These tests send emails and check whether or not they reach the inbox.

We track how long it takes to send and receive each email, and if we see any slowdown, the tests notify us right away. These sending tests run every 5 minutes.
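As a rough sketch of the pattern (the host, credentials, and inbox check below are placeholders, not our real setup):

```ruby
require "net/smtp"

# Poll a delivery check until it succeeds or a timeout elapses.
# Returns the elapsed seconds on success, or nil on timeout.
def wait_for_delivery(timeout: 60, interval: 2)
  started = Time.now
  until yield
    return nil if Time.now - started > timeout
    sleep interval
  end
  Time.now - started
end

# Hypothetical sending test: send a probe message over SMTP,
# then measure how long it takes to show up in the inbox.
def smtp_sending_test
  message = <<~MAIL
    From: sender@example.com
    To: inbox@example.com
    Subject: Sending test probe

    Probe message.
  MAIL

  Net::SMTP.start("smtp.example.com", 587, "example.com", "user", "password", :login) do |smtp|
    smtp.send_message(message, "sender@example.com", "inbox@example.com")
  end

  # inbox_contains? is a stand-in for however the suite checks the test inbox.
  elapsed = wait_for_delivery(timeout: 120) { inbox_contains?("Sending test probe") }
  raise "email did not arrive in time" if elapsed.nil?
  elapsed
end
```

The returned elapsed time is what gets recorded, so a slowdown shows up as a trend rather than only as a hard failure.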

Extending the test suite

Once I was sure we covered all the important scenarios for basic email sending, I started extending our test suite with inbound processing tests, Bounce API tests, and UI tests.

Running the tests

As soon as I wrote the first tests, I wanted a way to run them as frequently as possible. One option we considered was running them with SauceLabs. SauceLabs is a great service for running your tests in the cloud, with rich reports and the option to run tests on most of the browsers and platforms you can think of.

The only problem is that we found SauceLabs would not be cost effective for us, since we have to run parts of the test suite frequently on one of our own machines. It made the most sense to have a dedicated machine for testing.

Russ, our sysadmin, set up a dedicated Linux machine on which we installed Jenkins to run the Selenium tests. Jenkins is a great CI tool, used a lot these days by other software testers too. We decided to use Jenkins on this machine solely for my tests.

Execute tests as frequently as possible

The main idea was to run the tests as frequently as possible, so we run Selenium tests throughout the day. This gives us much more feedback from Postmark than running the tests only when changes are introduced.

Why do we run the tests all the time? Our developers have created a lot of tools that monitor Postmark’s health around the clock, but it’s always better to have two sets of eyes on our web application, and Selenium tests act like real users more than any other part of our test suite.

We can sleep soundly knowing that emails are being sent, activity pages are working, and our customers can use all of the features in the UI. As a bonus, the Selenium tests generate useful data in the test account we are using, which can help in manual testing.

To run our tests faster, and to run top-priority tests continuously, we use Selenium Grid and Jenkins nodes to execute tests in parallel. The sending tests run on separate Jenkins nodes, while the UI tests run on separate Selenium ports.
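For illustration, pointing an RSpec suite at a Grid hub might look like this (the hub URL and browser choice are assumptions, not our actual configuration):

```ruby
require "selenium-webdriver"
require "rspec"

RSpec.configure do |config|
  config.before(:each) do
    # Ask the Grid hub for a browser session instead of driving a
    # local browser; the Grid routes it to an available node.
    @driver = Selenium::WebDriver.for(
      :remote,
      url: "http://grid-hub.example.com:4444/wd/hub",
      desired_capabilities: :firefox
    )
  end

  config.after(:each) { @driver.quit }
end
```

Because each example only asks the hub for "a Firefox session", adding capacity is a matter of registering more nodes with the Grid; the tests themselves don't change.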


Since we are running the tests all of the time, we can track much more information about them than just whether they are passing or failing.

My tests check sending, search, activity, the Bounce API, and inbound processing. So why not have statistics for all of these tests?

Thanks to Chris, I found out about Librato, and we decided to integrate the Selenium tests with it. This gives us a complete performance history of search, Bounce API calls, and inbound processing.
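In practice we use a statsd client for this, but the idea can be sketched with a minimal StatsD-style sender (the metric key below is illustrative, not one of our real metric names):

```ruby
require "socket"

# Minimal StatsD-style timing client: runs a block, then fires the
# duration at a statsd daemon over UDP in the "key:ms|ms" wire format.
class TestMetrics
  def initialize(host = "localhost", port = 8125)
    @host, @port, @socket = host, port, UDPSocket.new
  end

  # Record how long the block took, in milliseconds, under the given key.
  # Returns the block's value so it can wrap an existing test step.
  def time(key)
    started = Time.now
    result = yield
    ms = ((Time.now - started) * 1000).round
    @socket.send("#{key}:#{ms}|ms", 0, @host, @port)
    result
  end
end

metrics = TestMetrics.new

# Hypothetical usage inside a test step:
metrics.time("tests.search.duration") do
  # perform the search and its assertions here
end
```

Because UDP is fire-and-forget, timing a test step this way adds essentially no overhead, and statsd can forward the aggregated values on to Librato for graphing.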


The regression tests work hard every day on our testing machine. They are a third eye, watching over Postmark constantly. They have helped us out numerous times by preventing issues, and even by foreseeing them.

Of course, there is always room for improvement, and I am eager to hear what you think. What process or workflow do you use to test software?

11 Nov 2012

Running JMeter tests with Jenkins

Performance tests are a very important part of the quality assurance process. As Postmark and Beanstalk grow, the need for more frequent and more complex performance tests grows too.

Performance tests help us identify the maximum operating capacity of our web applications, as well as find any bottlenecks and elements which could cause performance degradation. By running performance tests, we make sure you don’t experience any slowdown while using our web applications.

We use JMeter for performance testing, mostly for testing Postmark. JMeter works great for us: it’s highly configurable and lets us stress and load test easily.

The only problem with JMeter (as with most other performance testing tools) is that it’s not a tool everyone can use. The scenario I use for performance testing is the following:

  • log in to our test machine
  • fire up JMeter
  • load our performance tests
  • set up the parameters
  • run the tests
  • watch the results

Performance testing is a process which requires a lot of time and frequent repetition in order to get all the important data. What I wanted was to simplify the process as much as possible and make performance tests available to everyone on the team.

My main goals were to:

  • make the most-used tests easily accessible to everyone on the team
  • allow simple configuration of the tests
  • remove the need to log in to the testing machine to run the tests - allow remote execution

Running JMeter headless

The first step towards these goals was to run the JMeter tests headless - without the need for a UI. JMeter lets you run your tests straight from the command line.

In our case, I wanted a test which sends email. When running the test, the following options should be configurable:

  • set the environment we will use (staging/production)
  • set credentials
  • set email content
  • set the load (number of users sending the emails, number of runs, duration)

To do this, you can use properties to pass user input into JMeter. You can pass arguments on the command line when running JMeter, and they will be picked up in your tests.

For example, in my tests I can have a property called ENV, which can have the value “staging” or “production”. In JMeter, I can read the property with the following call: ${__P(ENV)} and check its value. Depending on the value, JMeter will load the settings for the appropriate environment and run the test.

My JMeter call would look something like this:

sh jmeter.sh -n -p user.properties \
  -t /smtp_test_param.jmx -l /reports/smtp_test_result.jtl \
  -JUSERNAME=username -JPASSWORD=password \
  -JUSERS=5 -JHOWMUCHTIMESTORUN=4 -JDURATION=20 \
  -JENV=staging -JFILE=content.txt

This way I can run the JMeter tests right from the console, with no need to open the JMeter UI. Now all I had to do was allow my team and me to run these commands remotely.

Running JMeter Tests with Jenkins

I was already using Jenkins to run our automated tests, so Jenkins was a natural choice for running the JMeter performance tests. For every test build, you can simply set up parameters, even with default values, so all I had to do was create a Jenkins test project in which I would:

  • set the parameters and their default values
  • open a shell to run JMeter
  • run JMeter with properties taken from the parameters I set up in Jenkins

With a parametrized test project in Jenkins, everyone can now run the performance tests. Not only that, it reduced the time I need to spend on performance testing, and it even made manual testing faster in cases when I need to send a larger number of test emails.

Viewing JMeter Test reports with Jenkins

After setting up the parametrized test projects in Jenkins, there was one piece missing - viewing the performance test reports. To get these, we needed to install the Performance Plugin for Jenkins.

This plugin can set the final build status to good, unstable, or failed based on the reported error percentage. This is very important, since without it we would not be able to easily determine whether a test passed or failed, and why it failed.

After installing the plugin, all we needed to do was update the test project by adding the post-build action “Publish Performance test results report” and setting the JMeter report file to “/reports/smtp_test_result.jtl”, which I had set up earlier in the JMeter call.

Now we can run the JMeter tests easily and review the reports straight from Jenkins. This also builds up a history of all our performance test runs.

Let me know what you think about running performance tests with Jenkins and JMeter.