Testing Vapor - A Deep Dive

Written by Tim on Sunday, September 3, 2017. Last edited on Monday, October 9, 2017.

This is a write up of the talk I gave at July’s Vapor London Meetup. The slides from that presentation can be found here. WARNING: this post has turned into a bit of an epic one, coming in at 8,000 words! Although I’ll skim over some basics of testing Vapor, it is a deep dive and for those who don’t feel comfortable with testing at all, I recommend reading some of the other excellent tutorials out there first.

[Image: Vapor XCTest]

Software testing has been around as long as software has been, but the practices certainly aren’t the same as back when software was written on punch cards! In these days of deploying to live multiple times, or even hundreds of times a day, the way we approach testing software has had to keep up with the demands of continuous delivery and continuous deployment.

In this article I am going to talk about some testing and design concepts that I have used, both personally and professionally on everything from some of the open source projects I have done, to small personal projects, to applications that are used by millions of people at the BBC.

Note that everything in this article is purely from my personal experience and entirely my own views. You shouldn’t blindly copy examples but borrow and adapt them to your use cases, just as we all do for code we get from Stack Overflow! Finally, whilst I have evolved the ideas in the article by developing some of the open source projects, a lot of them don’t exactly follow everything I say and still require some refactoring! (This is especially the case in SteamPress...).

Why Do We Unit Test

So first, let’s take a step back and have a look at why we run unit tests, system tests and integration tests in the first place. There are a number of reasons, the highlights of which are:

  • Make sure things work!
  • Confidence when refactoring
  • Avoiding regressions
  • Development speed
  • Helps you design your code
  • Ensuring Vapor works on Linux

If you are a single developer on a small project, or even someone coming from an environment where waterfall is still a thing (and there are still a worryingly large number of instances of this!), then you may not have bothered with unit testing before, because you haven’t seen the benefits. Hopefully once you have finished reading this, you will be itching to go!

Making Sure Things Work

This is the obvious one! The main reason we run tests on our code is to make sure it actually does what we are expecting. We write our tests to ensure that our code does what we expect it to do, and if the tests fail then we know we have a problem! This is why running your tests as part of your Continuous Integration is so important. Making sure things work is a theme that will be common throughout this article, whether it is developing quickly or refactoring. In an ideal world we would write perfect code, but everyone is human and that is never going to happen!

Confidence When Refactoring

This is the biggest benefit of tests for me. As we add more and more features into our applications and code, we start to accrue technical debt that we then want to fix. If you have a large suite of tests that covers all of your behaviour, you can change things as much as you want without having to worry about breaking anything. With Vapor OAuth, the first release was not pretty code! It contained huge classes that were difficult to understand, but I have an extensive test suite that exercises pretty much the entirety of the library, which gave me the confidence to completely rip out the implementation and not worry about breaking it.

Avoiding Regressions

As well as having confidence that refactoring doesn’t introduce any regressions, you can also ensure that any other regressions don’t leak into the code. If you take the policy of writing a failing test for every single bug that is reported or found in your code, then fixing it, you can be sure that any changes in the future won’t reintroduce the bug.

Development Speed

Most of the reasons I’ve heard for people not wanting to write tests boil down to it being slow. I’ve heard a lot of managers complaining, saying “Why are you wasting time writing unit tests, when you could be spending that time implementing features?!” This is a fundamental misunderstanding of what testing provides, and I think it harks back to the days of waterfall projects. In times gone by (or not, as the case may be), you would have your architects who would design all of your code up front, based on the user requirements that were signed off up front. Every single class would then be described and designed in UML and then the development team would implement it. Once development was done it was thrown over to the test team, who would test it manually, and half the time any bugs discovered would be fixed by a support team who didn’t even develop it in the first place! If the customer wanted anything changed then the whole project could stop whilst a change request was signed off (and paid for) and then it would be implemented. Once development was finished, it would be burned to a CD and shipped to the customer, never to be seen again. Ideally no bugs would ever be written, so why bother with tests!

Whilst that is a little bit of an extreme example, it isn’t too far off the way things used to be. In the world of rapid deployments, MVPs and constantly introducing new features, you have to be able to develop and change direction quickly. Which means you need to be able to release quickly and regularly, which means you need to have confidence in your code. And for all the reasons above, testing gives you that. If you introduce a new feature, or change some code, how long would it take you to manually test your application? And what if you want to release your application every two weeks? Automating your testing with unit tests is the way to do this, especially if you take it to the extreme and release hundreds of times a day.

Helps You Design Code

Testing can also help with how you actually design your code. By doing test-driven development, it can help break down a big problem into smaller problems and is also a great way of ensuring that you have tests for all of your code! I’ll delve into test-driven development a lot more later on.

Testing On Linux

Swift is an awesome language and Vapor is an amazing framework (obviously, otherwise you wouldn’t be reading this!) that helps you write web apps in Swift that you can deploy on Linux. Unfortunately Linux Swift is not the same as macOS Swift. There is no Objective-C runtime on Linux (...at least not one that you can hook up with Swift) and Swift is still a very young language. Whilst it works pretty well on macOS, (understandably) Apple haven’t devoted the same amount of time to ensuring it works just as well on Linux. And whilst IBM and others are doing some amazing stuff to plug the gaps, there are still parts of Foundation that are unimplemented or buggy on Linux but that work on macOS, where they have the backing of Objective-C.

For this reason, it is imperative that you test your code on Linux from your first commit. Whilst there are certainly fewer and fewer issues than there used to be, you don’t want to spend months working on an awesome application only to find that some of your core business logic relies on something that doesn’t work on Linux.

How To Test On Linux

Without the Objective-C runtime, Swift on Linux unfortunately (currently) can’t scan your code for the tests to run, so you need to give it a hand. You need to create a file called LinuxMain.swift containing an array of all your XCTestCases, which Swift will read to work out what to run. Each test case in that array points to an allTests array, which tells Swift which tests to run, so you need to add each of your tests to an allTests array in each test case, as in the sketch below.
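
A minimal LinuxMain.swift looks something like this (AppTests and RegistrationTests are hypothetical names standing in for your test module and test case):

import XCTest
@testable import AppTests

XCTMain([
    testCase(RegistrationTests.allTests),
])

And each test case declares its allTests array, pairing each test’s name with the method to run:

extension RegistrationTests {
    static var allTests = [
        ("testUserCanRegister", testUserCanRegister),
    ]
}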

This is obviously prone to failure when you add tests, so a helpful tip I found is to add the following test to each file:

func testLinuxTestSuiteIncludesAllTests() {
    #if os(macOS) || os(iOS) || os(tvOS) || os(watchOS)
        // On Darwin platforms the Objective-C runtime can discover every test,
        // so compare that count with the manually-maintained allTests array
        let thisClass = type(of: self)
        let linuxCount = thisClass.allTests.count
        let darwinCount = Int(thisClass.defaultTestSuite().testCaseCount)
        XCTAssertEqual(linuxCount, darwinCount, "\(darwinCount - linuxCount) tests are missing from allTests")
    #endif
}

This will cause a failing test when you run your tests on macOS if you forgot to add the test! Helpful!

Finally, it is also really easy to run your tests on Linux using Docker. All you need to do is add this Dockerfile to your project:

FROM swift:3.1

WORKDIR /package

COPY . ./

RUN swift package --enable-prefetching fetch
RUN swift package clean
CMD swift test

Then you can simply run docker build --tag <MY_PROJECT_NAME> . && docker run --rm <MY_PROJECT_NAME> to run your unit tests on Linux in a Docker container.

Testing And Design Strategies

What Is A Unit Test

A unit test used to be thought of as a test to exercise the logic of a unit of code. Unfortunately this seemed to get conflated with methods and functions, so it wasn’t (and still isn’t) uncommon to see a unit test written for every method, which only tested that method.

Unfortunately this leads to horribly brittle tests. If you go down this route, refactoring becomes an absolute nightmare. You could change a method signature and have 20 tests break. Removing methods or refactoring code to split it out into multiple functions would cause even more of a headache (and work). This leads to tests being ignored, or worse, removed.

Instead, a unit test should be thought of as a test of a specific behaviour of the system. For instance, this could be “given I submit a POST request to my user registration endpoint with missing fields, I should receive a 400 Bad Request response”. This also has the advantage of documenting the behaviour of your application. No one likes writing documentation, and it becomes difficult to keep up to date even with the best of intentions. By ensuring that you have a test for each behaviour, you document what you expect to happen for each use case. You also shouldn’t be afraid to write long test names, as they help you (and others) work out what the test should be doing! So instead of test400Error, and then coming back to it a few weeks later and wondering what the hell you meant, testThat400ResponseReceivedWhenMissingUsernameSentForRegistration instantly lets you understand what the test should do.

OCMock and Mocking Frameworks

Before we delve into how we test stuff, it is worth mentioning mocking frameworks, especially for those coming from Objective-C. Most people who have written tests in Objective-C will have used a mocking framework, the most popular being OCMock. The dynamic nature of the language makes it really easy to use them to stub out behaviours. It isn’t uncommon to see something along the lines of:

GCKRemoteMediaClient *sut = [[GCKRemoteMediaClient alloc] init];
self.mockRemoteClient = [OCMockObject partialMockForObject:sut];

Unfortunately this makes me go:

[Image: “Noooo!” reaction gif from The Office]

That example was taken from a Google Cast library that we wrote and use. And it is part of the reason why I dislike mocking frameworks so much. In the example above, we are creating our sut object (System Under Test) and then creating a mock object from that - i.e. we are mocking the thing we are trying to test! Whilst in this instance it is probably OK, we still have no guarantee that we are testing the thing that we think we are testing - and we have definitely been bitten in the past by doing this kind of thing.

Whilst this is definitely something that shouldn’t be done, and is more a case of 'user error', mocking frameworks make this kind of thing too easy to do. It encourages bad behaviour and lazy development. Instead of being forced to think about how you should design your code and coming up with a nice architecture, you can just mock out the parts you don’t care about and not worry about it. If you can only write a test by mocking something then you are doing it wrong, and it is a huge code smell.

Whilst mocking frameworks may have their place, thankfully in the strict static world of Swift, a lot of these situations aren’t possible and we are forced to write code properly 😜.

Testing In Swift

Back at WWDC 2015, Apple 'introduced' the concept of Protocol Orientated Programming. Whilst this design pattern has certainly been around a lot longer than 2015, it is nice to see it be pushed with Swift. There was also a really good talk at WWDC this year on Engineering For Testability that helps explain it further.

To demonstrate this, we will look at how to write a test in Vapor for the behaviour where registering a user sends them an email. For this, imagine we have our Vapor app that the registration is sent to; to send the email, the app sends a request to a 'notifications' service that sends the email for us. So we want to test that we send the request to our notifications API.

Vapor is already heavily protocol based, which makes testing this a breeze! So to test this we can write a 'Capturing Client' that will act as the client we use to send the request. This will capture the request we try to send so we can inspect it later and also return a dummy response:

class CapturingClient: ClientProtocol {
    init() {}
    required init(hostname: String, port: Sockets.Port, securityLayer: SecurityLayer, proxy: Proxy?) throws {}

    private(set) var capturedRequest: Request?
    func respond(to request: Request) throws -> Response {
        capturedRequest = request
        return "Test".makeResponse()
    }
}

Then in our test, we can send a request to the registration endpoint and ensure that our client sends the expected request to the notifications API:

func testEmailRequestSentWhenUserSuccessfullyRegistered() throws {
    let emailAddress = "han.solo@therebelalliance.com"
    let registrationRequest = Request(method: .post, uri: "/users/registration")
    var registrationJSON = JSON()
    try registrationJSON.set("first_name", "Han")
    try registrationJSON.set("last_name", "Solo")
    try registrationJSON.set("email", emailAddress)
    registrationRequest.json = registrationJSON

    _ = try drop.respond(to: registrationRequest)

    guard let json = capturingClient.capturedRequest?.json else {
        XCTFail()
        return
    }

    XCTAssertEqual(capturingClient.capturedRequest?.uri.description, "https://notifications.api.brokenhands.io")
    XCTAssertEqual(json["email"]?.string, emailAddress)
    XCTAssertEqual(json["notification_type"]?.string, "registration_email")
}

So in this simple test, we create our registrationRequest and then send it to our Vapor app. This should then send a JSON request to https://notifications.api.brokenhands.io, and we make sure that it does: we check that the email address to send to is the one the user registered with, and that we are sending a registration email.

Note that in this example we are simply testing the behaviour of our app. We aren’t testing individual methods; this leaves us free to refactor and change the code as much as we want and we still test that the behaviour that a request is sent to the correct API upon registration without having to change the test at all.

Hexagonal Architecture

This type of design pattern - of having pluggable components that are defined by a protocol or an interface - is better known as Hexagonal Architecture, or Ports and Adapters. It was first formalised in a blog post written by Alistair Cockburn in 2005.

[Image: Hexagonal Architecture diagram]

The idea behind this is that your business logic is the core of your app (the yellow part above) and it interacts with any dependencies or external services through interfaces (or protocols or ports - the red parts above). You can have different implementations (or adapters - the blue objects above) of these services and can just swap them in and out depending on what you need. So for production you may use a real database, but in tests you may just have a static hard-coded list. This helps you isolate your dependencies and helps you write nice code. You simply use dependency injection to change the different components and all your business logic interacts with the interfaces and doesn’t really care what implementation it is talking to.

If you are using a MySQL database and then one day decide to switch it out for something else, it doesn’t matter! All you have to do is change your implementation (adapter) and none of your business logic has to actually change. This can be incredibly powerful, allowing you to iterate quickly and stopping you from leaking dependencies into where they shouldn’t be.
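
To make this concrete, here is a rough sketch of the idea (all names are hypothetical): the port is just a protocol, and the adapters are interchangeable implementations that get injected into the core logic:

// The port: core logic only ever talks to this protocol
protocol UserStore {
    func user(withUsername username: String) throws -> String?
}

// A production adapter might wrap a real database
struct DatabaseUserStore: UserStore {
    func user(withUsername username: String) throws -> String? {
        // query the database here
        return nil
    }
}

// Tests can use a hard-coded, in-memory adapter instead
struct InMemoryUserStore: UserStore {
    var users: [String: String] = [:]

    func user(withUsername username: String) throws -> String? {
        return users[username]
    }
}

// The core logic works with whichever adapter it is given
struct RegistrationService {
    let userStore: UserStore

    func isUsernameTaken(_ username: String) throws -> Bool {
        return try userStore.user(withUsername: username) != nil
    }
}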

Test Driven Development

Test Driven Development, or TDD, has become a bit of a buzzword over the last few years as it becomes more and more popular. It has its origins in XP and the idea is that you write your tests first and code later. This has a number of benefits over the old school way of writing your tests after the fact, if you had the time...

The general concept is that you write tests for all of the behaviours you want to see in your code. So if you want a behaviour where a registration POST must contain certain parameters, otherwise you’ll get a Bad Request response, then write a test for this first. Once you have a failing test, you can then implement the code to make the test pass.
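
For example, the very first failing test for that registration behaviour might look something like this (a sketch, assuming a drop property built up in setUp() as in the examples later in this post):

func testRegistrationWithMissingFieldsReturnsBadRequest() throws {
    // deliberately send no body with the request
    let registrationRequest = Request(method: .post, uri: "/users/registration")

    let response = try drop.respond(to: registrationRequest)

    XCTAssertEqual(response.status, .badRequest)
}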

The first benefit this way of doing things gives you is almost perfect test coverage! If you write all of your tests and only write the code necessary to make the tests pass, you will have 100% test coverage from the very first test! The idea of only writing the smallest amount of code to make the test pass (and some people really take this to extremes, but long term this leads to really solid test suites) is a very XP approach and very alien to some people. But if you have extra code that isn’t exercised by the tests that describe all of the behaviours of your app, then it obviously doesn’t need to be there. If you are strict with this you can get to the stage where, if you discover a line of code that doesn’t break a test when changed or removed, you just delete that line of code. This is probably one of the concepts that people new to this way of working find most horrifying, but if you think about it, it makes sense. I am not recommending you do this for all your projects straight away - it takes time and discipline to achieve - but it is a great way to discover gaps in your test suites.

Writing the tests up front helps you think about how you are going to interact with your system and how you need to design it. By concentrating on one behaviour at a time, it can really help you break down your code, rather than getting stuck starting out on an enormous system with multiple moving parts.

If you need to write a test for authentication then you can ignore any view stuff, you can ignore the database and just hardcode a user. It will help you realise that you may need to add sessions to your application, so you can just inject that in. It helps you work out what your ports are and what your API will look like. And with all behaviours tested and covered, when you need to change and adapt you can do it quickly and with confidence. If you are struggling to test or design an endpoint that has a lot of logic in it, you can build upon the behaviours one by one, ensuring that your changes don’t break any existing ones. Though if you are really struggling to even write the first test, don’t be afraid to write really small micro-tests on a small part of your system to get going; you can elevate these at a later date. And if you are really struggling to test your application, have a think about your design. If you can’t test your code easily, it is usually an indication that your system isn’t designed well.

Finally, with Test Driven Development, it is really important to remember the motto - Red-Green-Refactor. You write a failing test first, then you do the smallest thing possible to make the test pass, then you refactor your code to tidy it up and remove any tech debt. This last part is really important and many people often forget it, or think it doesn’t matter and they want to get on to the next test. You can only ignore your test debt for so long until it really starts to slow you down. Always look for opportunities to make both your source code and test code better, more concise and remove duplication. I like to use the rule of threes - copy and pasting something once is usually ok (at least initially) but if you start to do it a third time, or see some similar logic more than twice then you should probably refactor these out into single bits of code.

Testing Vapor

So finally we can put this all together and talk about testing Vapor specifically! It turns out that it is actually really easy to implement everything we have talked about above. Vapor is already very protocol orientated, which makes it easy to switch out all of the pluggable components during testing. And for testing things like an API, I have already alluded to how we can write nice behaviour tests above - we can simply send a request and assert on the response. If you think back to the hexagonal architecture diagram above, your Vapor application becomes your core logic, the database and things like your client are your ports and adapters, and we can design our system with that in mind.

[Image: Vapor testing architecture diagram]

Vapor OAuth - An Example

To see how this looks in a real world example, we’ll look at the very first commit I made when writing the Vapor OAuth library, which was a failing test! Looking at the spec, I knew that when a request was made to get an Authorisation Code, the user should be redirected to a login page to give the requesting application authorisation to their resources. So the test was written for that behaviour:

func testThatAuthorizationCodeRequestRedirectsToLoginPage() throws {
    let config = Config([:])
    try config.addProvider(OAuth.Provider.self)
    let drop = try Droplet(config)

    let requestQuery = "response_type=code&client_id=1234567890&redirect_uri=https://api.brokenhands.io/callback&scope=create+view&state=xcoivjuywkdkhvusuye3kch"
    let codeRequest = Request(method: .get, uri: "/auth?\(requestQuery)")
    let codeResponse = try drop.respond(to: codeRequest)

    XCTAssertEqual(codeResponse.status, .seeOther)
    XCTAssertEqual(codeResponse.headers[.location], "login/")
}

I knew that I wanted a Provider to add to the Droplet, and that I needed to send a request with a query, as per the spec. Once I had a failing test (and it didn't even compile to start with!), I could then write the code to make the test pass:

struct OAuth2Provider {
    func addRoutes(to router: RouteBuilder) {
        router.get("auth", handler: authHandler)
    }

    func authHandler(request: Request) throws -> ResponseRepresentable {
        return Response(redirect: "login/")
    }
}

public final class Provider: Vapor.Provider {

    public init(config: Config) throws {}

    public func boot(_ config: Config) throws { }

    public func boot(_ drop: Droplet) throws {
        let provider = OAuth2Provider()
        provider.addRoutes(to: drop)
    }

    public func beforeRun(_ drop: Droplet) throws { }
}

Note that this was the simplest thing I needed to write to make the test pass. There’s no point over-complicating it and trying to write a load of extra code that may not work with future behaviours; after all, the most efficient programmer is a lazy programmer!

Testing Routes

There are a number of options you can take when testing routes:

  • you can test route handlers individually
  • you can write a test for your controller
  • you can test on your Droplet

All of these methods have their merits, but my preference should be obvious from the above! Writing large, end-to-end tests provides a number of benefits over smaller tests. You usually have a lot of dependencies that need to be set up when building your Droplet, and you want to try and keep your test system as close to your deployment system as possible; otherwise you may find that things that worked in your tests don’t work when you deploy your system.

It was also mentioned at one of the recent Vapor meetups, when we discussed testing, that if you have a large suite of tests that send requests and assert on the response, it is really easy to repurpose the tests into assurance tests when you deploy your application into live - instant feedback!

Testing End-To-End

The way to test like this in Vapor is over three steps:

  1. Set up your environment, such as cookies and your database and create your request
  2. Send your request to your Droplet
  3. Assert on the response

This type of pattern is very common and if you read any type of testing articles, books or blogs, then you will commonly hear this as “Arrange, Act, Assert”.
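
As a skeleton (with a hypothetical route, and assuming the usual drop property), the pattern looks like:

func testSomeBehaviour() throws {
    // Arrange: set up the environment and create the request
    let request = Request(method: .get, uri: "/some/route/")

    // Act: send the request to the Droplet
    let response = try drop.respond(to: request)

    // Assert: check the response is what we expect
    XCTAssertEqual(response.status, .ok)
}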

A Vapor OAuth Example

So to see a real world example, we can have a look at another Vapor OAuth test. This test is making sure that the correct error response is received when an OAuth client doesn’t authenticate correctly. And in this test I’ll also go through all of the refactoring that’s been done on the tests to simplify stuff.

Setting Up The Test

var drop: Droplet!
let fakeClientGetter = FakeClientGetter()
let fakeUserManager = FakeUserManager()
let fakeTokenManager = FakeTokenManager()
let capturingLogger = CapturingLogger()
let testClientID = "ABCDEF"
let testClientSecret = "01234567890"
let testUsername = "testUser"
let testPassword = "testPassword"
let testUserID: Identifier = "ABCD-FJUH-31232"
let accessToken = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
let refreshToken = "ABCDEFGHIJLMNOP1234567890"
let scope1 = "email"
let scope2 = "create"
let scope3 = "edit"

// MARK: - Overrides

override func setUp() {
    super.setUp()
    drop = try! TestDataBuilder.getOAuthDroplet(tokenManager: fakeTokenManager, clientRetriever: fakeClientGetter,
                                                userManager: fakeUserManager, validScopes: [scope1, scope2, scope3],
                                                log: capturingLogger)

    let testClient = OAuthClient(clientID: testClientID, redirectURIs: nil, clientSecret: testClientSecret,
                                 validScopes: [scope1, scope2], firstParty: true, allowedGrantType: .password)
    fakeClientGetter.validClients[testClientID] = testClient
    let testUser = OAuthUser(userID: testUserID, username: testUsername, emailAddress: nil, password: testPassword.makeBytes())
    fakeUserManager.users.append(testUser)
    fakeTokenManager.accessTokenToReturn = accessToken
    fakeTokenManager.refreshTokenToReturn = refreshToken
}

So in this test case setup we are setting up our environment for all the tests in this case. We are setting up all of the fakes that we require, we are ensuring that our fake user can be authenticated, and we are making sure we will get the correct tokens returned. You’ll also notice that I’m using a TestDataBuilder, and I use this pattern throughout as it's a great way to reuse code across tests for common things you need. In this case, it looks like:

static func getOAuthDroplet(codeManager: CodeManager = EmptyCodeManager(), tokenManager: TokenManager = StubTokenManager(),
                            clientRetriever: ClientRetriever = FakeClientGetter(), userManager: UserManager = EmptyUserManager(),
                            authorizeHandler: AuthorizeHandler = EmptyAuthorizationHandler(), validScopes: [String]? = nil,
                            resourceServerRetriever: ResourceServerRetriever = EmptyResourceServerRetriever(),
                            environment: Environment? = nil, log: CapturingLogger? = nil,
                            sessions: FakeSessions? = nil) throws -> Droplet {
    var config = Config([:])

    if let environment = environment {
        config.environment = environment
    }

    if let log = log {
        config.addConfigurable(log: { (_) -> (CapturingLogger) in
            return log
        }, name: "capturing-log")
        try config.set("droplet.log", "capturing-log")
    }

    let provider = OAuth.Provider(codeManager: codeManager, tokenManager: tokenManager, clientRetriever: clientRetriever,
                                  authorizeHandler: authorizeHandler, userManager: userManager, validScopes: validScopes,
                                  resourceServerRetriever: resourceServerRetriever)

    try config.addProvider(provider)

    config.addConfigurable(middleware: SessionsMiddleware.init, name: "sessions")
    try config.set("droplet.middleware", ["error", "sessions"])

    if let sessions = sessions {
        config.addConfigurable(sessions: { (_) -> (FakeSessions) in
            return sessions
        }, name: "fake")
        try config.set("droplet.sessions", "fake")
    }

    return try Droplet(config)
}

This function will create our Droplet for us, with everything we need configured, such as our environment, or our log and with our OAuth2 Provider added. Then once we have all the generic stuff ready, we can set up our test-specific code:

func testCorrectErrorWhenClientDoesNotAuthenticate() throws {
    let clientID = "ABCDEF"
    let clientWithSecret = OAuthClient(clientID: clientID, redirectURIs: ["https://api.brokenhands.io/callback"],
                                       clientSecret: "1234567890ABCD", allowedGrantType: .password)
    fakeClientGetter.validClients[clientID] = clientWithSecret

    ...
}

And in this test, all we are doing is creating a new OAuthClient and adding it to our fake.

Getting Our Response

Getting our response is actually really easy, as most of the stuff we need has already been set up:

func testCorrectErrorWhenClientDoesNotAuthenticate() throws {
    ...

    let response = try getPasswordResponse(clientID: clientID, clientSecret: "incorrectPassword")

    ...
}

Again, for this test we are using another helper method to reduce code duplication:

func getPasswordResponse(grantType: String? = "password", username: String? = "testUser",
                         password: String? = "testPassword", clientID: String? = "ABCDEF",
                         clientSecret: String? = "01234567890", scope: String? = nil) throws -> Response {
    return try TestDataBuilder.getTokenRequestResponse(with: drop, grantType: grantType, clientID: clientID,
                                                       clientSecret: clientSecret, scope: scope,
                                                       username: username, password: password)
}

This simply passes everything through to the TestDataBuilder, which looks like:

static func getTokenRequestResponse(with drop: Droplet, grantType: String?, clientID: String?, clientSecret: String?,
                                    redirectURI: String? = nil, code: String? = nil, scope: String? = nil,
                                    username: String? = nil, password: String? = nil,
                                    refreshToken: String? = nil) throws -> Response {
    let request = Request(method: .post, uri: "/oauth/token/")

    var requestData = Node([:], in: nil)

    if let grantType = grantType {
        try requestData.set("grant_type", grantType)
    }

    if let clientID = clientID {
        try requestData.set("client_id", clientID)
    }

    if let clientSecret = clientSecret {
        try requestData.set("client_secret", clientSecret)
    }

    if let redirectURI = redirectURI {
        try requestData.set("redirect_uri", redirectURI)
    }

    if let code = code {
        try requestData.set("code", code)
    }

    if let scope = scope {
        try requestData.set("scope", scope)
    }

    if let username = username {
        try requestData.set("username", username)
    }

    if let password = password {
        try requestData.set("password", password)
    }

    if let refreshToken = refreshToken {
        try requestData.set("refresh_token", refreshToken)
    }

    request.formURLEncoded = requestData

    let response = try drop.respond(to: request)

    return response
}

This function takes everything we could ever want for getting a response for a token request and builds up the request with everything specified. It then sends the request to the Droplet and returns the response. By doing it like this, it means we can use this function for every token request across all of our tests.

Asserting On The Response

Finally now that we have the response, we can assert on it:

func testCorrectErrorWhenClientDoesNotAuthenticate() throws {

    ...

    guard let responseJSON = response.json else {
        XCTFail()
        return
    }

    XCTAssertEqual(response.status, .unauthorized)
    XCTAssertEqual(responseJSON["error"]?.string, "invalid_client")
    XCTAssertEqual(responseJSON["error_description"], "Request had invalid client credentials")
    XCTAssertEqual(response.headers[.cacheControl], "no-store")
    XCTAssertEqual(response.headers[.pragma], "no-cache")
}

In this, we take the response, make sure we get the correct status code, make sure we have the correct headers (as defined by the OAuth spec), and ensure we have some JSON in the response and it contains the required fields.

Doing all of the above helps to keep our tests nice and small - and therefore easy to understand - whilst still enabling us to do large, complex, end-to-end tests.

Alternative Options

Vapor does include some inbuilt testing methods, which we could use instead. Doing so would look something like:

func testInvalidClientResponse() throws {
    try drop
        .testResponse(to: .post, at: "oauth/token")
        .assertStatus(is: .unauthorized)
        .assertJSON("error", equals: "invalid_client")
}

Whilst this syntactic sugar does contain some niceties, there are some issues. First, it can be more difficult to inject things into the request, such as cookies. It can also be more difficult to assert on parts of the response, such as headers. Whilst these issues can both be solved easily with extensions (see the sketch below), the main reason I don’t like using these is that they require the use of @testable, which changes the visibility of the imported package in testing, allowing you to see internal stuff you normally wouldn’t be able to see.

This isn’t so much an issue for importing a testing function, but it does go against the idea of keeping tests as close as possible to real world interactions, and I like to avoid @testable wherever possible. Anyone who used the very first release of Vapor Security Headers will be able to attest that this can cause problems!
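
As an example of extending the helpers, a chainable header assertion only takes a few lines (a sketch - assertHeader is my own name and not part of Vapor's testing module):

import HTTP
import XCTest

extension Response {
    @discardableResult
    func assertHeader(_ key: HeaderKey, equals expected: String,
                      file: StaticString = #file, line: UInt = #line) -> Response {
        // fails the current test if the header is missing or different
        XCTAssertEqual(headers[key], expected, file: file, line: line)
        return self
    }
}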

Testing Views

So testing APIs is relatively simple, especially if you haven’t headed down the route of abstracting your requests and responses (but that is definitely for another day!). Testing views, however, is a bit harder. Anyone who’s ever tried to maintain any sort of UI testing suite knows that it can become an impossible task. Views change regularly, and keeping up with that without getting to the stage of just ignoring failing tests usually requires a disciplined, dedicated QA team. And with the web, where you can evolve and release a lot quicker, things get even more difficult. So how do we test views?

Introduce Presenters

Well in short, you don’t. Whilst you can with unlimited resources and big teams, the idea is to abstract away as much logic as you can, so the actual view does very little, meaning that there is less to go wrong. Using the Model-View-Presenter design pattern, you can split out your difficult-to-test business logic and any view logic. You move any business logic that is related to views to an abstraction called a Presenter, so that it can easily be tested. This leaves you with a very thin view layer (in most cases your Leaf templates) with very little logic in it, which carries less risk when left untested.

By doing this, your Leaf views can evolve quickly and you can change colours, layouts and styles, even introduce things such as A/B testing, without breaking your tests and having to try and keep them up to date. All of your business logic should stay in your core, your view layer has no real logic in it, and your presenter takes care of how to display the data, such as what order a list of models should be in, or what data to display for that model.

[Image: Presenter architecture diagram]

Testing With Presenters

So to use a presenter with Vapor, you inject your presenter into your controller as a parameter in the initialiser. This presenter should be a protocol which knows about the different views it can 'display' and what data each view needs. In SteamPress for example, this is known as a ViewFactory. When writing your big end-to-end tests, you inject in a fake, such as the CapturingViewFactory in SteamPress, and assert on that. You then test the real presenter in isolation, since the presenter becomes a 'port' onto your system.
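
In sketch form, the wiring looks something like this (the names are hypothetical, and the post and author are simplified to strings to keep it short):

import Vapor
import HTTP

// The presenter port - the controller knows which views exist, not how they render
protocol ViewFactory {
    func blogPostView(post: String, author: String) throws -> View
}

final class BlogController {
    let viewFactory: ViewFactory

    init(viewFactory: ViewFactory) {
        self.viewFactory = viewFactory
    }

    func postHandler(_ request: Request) throws -> ResponseRepresentable {
        // the core decides what to show; the presenter decides how to show it
        return try viewFactory.blogPostView(post: "A Post", author: "Han")
    }
}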

(Note: I know that this isn’t the strict Presenter pattern, since you are still leaking view knowledge into your core app by exposing the different views on your presenter to the controller, but we will try and keep it simple for now. For very large systems you may want to completely isolate any view knowledge from your controller using something like a listener or router pattern.)

So to test our core with an injected presenter, it would look something like:

func testBlogPostRetrievedCorrectlyFromSlugUrl() throws {
    try setupDrop()
    let user = TestDataBuilder.anyUser()
    try user.save()
    let post = TestDataBuilder.anyPost(author: user, slug: "test-slug")
    try post.save()
    let blogPostRequest = Request(method: .get, uri: "/blog/posts/test-slug")
    _ = try drop.respond(to: blogPostRequest)

    XCTAssertEqual(viewFactory.blogPost, post)
    XCTAssertEqual(viewFactory.blogPostAuthor, user)
}

Here we send our request to our Droplet like normal, but instead of asserting on the response (which we should still check in other tests!), we assert on the results of our CapturingViewFactory and make sure things have been set on that correctly. This ensures that our core logic passes the correct information to our presenter port.

Testing The Presenter

Now that we have the port tested, we obviously need to test our adapter, which in this case is our implementation of our ViewFactory, which for SteamPress is our LeafViewFactory. In order to test this, we inject in a fake ViewRenderer. We could use the real ViewRenderer but at that point we are asserting on an actual view and that brings with it all of the issues of UI Testing. Instead we can check that we request the right template and transform the models correctly. So our fake ViewRenderer will look something like:

class CapturingViewRenderer: ViewRenderer {
    var shouldCache = false

    private(set) var capturedContext: Node? = nil
    private(set) var leafPath: String? = nil
    func make(_ path: String, _ context: Node) throws -> View {
        self.capturedContext = context
        self.leafPath = path
        return View(data: "Test".makeBytes())
    }
}

We can then write narrow tests for our LeafViewFactory. For example, for the authors page, our test would look something like:

func testParametersAreSetCorrectlyOnAllAuthorsPage() throws {
    let user1 = TestDataBuilder.anyUser()
    try user1.save()
    let user2 = TestDataBuilder.anyUser(name: "Han", username: "han")
    try user2.save()
    let authors = [user1, user2]
    _ = try viewFactory.allAuthorsView(uri: authorsURI, allAuthors: authors, user: user1)

    XCTAssertEqual(viewRenderer.capturedContext?["authors"]?.array?.count, 2)
    XCTAssertEqual((viewRenderer.capturedContext?["authors"]?.array?.first)?["name"]?.string, "Luke")
    XCTAssertEqual((viewRenderer.capturedContext?["authors"]?.array?[1])?["name"]?.string, "Han")
    XCTAssertEqual(viewRenderer.capturedContext?["uri"]?.string, "https://test.com:443/authors/")
    XCTAssertEqual(viewRenderer.capturedContext?["site_twitter_handle"]?.string, siteTwitterHandle)
    XCTAssertEqual(viewRenderer.capturedContext?["disqus_name"]?.string, disqusName)
    XCTAssertEqual(viewRenderer.capturedContext?["user"]?["name"]?.string, "Luke")
    XCTAssertEqual(viewRenderer.leafPath, "blog/authors")
}

So in this test, we create a couple of users, and then assert that those users are given to the context which would be rendered. We also make sure that we request the right Leaf template, and that other parameters, such as the disqus_name, which are global on the LeafViewFactory, are also given to the template. If you have different Contexts for your models, you can make sure that the presenter creates the model's Node with the right Context - i.e. it is getting the right data for that model.

By testing our adapter like this, it allows us to switch out the rendering engine for a different implementation if we ever wanted to and the only thing that would need to change would be our adapter. The rest of the application core would work exactly the same. This would also be the case if we decided we wanted to return JSON instead of HTML, the only thing that would change would be our adapter. This is really useful if you decide to change from JSON to Protobuf for example.

Testing Authentication

Testing authentication is one issue I see crop up on the Vapor Slack a lot, and it is one of the most complex issues when it comes to testing. We obviously want to be able to test that unauthorised users can’t access things they shouldn’t, and you should test the different cases for any permission model you have in your application. We also want to be able to write tests for the 'happy path' for logged in users, for example a logged in user being able to create a blog post, without too much difficulty. We also want to make sure that our login works! Injecting in a fake token is relatively easy, but using Basic authentication in the header involves the use of a hasher, which could mean each test takes up to a second. This is far from ideal when you have hundreds of test cases - we want quick feedback! And then when you move on to logins on websites, you have to write a helper method to send a login request, grab the returned cookie and inject it into the next request, on top of all of this! So we have a number of approaches we can take (these are predominantly focused on logging in with a username and password on a web form, but the principles will apply to any authentication tests). However, we still need to test that a real login works!

Testing a Real Login

It is important that you test the full complete flow using all real objects at least once in your application. You need to ensure that everything is hooked up correctly, test you are hashing in the right places, test that logouts work and that your routes are protected correctly. If you only ever write your tests using a fake session for example, you may find that when you try to actually login after deploying, things don’t work!

In SteamPress this looks like:

func testLogin() throws {
    let hashedPassword = try BlogUser.passwordHasher.make("password")
    let newUser = TestDataBuilder.anyUser()
    newUser.password = hashedPassword
    try newUser.save()

    let loginJson = JSON(try Node(node: [
            "inputUsername": newUser.username,
            "inputPassword": "password"
        ]))
    let loginRequest = Request(method: .post, uri: "/blog/admin/login/")
    loginRequest.json = loginJson
    let loginResponse = try drop.respond(to: loginRequest)

    XCTAssertEqual(loginResponse.status, .seeOther)
    XCTAssertEqual(loginResponse.headers[HeaderKey.location], "/blog/admin/")
    XCTAssertNotNil(loginResponse.headers[HeaderKey.setCookie])

    let rawCookie = loginResponse.headers[HeaderKey.setCookie]
    let sessionCookie = try Cookie(bytes: rawCookie?.bytes ?? [])

    let adminRequest = Request(method: .get, uri: "/blog/admin/")
    adminRequest.cookies.insert(sessionCookie)
    let adminResponse = try drop.respond(to: adminRequest)

    XCTAssertEqual(adminResponse.status, .ok)

    let logoutRequest = Request(method: .get, uri: "/blog/admin/logout/")
    logoutRequest.cookies.insert(sessionCookie)
    let logoutResponse = try drop.respond(to: logoutRequest)

    XCTAssertEqual(logoutResponse.status, .seeOther)
    XCTAssertEqual(logoutResponse.headers[HeaderKey.location], "/blog/")

    let secondAdminRequest = Request(method: .get, uri: "/blog/admin/")
    secondAdminRequest.cookies.insert(sessionCookie)
    let loggedOutAdminResponse = try drop.respond(to: secondAdminRequest)

    XCTAssertEqual(loggedOutAdminResponse.status, .seeOther)
    XCTAssertEqual(loggedOutAdminResponse.headers[HeaderKey.location], "/blog/admin/login/?loginRequired")
}

So in this test we first send a login POST request to the login page and make sure that we get redirected to the correct page after login. We also make sure we see the Set-Cookie header in the response and then pull out the cookie as we’ll need this next. Once we have logged in, we send a request to a protected route, with the cookie, to make sure that we can access it.

Once we have asserted that login works correctly, we then send a request to the logout route, which should log the user out and then check to make sure that we get redirected to the expected page after logging out. We then send a request to the protected route again, ensuring that we attach the cookie (as this is what the browser does) and make sure we get redirected to the login page and can’t access the protected route.

This test can be thought of as more of an integration test, as we are testing everything hooked up together, and it will take around a second to run since we are using BCrypt. However, it verifies that everything works and that, when we deploy, a real user can log in; you should include a test like this in your test suite.

Faking The Login

Once we have verified that a real login works, we want to test the rest of our behaviours without having to go through all of this effort (and time) each time. We have a number of options:

  • full workflow with a fake hasher
  • inject into storage
  • inject into sessions

It does have to be said that I am still undecided on which approach is best! I’ve used all 3 approaches and whilst I am leaning towards the full workflow, I think it does depend on your use case. I’ll explain all 3 approaches and you can decide what works for you!

Injecting The Hasher

For all options, however, you will need to inject your hasher into your model to ensure that you can test things like registration without it taking ages. You can do this either manually or through config.

If you are writing an application then you can just do this via config by setting the droplet.json:

{
    ...
    "hash": "bcrypt"
    ...
}

In test, you can then add your fake hasher, which will look something like:

struct FakePasswordHasher: PasswordHasherVerifier {
    func verify(password: Bytes, matches hash: Bytes) throws -> Bool {
        return password == hash
    }

    func make(_ message: Bytes) throws -> Bytes {
        return message
    }

    func check(_ message: Bytes, matchesHash: Bytes) throws -> Bool {
        return message == matchesHash
    }
}

Then in your config, you can add it and then set the Droplet’s hash:

var config = try Config()
config.addConfigurable(hash: { (_) -> (FakePasswordHasher) in
    return FakePasswordHasher()
}, name: "fakeHasher")
try config.set("droplet.hash", "fakeHasher")

This will then use your fake hasher in test. Just ensure that nothing else is relying on this hash being a real one, such as something using SHA256.

For libraries and providers you must do this manually since you have no guarantee that an integrating application will configure itself to use BCrypt through config. So wherever you setup your system you need to be able to inject in a hasher using dependency injection.

For both cases, the final thing to be careful of is that passwordVerifier is a static property on the PasswordAuthenticatable model, which can complicate things, so you need to make sure that wherever you set up your hasher, you also set the static property on your user model. If you are using config, you will need to do this when your Droplet is being configured, in the setup() extension for example.
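
Setting the static property is only a line (a sketch, assuming the FakePasswordHasher above and a BlogUser-style model):

// wherever your Droplet is configured, e.g. in a setup() extension
BlogUser.passwordVerifier = FakePasswordHasher()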

Full Workflow

This is in effect replicating the full workflow from the integration test above: logging in, grabbing the cookie and injecting it into each request. You just replace the hasher as above to speed things up. This approach works really well for token-based authentication, and you can get the token/cookie in the test setup so you don’t duplicate code - see the sketch below.
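
A helper for this might look something like the following sketch, which reuses the SteamPress login route and the drop property from the integration test earlier:

private func loginCookie(username: String = "testUser", password: String = "password") throws -> Cookie {
    var loginJSON = JSON()
    try loginJSON.set("inputUsername", username)
    try loginJSON.set("inputPassword", password)

    let loginRequest = Request(method: .post, uri: "/blog/admin/login/")
    loginRequest.json = loginJSON

    // log in once, then reuse the returned session cookie in every test request
    let loginResponse = try drop.respond(to: loginRequest)
    let rawCookie = loginResponse.headers[HeaderKey.setCookie]
    return try Cookie(bytes: rawCookie?.bytes ?? [])
}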

I like this method, especially for applications, as it is the closest to the real world and it is just another port to the system. You just have to ensure that your real system uses a BCrypt hasher (write a test for it!). For a library, you need to ensure that you can inject in a safe hasher and that an integrating application can’t change this. The biggest benefit to doing it this way is that it doesn’t require any intimate knowledge of how your application works or how Vapor works. This will future-proof you somewhat from Vapor changes, and avoids you having to reverse engineer the authentication library (it took me some time to work out how it all worked!).

Inject Into Storage

This method is the simplest: you just inject the user you want to be logged in as into the request’s storage:

private func createLoggedInRequest(method: HTTP.Method, path: String, for user: BlogUser? = nil) throws -> Request {
    let uri = "/blog/admin/\(path)/"
    let request = Request(method: method, uri: uri)

    let authAuthenticatedKey = "auth-authenticated"

    if let user = user {
        request.storage[authAuthenticatedKey] = user
    }
    else {
        let testUser = TestDataBuilder.anyUser()
        try testUser.save()
        request.storage[authAuthenticatedKey] = testUser
    }

    return request
}

This is really simple and makes it easy to test the permissions of different users. However, it requires you to know how authentication works under the hood in Vapor, and it is quite likely that a major update to the Auth Provider will break this.

Inject Into Sessions

Another way to fake a login is to inject an identifier into the Droplet’s session. To do this, you can use a stubbed implementation of the SessionsProtocol, add a session to your stub, then inject in a cookie to your request with the session. To do this, create your FakeSessions like so:

import Sessions

class FakeSessions: SessionsProtocol {

    var sessions: [String: Session] = [:]

    func makeIdentifier() throws -> String {
        return "ID"
    }

    func get(identifier: String) throws -> Session? {
        return sessions[identifier]
    }

    func set(_ session: Session) throws { }

    func destroy(identifier: String) throws { }

    func contains(identifier: String) throws -> Bool {
        return sessions[identifier] != nil
    }
}

You can then add this to your Droplet’s config:

// the instance you will add your fake sessions to
let sessions = FakeSessions()

var config = try Config()
config.addConfigurable(sessions: { (_) -> (FakeSessions) in
    return sessions
}, name: "fake")
try config.set("droplet.sessions", "fake")

You can then create a fake session and add it to your FakeSessions - the identifier is the value the session cookie will carry:

let sessionID = "test-session-id"
let session = Session(identifier: sessionID)
try session.data.set("session-entity-id", myUser.id)
sessions.sessions[sessionID] = session

Once you have your fake session created, you just need to create a request with that session in a cookie:

let sessionCookie = Cookie(name: "vapor-session", value: sessionID)
myRequest.cookies.insert(sessionCookie)

Since myRequest has the session cookie set, your user will be logged in. This is a little more convoluted for logging in a user compared to just faking the storage. It also requires you to know that the default SessionsMiddleware uses a cookie named vapor-session, and that the SessionPersistable protocol uses the session-entity-id key. Both of these could well change in future versions, which would break your tests, so it is something to be aware of. I have used this method for things like testing CSRF, but tend to avoid it for faking logins due to the required knowledge about how the system works and the complexity it can introduce.

Vapor Testing - The Conclusion

This whirlwind tour of my experiences of testing should hopefully provide some things to think about when it comes to testing your own Vapor projects. As I said at the beginning (which was a long time ago!), this post is not meant to be an authoritative guide for how to test Vapor, and should most definitely not be seen as the only 'right' way to do it! If you have your own methods and design patterns and they work for you, then definitely keep using them! Make sure, however, that you are open to evolving the way you do things - something you should do with everything in software - and I would be really interested to hear your thoughts and experiences in the comments below. Maybe it will make you try out new things and solve some of the pains you have when trying to test your Vapor applications whilst moving quickly. Definitely let me know if you think you have better ways of doing things, I’m open to new ideas too!

Happy testing!

Tim