
Testing AWS Chalice Applications

A tutorial to learn how to write unit tests and integration tests for REST APIs in AWS Chalice. Additionally, we will see how to measure test coverage.

September 06, 2021

TL;DR: In this tutorial, we will learn how to write unit tests and integration tests with Pytest in AWS Chalice applications. We will also learn how to measure test coverage.

Introduction

AWS Chalice is a Python microframework for building serverless applications on top of the AWS Lambda and API Gateway services. Its syntax and semantics are deliberately Flask-like, so the developer experience will feel familiar to Flask users. For more details on creating and deploying Chalice applications, you can go through the article on how to create a CRUD REST API with AWS Chalice.

When building applications, we need to test our code to avoid shipping bugs and unstable features. Good tests also save many hours of debugging and make deployments far less stressful.

Common forms of tests in software development include:

  1. Unit test: exercises a single function, component, or piece of logic in isolation, usually by comparing a function's output against a known, expected output. This makes edge cases easy to identify, isolate, and fix.
  2. Integration test: examines multiple parts, or the entire application, end to end, and checks how each function or component works with the others.
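To make the distinction concrete, here is a minimal, framework-free unit test; the slugify helper is a hypothetical function invented purely for illustration:

```python
def slugify(title):
    # hypothetical helper: lowercase a title and replace spaces with hyphens
    return title.lower().replace(' ', '-')

def test_slugify():
    # a unit test compares the function's output against a known, expected output
    assert slugify('Testing AWS Chalice') == 'testing-aws-chalice'

test_slugify()
```

A test runner such as Pytest automates exactly this: it discovers functions named test_* and reports which assertions fail.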

However, Chalice currently provides a test client aimed at unit tests. Integration tests are therefore written in much the same way as unit tests; the main difference is that an integration test exercises several units of the application at once.

Where to Write Tests

In Python-based applications, tests are usually housed in dedicated test files, such as test.py. These test files import the application logic to be tested.

Let's assume we have a simple Chalice application with a folder structure that looks like the following:

├── app.py
├── .chalice
├── requirements.txt
└── test.py

However, as the application grows, a single test.py file becomes bulky and difficult to work with. It then makes sense to create a folder called tests and split the tests into multiple test files inside it.

Now, we'll create a new folder called tests and an empty __init__.py file inside it. The __init__.py file lets Python recognize the tests directory as a package whose tests can be discovered and run:

mkdir tests && cd tests
touch __init__.py

Now, let's create a test file for our application inside the tests folder:

touch test_unit.py

Then, the folder structure will look like this:

├── app.py
├── .chalice
├── requirements.txt
└── tests
    ├── __init__.py
    └── test_unit.py

We can then create as many test files as we need in the tests folder.

How to Do Unit Tests

Since the release of v1.17.0, AWS Chalice has shipped with a test client for writing tests against Chalice applications. We no longer need to set up boilerplate and custom logic for testing; we only need to import the test client into our test files.

Let's assume our Chalice application has an app.py file that looks like the following:

from chalice import Chalice

app = Chalice(app_name='chalice-api-sample')


@app.route('/')
def index():
    return {'hello': 'world'}

Now, we'll modify the test_unit.py file as follows:

import app
from chalice.test import Client

In the above code, we have just imported the app.py and the chalice test Client. Let's add the following test code:

...

def test_index():
    with Client(app.app) as client:
        response = client.http.get('/')
        assert response.status_code == 200
        assert response.json_body == {'hello': 'world'}

In the above test code:

  1. We instantiated the test Client as a context manager scoped to this test function, so each test run sets up a test environment with the necessary resources and environment variables and cleans it up after the test finishes.
  2. We made a GET request over HTTP using the client.http attribute.
  3. We asserted that the response has a 200 status code and the JSON body {'hello': 'world'}.

We will install the Pytest test runner to run our tests:

pip install pytest

Then, we will run with the following command:

pytest tests/test_unit.py

We should get a response that looks like the following:

======================================== test session starts ========================================
platform win32 -- Python 3.7.7, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: C:\aws-chalice-api-sample
collected 1 item

tests\test_unit.py .                                                                           [100%]

========================================= 1 passed in 0.11s =========================================

Creating mocks

To learn how to test with mocks, we will mock a call to an external API: a request from our app to the JSONPlaceholder dummy-data service to fetch a sample post. First, let's install the Python requests module:

pip install requests

Then, let's add a function inside the app.py file that makes a GET request to JSONPlaceholder's /posts/1 endpoint and returns the post, exposed on our app's /post route:

...
import requests

@app.route('/post')
def get_post():
    response = requests.get("https://jsonplaceholder.typicode.com/posts/1")
    if response.ok:
        return response.json()
    else:
        return None
...

In the above code, we wrote a function called get_post that makes an HTTP request to the JSONPlaceholder server and returns the response body as JSON, or None if the request failed.

So, we'll add a mock inside the test_unit.py file:

...
from unittest.mock import patch

...

@patch('app.requests.get')
def test_get_post(mock_get):
    """Mocking with the patch decorator to get a post from an external API"""
    mock_get.return_value.ok = True
    mock_get.return_value.json.return_value = {'id': 1, 'title': 'mock post'}
    response = app.get_post()
    assert response == {'id': 1, 'title': 'mock post'}

In the above code, we imported the patch function from the unittest.mock module and applied it as a decorator, pointing it at the app module's requests.get. The decorated test function then receives the mock as its mock_get parameter. We configured the mock to behave like a successful request, called get_post, and asserted on its result. No real request ever reaches the JSONPlaceholder server, so we can test our code without depending on an external API being reachable.
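The key mechanism here is attribute chaining on the mock: setting return_value.ok and return_value.json.return_value shapes what the patched requests.get hands back. A standalone sketch of the same pattern, using a hypothetical get_json helper and a plain MagicMock in place of the requests module:

```python
from unittest.mock import MagicMock

def get_json(http, url):
    # hypothetical helper mirroring get_post's shape; http stands in for the requests module
    response = http.get(url)
    if response.ok:
        return response.json()
    return None

http = MagicMock()
http.get.return_value.ok = True                                       # response.ok -> True
http.get.return_value.json.return_value = {'id': 1, 'title': 'stub'}  # response.json() -> dict

assert get_json(http, '/posts/1') == {'id': 1, 'title': 'stub'}
http.get.assert_called_once_with('/posts/1')  # the mock also records how it was called
```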

If we run our test again with the following bash command:

pytest tests/test_unit.py

We should get an output similar to the following:

========================================= test session starts =======================================
platform win32 -- Python 3.7.7, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: C:\aws-chalice-api-sample
plugins: cov-2.12.1
collected 2 items

tests\test_unit.py ..                                                                          [100%]

========================================= 2 passed in 0.85s =========================================

How can we test the case where the desired response is not returned? That is exactly why the get_post() function in the app.py file has an else branch: to accommodate situations where JSONPlaceholder returns no post. Let's add a test for that case to the test_unit.py file:

...
@patch('app.requests.get')
def test_no_get_post(mock_get):
    """Mock testing to check when no post is returned"""
    mock_get.return_value.ok = False
    response = app.get_post()
    assert response is None

In the above test, the line mock_get.return_value.ok = False makes the mocked GET request report an unsuccessful response. We then asserted that get_post returns None, so the no-post situation is handled.
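Mocks can simulate more than a bad status flag: side_effect makes the patched call raise, which is useful for exercising network failures. A sketch with a hypothetical fetch helper (note that get_post itself does not catch exceptions, so this is an extension, not the code above):

```python
from unittest.mock import MagicMock

def fetch(http, url):
    # hypothetical helper that, unlike get_post, also guards against raised errors
    try:
        response = http.get(url)
    except ConnectionError:
        return None
    return response.json() if response.ok else None

http = MagicMock()
http.get.side_effect = ConnectionError('network down')  # every call now raises

assert fetch(http, '/posts/1') is None
```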

We can then run the test_unit.py file again:

pytest tests/test_unit.py

We will get the following output:

========================================= test session starts ======================================= 
platform win32 -- Python 3.7.7, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: \aws-chalice-api-sample
plugins: cov-2.12.1
collected 3 items

tests\test_unit.py ...                                                                         [100%]

========================================= 3 passed in 0.17s =========================================

Cool!

The full code of the test_unit.py file now reads:

from unittest.mock import patch
import app
from chalice.test import Client


def test_index():
    with Client(app.app) as client:
        response = client.http.get('/')
        assert response.status_code == 200
        assert response.json_body == {'hello': 'world'}

@patch('app.requests.get')
def test_get_post(mock_get):
    """Mocking with the patch decorator to get a post from an external API"""
    mock_get.return_value.ok = True
    mock_get.return_value.json.return_value = {'id': 1, 'title': 'mock post'}
    response = app.get_post()
    assert response == {'id': 1, 'title': 'mock post'}

@patch('app.requests.get')
def test_no_get_post(mock_get):
    """Mock testing to check when no post is returned"""
    mock_get.return_value.ok = False
    response = app.get_post()
    assert response is None

How to Write Integration Tests

Integration tests check that multiple components work together. They are usually written like unit tests, but they verify several parts of the application at once. An integration test might require establishing a network connection, setting up a database, and so on. These can be configured as fixtures: functions that set up an initial state or environment once and can then be reused across many tests.
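As a small illustration of the setup/yield/teardown shape of a Pytest fixture, here is a sketch that uses an in-memory SQLite database as a stand-in for a real backend (the books_db fixture and its table are hypothetical, not part of our app):

```python
import sqlite3
import pytest

@pytest.fixture
def books_db():
    # setup: create an in-memory database before each test that requests this fixture
    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE books (id TEXT, title TEXT)')
    yield conn      # the test body runs here, with conn injected as an argument
    conn.close()    # teardown: runs after the test finishes, pass or fail

def test_add_book_row(books_db):
    books_db.execute("INSERT INTO books VALUES ('123', 'Chalice Book')")
    count = books_db.execute('SELECT COUNT(*) FROM books').fetchone()[0]
    assert count == 1
```

Pytest matches the test function's books_db parameter to the fixture of the same name, so the setup and teardown never need to be repeated inside tests.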

It is a good habit to separate unit tests from integration tests by putting them in separate folders. Hence, we will create two folders inside the tests directory and call them unit and integration, respectively. Then:

  • we will move the test_unit.py file into the unit folder;
  • we will create a new test_integration.py file in the integration folder;
  • we will create a conftest.py file in the tests folder to house our fixtures.

So, the folder structure will look like the following:

├── app.py
├── .chalice
├── requirements.txt
└── tests
    ├── __init__.py
    ├── conftest.py
    ├── unit/
    │   ├── __init__.py
    │   └── test_unit.py
    │
    └── integration/
        ├── __init__.py
        └── test_integration.py

To start with, we will create a fixture called app that exposes an instance of our Chalice application, plus a test_client fixture. Let's go to the conftest.py file and change it as follows:

import pytest
from chalice import Chalice
import app as chalice_app
from chalice.test import Client


@pytest.fixture
def app() -> Chalice:
    return chalice_app.app


@pytest.fixture
def test_client():
    with Client(chalice_app.app) as client:
        yield client

In the code above, we exposed an instance of our Chalice application as a fixture and created a fixture for our test client. Because the test_client fixture uses yield, the client is set up before each test that requests it and cleaned up afterwards.

Going forward, let's assume that we have set up a REST API for a bookshelf application in the app.py with CRUD endpoints like the following:

...

# POST endpoint to add books to the bookshelf

@app.route('/book', methods=['POST'])
def create_book():
    book_as_json = app.current_request.json_body
    try:
        Item = {
            'id': book_as_json['id'],
            "title": book_as_json['title'],
            "author": book_as_json['author']
        }
        return {"id": book_as_json['id'], "title": book_as_json['title'], "author": book_as_json['author']}
    except Exception as e:
        return {'message': str(e)}


# PUT endpoint to update a book item based on the given ID

@app.route('/book/{id}', methods=['PUT'])
def update_book(id):
    book_as_json = app.current_request.json_body
    try:
        Item = {
            "id": book_as_json['id'],
            "title": book_as_json['title'],
        }
        return {'message': 'ok - UPDATED', 'status': 201}
    except Exception as e:
        return {'message': str(e)}


# DELETE endpoint to delete a particular book based on the given ID

@app.route('/book/{id}', methods=['DELETE'])
def delete_book(id):
    book_as_json = app.current_request.json_body
    try:
        Item = {
            "id": book_as_json['id'],
            "author": book_as_json['author']
        }
        return {'message': 'ok - DELETED', 'status': 201}
    except Exception as e:
        return {'message': str(e)}

The code above consists of:

  • create_book(): handles the POST method to add books to the catalog
  • update_book(id): handles the PUT method to update a specified book entry with a new title
  • delete_book(id): handles the DELETE method to delete a particular book entry from the catalog

In each function, the Item dictionary stands in for the record that a real application would write to a database; it is unused here because the example has no database.

Now, we can write tests for these endpoints, using the Chalice test client through our test_client fixture, inside the test_integration.py file like the following:

import json

#  test for the create_book endpoint
def test_add_book(test_client):
    response = test_client.http.post(
        '/book',
        headers={'Content-Type': 'application/json'},
        body=json.dumps(
            {
                "id": "123",
                "title": "Javascript Know It All",
                "author": "Chukwuma Obinna",
            })
    )
    assert response.json_body == {
        "id": "123",
        "title": "Javascript Know It All",
        "author": "Chukwuma Obinna"
    }


#  test for the update_book endpoint
def test_update_book(test_client):
    response = test_client.http.put(
        '/book/123',
        headers={'Content-Type': 'application/json'},
        body=json.dumps(
            {
                "id": "123",
                "title": "Chalice Book",
            })
    )
    assert response.json_body == {
        "message": "ok - UPDATED",
        "status": 201
    }


#  test for the delete_book endpoint
def test_delete_book(test_client):
    response = test_client.http.delete('/book/123',
         headers={'Content-Type': 'application/json'},
         body=json.dumps(
              {
                    "id": "123",
                    "author": "Chukwuma Obinna",
              })
    )
    assert response.json_body == {
        "message": "ok - DELETED",
        "status": 201
    }

In the above code:

  • We wrote a test for each of our CRUD endpoints.
  • In each test function, we used the test_client fixture we defined earlier in the conftest.py file.
  • We defined the headers and body to be passed in each test request.
  • We asserted known responses to the test requests.

Note: to keep the example simple, the API does not include real database functionality. Otherwise, we would also write a fixture that sets up a mock database for the integration tests.
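If the endpoints did talk to a database, one common approach is to expose a mocked table object as a fixture in conftest.py. A hypothetical sketch (fake_table and its get_item shape are illustrative, loosely modeled on a DynamoDB-style API, and are not part of our app):

```python
from unittest.mock import MagicMock
import pytest

@pytest.fixture
def fake_table():
    # a stand-in for a real database table object
    table = MagicMock()
    table.get_item.return_value = {'Item': {'id': '123', 'title': 'Chalice Book'}}
    return table

def test_get_book(fake_table):
    # the test exercises the lookup logic against the mocked table
    item = fake_table.get_item(Key={'id': '123'})['Item']
    assert item['title'] == 'Chalice Book'
```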

To run the above test, we'd use the following command:

pytest tests/integration/test_integration.py

We should get a response that looks like the following:

========================================= test session starts ======================================
platform win32 -- Python 3.7.7, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
cachedir: .pytest_cache
rootdir: C:\aws-chalice-api-sample
collected 3 items

tests/integration/test_integration.py::test_add_book PASSED                                  [ 33%]
tests/integration/test_integration.py::test_update_book PASSED                               [ 66%]
tests/integration/test_integration.py::test_delete_book PASSED                               [100%]

========================================= 3 passed in 1.01s =======================================

Generally, integration tests take longer to run than unit tests. Therefore, it is advisable not to run them on every change: during development we can run just the unit tests (for example, pytest tests/unit) and save the integration tests for when we need to deploy.

The full code of the app.py file now reads:

from chalice import Chalice
import requests

app = Chalice(app_name='aws-chalice-api-sample')


@app.route('/')
def index():
    return {'hello': 'world'}

# Function to make External API Call
@app.route('/post')
def get_post():
    response = requests.get("https://jsonplaceholder.typicode.com/posts/1")
    if response.ok:
        return response.json()
    else:
        return None

# Function to handle POST requests that create a book
@app.route('/book', methods=['POST'])
def create_book():
    book_as_json = app.current_request.json_body
    try:
        Item = {
            'id': book_as_json['id'],
            "title": book_as_json['title'],
            "author": book_as_json['author']
        }
        return {"id": book_as_json['id'], "title": book_as_json['title'], "author": book_as_json['author']}
    except Exception as e:
        return {'message': str(e)}

# Function to handle PUT requests that update a book
@app.route('/book/{id}', methods=['PUT'])
def update_book(id):
    book_as_json = app.current_request.json_body
    try:
        Item = {
            "id": book_as_json['id'],
            "title": book_as_json['title'],
        }
        return {'message': 'ok - UPDATED', 'status': 201}
    except Exception as e:
        return {'message': str(e)}

# Function to handle DELETE requests that delete a particular book
@app.route('/book/{id}', methods=['DELETE'])
def delete_book(id):
    book_as_json = app.current_request.json_body
    try:
        Item = {
            "id": book_as_json['id'],
            "author": book_as_json['author']
        }
        return {'message': 'ok - DELETED', 'status': 201}
    except Exception as e:
        return {'message': str(e)}

Measuring Code Coverage

Code coverage quantifies how much of our code is exercised by our tests. In this tutorial, we will use the pytest-cov package to measure test coverage. It is built on top of coverage.py, the standard tool for measuring coverage in Python code, and it integrates nicely with pytest.

Let's install pytest-cov:

pip install pytest-cov

Let's measure test coverage by passing the --cov option to pytest, pointing it at our source code in the app.py file:

pytest --cov=app --cov-report term-missing

We used the --cov-report term-missing option to request a coverage report that lists the lines of code not covered by our tests.

We will get a terminal output that looks like this:

======================================= test session starts =======================================
platform win32 -- Python 3.7.7, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: \aws-chalice-api-sample
plugins: cov-2.12.1
collected 6 items

tests\integration\test_integration.py ...                                                     [ 50%] 
tests\unit\test_unit.py ...                                                                   [100%] 

----------- coverage: platform win32, python 3.7.7-final-0 -----------
Name     Stmts   Miss  Cover   Missing
--------------------------------------
app.py      33      6    82%   46-47, 59-60, 72-73
--------------------------------------
TOTAL       33      6    82%

======================================== 6 passed in 2.16s ========================================

Note: the 6 missed statements on lines 46-47, 59-60, and 72-73 of the app.py file are the exception-handling (except) branches; none of our tests sends a malformed request, so those lines never execute.
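Those branches can be covered by feeding the handlers a malformed body so the except path runs. A sketch using a hypothetical pure version of create_book's body (a KeyError on a missing 'author' key stringifies to "'author'"):

```python
def build_book(book_as_json):
    # hypothetical extraction of create_book's try/except body, for illustration
    try:
        return {
            'id': book_as_json['id'],
            'title': book_as_json['title'],
            'author': book_as_json['author'],
        }
    except Exception as e:
        return {'message': str(e)}

# a body with no 'author' key drives execution into the except branch
assert build_book({'id': '123', 'title': 'Chalice Book'}) == {'message': "'author'"}
```

An equivalent test against the real endpoint would POST a JSON body missing a required key and assert on the 'message' field of the response.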

Conclusion

In this article, we have covered how to write unit and integration tests for Chalice applications and APIs, using Pytest to run the tests and pytest-cov to measure code coverage. With this knowledge, we can go ahead and build test-driven Chalice applications. Thanks for following along; we'd be glad to hear your thoughts and suggestions in the comment section.
