Introduction
This is the final part of our three part series:

- Part 1: API as a package: Structure
- Part 2: API as a package: Logging
- Part 3: API as a package: Testing (this post)
This blog post is a follow-on to our API as a package series, expanding
on the topic of testing {plumber} API applications within the package
structure, leveraging {testthat}. As a reminder of the situation so far:
we have an R package that defines functions that will be used as
endpoints in a {plumber} API application. The API routes, defined via
{plumber} decorators in inst, simply map the package functions to URLs.
The three stages of testing
The intended structure of the API as a package setup is to encourage a
particular, consistent composition of code for each exposed endpoint.
That is:

- A plumber decorator that maps a package function to a URL
- A wrapper function that takes a request object, deals with any
  serialization of data and dispatches to a "business logic" function
- The "business logic" function, i.e. the core functionality of a
  particular endpoint
With that, we believe this structure induces three levels of testing to
consider:

- Does the running API application successfully return an appropriate
  response when we make a request to an endpoint?
- Does the wrapper function behave as you expect?
- Is my logic correct?
Example: Sum
Consider a POST endpoint that will sum the numeric contents of objects.
For simplicity, we will consider only requests that send valid JSON.
However, there are a few scenarios that might arise:
- A JSON array

```
# array.json
[1, 2]
# expected sum: [3]
```

- A single JSON object

```
# single_object.json
{ "a": 1, "b": 2 }
# expected sum: [3]
```

- An array of JSON objects

```
# array_objects.json
[
  { "a": 1, "b": 2 },
  { "a": 1, "b": 2 }
]
# expected sum: [3, 3]
```
R code solution
Writing some R code to ensure that we calculate the expected sums for
each of these is fairly simple, keeping in mind that when parsing JSON
objects we would obtain a named list to represent an object and an
unnamed list to represent an array:
```r
# R/api_sum.R

# function to check whether the object we
# receive looks like a json array
is_array = function(parsed_json) {
  is.null(names(parsed_json))
}

# function to sum the numeric components in a list
sum_list = function(l) {
  purrr::keep(l, is.numeric) |>
    purrr::reduce(`+`, .init = 0)
}

# main sum function which handles lists of lists appropriately
my_sum = function(x) {
  if (is_array(x)) {
    if (is.list(x)) {
      purrr::map(x, sum_list)
    } else {
      sum(x)
    }
  } else {
    sum_list(x)
  }
}
```
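Before wiring this into the API it is worth a quick interactive sanity
check. A sketch (relying on {jsonlite}'s default simplification, which
the wrapper below also uses: an array becomes a vector, an object a
named list, and an array of objects a data frame):

```r
my_sum(jsonlite::fromJSON('[1, 2]'))                          # 3
my_sum(jsonlite::fromJSON('{"a": 1, "b": 2}'))                # 3
my_sum(jsonlite::fromJSON('[{"a":1,"b":2}, {"a":1,"b":2}]'))  # c(3, 3)
```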
To integrate this into our API service we can then write a wrapper
function
```r
# R/api_sum.R
#' @export
api_sum = function(req) {
  # parse the JSON body of the request
  parsed_json = jsonlite::fromJSON(req$postBody)
  # return the sum
  return(my_sum(parsed_json))
}
```
and add a plumber annotation in inst/extdata/api/routes/example.R:

```r
#* @post /sum
#* @serializer unboxedJSON
cookieCutter::api_sum
```
which exposes our sum function on the URL <root_of_api>/example/sum.
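As a quick illustration (a sketch, not part of the package: it assumes
an instance of the API is already being served locally, with 8000 as an
assumed port), the endpoint could be exercised from R with {httr}:

```r
# hypothetical local instance; port 8000 is an assumed value
response = httr::POST(
  "http://localhost:8000/example/sum",
  body = '{"a": 1, "b": 2}',
  httr::content_type_json()
)
httr::content(response) # expect 3
```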
Testing: Setup
With the above example we are now ready to start writing some tests.
There are a few elements which are likely to be common when wanting to
test endpoints of an API application:
- Start an instance of your API
- Send a request to your local running API
- Create a mock object that looks like a real rook request object
The {testthat} package for R has utilities that make defining and using
common structures like this easy. A tests/testthat/setup.R script will
run before any tests are executed. Here we can put together the setup,
and subsequent tear down, of a running API instance. For the
cookieCutter example package being built as part of this series, this
might look like:
```r
# tests/testthat/setup.R

## run before any tests

# pick a random available port to serve your app locally
# note that port will also be available in the environment in which your
# tests run.
port = httpuv::randomPort()

# start a background R process that launches an instance of the API
# serving on that random port
running_api = callr::r_bg(
  function(port) {
    dir = cookieCutter::get_internal_routes()
    routes = cookieCutter::create_routes(dir)
    api = cookieCutter::generate_api(routes)
    api$run(port = port, host = "0.0.0.0")
  },
  list(port = port)
)

Sys.sleep(2)

## run after all tests
withr::defer(running_api$kill(), testthat::teardown_env())
```
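One caveat: the fixed Sys.sleep(2) assumes the API comes up within two
seconds. A sketch of a more patient alternative (wait_for_api() is a
hypothetical helper, not part of the package) polls until the port
answers, treating any HTTP response, even a 404, as proof the server is
up:

```r
# hypothetical helper: poll until the server accepts connections
wait_for_api = function(port, tries = 20) {
  for (i in seq_len(tries)) {
    up = tryCatch(
      # a connection error means the server is not up yet;
      # any HTTP response (even a 404) means it is
      { httr::GET(glue::glue("http://0.0.0.0:{port}/")); TRUE },
      error = function(e) FALSE
    )
    if (up) return(invisible(TRUE))
    Sys.sleep(0.25)
  }
  warning("API did not start in time")
  invisible(FALSE)
}
```

With that helper defined, the Sys.sleep(2) above could be replaced by
wait_for_api(port).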
With this, as our test suite runs, we can send requests to our API at
the following URL pattern: http://0.0.0.0:{port}{endpoint}.
Similarly, {testthat} allows for defining helper functions for the
purposes of your test suite. Any file whose name begins with "helper" in
your testthat directory will be executed before your tests run. We might
use this to define some helper functions which allow us to send requests
easily and create mock objects, as well as some other things.
```r
# tests/testthat/helper-example.R

# convenience function for creating correct endpoint url
endpoint = function(str) {
  glue::glue("http://0.0.0.0:{port}{str}")
}

# convenience function for sending post requests to our test api
api_post = function(url, ...) {
  httr::POST(endpoint(url), ...)
}

# function to create minimal mock request objects
# doesn't fully replicate a rook request, but gives the parts
# we need
as_fake_post = function(obj) {
  req = new.env()
  req$HTTP_CONTENT_TYPE = "application/json"
  req$postBody = obj
  req
}
```
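To illustrate the mock (the body here is assumed; any of the example
JSON payloads would do), the wrapper can now be exercised without a
running server:

```r
# call the route function directly with a fake request
fake_req = as_fake_post('{"a": 1, "b": 2}')
api_sum(fake_req)
#> [1] 3
```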
You might also want to skip the API request tests in cases where the API
service did not launch correctly:
```r
# tests/testthat/helper-example.R

# skip other tests if api is not alive
skip_dead_api = function() {
  # running_api is created in setup.R
  testthat::skip_if_not(running_api$is_alive(), "API not started")
}
```
One of the things that we like to do, inspired by the pytest-datadir
plugin for the python testing framework pytest, is to have numerous test
cases stored as data files. This makes it easy to run your tests against
many examples, as well as to add new ones that should be tested in the
future. With that, our final helper function might be:
```r
# tests/testthat/helper-example.R
test_case_json = function(path) {
  # test_path() will give appropriate path to running test environment
  file = testthat::test_path(path)
  # read a file from disk
  obj = readLines(file)
  # turn json contents into a single string
  paste(obj, collapse = "")
}
```
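For example, assuming the example_data files introduced in the next
section are in place:

```r
# flatten a stored test case into a single JSON string
test_case_json("example_data/array.json")
#> [1] "[1, 2]"
```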
Testing: Tests
With all of the setup work done (at least we only need to do it once) we
can finally write tests to address the three levels identified earlier
in the article. We identified three scenarios for JSON we might receive,
so we can go ahead and put those in a data folder within our test
directory.
```
└── tests
    ├── testthat
    │   ├── example_data
    │   │   ├── array.json
    │   │   ├── array_objects.json
    │   │   └── single_object.json
```
Our test script for this endpoint will then iterate through the files in
this directory and:

- Send each example as the body of a POST request and ensure we get a
  success response (200)
- Send a mock request object to the wrapper function, ensuring that
  data is being parsed correctly and the return object is of the right
  shape
- Take the data from the example file, run it through the my_sum()
  function and ensure that the result is correct
```r
# tests/testthat/test-example.R

# iterate through multiple test cases
purrr::pwalk(tibble::tibble(
  # get all files in the test data directory
  file = list.files(test_path("example_data"), full.names = TRUE),
  # expected length (shape) of result
  length = c(2, 1, 1),
  # expected sums
  sums = list(c(3, 3), 3, 3)
), function(file, length, sums) {
  # use our helper to create the POST body
  test_case = test_case_json(file)

  # test against running API
  test_that("successful api response", {
    # skip if not running
    skip_dead_api()
    headers = httr::add_headers(
      Accept = "application/json",
      "Content-Type" = "application/json"
    )
    # use our helper to send the data to the correct endpoint
    response = api_post("/example/sum", body = test_case, headers = headers)
    # check our expectation
    expect_equal(response$status_code, 200)
  })

  # test that the wrapper is doing its job
  test_that("successful api func", {
    # use helper to create fake request object
    input = as_fake_post(test_case)
    # execute the function which is exposed as a route directly
    res = api_sum(input)
    # check the output has the expected shape
    expect_length(res, length)
  })

  # test the business logic of the function
  test_that("successful sum", {
    # use the data parsed from the test case
    input = jsonlite::fromJSON(test_case)
    # execute the logic function directly
    res = my_sum(input)
    # check the result equals our expectation
    expect_equal(res, sums)
  })
})
```
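These tests then run through the standard tooling; for example:

```r
# run the whole suite from the package root
devtools::test()

# or just this file while iterating
testthat::test_file("tests/testthat/test-example.R")
```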
Concluding remarks
With that, we have a setup for our test suite that takes care of a
number of common elements (which can of course be expanded for other
HTTP methods, data types, etc.) and a consistent approach to testing
many cases at the API service level, the serialization/parsing level and
the logic level. As with the other posts in this series, a dedicated
example package is available in our blogs repo.