
# Integration Testing

As you may be aware, Tailcall lets you write a configuration to generate a GraphQL backend. You can also link multiple configurations to compose them together, and extend Tailcall's behavior by integrating custom JavaScript scripts. Managing this involves handling multiple files in various formats, which complicates writing integration tests.

To keep this manageable, we have opted to use markdown files, which allow us to consolidate various types of configurations and scripts into a single document.

Here is an example of what a test looks like:

````md
---
identity: true
---

<!-- Test Configuration -->

```graphql @config
schema @upstream(baseURL: "http://jsonplaceholder.typicode.com") {
  query: Query
}

type Query {
  post: Post @http(path: "/post")
}

type Post {
  id: Int
  title: String
  body: String
}
```
````
:::tip
Try to play around with the `cargo test` command by modifying the tests written in the `tests/execution` folder.
:::

## How does it work?

Execution Spec implements a custom markdown-based testing framework for Tailcall. The framework is designed to help write integration tests for Tailcall configs.

## Run all tests

The integration tests are executed like regular Rust integration tests, so you can use the usual test options and filters.

```bash
cargo test
```

To run the integration tests while skipping all other tests, run the following command:

```bash
cargo test --test execution_spec
```

After running it, you will see the output of all executed integration tests.

## Run a single test

Similar to filtering unit tests, you can execute a single markdown configuration by passing its name to the test command:

```bash
cargo test --test execution_spec grpc
```

```text
   Compiling tailcall-fixtures v0.1.0 (/Users/tushar/Documents/Projects/tailcall/tailcall-fixtures)
   Compiling tailcall v0.1.0 (/Users/tushar/Documents/Projects/tailcall)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 15.96s
     Running tests/execution_spec.rs (target/debug/deps/execution_spec-6779d7c5c29b9b0b)

running 18 tests
test run_execution_spec::test-grpc-invalid-method-format.md ... ok
test run_execution_spec::test-grpc-invalid-proto-id.md ... ok
test run_execution_spec::test-grpc-group-by.md ... ok
test run_execution_spec::test-grpc-missing-fields.md ... ok
test run_execution_spec::test-grpc-nested-optional.md ... ok
test run_execution_spec::test-grpc-nested-data.md ... ok
test run_execution_spec::test-grpc-proto-path.md ... ok
test run_execution_spec::grpc-proto-with-same-package.md ... ok
test run_execution_spec::grpc-reflection.md ... ok
test run_execution_spec::test-grpc-optional.md ... ok
test run_execution_spec::test-grpc-service-method.md ... ok
test run_execution_spec::test-grpc-service.md ... ok
test run_execution_spec::grpc-error.md ... ok
test run_execution_spec::grpc-simple.md ... ok
test run_execution_spec::grpc-batch.md ... ok
test run_execution_spec::grpc-url-from-upstream.md ... ok
test run_execution_spec::grpc-override-url-from-upstream.md ... ok
test run_execution_spec::test-grpc.md ... ok
```

With the above command, all tests with `grpc` in their name will be executed.

## Skipping a test

Skipping tests is also possible by passing the `--skip` parameter:

```bash
cargo test --test execution_spec -- --skip grpc
```

Sometimes you might want to skip a test permanently, for everyone and the CI. You can achieve this by setting the `skip` field in your markdown front matter:

```md
---
skip: true
---

<!-- Rest of the configurations -->
```

## Folder Structure

All `execution_spec` tests are located in `tests/execution`. The results generated by these tests are stored as snapshots in `tests/core/snapshots`. An `execution_spec` test is always a markdown file with a `.md` extension.
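
For orientation, the layout looks roughly like this (the individual file names are illustrative):

```text
tests/
├── execution/              # markdown test specs (*.md)
│   ├── grpc-simple.md
│   └── test-grpc.md
└── core/
    └── snapshots/          # Insta snapshots generated from the specs
```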

## File Structure

Each `.md` file runs in its own scope, so no two tests can interfere with each other. The file structure is as follows:

### Heading

The heading of the file provides metadata about the test. It is a YAML front matter block that contains the following fields:

- `identity` - Instructs the runner to check that the configuration, when parsed and then printed back, is the same as the original configuration. This is useful whenever a new feature is added to the configuration and the parser and printer need to be updated.
- `error` - Instructs the runner to expect a validation error while parsing the configuration. This is useful to test the validation logic written while converting a config to a blueprint.
- `skip` - A special annotation that ensures the test is skipped.

```yaml
---
identity: true
error: true
skip: true
---
```

The rest of the file is the test's body consisting of code blocks and descriptions.

### Config

Code blocks can be enhanced with additional meta information so the test parser can make sense of the code. For example, a Tailcall configuration can be written in a code block with the `graphql` language and the `@config` meta information attached to it.

```graphql @config
schema {
  query: Query
}

type Query {
  users: [User]
  posts: [Post]
}
```

For each config, a few tests are automatically executed:

1. We check if the written config is valid. If it is not, the test will fail unless `error: true` is set in the front matter (see the sketch after this list).
2. We check if the config, when parsed and then printed back, is the same as the original config. This is useful whenever a new feature is added to the configuration and the parser and printer need to be updated.
3. We check if the config, when merged with an empty configuration, is the same as the original config. This is useful whenever a new feature is added to the configuration and the merger needs to be updated.
4. We autogenerate the schema of the GraphQL server and snapshot it for later. This is useful to see what the final GraphQL schema would look like.
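
For instance, a spec can intentionally ship a broken config to exercise the validation path. A minimal sketch (the config below is deliberately invalid; the exact validation error ends up in the snapshot):

````md
---
error: true
---

```graphql @config
schema {
  query: Query
}

type Query {
  # `Post` is never defined, so validating this config should fail
  post: Post @http(path: "/post")
}
```
````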

### Test

An `@test` block specifies, in YAML format, the HTTP requests that the runner should perform. It contains only requests; the response for each request is automatically generated and compared with the snapshot.

:::note
There may be at most one `@test` block in a test.
:::

Example:

```yml @test
- method: POST
  url: http://localhost:8080/graphql
  body:
    query: query { user { name } }
```

### Mock

Mock provides a way to match requests and send back a predefined response. It is used to mock HTTP & gRPC requests in the test.

```yml @mock
- request:
    # The method to match on (default: Any)
    method: POST

    # The URL to match on (default: Any)
    url: http://jsonplaceholder.typicode.com/users/1

  # Predefined response
  response:
    status: 200
    body:
      id: 1
      name: foo

  # Number of times we expect this request to be hit (default: 1)
  expectedHits: 1

  # Whether we should assert the number of hits (default: true)
  assertHits: true
```
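
Putting the pieces together: a complete spec typically pairs a `@config` with a `@mock` for each upstream request the config makes, plus a `@test` block that queries the server. A minimal sketch combining the examples above:

```graphql @config
schema @upstream(baseURL: "http://jsonplaceholder.typicode.com") {
  query: Query
}

type Query {
  user: User @http(path: "/users/1")
}

type User {
  id: Int
  name: String
}
```

```yml @mock
- request:
    url: http://jsonplaceholder.typicode.com/users/1
  response:
    status: 200
    body:
      id: 1
      name: foo
```

```yml @test
- method: POST
  url: http://localhost:8080/graphql
  body:
    query: query { user { name } }
```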

### Env

An `@env` block specifies environment variables in YAML that the runner should use in the app context. There may be at most one `@env` block in a test.

Example:

```yml @env
TEST_ID: 1
```
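
Variables declared this way can then be referenced from the config. A minimal sketch, assuming Tailcall's `{{.env.*}}` mustache template syntax for reading environment variables:

```graphql @config
schema @upstream(baseURL: "http://jsonplaceholder.typicode.com") {
  query: Query
}

type Query {
  # resolves to /posts/1 when the @env block sets TEST_ID: 1
  post: Post @http(path: "/posts/{{.env.TEST_ID}}")
}

type Post {
  id: Int
  title: String
}
```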

### File

A `@file` block creates a file in the spec's virtual file system. The `@config` block has exclusive access to files created this way: the true filesystem is not available to it.

Every `@file` block declares its filename in the header. The language of the code block is optional and does not matter.

Example:

```js @file:worker.js
function onRequest({request}) {
  request.headers["x-test"] = "test"
  return {request}
}
```

```graphql @config
schema @link(file: "worker.js") {
  query: Query
}
```

In the above example, we link the `worker.js` file to the schema and write an integration test in which every request is modified by the `onRequest` function.

## Snapshots

Tailcall uses the Insta snapshot engine. Snapshots are automatically generated with a `.new` suffix if there is no pre-existing snapshot, or if the compared data didn't match the existing snapshot.
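
On disk this looks roughly like the following (the exact snapshot file names are illustrative):

```text
tests/core/snapshots/grpc-simple.md_response.snap      # accepted snapshot
tests/core/snapshots/grpc-simple.md_response.snap.new  # pending review
```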

Instead of writing result cases in tests and updating them when behaviour changes, a snapshot-based testing workflow relies on auto-generation. Whenever a `.new` snapshot is generated, it means one of the following:

- Your code made an unexpected breaking change, and you need to fix it.
- Your code made an expected breaking change, and you need to accept the new snapshot.

You need to determine which one is the case, and take action accordingly.

Usage of `cargo-insta` is recommended.
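
If you don't have it installed yet:

```bash
cargo install cargo-insta
```

Then run: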

```bash
cargo insta test --review
```

This will regenerate all snapshots without interrupting the test every time there's a diff, and it will also open the snapshot review interface, so that you can accept or reject `.new` snapshots.

To clean unused snapshots, run:

```bash
cargo insta test --delete-unreferenced-snapshots
```