APIs are crucial for enabling enterprises to connect diverse business systems and applications. An API is a set of routines, protocols, and tools for building software applications; it provides everything software components need to interact properly with one another. APIs integrate and mediate between varied business systems and applications so that they can share resources. This mediation is what makes testing APIs difficult. Here we explore the process and what goes into testing an API.
What is API Testing?
Like any other software application, APIs are tested in order to identify bugs, security vulnerabilities, inconsistencies, and failures within the API.
API testing is the process of validating the HTTP response with respect to the HTTP request that the client sent to the server. Testing helps determine whether the services work according to the request sent, and whether the server responds to the request at all. APIs encompass the functions that make up the business logic layer, acting as middleware between the GUI and the database.
How API Testing is Performed
Beyond the checks built into the usual SDLC process, API testing should cover the following testing methods:
- Discovery Testing: The tester runs the calls listed in the API documentation to check whether the listed resources can be enumerated, created, updated, and deleted.
- Usability Testing: Verifies whether the API is functional and user-friendly, and that it integrates well with other platforms.
- Security Testing: Covers what type of authentication is required and whether sensitive data is encrypted in transit (over HTTPS rather than plain HTTP).
- Automated Testing: API testing should culminate in a set of scripts or a tool that can be used to execute the API at regular intervals.
- Documentation: Documentation should be part of the final deliverables. Testers make sure the documentation provides enough information to interact with the API.
For example, a sample API to test:
- Resource URL: https://reqres.in/
- Parameter: <name of api>/<Name of the Form>
First, the client sends a request to the server in order to fetch a response.

An HTTP Request has 3 parts:
- Request Line: consists of the method (operation) to be used, the Request URI, and the HTTP protocol version, in that order. Ex: GET /expertise
- Headers (0 or more): the section between the Request Line and the Request Body, known as the Request Header section.
- An optional Request Body: the part of the HTTP Request in which additional content/data can be sent from the client to the server. Example: JSON/XML payloads or files sent as the body of the request.
Let's see the terms for HTTP Request and HTTP Response in detail:
HTTP Request
Sample HTTP Request is given below:
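A minimal HTTP request might look like this (the endpoint, host, and body values are illustrative assumptions, not a real contract):

```
POST /users/42/configurations HTTP/1.1
Host: reqres.in
Content-Type: application/json
Accept: application/json

{"name": "night-mode", "enabled": true}
```

The first line is the Request Line (method, Request URI, protocol version), the following lines are headers, and the JSON after the blank line is the optional request body.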
HTTP Response
Sample HTTP Response is given below:
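A matching minimal HTTP response might look like this (status, headers, and body are again illustrative):

```
HTTP/1.1 201 Created
Content-Type: application/json
Cache-Control: no-cache

{"id": "7d6f3a90-1b2c-4de5-8f90-12ab34cd56ef", "name": "night-mode", "enabled": true}
```

The first line is the Status Line (protocol version, status code, reason phrase), followed by the response headers and the response body.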
A response is the data the server sends back to the client for the request it received.

An HTTP Response has 3 parts:
Part 1: Status Line, which itself has 3 parts:
- the HTTP protocol version, such as HTTP/1.1
- a status code, such as 200 or 201
- a reason phrase, such as OK or Created
Part 2: Headers (0 or more): the section between the Status Line and the Response Body, known as the Response Header section.
Part 3: Response Body: contains the resource data requested by the client.
Example: in a request for New York City weather, the city (City: New York) is considered the main resource.
Conclusion
End-to-end testing can be done by testers either manually, using tools like Postman and SoapUI, or automated, using libraries such as HttpClient or REST Assured.
The API layer of any application is one of the most crucial software components. It is the channel which connects client to server (or one microservice to another), drives business processes, and provides the services which give value to users.
A customer-facing public API that is exposed to end-users becomes a product in itself. If it breaks, it puts at risk, not just a single application, but an entire chain of business processes built around it.
Mike Cohn’s famous Test Pyramid places API tests at the service level (integration), which suggests that around 20% or more of all of our tests should focus on APIs (the exact percentage is less important and varies based on our needs).
Once we have a solid foundation of unit tests which cover individual functions, API tests provide higher reliability covering an interface closer to the user, yet without the brittleness of UI tests.
API tests are fast, give high ROI, and simplify the validation of business logic, security, compliance, and other aspects of the application. In cases where the API is a public one, providing end-users programmatic access to our application or services, API tests effectively become end-to-end tests and should cover a complete user story.
So the importance of API testing is obvious. Several methods and resources help with HOW to test APIs — manual testing, automated testing, test environments, tools, libraries, and frameworks. However, regardless of what you will use — Postman, supertest, pytest, JMeter, mocha, Jasmine, RestAssured, or any other tools of the trade — before coming up with any test method you need to determine what to test…
API test strategy
The test strategy is the high-level description of the test requirements from which a detailed test plan can later be derived, specifying individual test scenarios and test cases. Our first concern is functional testing — ensuring that the API functions correctly.
The main objectives of functional API testing are to ensure that the implementation works correctly as expected (no bugs), that it works as specified according to the requirements specification (which later becomes our API documentation), and to prevent regressions between code merges and releases.
API as a contract — first, check the spec!
An API is essentially a contract between the client and the server or between two applications. Before any implementation test can begin, it is important to make sure that the contract is correct. That can be done first by inspecting the spec (or the service contract itself, for example a Swagger interface or OpenAPI reference) and making sure that endpoints are correctly named, that resources and their types correctly reflect the object model, that there is no missing functionality or duplicate functionality, and that relationships between resources are reflected in the API correctly.
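Some of these contract checks can be automated against the spec itself. Below is a tiny hand-rolled lint over an assumed OpenAPI-style path list; in practice you would load the real spec document and iterate over its paths:

```python
# Assumed OpenAPI-style fragment; in practice you would load the real
# spec (e.g. with yaml.safe_load) and read its "paths" section.
spec_paths = {
    "/users": ["get"],
    "/users/{id}": ["get"],
    "/users/{id}/configurations": ["get", "post"],
    "/users/{id}/configurations/{cid}": ["patch", "delete"],
}

def lint_paths(paths):
    """Flag naming problems and item endpoints with no parent collection."""
    problems = []
    for path in paths:
        if path != path.lower():
            problems.append(f"{path}: endpoint names should be lowercase")
        if path != "/" and path.rstrip("/") != path:
            problems.append(f"{path}: trailing slash")
    # Every item endpoint (.../{param}) should have a collection endpoint.
    for path in paths:
        if path.endswith("}"):
            parent = path.rsplit("/", 1)[0]
            if parent not in paths:
                problems.append(f"{path}: no matching collection endpoint {parent}")
    return problems

print(lint_paths(spec_paths))  # → []
```

This only scratches the surface (it says nothing about types or relationships), but it shows how contract review can start before any request is ever sent.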
The guidelines above are applicable to any API, but for simplicity, in this post we assume the most widely used Web API architecture: REST over HTTP. If your API is designed as a truly RESTful API, it is important to check that the REST contract is a valid one, including all HTTP REST semantics, conventions, and principles.
If this is a customer-facing public API, this might be your last chance to ensure that all contract requirements are met, because once the API is published and in use, any changes you make might break customers’ code.
(Sure, you can publish a new version of the API someday, e.g. /api/v2/, but even then backward compatibility might still be a requirement.)
So, what aspects of the API should we test?
Now that we have validated the API contract, we are ready to think of what to test. Whether you’re thinking of test automation or manual testing, our functional test cases have the same test actions, are part of wider test scenario categories, and belong to three kinds of test flows.
API test actions
Each test is comprised of test actions. These are the individual actions a test needs to take per API test flow. For each API request, the test would need to take the following actions:
1. Verify correct HTTP status code. For example, creating a resource should return 201 CREATED and unpermitted requests should return 403 FORBIDDEN, etc.
2. Verify response payload. Check valid JSON body and correct field names, types, and values — including in error responses.
3. Verify response headers. HTTP server headers have implications on both security and performance.
4. Verify correct application state. This is optional and applies mainly to manual testing, or when a UI or another interface can be easily inspected.
5. Verify basic performance sanity. If an operation was completed successfully but took an unreasonable amount of time, the test fails.
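Checks 1, 2, 3, and 5 can be wrapped in one reusable helper; check 4 (application state) stays manual. The `ApiResponse` stub below is an assumption for illustration, mimicking the data a real HTTP response object (such as `requests.Response`) carries:

```python
from dataclasses import dataclass

# Minimal stand-in for an HTTP response; a real client library exposes
# the same kind of data (status code, headers, parsed body, elapsed time).
@dataclass
class ApiResponse:
    status_code: int
    headers: dict
    body: dict
    elapsed_seconds: float

def validate_response(resp, expected_status, required_fields, max_seconds=1.0):
    """Run the basic per-request test actions and collect any failures."""
    failures = []
    # 1. Verify correct HTTP status code.
    if resp.status_code != expected_status:
        failures.append(f"expected status {expected_status}, got {resp.status_code}")
    # 2. Verify response payload: required fields present and non-null.
    for name in required_fields:
        if resp.body.get(name) is None:
            failures.append(f"field '{name}' missing or null")
    # 3. Verify response headers: correct content type, no information leaks.
    if "application/json" not in resp.headers.get("Content-Type", ""):
        failures.append("Content-Type is not application/json")
    if "X-Powered-By" in resp.headers:
        failures.append("X-Powered-By header leaks implementation details")
    # 5. Basic performance sanity: fail if the call took unreasonably long.
    if resp.elapsed_seconds > max_seconds:
        failures.append(f"took {resp.elapsed_seconds}s (limit {max_seconds}s)")
    return failures

ok = ApiResponse(201, {"Content-Type": "application/json"},
                 {"id": "123", "name": "demo"}, 0.2)
print(validate_response(ok, 201, ["id", "name"]))  # → []
```

Returning a list of failures (rather than raising on the first one) lets a single test report every broken aspect of a response at once.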
Test scenario categories
Our test cases fall into the following general test scenario groups:
- Basic positive tests (happy paths)
- Extended positive testing with optional parameters
- Negative testing with valid input
- Negative testing with invalid input
- Destructive testing
- Security, authorization, and permission tests (which are out of the scope of this post)
Happy path tests check basic functionality and the acceptance criteria of the API. We later extend positive tests to include optional parameters and extra functionality. The next group of tests is negative testing where we expect the application to gracefully handle problem scenarios with both valid user input (for example, trying to add an existing username) and invalid user input (trying to add a username which is null). Destructive testing is a deeper form of negative testing where we intentionally attempt to break the API to check its robustness (for example, sending a huge payload body in an attempt to overflow the system).
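These scenario groups translate naturally into data-driven test cases. A minimal sketch, in which a fake `create_username` handler (with invented status-code behavior) stands in for a live POST endpoint so the table is runnable as-is:

```python
# Fake server-side handler standing in for a user-creation endpoint,
# so the scenario groups below run without a live API. The behavior
# and status codes are assumptions for illustration.
existing_usernames = {"alice"}

def create_username(name):
    if not isinstance(name, str):
        return 400                      # invalid input (e.g., null username)
    if len(name) == 0 or len(name) > 200:
        return 400                      # destructive/overflow input
    if name in existing_usernames:
        return 409                      # valid input, but illegal operation
    existing_usernames.add(name)
    return 201                          # happy path: resource created

# One row per scenario group: (description, input, expected status).
cases = [
    ("happy path", "bob", 201),
    ("negative, valid input", "alice", 409),        # username already exists
    ("negative, invalid input", None, 400),         # null username
    ("destructive, overflow", "x" * 10_000, 400),   # oversized value
]

for description, name, expected in cases:
    assert create_username(name) == expected, description
print("all scenario groups passed")
```

With a real API, each row would become one parametrized test case issuing the actual request and asserting on the returned status code.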
Test flows
Let’s distinguish between three kinds of test flows which comprise our test plan:
1. Testing requests in isolation – Executing a single API request and checking the response accordingly. Such basic tests are the minimal building blocks we should start with, and there’s no reason to continue testing if these tests fail.
2. Multi-step workflow with several requests – Testing a series of requests which are common user actions, since some requests can rely on other ones. For example, we execute a POST request that creates a resource and returns an auto-generated identifier in its response. We then use this identifier to check if this resource is present in the list of elements received by a GET request. Then we use a PATCH endpoint to update new data, and we again invoke a GET request to validate the new data. Finally, we DELETE that resource and use GET again to verify it no longer exists.
3. Combined API and web UI tests – This is mostly relevant to manual testing, where we want to ensure data integrity and consistency between the UI and API.
We execute requests via the API and verify the actions through the web app UI and vice versa. The purpose of these integrity test flows is to ensure that although the resources are affected via different mechanisms the system still maintains expected integrity and consistent flow.
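The multi-step workflow in #2 can be sketched as follows. To keep the sketch runnable as-is, an in-memory stub stands in for the live configurations API; against a real server these would be `requests.post`, `requests.get`, and so on:

```python
import uuid

# In-memory stub standing in for the configurations API, so the
# multi-step flow in #2 is runnable without a live server.
store = {}

def post_config(data):
    cid = str(uuid.uuid4())            # server auto-generates the identifier
    store[cid] = dict(data)
    return {"id": cid, **store[cid]}

def get_configs():
    return [{"id": k, **v} for k, v in store.items()]

def patch_config(cid, changes):
    store[cid].update(changes)
    return {"id": cid, **store[cid]}

def delete_config(cid):
    del store[cid]

# 1. POST creates a resource and returns its auto-generated id.
created = post_config({"name": "night-mode"})
cid = created["id"]
# 2. GET: the new resource appears in the list of elements.
assert any(c["id"] == cid for c in get_configs())
# 3. PATCH updates the data; 4. GET again validates the new data.
patch_config(cid, {"name": "day-mode"})
assert any(c["id"] == cid and c["name"] == "day-mode" for c in get_configs())
# 5. DELETE the resource, then GET verifies it no longer exists.
delete_config(cid)
assert all(c["id"] != cid for c in get_configs())
print("workflow passed")
```

The key pattern is that each step's assertion uses a different request than the one that made the change, so the test exercises how the operations compose rather than any single endpoint.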
An API example and a test matrix
We can now express everything as a matrix that can be used to write a detailed test plan (for test automation or manual tests).
Let’s assume a subset of our API is the /users endpoint, which includes the following API calls:
| API Call | Action |
| --- | --- |
| GET /users | List all users |
| GET /users?name={username} | Get user by username |
| GET /users/{id} | Get user by ID |
| GET /users/{id}/configurations | Get all configurations for user |
| POST /users/{id}/configurations | Create a new configuration for user |
| DELETE /users/{id}/configurations/{id} | Delete configuration for user |
| PATCH /users/{id}/configurations/{id} | Update configuration for user |
Where {id} is a UUID, and all GET endpoints allow optional query parameters filter, sort, skip and limit for filtering, sorting, and pagination.
The matrix lists, for each test scenario category, the test action to execute and the validations to perform.

1. Basic positive tests (happy paths)

Test action: Execute the API call with valid required parameters.

Validate status code:
1. All requests should return a 2XX HTTP status code.
2. Returned status code is according to spec:
– 200 OK for GET requests
– 201 for POST or PUT requests creating a new resource
– 200, 202, or 204 for a DELETE operation, and so on

Validate payload:
1. Response is a well-formed JSON object.
2. Response structure is according to the data model (schema validation: field names and field types are as expected, including nested objects; field values are as expected; non-nullable fields are not null, etc.)
Validate state:
1. For GET requests, verify there is NO STATE CHANGE in the system (idempotence).
2. For POST, DELETE, PATCH, and PUT operations, ensure the action has been performed correctly in the system by:
– performing the appropriate GET request and inspecting the response
– refreshing the UI in the web application and verifying the new state (only applicable to manual testing)

Validate headers: Verify that HTTP headers are as expected, including content-type, connection, cache-control, expires, access-control-allow-origin, keep-alive, HSTS, and other standard header fields, according to spec. Verify that information is NOT leaked via headers (e.g., the X-Powered-By header is not sent to the user).

Performance sanity: Response is received in a timely manner (within reasonable expected time), as defined in the test plan.
2. Positive tests with optional parameters

Test action: Execute the API call with valid required parameters AND valid optional parameters. Run the same tests as in #1, this time including the endpoint's optional parameters (e.g., filter, sort, limit, skip, etc.)

Validate status code: As in #1.

Validate payload: Verify response structure and content as in #1. In addition, check the following parameters:
– filter: ensure the response is filtered on the specified value
– sort: specify the field on which to sort, testing both ascending and descending options; ensure the response is sorted according to the selected field and sort direction
– skip: ensure the specified number of results from the start of the dataset is skipped
– limit: ensure the dataset size is bounded by the specified limit
– limit + skip: test pagination
Check combinations of all optional fields (filter + sort + limit + skip) and verify the expected response.

Validate state: As in #1.

Validate headers: As in #1.

Performance sanity: As in #1.
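One way to verify these optional-parameter semantics is to keep a reference implementation of filter/sort/skip/limit over a plain list and compare the API's actual responses against it. The field names and exact semantics here are assumptions for the sketch:

```python
# Reference semantics for the optional query parameters, applied to a
# plain list; a test can compare the live API's response against this.
def apply_query(items, filter_name=None, sort_by=None, descending=False,
                skip=0, limit=None):
    result = [it for it in items
              if filter_name is None or it["name"] == filter_name]
    if sort_by is not None:
        result.sort(key=lambda it: it[sort_by], reverse=descending)
    result = result[skip:]             # skip N results from the start
    if limit is not None:
        result = result[:limit]        # bound the dataset size
    return result

users = [{"name": "alice", "age": 34},
         {"name": "bob", "age": 28},
         {"name": "carol", "age": 41}]

# sort ascending by age, skip 1, limit 1 → the middle element
print(apply_query(users, sort_by="age", skip=1, limit=1))
# → [{'name': 'alice', 'age': 34}]
```

Note that the order of operations matters: filtering before sorting, and skipping before limiting, is the assumed contract here; your spec should pin this down explicitly.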
3. Negative testing – valid input

Test action: Execute API calls with valid input that attempts illegal operations, e.g.:
– attempting to create a resource with a name that already exists (e.g., a user configuration with the same name)
– attempting to delete a resource that doesn't exist (e.g., a user configuration with no such ID)
– attempting to update a resource with illegal valid data (e.g., renaming a configuration to an existing name)
– attempting an illegal operation (e.g., deleting a user configuration without permission)
And so forth.
Validate status code:
1. Verify that an erroneous HTTP status code is sent (NOT 2XX).
2. Verify that the HTTP status code is in accordance with the error case as defined in the spec.

Validate payload:
1. Verify that an error response is received.
2. Verify that the error format is according to spec, e.g., the error is a valid JSON object or a plain string (as defined in the spec).
3. Verify that there is a clear, descriptive error message/description field.
4. Verify that the error description is correct for this error case and in accordance with the spec.

Validate headers: As in #1.

Performance sanity: Ensure the error is received in a timely manner (within reasonable expected time).
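The payload checks for error cases can be sketched as a small validator. The expected error field name (`message`) is an assumption; align it with whatever your spec defines:

```python
# Sketch: validate the shape of an error response. The "message" field
# name is an assumption for illustration; match it to your spec.
def check_error_payload(status_code, payload):
    problems = []
    if 200 <= status_code < 300:
        problems.append("error case returned a 2XX status code")
    if not isinstance(payload, dict):
        problems.append("error payload is not a JSON object")
    elif not str(payload.get("message", "")).strip():
        problems.append("missing a clear, descriptive error message")
    return problems

print(check_error_payload(409, {"message": "configuration name already exists"}))
# → []
```

A variant of the same checker can accept plain-string errors instead, if that is what the spec defines.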
4. Negative testing – invalid input

Test action: Execute API calls with invalid input, e.g.:
– missing or invalid authorization token
– missing required parameters
– invalid values for endpoint parameters, e.g.:
– invalid UUID in path or query parameters
– payload with invalid model (violates schema)
– payload with incomplete model (missing fields or required nested entities)
– invalid values in nested entity fields
– invalid values in HTTP headers
– unsupported methods for endpoints
And so on.

Validate status code: As in #3.

Validate payload: As in #3.

Validate headers: As in #1.

Performance sanity: As in #1.
5. Destructive testing

Test action: Intentionally attempt to fail the API to check its robustness:
– malformed content in the request
– wrong content-type in the payload
– content with wrong structure
– overflow parameter values, e.g.:
– attempting to create a user configuration with a title longer than 200 characters
– attempting to GET a user with an invalid UUID which is 1000 characters long
– overflow payload: a huge JSON document in the request body
– boundary value testing
– empty payloads
– empty sub-objects in the payload
– illegal characters in parameters or payload
– using incorrect HTTP headers (e.g., Content-Type)
– small concurrency tests: concurrent API calls that write to the same resources (DELETE + PATCH, etc.)
– other exploratory testing

Validate status code: As in #3. API should fail gracefully.

Validate payload: As in #3. API should fail gracefully.

Validate headers: As in #3. API should fail gracefully.

Performance sanity: As in #3. API should fail gracefully.
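A few of the destructive inputs above are easy to generate programmatically. The 200-character title and 1000-character "UUID" come from the examples above; the other sizes are arbitrary illustrations:

```python
import json

# Generators for destructive inputs from the list above.
def overflow_title(length=201):
    # Title just past the assumed 200-character limit.
    return "x" * length

def bogus_uuid(length=1000):
    # "UUID" path parameter that is far too long to be valid.
    return "f" * length

def huge_json_payload(items=10_000):
    # Oversized JSON body for overflow-payload testing.
    return json.dumps({"data": list(range(items))})

print(len(overflow_title()), len(bogus_uuid()))  # → 201 1000
```

Feeding generated inputs like these through the same request helpers used for positive tests keeps the destructive suite cheap to maintain.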
Test cases derived from the table above should cover different test flows according to our needs, resources, and priorities.
Going Beyond Functional Testing
Following the test matrix above should generate enough test cases to keep us busy for a while and provide good functional coverage of the API. Passing all functional tests implies a good level of maturity for an API, but it is not enough to ensure high quality and reliability of the API.
In the next post in this series we will cover the following non-functional test approaches which are essential for API quality:
- Security and authorization: check that the API is designed according to correct security principles (deny-by-default, fail securely, least privilege, reject all illegal inputs); ensure the API responds to correct authorization via all agreed auth methods (Bearer token, cookies, digest, etc.) as defined in the spec, and refuses all unauthorized calls; verify role permissions (endpoints are exposed to users based on role, and calls not permitted for the user's role are refused); check HTTP/HTTPS protocol usage according to spec; ensure internal data representations do not leak into public API response payloads; and check rate limiting, throttling, and access control policies
- Performance: check API response time, latency, and TTFB/TTLB in various scenarios (in isolation and under load); find capacity limit points and ensure the system performs as expected under load and fails gracefully under stress
- Usability: for public APIs, a manual "Product"-level test going through the entire developer journey (documentation, login, authentication, code examples, etc.) to ensure the usability of the API for users without prior knowledge of our system