
Why Pytest for writing functional API tests

Posted 9 years ago by Skimlinks
Skimlinks technology is based on a complex system of different tech solutions. Many of them are web services that take care of the data flows – APIs that aggregate or expose a lot of existing data. The bigger the project grows, the harder it becomes to assure its quality and avoid regression bugs: testing gets time- and resource-consuming, and there are always some unpredictable risks.
So we ended up with the idea of creating a single automation project that lets us verify that everything still works properly after we test a new feature or patch existing API functionality. We started researching how to do that using Python and its testing tools.

For Python-based projects, pytest is a commonly used package for writing unit tests and for verifying the health of an API against mocked data. But what about automating functional testing of the APIs by applying similar pytest techniques? What we read was inconclusive: is it good practice to use pytest for unit tests only, or could we use it to create an independent QA automation project that would be easy to maintain and worth the time we would allocate to building it?

Many articles listed the pros and cons of different frameworks for automating API tests – the problem was that pytest was usually used for unit testing, and most of the examples relied on mocked objects. Our purpose was to create fixtures that validate real data, so that once a new API-related feature is tested, regression bugs can easily be avoided by running a single module in our QA automation project – it all takes a minute once the automation project is cloned locally.

We wanted the project to be easy to maintain, easy to configure or switch environments, platform independent, and built on a unified pattern for adding new tests so we always know what to expect when running a suite. Python helped us achieve that, and pytest helped us customize it and make it API independent, flexible and extensible.

Why use Pytest?

Flexibility
Pytest fixtures make it really easy to define the way your tests will be executed because they are scope-based. Our aim was to create custom fixtures in a way that lets them be easily re-used by each new test package, module or method we add. Pytest fixtures are flexible, so a QA can specify their scope – whether they are executed per session, module, method, etc. Once we’ve defined the scope of a fixture, a module doesn’t need specific setup and teardown methods or classes, as required by other testing libraries such as TestNG or Nose. The fixture creation and finalization are set in the scope annotation – execution at the beginning of the module, at the beginning of each method, and so on. Another significant difference between pytest and frameworks like nosetest and unittest is that the tests don’t need to be organized in classes.
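As a minimal sketch of how scoped fixtures replace explicit setup and teardown (the endpoint, fixture names and the use of the requests library here are illustrative, not taken from our project):

```python
import pytest
import requests

API_BASE_URL = "https://api.example.com"  # illustrative endpoint


@pytest.fixture(scope="session")
def api_session():
    # Created once per test session and reused by every test that requests it.
    session = requests.Session()
    yield session
    # Finalization replaces a classic teardown method: runs after the last test.
    session.close()


@pytest.fixture(scope="function")
def fresh_payload():
    # Re-created for every single test function that uses it.
    return {"currency": "USD"}


def test_status_endpoint(api_session):
    # No class, no setUp/tearDown – the fixture is injected by name.
    response = api_session.get(API_BASE_URL + "/status")
    assert response.status_code == 200
```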

Fixtures
Fixture parametrization is another feature that helps us pass parameters used during fixture execution – this is usually really helpful for avoiding boilerplate and asserting as many scenarios as possible with only a single test method. The parameters are added as decorators and they provide a really flexible execution flow for a method – @pytest.mark.usefixtures, @pytest.mark.parametrize or @pytest.mark.skipif.
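For instance, a single parametrized test method can assert several request/response scenarios (the endpoints and expected codes below are invented for illustration):

```python
import pytest
import requests

API_BASE_URL = "https://api.example.com"  # illustrative endpoint


@pytest.mark.parametrize("endpoint, expected_status", [
    ("/merchants", 200),
    ("/links", 200),
    ("/unknown", 404),
])
def test_endpoint_status(endpoint, expected_status):
    # One test method covers several scenarios thanks to parametrization.
    response = requests.get(API_BASE_URL + endpoint)
    assert response.status_code == expected_status
```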

Marking tests to run, skip or fail can control the test flow, helps us predict the output of a test, and makes negative test scenarios possible. For example, you can mark a test to be skipped if a certain condition occurs – this prevents the failure of the whole test suite and reports the test as skipped.
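A hedged sketch of such conditional execution, assuming a hypothetical TEST_ENV environment variable and staging URL:

```python
import os

import pytest
import requests


@pytest.mark.skipif(
    os.environ.get("TEST_ENV") != "staging",
    reason="only meaningful against the staging environment",
)
def test_staging_only_feature():
    # When the condition is true, the test is reported as skipped instead of failing.
    response = requests.get("https://staging.example.com/feature")  # illustrative URL
    assert response.status_code == 200
```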

Pytest’s flexibility helped us create custom, reusable fixtures for converting DB and API query results into dictionaries and JSON data files. All those verification data files are removed once the tests have finished. The fixtures use the DB and API clients located in the tools package.
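A rough sketch of such a conversion fixture, assuming a hypothetical DBClient under the tools package (the query and data shape are invented):

```python
import json

import pytest

from tools.db_client import DBClient  # hypothetical client under "tools"


@pytest.fixture(scope="module")
def merchants_from_db(tmp_path_factory):
    # Convert a DB query result into a dictionary plus a JSON verification file.
    client = DBClient()
    rows = client.fetch_all("SELECT id, name FROM merchants")  # hypothetical query
    data = {row["id"]: row["name"] for row in rows}

    json_file = tmp_path_factory.mktemp("verification") / "merchants.json"
    json_file.write_text(json.dumps(data))

    yield data

    # Teardown: remove the verification file once the module's tests have finished.
    if json_file.exists():
        json_file.unlink()
    client.close()
```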

Test discovery and logging
All of the tests can be run with a single command, by pointing it at a test module, method or package of modules. Pytest test discovery is pretty smart – it will run every test-prefixed method, class and module that is not marked to be skipped. This makes it really easy to define a suite of tests by specifying just a package of test modules to run, and a single test method can be selected with the proper command-line flags. The output of the tests can be defined in a fixture dedicated to that purpose – this is very useful and easy to modify when required.
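One simple way to shape that output – a sketch, not our exact implementation – is an autouse fixture that logs the name of every test around its execution:

```python
import logging

import pytest

logger = logging.getLogger("qa-automation")


@pytest.fixture(autouse=True)
def log_test_boundaries(request):
    # Wraps every test automatically and records its name in the log output.
    logger.info("starting %s", request.node.name)
    yield
    logger.info("finished %s", request.node.name)
```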

To summarize, py.test gives us better flexibility and scoping, decreases boilerplate and allows customizing fixtures, which helps in creating a unified way to add or modify tests. This makes running and developing tests straightforward and flexible, because no built-in fixtures are used to mock data or to verify API responses only against known/expected data.

How pytest is applied at Skimlinks:

Apart from the unit tests we have for each Python repository, we have a dedicated QA automation project that contains all of the automated functional API tests. The following image gives an overall idea of how this project looks at Skimlinks.
By using this project, a QA can easily pick what to run – they only need to set up a Python environment, install the dependency packages listed in the requirements file and run the tests using the py.test tool. The project is organized into test packages, tools, fixtures and a single configuration file. We tried to keep the project structure conventional, so it is separated into tests and tools, and all the dependencies are listed in and installed from a single requirements file.

Test packages under “tests” contain test modules, each with its test methods. They are all organized into “api” packages, each being a suite of tests for one API. This makes the project comprehensible for someone who sees it for the first time: a QA can easily decide which set of tests to run, or what needs to be changed or added and where.
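An illustrative layout (the directory and file names are examples, not the exact Skimlinks project):

```
qa-automation/
├── conftest.py          # shared, project-wide fixtures
├── config.py            # endpoints, hostnames, DB credentials
├── requirements.txt     # all dependencies, installed in one step
├── tests/
│   ├── merchant_api/    # one "api" package per web service
│   │   ├── test_search.py
│   │   └── test_reporting.py
│   └── link_api/
│       └── test_redirects.py
└── tools/
    ├── api_client.py
    └── db_client.py
```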

We have added the configuration of the endpoints, databases and properties to a single file, which makes it easy to switch endpoints, hostnames and passwords.
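The exact format is not important; as a sketch, it could be as simple as a Python module with per-environment values (all names and values below are invented):

```python
# config.py – hypothetical shape of the single configuration file
ENVIRONMENT = "staging"

API_ENDPOINTS = {
    "merchant_api": "https://merchant-api.staging.example.com",
    "link_api": "https://link-api.staging.example.com",
}

DATABASES = {
    "reporting": {
        "host": "reporting-db.staging.example.com",
        "user": "qa",
        "password": "change-me",
    },
}
```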

The fixtures are discovered by pytest because the project uses a unified conftest.py file that contains all project-related fixtures, and they are shared by all of the test resources in it.
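The practical benefit is that a test module never imports the fixtures it uses – pytest injects them by name. A sketch, assuming a hypothetical APIClient under the tools package:

```python
# conftest.py – fixtures defined here are picked up automatically
import pytest

from tools.api_client import APIClient  # hypothetical client under "tools"


@pytest.fixture(scope="session")
def api_client():
    client = APIClient(base_url="https://api.example.com")  # illustrative URL
    yield client
    client.close()


# tests/merchant_api/test_search.py – no import of conftest needed
def test_search_returns_results(api_client):
    response = api_client.get("/merchants", params={"query": "shoes"})
    assert response.status_code == 200
```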

Pytest is also flexible enough to let us add fixtures specific to a single module. Our job was to develop them in a way that they can be reused each time we add new tests, instead of developing fixtures for every new test module. There are web-service and DB clients with the proper XML or JSON handlers, helper methods to access the response data structures and other functions useful for the tests – those are organized under “tools”.
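As an invented example of the kind of helper that lives under “tools”, a small function for walking nested JSON responses:

```python
# tools/response_helpers.py – hypothetical helper used by the tests
def extract(payload, path, default=None):
    """Safely walk a nested JSON response, e.g. extract(data, "result.items.0.id")."""
    current = payload
    for key in path.split("."):
        if isinstance(current, list):
            try:
                current = current[int(key)]
            except (ValueError, IndexError):
                return default
        elif isinstance(current, dict) and key in current:
            current = current[key]
        else:
            return default
    return current
```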

We are trying to apply all of the flexibility that pytest provides – fixtures, parametrization, scoping, test running and logging. Below is an example of how a simple test could look – it uses parametrized fixtures, has conditional execution and shows how boilerplate is avoided by making a bunch of API hits in a single test method.
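A sketch of such a test, assuming a hypothetical search API and the requests library (the endpoints, parameters, environment variables and response fields are invented for illustration):

```python
import os

import pytest
import requests

API_BASE_URL = os.environ.get("API_BASE_URL", "https://api.example.com")  # illustrative

SEARCH_CASES = [
    ("shoes", "GBP"),
    ("laptops", "USD"),
    ("books", "EUR"),
]


@pytest.fixture(scope="module", params=SEARCH_CASES, ids=lambda case: case[0])
def search_case(request):
    # Each parameter tuple produces its own run of every test using this fixture.
    return request.param


@pytest.mark.skipif(
    os.environ.get("TEST_ENV") == "production",
    reason="not intended to run against production",
)
def test_search_and_report_agree(search_case):
    query, currency = search_case

    # Several API hits in a single test method: the search endpoint first...
    search = requests.get(API_BASE_URL + "/search", params={"q": query, "currency": currency})
    assert search.status_code == 200
    results = search.json()
    assert results["currency"] == currency  # invented response field

    # ...then the reporting endpoint, checking the two agree.
    report = requests.get(API_BASE_URL + "/reports/search", params={"q": query})
    assert report.status_code == 200
    assert report.json()["total"] >= len(results.get("items", []))  # invented fields
```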

We are looking for engineers who go the extra mile, are obsessed with learning and love a challenge! Sound like you? View our current job openings.
