QA Automation is the automation of quality assurance testing in software development. A plethora of QA automation techniques exist, and we will cover a subset of the more common methods.

Testing Categories

High-level testing categories include Functional and Non-Functional testing.

Functional testing is a broad category covering the process of testing a software application against its business and functional requirements. Manual examples of functional testing include: (1) a user manually testing that a word processor application gives the user the ability to input and save information, and (2) a user manually testing that a calculator application allows the user to input 1 + 1 and outputs 2.
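The manual calculator check in example (2) can be automated with a short test. Below is a minimal sketch in Python, assuming a hypothetical add() function as the calculator feature under test:

```python
# A minimal sketch of automating manual example (2) above. The add()
# function is a hypothetical stand-in for the calculator feature under test.

def add(a, b):
    """Stand-in for the calculator's addition feature."""
    return a + b

def test_addition():
    # Functional requirement: inputting 1 + 1 must output 2.
    assert add(1, 1) == 2

test_addition()
print("functional test passed")
```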

Non-Functional Testing is another broad category of testing that involves anything outside the realm of functional testing. Multiple sub-categories exist.

While the sub-categories may overlap, here is a high-level summary of some common testing sub-categories:

Functional Testing

  • Unit Testing
  • System Testing
  • Integration Testing
  • Regression Testing
  • multiple others

Non-Functional Testing

  • Performance Testing
  • Load Testing
  • Security Testing
  • Usability Testing
  • multiple others

Black Box vs White Box Testing

Black-box testing is a method in which the tester is not aware of the internal operations of the system. The examples for functional testing above are examples of black-box testing. White-box testing involves some knowledge of the underlying software system and is often written by a developer who has worked on the system itself. Examples of white-box testing include unit tests written during Test-Driven Development (TDD). Both have benefits.

Black-box testing has the advantage that it may be developed with the functional requirements in mind but without knowledge of the implementation. It may involve having a separate pair of eyes on the implementation and can be good for exposing issues that the original developer may not have recognized.

White-box testing has the advantage of understanding internals of the system that may not be easily exposed during black-box testing. It may be easier to understand and test for corner cases that may not appear as an issue in black-box testing.
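A white-box corner case might look like the following sketch, which assumes a hypothetical average() function whose internals the tester has seen:

```python
# A minimal white-box unit-test sketch. average() is a hypothetical
# function; the tester knows its internals and targets the empty-list
# branch, a corner case a black-box test might never think to try.

def average(values):
    if not values:  # internal guard: the branch the white-box tester targets
        return 0.0
    return sum(values) / len(values)

def test_typical_input():
    assert average([1, 2, 3]) == 2.0

def test_empty_input_corner_case():
    # White-box knowledge: the empty-list branch returns 0.0 rather
    # than raising ZeroDivisionError.
    assert average([]) == 0.0

test_typical_input()
test_empty_input_corner_case()
print("white-box tests passed")
```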

Many of the sub-categories in the section above may be developed as either black-box or white-box tests, and using a mixture of both during development is a good idea.

Implementations of Tests

There are two common implementations of automated tests: real tests and mock tests. Both are used to test a particular System Under Test (SUT). The SUT is simply the thing that the tester is attempting to test.

Real Tests. These tests rely upon existing systems in order to run. Often fixtures, or preset real test environments, are used for testing with real tests.

Examples may include: (1) a real database backup loaded up to a testing database, or (2) testing against a real API.
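Example (1) can be sketched in Python using an in-memory SQLite database standing in for the restored backup; the table, columns, and data below are made up for illustration:

```python
import sqlite3

# A sketch of a "real test": the SUT issues a real query against an
# actual (in-memory) database. The fixture stands in for a preset
# database backup; the schema and rows are hypothetical.

def load_fixture(conn):
    """Load the preset test data (the 'fixture') into the database."""
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES ('alice'), ('bob')")
    conn.commit()

def count_users(conn):
    # System under test: a real read against the real database.
    return conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]

conn = sqlite3.connect(":memory:")
load_fixture(conn)
assert count_users(conn) == 2
print("real test passed")
```

Note that the test depends entirely on the fixture being loaded correctly, which is exactly the fragility discussed below.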

Pros
  • Real tests may more directly mimic the production environment. Mocks require abstracting away slightly from the actual production setup.
  • Real tests may be easier to set up or comprehend initially.

Cons
  • Real tests may fail if the test environment has not been reset. Real tests require the preset environment to be configured, and configured correctly. An issue with the preset testing environment may show up as a test-automation failure when in actuality the SUT itself is fine: e.g., if the test database system is down, or if someone modified the database after the fixture was loaded, the test may erroneously fail.
  • Real tests take longer to run. Testing against a real database or a real API requires every call to wait for database reads/writes or for network traffic. Mock tests are able to bypass this wait. This run-time cost may grow as the application grows larger.

Mock Tests. These tests use mocks, or imitated testing data for the specific implementation, in order to perform the testing required for the system under test.

Examples may include: (1) Using monkey patches to modify a database read function result to pass back a standard response without actually accessing a database, and (2) modifying a function for a remote API read call to return a pre-canned result without actually sending a request to a remote API.
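Example (1) can be sketched with Python's standard unittest.mock library; the UserStore class and get_greeting() function are hypothetical names for illustration:

```python
from unittest.mock import patch

class UserStore:
    """Hypothetical data-access layer; the names here are made up."""
    def fetch_user(self, user_id):
        raise RuntimeError("would hit a real database")

def get_greeting(store, user_id):
    # System under test: formats a greeting from a database read.
    user = store.fetch_user(user_id)
    return f"Hello, {user['name']}!"

# Monkey-patch the read method to return a canned result, so the
# test never touches a database.
with patch.object(UserStore, "fetch_user", return_value={"name": "alice"}):
    assert get_greeting(UserStore(), 1) == "Hello, alice!"
print("mock test passed")
```

Because the canned result is specified right in the test, the tester fully controls the data the SUT receives.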

“Mocks” are more detailed implementations of “stubs”, for which there is a longer and more detailed differentiation from Martin Fowler available here.

Pros
  • Mocks decrease the required time to run a test by removing the need for network calls or I/O, so test run-time is noticeably improved.
  • The tester has more control over the specific data returned to the SUT because the data can be manually specified.
  • No need to rely on resetting a database, loading a fixture, or dealing with network-connectivity issues.

Cons
  • Test maintenance may be slightly more involved. The mocks will require updating whenever the core functionality of the SUT is modified. Arguably, though, a real test would require the same updates to its database.
  • A minor abstraction layer from the real test means there is a slight difference from testing the system in production.

Recommendations

A few recommendations that I personally have found to be useful:

  1. Use a mixture of black-box and white-box testing in your automated tests. Having both is helpful to have different perspectives on an issue.
  2. Use mock tests as much as possible during development and testing, and in particular for continuous integration. They make it easier and faster to test while doing development.
  3. Use real tests with fixtures as additional comprehensive testing for testing the whole software application before a release. In a solid continuous deployment pipeline, these may be part of the continuous integration tests.
