Building a Great Front-End Test Automation Solution

- Quality Assurance, Programming


So far in my career, I've had to architect three different front-end test automation solutions from scratch. Such solutions are really useful for minimizing manual regression testing after a code change. What I'd like to share today is what I've seen work and not work, based on my experience.

Note that even though I'm following a DO/DON'T format, these points should not be taken as absolute truths, but as generalities.

DO have an expert programmer working on test automation

Test automation is a field that requires both QA and programming expertise. Make sure that you never end up with a test automation team made up exclusively of non-technical QA analysts and/or junior programmers; otherwise, you risk ending up with a solution that is inefficient, unreliable and hard to maintain... just like any other software, really.

DO follow standard design and development practices

Even if you build the most powerful test automation software ever, if nobody else understands how to use it, can add test cases to it, or can figure out how to maintain its back-end, it's going to be useless.

DO use UI references that are unlikely to change

The last thing you want is for a test to fail because the look and feel of the tested UI changed a little bit, as this would require a lot of maintenance. Prefer references tied to a component's identity rather than its appearance. If no stable references exist, you may want to consider adding invisible anchors to the code of the tested application.

For example, if you use WebDriver to automate web browsers, a good reference is usually the id attribute, and I've seen teams implement a custom data-* attribute in debug builds when id was not usable.
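To illustrate with Selenium WebDriver's Python bindings, here's a minimal sketch; the URL, the id value and the data-test-id attribute are made up for the example:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/sign-in")  # placeholder URL

# Stable reference: locate the element through its id attribute.
submit_button = driver.find_element(By.ID, "submit")

# Fallback when no usable id exists: a custom attribute added to debug
# builds (here, a hypothetical data-test-id) acts as an invisible anchor.
username_field = driver.find_element(By.CSS_SELECTOR, "[data-test-id='username']")
```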

DO abstract the tested UI in classes

Whatever method you end up using to communicate with the desired front-end, I've found that it's a good idea to always wrap it in classes organized by logical components, using proper object-oriented programming principles. This allows simple code reuse and maintenance.

For example, you might want a UI_SignInForm class which contains a Submit button interface of type UI_Button with a Click method, and only that method would contain the low-level code to interact with the front-end.
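Here's a minimal Python sketch of the idea, with UIButton and UISignInForm as stand-ins for the UI_Button and UI_SignInForm above, and Selenium assumed as the underlying driver:

```python
from selenium.webdriver.common.by import By

class UIButton:
    """Wraps one clickable element; nothing else knows how clicks work."""
    def __init__(self, driver, locator):
        self.driver = driver
        self.locator = locator

    def click(self):
        # The only place containing low-level front-end interaction code.
        self.driver.find_element(*self.locator).click()

class UISignInForm:
    """Logical component exposing the sign-in form's controls."""
    def __init__(self, driver):
        self.submit = UIButton(driver, (By.ID, "submit"))
```

A test would then write UISignInForm(driver).submit.click() without ever touching WebDriver directly.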

DO abstract high-level test case actions

Having a high-level test case actions layer above the UI logic layer in a test automation architecture allows converting high-level concepts into logical actions, and reusing the same test cases on different platforms simply by switching layers. The best part is that you can design high-level actions to follow whatever format QA analysts are most familiar with.

To continue with my previous example, you might want to create a Sign_In($end_user) function that will fill in the form with the data contained in the $end_user variable, call UI_SignInForm.Submit().Click(), and verify that the next screen is the expected one.
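Continuing the Python sketch above (the username and password text boxes and the DashboardScreen page object are hypothetical extensions of the earlier classes):

```python
def sign_in(driver, end_user):
    """High-level action: sign in as end_user and verify the result."""
    form = UISignInForm(driver)
    # Fill in the form from the end_user data (a plain dict here).
    form.username.type_text(end_user["username"])
    form.password.type_text(end_user["password"])
    form.submit.click()
    # Verify that the next screen is the expected one.
    assert DashboardScreen(driver).is_displayed(), "sign-in did not reach the dashboard"
```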

DO support automated test cases close to natural language

Natural language is how most QA analysts write test cases in the first place, so supporting it will allow them to understand existing automated test cases and even write new ones.

There are existing tools that allow writing test cases in such a format. For example, Robot Framework allows writing linear steps in a format that looks like Sign In End User. Similarly, Cucumber can interpret test cases written in a Given [...] When [...] Then [...] format for teams used to this approach.
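To show how thin that layer can be, Robot Framework derives keyword names from plain Python functions; a sketch of a keyword library, reusing the sign_in action from earlier (my_framework and get_driver are hypothetical helpers), might look like this:

```python
# SignInLibrary.py -- minimal sketch of a Robot Framework keyword library.
# Robot Framework maps this function to the keyword "Sign In End User",
# so a plain-text test step can read almost like natural language.
from my_framework import get_driver, sign_in  # hypothetical helpers

def sign_in_end_user(username, password):
    sign_in(get_driver(), {"username": username, "password": password})
```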

DO support test data independent of test cases

Many test cases are practically identical, but with different input data and different expected outcomes. By allowing variables in test cases, they can be reused and maintained more easily.

For example, in my latest iteration, I ended up using CSVs containing 3 columns: Test ID, Parameter Name, and Parameter Value. Whenever it made sense, a parameter value could reference a test ID from a different CSV. Finally, for each generic test case, I made a test suite file that specified which test IDs to use to initialize the test case's variables for each desired combination. This turned out to be a good compromise between keeping the simplicity of spreadsheets for non-technical QA analysts and supporting complex data structures, while also allowing parameters to be ordered when it mattered, such as determining the order of textboxes to fill. This might not be the best solution for all teams, but it met our requirements.
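As a sketch of the loading side (the three-column layout matches the description above; the code itself is illustrative):

```python
import csv
from collections import defaultdict

def load_test_data(csv_path):
    """Group (Parameter Name, Parameter Value) pairs by Test ID,
    preserving row order so ordered parameters keep their meaning."""
    data = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            data[row["Test ID"]].append((row["Parameter Name"], row["Parameter Value"]))
    return data

# Example: parameters = load_test_data("sign_in_tests.csv")["SIGN_IN_001"]
```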

DO mock external services

Your tests should pass regardless of what happens to services outside your control. You want to test your stuff, not theirs. Sure, you might miss an incompatible API upgrade, but that's something you should handle proactively rather than reactively anyway.

For instance, there are existing tools such as MockServer that can intercept encrypted network traffic, analyze the request as part of the test and return a mock response.
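As a sketch, assuming a MockServer instance running locally on its default port (1080) and the REST API of its 5.x releases, a test could register an expectation from Python like this (the path and response body are made up):

```python
import requests

# Any GET to /external/rates now returns a canned response
# instead of reaching the real third-party service.
requests.put(
    "http://localhost:1080/mockserver/expectation",
    json={
        "httpRequest": {"method": "GET", "path": "/external/rates"},
        "httpResponse": {"statusCode": 200, "body": '{"USD": 1.0}'},
    },
)
```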

DO continually run tests

There's a new commit pushed? Detect it and run critical tests automatically! There's a new release candidate ready? Automatically run the full test suite! The earlier a bug is detected, the better.

Automation servers such as Jenkins can be used to implement such a process.

DO support parallelism on the cloud

Eventually, your test automation solution might contain many test cases, and running them one after the other might take a long time. Being able to reduce execution time by simply adding more computing resources when needed can always be useful. Keep in mind that this may also require deploying multiple instances of a tested web server application.

Examples of cloud services for such an implementation include Amazon Web Services, BrowserStack and SauceLabs.
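With Selenium, for instance, switching from a local browser to a grid is mostly a matter of using a remote driver; the hub URL below is a placeholder for a self-hosted Selenium Grid or a vendor-provided endpoint:

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# Each parallel worker opens its own remote session, so test throughput
# scales with the number of browser nodes behind the grid.
driver = webdriver.Remote(
    command_executor="http://your-grid-host:4444/wd/hub",  # placeholder
    options=Options(),
)
```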

DO link test database snapshots to test cases

Small test databases can be quickly deployed on the cloud for parallelism, and they also allow test cases that are incompatible with each other to coexist. Just make sure that you have a proper automatic patching mechanism for when the data structure is updated.
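A sketch of the linking itself, using pytest and SQLite file snapshots as stand-ins (the db_snapshots directory and its naming convention are made up):

```python
import shutil
import pytest

SNAPSHOT_DIR = "db_snapshots"  # hypothetical snapshot-per-test directory

@pytest.fixture
def test_db(request, tmp_path):
    """Copy the snapshot linked to this test case into a throwaway
    location, so each test (and each parallel worker) gets its own
    independent database."""
    snapshot = f"{SNAPSHOT_DIR}/{request.node.name}.sqlite"
    working_copy = tmp_path / "test.sqlite"
    shutil.copy(snapshot, working_copy)
    return working_copy
```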

DON'T use image recognition

Unless you want to test for pixel-perfect accuracy, image recognition software will always have some margin of error, which hurts test reliability and may require frequent updates to image references.

Maybe one day A.I.s will be good enough to work around that problem, but as of this writing, we're not there yet.

DON'T wait for a fixed amount of time

If you wait too long, your test will be slow. If you don't wait long enough, the test will incorrectly fail. Always wait for triggers instead of fixed amounts of time. I know it's tempting to wait for a few seconds for a stable state, but I can't stress enough the importance of not doing this. If you want reliable tests, this is a must.
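With Selenium, for example, this is the difference between time.sleep and an explicit wait (the locator is illustrative, and driver comes from the earlier sketches):

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# DON'T: a fixed delay is either too slow or not slow enough.
# import time; time.sleep(5)

# DO: wait for a concrete trigger, here the welcome banner appearing,
# with an upper bound so a genuine failure still gets reported quickly.
WebDriverWait(driver, timeout=10).until(
    EC.visibility_of_element_located((By.ID, "welcome-banner"))
)
```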

DON'T record and play

A manual session recording may contain useful information, but it should be used as a source of information for designing a full test, not as the test itself. Even assuming the recording tool was configured to record good UI references as described earlier (which is not possible in the general case), recorded sessions still can't be easily repeated without running into missing waiting conditions and incomplete expected behavior conditions.

DON'T depend on proprietary testing tools

Many companies offer shiny testing tools for test automation at hefty prices. I've witnessed some of them use questionable marketing practices to convince higher-ups of their usefulness. I've also witnessed some of them simply put a coat of paint over free software. Worst of all, if they stop supporting their tool, it may render the underlying test automation solution completely obsolete and useless.

Consider the many free open-source testing tools readily available, as they are not only a lot cheaper, but can also always be modified to continue supporting the test automation solution whenever an issue occurs. The code architecture should also be designed so that one tool can be swapped for another with minimal pain when it becomes advantageous to do so.

DON'T log everything

Yes, logs are very useful for debugging and for keeping track of software changes, but if you log every single step, you're going to end up with extremely large logs that can't even be opened. You might think it won't be too bad because test cases are linear by design, but logs can still blow up.

I recommend logging only the tests that were executed, the execution time of each test case, and what's necessary to diagnose a failure when it happens.
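One way to get that last part in Python is the standard library's MemoryHandler, which buffers detailed records and only writes them out when a failure-level record arrives; a minimal sketch:

```python
import logging
from logging.handlers import MemoryHandler

# Detailed records stay in memory and reach the real log only when an
# ERROR-level record (i.e., a failure) shows up or the buffer fills up;
# flushOnClose=False discards leftover details from passing tests.
handler = MemoryHandler(
    capacity=10000,
    flushLevel=logging.ERROR,
    target=logging.StreamHandler(),
    flushOnClose=False,
)

logger = logging.getLogger("tests")
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

logger.debug("step detail, kept only if the test later fails")
logger.error("test failed: the buffered details above get flushed")
```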

DON'T block software updates

This may seem counter-intuitive, as software updates can cause incompatibilities, and this problem also exists with test automation software. However, end users will use the latest, freshest versions of their operating systems and web browsers, and those are generally the most important environments to support.

If your test automation solution does not update along with those environments, then implementation differences between the two will slowly creep in, introducing false positives in test results until an update becomes necessary, at which point it will likely be painful to perform. That, or you'll be stuck running automated tests on a configuration that nobody uses anymore.

DON'T use production data

That's a security risk. Don't do it. In fact, you shouldn't have access to production data in the first place.

DON'T stop using lower-level automated tests

Well-designed unit tests and integration tests are able to find bugs at the lowest levels of code quickly and efficiently. Front-end tests do not replace them, but are complementary to them, as they test the final assembly of the code and take client configuration into account in their results.
