ui-testing-best-practices

Some UI testing problems and the Cypress way

Call for contributors: are you a TestCafé expert? I would like to split the “problems” section from the “how Cypress solves them” one, and add a section dedicated to how TestCafé solves them!

Testing a front-end application brings some challenges that “classic” tests do not have: you need to orchestrate a real browser. Browsers are heavy applications by definition; you need to launch them, manage them through a purpose-built library, leverage some APIs to automate the same kind of interactions a user would perform, and then check that the state of the front-end application (essentially, what it shows) is the one you expect.

This process and the involved steps are what make UI testing hard. One of the main problems is the intrinsic asynchronicity of the front-end: every single interaction must be awaited, as in this Puppeteer-based snippet:

await page.goto(url);
await page.click('[data-test="contact-us-button"]');
await expect(page).toMatch("Contact Us");

And, as soon as you need something more complicated, await-ing everything leads you into managing deeply nested (and sometimes recursive) promises by hand.
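To make the problem concrete, here is a minimal, library-agnostic sketch of the kind of recursive promise handling a plain await-based test pushes onto you. All the names here (`waitForElement`, the fake `button`) are invented for illustration; they are not part of any specific library.

```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Poll for an element, recursing until it appears or the attempts run out.
async function waitForElement(find, attemptsLeft = 10) {
  const el = find();
  if (el) return el;
  if (attemptsLeft === 0) throw new Error("element never appeared");
  await sleep(20);
  return waitForElement(find, attemptsLeft - 1); // recursive promise
}

// Fake page: the button "renders" 50ms after "navigation".
let button = null;
setTimeout(() => { button = { click: () => "Contact Us" }; }, 50);

(async () => {
  const el = await waitForElement(() => button); // wait for the render...
  console.log(el.click());                       // ...then interact
})();
```

Every step of a realistic flow needs its own version of this dance, and the test code grows accordingly.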

Automating and testing a front-end application is hard, but while some tools do not alleviate the pain, others give you superpowers. Read on!

Common tools

To automate and test a front-end application you need two different tools: a test runner and a browser automation tool.

The tools are independent: while the test runner of your choice (Jest, for example) runs in the terminal (and gives you all the test feedback), the second one (Selenium or Puppeteer) opens a browser, executes the commands written in the test, and reports back the result.

There is two-way communication between the terminal-based test runner and the browser automation tool.

The two tools are detached, and this complicates a lot of things! The actions that run in the browser are really fast! You can slow them down, but you cannot pause or stop them! Or, better: not interactively… You can obviously jump back and forth from the code editor, change the test by commenting out everything after the step you want to inspect, re-launch the test, and check what happened. But this is not an ideal flow. And since the test is a little program, you know that you need to repeat this step a lot of times…

Another problem arises while running a test in the above-described way: you typically log to the terminal (where the test runner works) while the actions happen in the browser. How could you connect them? Do you add timestamped logging both in the terminal and in the browser console? Do you add a fixed div above your front-end application that shows the name of the running test? Connecting what happens in the browser with what you do (or log) through the terminal is hard, too.

Last but not least: when you debug the test from the terminal, you are not debugging real DOM elements, but serialized/referenced ones. There is no two-way interactivity between the terminal and the browser, so you cannot leverage the browser DevTools the way you are used to.

Trust me, understanding why a test is failing, or why the browser does not do what you expect it to, is really hard this way. And you must face that in all three phases of the test-consuming process:

Steps #1 and #3 are pretty similar; #3 could be faster, but #1 could be exhausting. #2 will literally make you hate UI testing if the tools you are using do not help you…

Test runner purposes

Stop for a moment and think about what the mentioned tools try to accomplish, starting from the test runners.

Test runners are made for managing unit tests. You can use/plug them the way you want, obviously, but they are essentially made for super-fast (and parallelized) small-function invocations. They do not have browser-like DevTools, but the main problem is test timeouts. Every test has a timeout, and that is completely reasonable: thanks to the timeout, if a test takes too long, the test runner kills it.
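As a rough mental model (not any runner's actual implementation), a test timeout is just a race between the test and a timer:

```javascript
// Simplified model of a test runner's timeout: whichever settles
// first — the test or the timer — decides the outcome.
function runWithTimeout(testFn, timeoutMs) {
  const timer = new Promise((_, reject) =>
    setTimeout(() => reject(new Error(`Timeout of ${timeoutMs}ms exceeded`)), timeoutMs)
  );
  return Promise.race([testFn(), timer]);
}

// A "UI test" that takes 100ms.
const slowTest = () =>
  new Promise((resolve) => setTimeout(() => resolve("passed"), 100));

runWithTimeout(slowTest, 500).then(console.log);                    // resolves: "passed"
runWithTimeout(slowTest, 50).catch((e) => console.log(e.message));  // rejects: timeout
```

The runner cannot know whether the test is slow because the app is broken or just because the network was slow that day; it only sees the clock.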

But what happens when you combine the test timeout with the needs of UI testing? As you know, a user flow could last really long, for a lot of reasons.

All this shows how unpredictable a UI test could be. The solution could seem handy: increasing the test timeout! But this is the worst solution, and it does not work.

Last but not least, remember that you need DOM-related assertions. In a UI test you do not deal with objects, arrays, and primitives; you essentially manage DOM elements. Assertions like “I expect the element to be equal to…” do not work here, while they obviously work for unit testing. This problem is usually solved with external plugins.

Browser automation tool purposes

Selenium and Puppeteer aim for an easy, magic-free UI automation experience. They are not meant to test the UI, but only to automate the user interactions. Automating and testing overlap in some ways, but they are not the same. An automation tool and a testing tool both try to understand if a button is clickable and try to click it, but while the former simply fails, the latter tries to tell you why it failed. While the former tells you that an element is not on the page, the latter tells you that it is not on the page because the previous XHR request failed.

We were used to putting a test runner and a browser automation tool together, trying to get the best out of both while suffering from everything that two different, non-integrated tools could not offer.

Speaking again about the debuggability of the test (and of the app under test): in order to slow down/debug/pause/stop/make the tests work, etc., you need to “sleep” the tests a lot of times. It is a common practice, both because it solves problems in the short term and because sometimes you do not have alternatives (read the Await, don’t sleep section). Unfortunately, adding “sleep” steps makes the tests worse and worse, slower and slower. As I wrote previously: test slowness is one of the most common flaws that brings developers to hate UI tests.
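The difference between “sleeping” and awaiting can be sketched like this. `sleep` and `waitUntil` are invented helper names for this example, not a specific library's API:

```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Poll a condition every `interval` ms until it is true or `timeout` expires.
async function waitUntil(condition, { interval = 25, timeout = 2000 } = {}) {
  const start = Date.now();
  while (!condition()) {
    if (Date.now() - start > timeout) throw new Error("waitUntil: condition never met");
    await sleep(interval);
  }
}

// Simulate an app that becomes "ready" 60ms after the test starts.
let ready = false;
setTimeout(() => { ready = true; }, 60);

// A hard-coded sleep(1000) would always pay the full second, and would
// break the day the app takes 1001ms. waitUntil pays ~60ms today and
// keeps working (up to its timeout) if the app gets slower tomorrow.
waitUntil(() => ready).then(() => console.log("ready"));
```

Every fixed sleep is a guess baked into the test; a condition-based wait removes the guess.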

More: what happens when a test fails? What can you do to understand the problem, before even thinking about how to fix the bug? If you are lucky enough to discover the broken test locally, your pains are limited. But if the test fails in a pipeline, how could you know what happened if you do not have a UI? Have you added some parachute automatic screenshots? Is there anything that speaks better than a screenshot? Unfortunately not…

You even need to leverage third-party debugging tools (React DevTools, Vue DevTools, etc.), but installing them onto the controlled browser is not the handiest process in the world.

Last but not least: stubbing the server and asserting about the XHR requests could be considered testing implementation details… but I do not think so.

Implicit testing challenges

The above considerations create another problem: the test code should be as simple as possible. Tests allow you to check that everything works as expected, but they are little programs after all, and so you need to maintain them over time. And since you need to understand them at a glance (it is unfeasible if you need hours to understand why and how a test works; tests must help you, not complicate your life the way bad code does), their code should not be complicated (read the Software tests as a documentation tool section).

But tools that were not created for a difficult task like UI testing do not help you write simple test code. And your testing life gets harder, again… Hence you are condemned to spend a lot of time debugging a failing test instead of understanding what did not work in your front-end application (assuming that something broke…). The result is less trust in the tests.

Cypress to the rescue

Do not worry, I have not reported this dramatic situation for the sake of your sadness 😉 but just to make you aware that you do not need generic tools mixed together: you need something made on purpose! Two tools come to my mind: Cypress and TestCafé. Both are super valid, since they have just one goal in mind: reinventing (or fixing?) the UI testing world.

I will concentrate on Cypress here and compare the two later. How does Cypress solve all the above problems? First of all…

Cypress has a UI

Yes, you start Cypress through the terminal, but you consume it through its UI! And the UI sits side-by-side with your application! Take a look at this preview:

The [Command Log UI](https://docs.cypress.io/guides/core-concepts/test-runner.html) (on the left) runs alongside your front-end application (on the right).

What does it mean? What are the main Command Log UI features?

But, as I told you, the lack of retroactive debuggability is a big gap when writing UI tests… Let me introduce you to…

Other Command Log utilities are:

… and more, take a look at its capabilities in the official Cypress docs.

Cypress commands

Commands are asynchronous by default; take a look at the next snippet:

cy.visit(url);
cy.click('[data-test="contact-us-button"]');
cy.contains("Contact Us").should("be.visible");

Do you notice any await? No, and the reason is straightforward: why should you manage awaits when everything in the UI needs to be awaited? Cypress “waits” for you: if a DOM element is not ready when you try to interact with it, no problem! Cypress retries (by default for 4 seconds) until it can interact with the element the way a user would (so only if the element is visible, is not disabled, is not covered, etc.). You can avoid facing the front-end's intrinsic asynchronicity at all!

The above feature has one more effect: do you remember the not-so-good test timeout? Well, forget about it! In Cypress, tests do not have timeouts! You avoid guessing (and continuously adapting) the test duration, because every command has its own timeout! If something goes wrong, the test fails soon! And if the test goes well, it does not risk facing the test-timeout guillotine!
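The retrying and the per-command timeout described above can be modeled roughly like this. This is a drastically simplified sketch, not Cypress's real implementation (in Cypress the 4-second default corresponds to the `defaultCommandTimeout` configuration option):

```javascript
// Simplified model of command retry-ability: keep re-querying the
// element until it is actionable, or until this command's own
// timeout expires — no test-level timeout involved.
async function getActionable(queryElement, { timeout = 4000, interval = 50 } = {}) {
  const start = Date.now();
  while (true) {
    const el = queryElement();
    // "Actionable" the user way: it exists, is visible, and is enabled.
    if (el && el.visible && !el.disabled) return el;
    if (Date.now() - start > timeout) {
      throw new Error("Timed out retrying: element is not actionable");
    }
    await new Promise((resolve) => setTimeout(resolve, interval));
  }
}

// Fake page: the element becomes visible 80ms after the "page" loads.
const element = { visible: false, disabled: false };
setTimeout(() => { element.visible = true; }, 80);

getActionable(() => element).then(() => console.log("clicked"));
```

The key design point: the clock restarts with every command, so a long user flow is never killed just for being long, while a genuinely broken step still fails fast.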

Last but not least: DOM-related commands report DOM-related errors the way you need them. Take a look at the following example:

Cypress clearly reports the problem from a user/DOM perspective.

It is pretty clear why the user could not type into the input element. Cypress is not the only tool whose commands act the way a user would, but its speaking errors are quite uncommon.

Test quality

There are a lot of common mistakes that developers make while testing. Some are negligible, but some are not. Cypress helps you avoid some of them. How?

Productivity

Cypress wins on another, really important, topic: productivity. Read how in the dedicated section: Use your testing tool as your primary development tool.

Debugging

I explained above why debugging a test could be a nightmare without some dedicated features. There are two kinds of failing-test debugging: understanding a test that fails locally, while you watch it, and understanding a test that failed in a pipeline, where you were not looking.

Cypress has two amazing solutions: the Command Log, which lets you inspect every step of a locally failing test, and the screenshots and videos that are automatically recorded for the tests that fail in a pipeline.

FAQ

I presented Cypress as a perfect tool; now let me anticipate some common questions I am asked:

Conclusions

As a recap, the problems and the solutions listed above are:

References

Mastering UI Testing - conference video