Meet TYPO3


Introduction

All non-trivial software has bugs; they're an inevitable byproduct of writing software. Well-designed software is likely to have fewer, of course, and software testing is an important technique for assessing the quality of a software product. It's this systematic review process that keeps bugs to an absolute minimum.

Software testing is the process of analyzing software to detect the difference between existing and required conditions.

The workflow pattern we use when making changes to TYPO3's core consists of four different steps, and every patch that is pushed to the review system goes through all of these steps before being integrated. This extensive testing is one of the key cornerstones that give the content management system TYPO3 its stability, and it forms part of the foundation for its long-term growth and success.

In the past few weeks, we’ve taken a look at unit tests and functional tests. Now it’s time to dig into acceptance tests, the third of these four pillars in our quality assurance (QA) process. 

Acceptance tests are the “user” level of an application

The purpose of acceptance testing - also called beta testing - is to verify that a solution works for the user. It answers the question: Are we building the right product? This makes acceptance testing the most important step in TYPO3's review process, and it's essential to get right. An acceptance test boils down to simulating a user clicking around in a web browser to fulfill certain tasks within the web application.

This may sound a bit abstract, but in practice an acceptance test is named and written as a series of single user interactions. The language used in these tests impersonates the user as "I" or "me".
Here’s an example to get an idea of what this looks like:

$me->logInToBackendAsUserKarl();
$me->clickOnPageModule();
$me->openRecord('What is TYPO3');
$me->changeHeadlineTo('TYPO3 is cool');
$me->saveRecord();
$me->verifyHeadlineIsNow('TYPO3 is cool');

Translated into English, this means that Karl logs into the TYPO3 backend, goes to the page module and opens the record titled "What is TYPO3". There he changes the headline to "TYPO3 is cool", saves his changes, and the system verifies that the change is stored correctly.

Of course the given example doesn’t exist as a test in the framework, but it shows the general idea behind acceptance tests: Simulate a user who clicks through the application to verify that crucial parts of the system work exactly as anticipated.
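In the real test suite, such scenarios are written with Codeception. Here is a minimal sketch of what such a test class could look like, using Codeception's standard actor methods; the credentials, field labels and page title are hypothetical, not actual core test code:

```php
<?php
// Sketch of a Codeception acceptance test (hypothetical selectors and values,
// not actual TYPO3 core test code).
class EditHeadlineCest
{
    public function editPageHeadline(AcceptanceTester $I): void
    {
        $I->amOnPage('/typo3');               // open the backend login form
        $I->fillField('username', 'karl');    // hypothetical credentials
        $I->fillField('password', 'secret');
        $I->click('Login');
        $I->click('What is TYPO3');           // open the record to edit
        $I->fillField('Headline', 'TYPO3 is cool');
        $I->click('Save');
        $I->see('TYPO3 is cool');             // verify the change is stored
    }
}
```

In practice each of these steps also has to wait for the page to settle before the next one fires; the raw sequence above glosses over that.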

Coverage - failure is a blessing in disguise!

Unit tests reveal issues in very specific code segments deep down in the system. Failing tests make code faults visible, which can then be fixed before the code is integrated into the system. This only works, of course, if the specific area is covered by a unit test in the first place.

Functional tests identify issues with specific data sets and a bigger interaction scope. But functional tests are mostly bound to the PHP system level and ignore the user level above.

It is only by applying acceptance tests that the health of an entire application can be verified. If any of the involved layers - namely CSS, JavaScript, HTML and PHP on the server side - is broken by a patch, an acceptance test will fail, while a functional or unit test may not.

This gives acceptance tests a high-level view of the state of an application. Every acceptance test involves huge parts of the code base and verifies that critical parts work as expected. Acceptance tests are one of the final and critical procedures that must occur before newly developed software is rolled out to the market.

Acceptance testing: What environment is needed?

Typically, acceptance tests are performed in the final phase of testing and are carried out in a completely separate testing environment.

Technical test prerequisites

  • a full-blown TYPO3 instance with a PHP interpreter

  • a working database instance

  • a web server

  • a web browser that can be remote-controlled to perform clicks

As of PHP 5.4, a web server is built into the language itself. This is not a full-featured web server, but one designed specifically for testing purposes. The built-in server is able to handle single browser instances that click through applications. It's easy, it's fast and it saves us a whole lot of time and effort when running acceptance tests, as no full-blown web server is needed, and neither Apache nor nginx needs to be installed.
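Starting the built-in server for a test instance is a one-liner; the path and port here are illustrative, not the core's actual test configuration:

```shell
# Serve the TYPO3 instance's document root on port 8000
# (path and port are examples only).
php -S 127.0.0.1:8000 -t /path/to/typo3/web &
```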

Remote-controlling a web browser in a stable way is more difficult, though. After the latest changes, the basic chain looks like this:

  • Codeception on top of PHPUnit provides the main interface to browser control from the PHP side

  • A TYPO3-specific bootstrap within Codeception sets up the instance, paths and the database

  • A Selenium server acts as an endpoint to Codeception for browser control

  • Selenium takes care of starting Chrome and loading a Chrome extension for remote control (the “driver”)
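Wired together, the suite configuration for Codeception's WebDriver module looks roughly like this; the URL, host and port are placeholders for a local setup, not the core's real configuration:

```yaml
# acceptance.suite.yml - sketch of a WebDriver-backed suite (values illustrative)
actor: AcceptanceTester
modules:
  enabled:
    - WebDriver:
        url: 'http://127.0.0.1:8000'   # the PHP built-in web server
        browser: chrome
        host: 127.0.0.1                # Selenium server endpoint
        port: 4444
```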

Additionally, the browser needs to render to some graphical display. While this can be the local display of a developer machine, there is no such thing on a server. Browsers need a "head": some X and Y coordinates to render to, a mouse cursor and other things. And while a working "headless" mode for Chrome is finally on the horizon, this currently still has to be emulated with tools like Xvfb, which simulates a graphical display for successful automated testing.
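On a headless server, this emulation boils down to starting Xvfb and pointing the browser at the virtual display; the display number and resolution here are arbitrary choices:

```shell
# Start a virtual framebuffer on display :99 with a 1920x1080, 24-bit screen
Xvfb :99 -screen 0 1920x1080x24 &
# Tell Selenium/Chrome to render to that virtual display
export DISPLAY=:99
```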

The test chain is complex, it’s strong and it’s also fragile

The high-level view of acceptance tests is both their strength and their weakness. While they quickly reveal critical issues in the application, they are also rather fragile and difficult to stabilize in comparison to other quality assurance measures. There are a few reasons why acceptance tests are more difficult to get right:

  • The execution chain is complex: Codeception, Selenium server, driver, browser, web server, environment. All of that has tons of dependencies, and all parts need to play well with each other.

  • Timing problems: Single test steps need to wait until a page is fully loaded before continuing to the next step. As the TYPO3 backend has a lot of asynchronous JavaScript this is not always easy to achieve.

  • Backend iframes: The TYPO3 backend still relies on iframes. While this may change in the future, it currently causes us the odd headache when we run our fully automated acceptance testing.
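The last two points translate into explicit wait and frame-switch steps inside the tests; a sketch using Codeception's WebDriver actions, where the CSS selector and frame name are hypothetical:

```php
<?php
// Coping with async JavaScript and backend iframes
// (hypothetical selector and frame name).
$I->waitForElementVisible('.module-loaded-indicator', 10); // wait up to 10s
$I->switchToIFrame('list_frame');  // enter the backend content iframe
$I->click('Save');
$I->switchToIFrame();              // switch back to the top-level document
```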

The core team has put a significant amount of time and effort into establishing working software combinations and stabilizing tests. Compared to unit and functional tests, they are still more fragile and sometimes fail without good reason. With tons of test cycles every day, a fail rate of one or two percent is still significant, and it's an ongoing effort of ours to rule out the last broken bits and pieces. The Bamboo setup has to be especially reliable for these tests, so let's have a look at this area next.

Bamboo integration

The current Bamboo integration of acceptance tests faces similar issues as the functional tests do. Single acceptance tests have an even longer execution time than functional tests. Therefore, a similar split script takes care of chopping the currently roughly 70 acceptance tests into pieces and executing them in eight different jobs altogether. The tests have recently been switched from Firefox to Chrome, which now delivers good performance in terms of speed. Executing the acceptance tests with multiple browsers at the same time leads to even more trouble and is currently out of scope for TYPO3 core testing.
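The split itself can be as simple as distributing test files round-robin over the jobs; a minimal sketch of the idea (the core's real split script is more elaborate than this):

```shell
# Distribute test file names from stdin round-robin into 8 job lists
# (job_0.txt .. job_7.txt). A sketch, not the actual core split script.
split_tests() {
  i=0
  while read -r file; do
    echo "$file" >> "job_$((i % 8)).txt"
    i=$((i + 1))
  done
}
```

Each Bamboo job then runs only the test files listed in its own bucket.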

Conclusion

Acceptance testing is pretty cool. It gives a high level view of the application state and together with the other core tests a significant part of the application is covered. However, acceptance tests are hard to stabilize and require quite some effort to be of practical use. This being said, acceptance tests have pointed out quite a number of issues in the backend in the past and have led us to precisely those code faults that needed improvement and stabilization.

The encapsulation of the entire testing stack within Docker images helps active core contributors to execute, create and maintain acceptance tests. We hope the setup will become easier over time, and also that more developers will take up this kind of testing.

Together with the blog posts in the past few weeks, we’ve now covered all of the main testing areas except for one. This last step in our workflow is what we call “integrity tests” and we’ll be taking a closer look at this topic in our next article in this series. Be well!
