At Tighten, we write custom web applications. And, like all software, they sometimes have bugs, glitches, and unexpected behavior.
Programmers, technical leads, and project managers always test our applications by hand as we write them. Write a feature, try the feature out in the browser, make sure it works the way we want. Our clients usually do too—if you’re paying to have software built for you, you’ll want to try it out when it’s delivered.
But this manual testing only covers a small portion of the things that need to be tested in our web apps. And if we don’t have full coverage, the software (and its owners and users) could be exposed to data loss, security troubles, financial loss, or even legal ramifications. Manual testing has a place in the software development world, but for the broadest coverage, you need to add automated testing coverage.
Manual testing is when a human being tests the application by navigating their way through it (usually in a browser). This human may be a programmer, the product owner, or a paid quality assurance (QA) engineer. If it’s manual, it requires a human to do it.
Automated testing, on the other hand, relies on software to run the tests. This software runs against scripts (automated tests) that either programmers or QA engineers have written, and the scripts can be run thousands of times a day, in different environments, manually or automatically.
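To make that concrete, here’s a minimal sketch of what such a script can look like. This uses Python and plain test functions purely for illustration—`apply_discount` is an invented example function, not code from any real application:

```python
# A tiny piece of application logic to test.
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, never allowing an invalid rate."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Automated tests: functions a test runner (pytest, for example) can
# execute thousands of times a day, on every commit, in every environment.
def test_half_off():
    assert apply_discount(10.00, 50) == 5.00

def test_no_discount_leaves_price_unchanged():
    assert apply_discount(10.00, 0) == 10.00
```

Once written, these checks cost nothing to re-run, which is exactly what makes them practical to execute on every change.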
Neither manual nor automated testing is perfect.
Manual testing relies on humans, and there are many benefits of a human interacting with your application; humans are, however, slower, more likely to make mistakes, and less capable of testing a matrix of different configurations for each test. It’s nearly impossible for even a full-time worker to examine every single aspect of a software tool every time it’s prepared for release.
Additionally, some tests don’t lend themselves well to manual testing. Security, data, and privacy issues are harder for humans to test for because the potential problem areas aren’t immediately apparent from a human perspective. For example, a security issue might not follow a typical user path. It could be buried deep in another path or function.
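An automated test, by contrast, can exercise those off-path cases on every single run. Here’s a hypothetical sketch (the function and role names are invented for illustration) of an authorization check that a manual tester might never think to visit, but that a test suite verifies constantly:

```python
# Hypothetical authorization rule, buried off the typical user path:
# only admins, or the report's owner, may export a report.
def can_export_report(user_role: str, report_owner: str, user: str) -> bool:
    return user_role == "admin" or user == report_owner

def test_non_owner_cannot_export_someone_elses_report():
    assert can_export_report("member", report_owner="alice", user="bob") is False

def test_owner_can_export_own_report():
    assert can_export_report("member", report_owner="alice", user="alice") is True
```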
However, the human perspective that can’t identify a security bug is valuable in other ways. For example, if your web app has a graphical user interface that leverages 3D images and complex animations, an automated test can only tell you whether it functions “properly.” It can’t judge whether the graphics look realistic. That’s for a human to decide.
Ultimately, you need both kinds of tests. Unfortunately, automated tests seem harder to set up, so they are typically overlooked. However, automated tests are an important component in every dev team’s arsenal to achieve clean code and functional web apps.
It’s easy for organizations to eschew automated testing. If you have an existing codebase with no automated tests, there can be an upfront cost to setting it up for automated testing and writing tests that cover a decent portion of the codebase. Additionally, if you already have a manual testing team, it requires a culture shift in addition to an operational change.
However, if you’re not using automated tests, you’re opening your web applications—and yourself—to unnecessary risks and unfortunate side effects.
Wasted Time and Resources

Relying solely on manual testing means you’ll have to build and maintain an entire QA department, often including multiple full-time roles. No matter how smart, talented, and hard-working these folks are, the reality is you’ll now be spending time, money, and energy managing this team.
We’ve seen first-hand how challenging it is to stand up a team of separate testers; QAs butt heads with each other and with programmers, departments subscribe to different testing philosophies, and ultimately, your test results suffer. Further, humans are good at creative thinking and the sorts of tasks that can’t be automated, but manual-only testing requires humans to do repetitive, boring work, which isn’t good for anyone.
Costlier Upgrades

Every time you upgrade any technology your web application relies on, it has the potential to introduce major bugs or security holes into the final application. As a result, every upgrade means the entire functionality of the app, including every edge case and previously fixed bug, needs to be tested again. In a system without an expansive automated test suite, upgrades are costly and nerve-wracking—and therefore performed much less often, which is bad for the application and introduces even more technical debt and security risk.
Security Risks and Bugfix Regressions

One of the best ways to test for security risks and other unexpected application states is to run your application through hundreds of different potential scenarios every time you make a minor change. Every time a potential security risk is identified, your team will write an automated test to prove that security risk is patched. Every time a new bug is identified and fixed, your team will write an automated test to prove that bug is still fixed.
A lack of automated tests means security risks are significantly easier to introduce or miss in the first place, and bugs are much more likely to regress to a state before they were fixed. Automated tests help you stay away from being in the news as the subject of the latest hack or data privacy leak.
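A regression test of this kind can be very small. Here’s a hypothetical sketch (the bug, the function, and the names are all invented for illustration): imagine a bug where usernames with stray whitespace or capitalization were treated as distinct accounts. The fix normalizes the input, and the test pins that fix in place forever:

```python
# The fix: normalize usernames so "Alice " and "alice" are the same account.
def normalize_username(raw: str) -> str:
    return raw.strip().lower()

# The regression test: if anyone ever reintroduces the bug,
# this fails immediately on the next test run.
def test_whitespace_and_case_do_not_create_duplicate_accounts():
    assert normalize_username("Alice ") == normalize_username("alice")
```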
Poor User Experience

Bugs are simply a part of life in software development. Thankfully, users have come to expect that some number of bugs come along with any application. However, if you have more bugs than the competition? If you fix bugs and they become un-fixed? If it takes forever for your team to fix bugs because there are so many? That could have a serious impact on your users’ perception of your application and your organization.
As noted in the previous section, bugs are more likely to regress to a broken state if you don’t have automated tests ensuring their continued removal. Furthermore, writing tests actually produces code with fewer bugs in the first place: your engineers have to encode the business logic into the tests, which makes them think harder about what the code should do, and well-written automated tests work through a suite of potential inputs and edge cases for each feature, covering far more use cases than a programmer or product lead typically would in the normal development flow.
Across the board, automated testing produces applications with fewer bugs, which produces happier users and a better reputation for your app and your organization.
Major Business Issues

A buggy, security-risk-laden application isn’t just a problem for the users. It also could introduce massive costs to your organization. You’ll need more support, more QA engineers, more programmers to fix things, and potentially even more PR and legal to cover the outcome of the bugs’ impact.
If your bugs or breaches are privacy or financial related, you may find yourself in a situation—as a result of a simple bug—that could threaten to end your entire company. Maybe your manual test didn’t catch a bug that affects the automatic billing function. Six months later, your accounting team notices something isn’t adding up—and you discover your app has been billing canceled subscribers for the past eight weeks. Now you have to reimburse users for the erroneous payments. Two months’ worth of invoices is a hefty amount. If you didn’t have cash flow issues before, you will now.
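A guard against exactly that kind of billing bug can be encoded as a small automated test. This is a hypothetical sketch in Python—the `Subscriber` type and function names are invented, not any real billing system:

```python
from dataclasses import dataclass

@dataclass
class Subscriber:
    name: str
    canceled: bool

def subscribers_to_bill(subscribers):
    """Only active subscribers should ever reach the billing step."""
    return [s for s in subscribers if not s.canceled]

# Runs on every commit: if a change ever lets canceled subscribers
# slip into the billing run, this test fails immediately --
# not six months later when accounting notices.
def test_canceled_subscribers_are_never_billed():
    active = Subscriber("active", canceled=False)
    canceled = Subscriber("gone", canceled=True)
    assert subscribers_to_bill([active, canceled]) == [active]
```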
Automated testing can have an upfront cost in an existing app, but over time it costs far less than manual testing. It’s also more reliable and much more expansive in its capabilities.
The best situation is to write automated tests as you go. Programmers are the best people to write the tests for the code they just wrote, but bigger organizations may choose to have QA engineers write the tests instead.
If your app is already running, it’s not too late! You can add tests, or bring in a team like Tighten to add a testing framework and some existing test coverage and teach your team how to test.
No matter what, if you are running a web application, you should be running automated tests.