
Introduction to Software Testing Methodologies

#1 NeoTifa

Posted 17 March 2017 - 12:35 PM


Introduction to Testing Methodologies by NeoTifa

~~~~~~~~~~~~Index~~~~~~~~~~~~

Part 1.) Introduction: what and why
Part 2.) Black and White
Part 3.) Edge Cases
Part 4.) Technologies

~~~~~~~~~~~~Part 1~~~~~~~~~~~

Part 1.) Introduction

Testing: the most loved and hated process in the software development lifecycle. 'Tis a cruel mistress that will make you feel insignificant, yet it's a necessary evil. As a tester, I enjoy finding defects and showing them to the developers to watch them squirm (only slightly joking).

Now, all kidding aside, testing should be included in your timeline, estimates, coding method, etc. at all times. Ever hear of the term "test-driven development", or TDD? This is what almost all enterprises are moving towards, because testing is THAT important. Testing happens at almost every stage of the SDLC and is performed by many people. The aim is, of course, to break the software and fix it before the customer does. After all, customers are the ones who ultimately pay your paycheck, and if they can't use your software, they're not going to throw any more money at you or your company. Kinda hard to pop bottles and make it rain during a drought.
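To make the TDD rhythm concrete, here's a minimal sketch using JUnit 5 and a hypothetical PriceCalculator class (neither appears in this post, so treat every name as a placeholder): you write the test first, watch it fail, then write just enough code to make it pass.

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// TDD "red" step: this test is written before PriceCalculator even exists,
// so at first it won't compile or pass. The "green" step is writing just
// enough PriceCalculator code to make it pass. All names are hypothetical.
class PriceCalculatorTest {

    @Test
    void tenPercentDiscountIsAppliedToTheTotal() {
        PriceCalculator calc = new PriceCalculator();

        // 100.00 with a 10% discount should come out to 90.00
        assertEquals(90.00, calc.applyDiscount(100.00, 0.10), 0.001);
    }
}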


***Please, for the automation testers' sake, put IDs in your HTML tags. Thanks!***
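Here's why that matters, as a quick sketch assuming the selenium-java library and a ChromeDriver install (the URL and element names are made up): an id gives the automation tester a short, stable locator instead of a brittle XPath.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginPageCheck {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        driver.get("https://example.com/login");   // placeholder URL

        // With an id on the input, the locator is short and survives layout changes:
        driver.findElement(By.id("username")).sendKeys("testuser");

        // Without an id, the tester is stuck with something brittle like:
        // driver.findElement(By.xpath("//form/div[2]/div[1]/input"));

        driver.quit();
    }
}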

~~~~~~~~~~~~Part 2~~~~~~~~~~~

Part 2.) Black and White

There are several types of testing, done at several points in the SDLC.

Unit: The most basic. When you do TDD, you start with unit tests. You create these tests before you even start coding (in theory). These unit tests should cover the bare-basic functionality, edge cases, acceptance criteria, and whatever metrics your company follows. This is performed by the developer. It is the FIRST LINE of defense against defects. Done on your local environment.
Integration (IT): IT testing occurs when you integrate your unit-tested code into a development environment with the code from everybody else on your line. This testing is generally done by a team of testers. It ensures your code gels well with others'. Sometimes automated.
System (ST): ST testing integrates the dev code into a test environment that contains the other lines' code. This ensures that your line's code plays nice with everyone else's. Done by the line's testing team. Sometimes automated.
Regression: This is your most tedious testing. It's done every release and includes tests for old code as well as some new code. Usually during a requirements review/3 amigos, the testers or the business will point out regression scenarios. It usually covers the most basic functionality of your software, end-to-end, happy path, with maybe a few non-happy paths that are otherwise super important. These are usually automated, but some just can't feasibly be; usually the business and leads/keys will identify those scenarios to be done manually. This is usually run in the same test environment but can be run in other environments too.
Smoke: Smoke testing is usually a quick and dirty regression pass to make sure that a code move or endpoint change went in okay (see the quick sketch after this list). Done in any environment, by anyone.
Performance/Load (PT): PT testing is where you check page load times against acceptance criteria, put a huge load on the system, etc. to make sure it doesn't crash the servers or break the software. This is usually done by a specialized PT team, but the testers might have automated it as well. This is done in a PT environment that is the closest environment to production you can get (unless you have a UAT env).
User Acceptance (UAT): UAT is generally done by the business team. Done in a UAT or PT environment. This most closely emulates the customer.
Production (PROD): PROD testing is generally done on release nights and often requires extra-special care, as it could destroy customer data (as opposed to test data) or cause other customer impacts. Testers do this, but developers check the logs, so it's a dual effort.
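For the smoke test mentioned above, here's about the simplest form it can take (a sketch using Java's built-in HttpClient, with a placeholder URL): hit a health endpoint right after a deploy and make sure the app answers at all.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Bare-bones smoke check: if the health endpoint doesn't answer with a 200,
// the deploy probably didn't go in okay. The URL is a placeholder.
public class SmokeCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://test-env.example.com/health"))
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        if (response.statusCode() != 200) {
            throw new IllegalStateException("Smoke check failed: HTTP " + response.statusCode());
        }
        System.out.println("App is up: " + response.body());
    }
}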

I know it looks like a lot to keep in mind, but the testers are generally in charge of them. However, as a developer, it's important to keep these things in mind.

Now, for the titular content: blackbox vs. whitebox testing. This is sometimes referred to as blackhat vs. whitehat, but boxes are more popular than hats. Anywho, blackbox testing is done without the internal workings in mind. The tester doesn't need to know how you implemented the code, just what should happen when they perform whatever function. Whitebox, on the other hand, is where you DO know the internal workings. Since you wrote the code, it's only natural that you as a developer do the whitebox testing, which tests more specific functionality that only a dev would know about.

Unit: Whitebox
IT: Combination whitebox and blackbox
ST: Blackbox
Regression: Blackbox
Smoke: Can be a combination. If you, say, updated a specific database, you would generally only test those systems that use said database.
PT: Blackbox
UAT: Blackbox
PROD: Mostly blackbox, but since devs need to check logs and some specific things, there's a slight touch of whitebox thrown in.

In today's tech-involved world, things are moving more towards automation vs. manual testing, so testers are basically entry-level developers to begin with, and depending on the company, they might even have a specialized dev team specifically for automated testing (like me). This can blur the lines between blackbox and whitebox testing at times, because automation testers understand how software generally works.
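To put the distinction in code, here's a sketch with a made-up Doubler class (not from any real project): the blackbox test only exercises the public contract, while the whitebox test also checks an implementation detail that only the author would know about.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

// Hypothetical class: doubles a number from 1-100 and remembers which inputs
// it has already seen (an internal detail a blackbox tester wouldn't know about).
class Doubler {
    private final java.util.Set<Integer> seen = new java.util.HashSet<>();

    int doubleIt(int n) {
        if (n < 1 || n > 100) {
            throw new IllegalArgumentException("input must be 1-100, got " + n);
        }
        seen.add(n);
        return n * 2;
    }

    boolean hasSeen(int n) {
        return seen.contains(n);
    }
}

class DoublerTest {

    // Blackbox: all we know is the contract "takes 1-100, returns double".
    @Test
    void doublesAValueInRange() {
        assertEquals(10, new Doubler().doubleIt(5));
    }

    // Whitebox: as the author we know inputs are tracked internally,
    // so we assert on that detail too.
    @Test
    void remembersInputsItHasProcessed() {
        Doubler doubler = new Doubler();
        doubler.doubleIt(5);
        assertTrue(doubler.hasSeen(5));
    }
}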

~~~~~~~~~~~~Part 3~~~~~~~~~~~

Part 3.) Edge Cases

The point of testing (other than regression) is to probe the boundaries of the software's functionality and to try to break it in as many ways as possible. Never underestimate the power of a user: they WILL find the most asinine ways to break your software, so you need to find those ways and fix them before the customer does. A common interview question is something along the lines of "write some test cases for this scenario: ...". For a quick sample, say a method takes an integer input from 1-100 and doubles it. Some test inputs you could use are:
1, 100, 50, 0, 101, INT_MAX, INT_MIN, 3.14, "hello, world!", Color.RED, etc.

One would expect something like 3.14 to fail since it's not an int, but your method's behavior could surprise you. It might truncate the decimal part and only take the 3. Or it could fail. You don't know for sure, but you can bet a customer is gonna try it. Same with "hello, world!". It's not an int, but its hash value or memory location is. But, you say, that's greater than 100, so it should fail anyway! Yes, it should, but it should be a "soft" failure (a handled error) vs. a "hard" failure (the software completely crashing). Handling these types of inputs is important. As for Color.RED, it could be an enum value, which could map to something between 1 and 100. ¯\_(ツ)_/¯
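Here's what those boundary checks might look like as JUnit 5 parameterized tests, reusing the hypothetical Doubler class from the sketch in Part 2 (it accepts 1-100 and rejects everything else with an IllegalArgumentException, i.e. a soft failure rather than a crash). Note that inputs like 3.14, "hello, world!", and Color.RED wouldn't even compile against an int parameter in Java; in looser languages they'd have to be handled at runtime.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import org.junit.jupiter.params.provider.ValueSource;

class DoublerEdgeCaseTest {

    // Boundaries plus a middle value: the happy path.
    @ParameterizedTest
    @CsvSource({"1, 2", "50, 100", "100, 200"})
    void doublesValuesInsideTheRange(int input, int expected) {
        assertEquals(expected, new Doubler().doubleIt(input));
    }

    // Just outside the boundaries plus the integer extremes: expect a handled
    // rejection (soft failure), not a crash.
    @ParameterizedTest
    @ValueSource(ints = {0, 101, Integer.MAX_VALUE, Integer.MIN_VALUE})
    void rejectsValuesOutsideTheRange(int input) {
        assertThrows(IllegalArgumentException.class, () -> new Doubler().doubleIt(input));
    }
}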

~~~~~~~~~~~~Part 4~~~~~~~~~~~

Part 4.) Technologies

Testing is so important that there are like a billion jillion trillion different technologies for it, so here are a few:

Bug/Defect Tracking (link):
HP ALM/QC
IBM Rational ClearQuest
Bugzilla
Jira

Automation (link):
Selenium (Java)
Watir (Ruby's version of Selenium)
Cucumber/Gherkin
QTP/HPE

These are only a few of the tools. I provided some links and listed a few, but hopefully that'll get you started.

Hope you've learned something. Happy testing!

Replies To: Introduction to Software Testing Methodologies

#2 jon.kiparsky

Posted 17 March 2017 - 11:22 PM

On my team we don't have a QA team, so the developers are responsible for writing all of the tests. This changes a few things. First of all, all of our tests are automated. We have no manual testing at all, except when we're reviewing a pull request - there just isn't time.
This means that we try very hard to get our automated tests right. What we've settled on is the following:

1) All new code must have 100% coverage, and in general any file that we touch should end up with 100% coverage. Since we're working with a legacy codebase, there are some areas where we don't enforce this because the code there is just not amenable to testing and not worth refactoring, but those are also areas we don't work on very much, and we're very careful about touching them. But 100% coverage is just a start - all that means is that each line gets touched at least once. So we also require that
2) Any time we add a feature or fix a bug, we write tests that fail without the fix and pass with it. When we add a feature, we usually don't manage to test every detail of it, but that's fine because if something goes wrong, we'll fix that particular thing on a bug ticket and we'll write a test specifically about that issue.
And because our tests are an important part of our code,
3) We fix tests. This is not something that's usually done, but we actually maintain our tests and try to improve them as we improve our test writing. For example, we developed a system of creating linked data objects for testing purposes, called "contexts". These generally define a particular state that we want to be in, and provide that state for testing purposes. These allow us to replace a lot of repeated setup code, or (horrors) inherited test case classes and mixins, or local setup functions, with nice simple objects which just work. So once we started using these, we would replace boilerplate setup code in our tests with calls to these contexts, and it became a lot easier to see what each test was trying to do. (and since we monitor our coverage, this helped us to ensure that the contexts we were writing were actually setting up the things they needed to, so we could move forward with them with some confidence)
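The post doesn't show what a "context" actually looks like, and the codebase in question presumably isn't Java, but here's a loose Java-flavored sketch of the idea with an entirely made-up domain: one small object builds a known state, so each test starts from one line of setup instead of a pile of boilerplate.

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Tiny made-up domain, just enough to show the shape of a "context".
enum Plan { BASIC, PREMIUM }
enum BillingCycle { MONTHLY, YEARLY }
record User(String email) {}
record Account(User owner) {}
record Subscription(Account account, Plan plan) {}

class BillingService {
    BillingCycle cycleFor(Subscription sub) {
        // Made-up rule: premium plans bill monthly, everything else yearly.
        return sub.plan() == Plan.PREMIUM ? BillingCycle.MONTHLY : BillingCycle.YEARLY;
    }
}

// The "context": builds one known state so individual tests don't repeat setup.
class SubscribedUserContext {
    final User user = new User("test-user@example.com");
    final Account account = new Account(user);
    final Subscription subscription = new Subscription(account, Plan.PREMIUM);
}

class BillingTest {

    @Test
    void premiumSubscribersAreBilledMonthly() {
        // One line of setup instead of rebuilding the whole object graph per test.
        SubscribedUserContext ctx = new SubscribedUserContext();

        assertEquals(BillingCycle.MONTHLY, new BillingService().cycleFor(ctx.subscription));
    }
}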

All of this got us from an essentially untested legacy codebase to 80% overall coverage without ever stopping work to just write tests, in about a year. We've sort of stalled out at 80%, and we'll probably be stuck there until we make a few major architectural moves in the next year, but having 100% coverage on files that are getting worked on a lot has been a big factor in our ability to move forward without fear of breakage.

And while we don't use them as much as some folks do, I also want to say a good word for unit tests: if you take seriously the idea of unit tests, your code will get better, quickly. This is because it's just really hard to write good unit tests on badly-written functions, and it's easy to write them for well-written functions. So the discipline of writing unit tests will, in and of itself, produce better code, almost by magic.

tl;dr: If you're working on an untested codebase, your life has room to get a lot better, and you should take steps to make your life better.


Link: coverage tool for Python code

#3 NeoTifa

Posted 18 March 2017 - 06:42 AM

I should have prefaced this by saying I'm writing from an enterprise, external, customer-facing web app perspective. Obviously there's lots of wiggle room. Thank you for that perspective.
