Tuesday, January 8, 2013

SOFTWARE TESTING


SDLC & STLC:

Software Development Life Cycle involves the complete Verification and Validation of a process or a project, whereas Software Testing Life Cycle involves only Validation.


Software Development Life Cycle involves business requirement specifications, analysis, design, software requirement specifications, the development process (coding and application development), the testing process (preparation of the test plan, preparation of test cases, testing, bug reporting, test logs and test reports), implementation, and maintenance.
The Software Testing Life Cycle, in contrast, involves only the preparation of the test plan, preparation of test cases, testing, bug reporting, test logs and test reports.


Software testing is an important part of the software development process. In normal software development there are four important steps, also referred to, in short, as the PDCA (Plan, Do, Check, Act) cycle.



Let's review the four steps in detail.

1.    Plan: Define the goal and the plan for achieving that goal.
2.    Do/Execute: Execute the work according to the strategy decided during the plan stage.
3.    Check: Check/Test to ensure that we are moving according to plan and are getting the desired results.
4.    Act: If the check cycle reveals any issues, take appropriate corrective action and revise the plan accordingly.


So developers and other stakeholders of the project do the "planning and building," while testers do the check part of the cycle. Therefore, software testing is done in the check part of the PDCA cycle.

CMMI model: Capability Maturity Model Integration

There are five maturity levels in the staged representation.


Maturity Level 1 (Initial): At this level everything is ad hoc. Development is completely chaotic, with budgets and schedules often exceeded. In this scenario we can never predict quality.



Maturity Level 2 (Managed): At the managed level, basic project management is in place, but these project management practices are followed only at the project level.



Maturity Level 3 (Defined): To reach this level the organization should have already achieved Level 2. At the previous level the good practices and processes were applied only at the project level; at this level they are brought to the organization level. Standard practices are defined at the organization level which every project should follow. Maturity Level 3 moves ahead with defining a strong, meaningful, organizational approach to developing products. An important distinction between Maturity Levels 2 and 3 is that at Level 3, processes are described in more detail and more rigorously than at Level 2, and are defined at the organization level.



Maturity Level 4 (Quantitatively Managed): To reach this level the organization should have already achieved Levels 2 and 3. At this level, more statistics come into the picture. The organization controls the project using statistical and other quantitative techniques. Product quality, process performance, and service quality are understood in statistical terms and are managed throughout the life of the processes. Maturity Level 4 concentrates on using metrics to make decisions and to truly measure whether progress is happening and the product is becoming better. The main difference between Levels 3 and 4 is that at Level 3 processes are qualitatively predictable, while at Level 4 processes are quantitatively predictable. Level 4 addresses causes of process variation and takes corrective action.



Maturity Level 5 (Optimizing): The organization has achieved the goals of maturity levels 2, 3, and 4. At this level, processes are continually improved based on an understanding of the common causes of variation within the processes. This is the final level: everyone on the team is a productive member, defects are minimized, and products are delivered on time and within budget.


Black box testing is a testing strategy based solely on requirements and specifications. Black box testing requires no knowledge of internal paths, structures, or implementation of the software being tested.


White box testing is a testing strategy based on internal paths, code structures, and implementation of the software being tested. White box testing generally requires detailed programming skills.



There is one more type of testing called gray box testing. In this we look into the "box" being tested just long enough to understand how it has been implemented. Then we close up the box and use our knowledge to choose more effective black box tests.

Black box testers view the application (for example, a basic accounting application) only from the outside, while during white box testing the tester knows the internal structure of the application. In most scenarios white box testing is done by developers, as they know the internals of the application. In black box testing we check the overall functionality of the application, while in white box testing we do code reviews, review the architecture, remove bad code practices, and do component-level testing.
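To make the contrast concrete, here is a minimal sketch in Python; the shipping_fee function and its rates are made-up assumptions, not taken from any real application:

# Hypothetical function under test (names and rates are assumptions).
def shipping_fee(weight_kg):
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 10:
        return 50                          # flat rate up to 10 kg
    return 50 + (weight_kg - 10) * 8       # surcharge per extra kg

# Black box test: written only from the documented behaviour
# (input -> expected output), with no knowledge of the code inside.
def test_fee_black_box():
    assert shipping_fee(5) == 50

# White box tests: written with knowledge of the internal branches, so
# they deliberately hit the 10 kg branch point and the error path.
def test_fee_white_box_branches():
    assert shipping_fee(10) == 50    # last value on the flat-rate branch
    assert shipping_fee(11) == 58    # first value on the surcharge branch
    try:
        shipping_fee(0)
        assert False, "expected ValueError for non-positive weight"
    except ValueError:
        pass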


Usability testing is a testing methodology where the end customer is asked to use the software to see if the product is easy to use, and to observe the customer's perception and task time. The best way to capture the customer's point of view on usability is to use prototype or mock-up software during the initial stages. By giving the customer the prototype before development starts, we confirm that we are not missing anything from the user's point of view.




Configuration management is the detailed recording and updating of information for hardware and software components. When we say components we do not mean only source code; it can also be the tracking of changes for software documents such as requirements, design, test cases, etc.


When changes are made in an ad hoc and uncontrolled manner, chaotic situations can arise and more defects are injected. So whenever changes are made, they should be made in a controlled fashion and with proper versioning. At any moment we should be able to revert to an old version. The main intention of configuration management is to track our changes if we have issues with the current system. Configuration management is done using baselines.


Unit testing - Testing performed on a single, stand-alone module or unit of code.
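For example, a unit test exercises one function in isolation. A minimal sketch using Python's built-in unittest module (the add function exists only for illustration):

import unittest

# The unit under test: a single, stand-alone function (illustrative only).
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_add_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_add_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()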


Integration Tests - Testing performed on groups of modules to ensure that data and control are passed properly between modules.
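As a sketch only, an integration test wires two modules together and checks that data passes between them correctly; the OrderRepository and OrderService classes below are hypothetical:

import unittest

class OrderRepository:
    # First module: stores orders (an in-memory stand-in for a database).
    def __init__(self):
        self._orders = {}

    def save(self, order_id, amount):
        self._orders[order_id] = amount

    def find(self, order_id):
        return self._orders.get(order_id)

class OrderService:
    # Second module: business logic that depends on the repository.
    def __init__(self, repository):
        self.repository = repository

    def place_order(self, order_id, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.repository.save(order_id, amount)

class TestOrderIntegration(unittest.TestCase):
    def test_order_is_passed_from_service_to_repository(self):
        repo = OrderRepository()
        service = OrderService(repo)
        service.place_order("A-1", 250)
        # The check goes through the second module, so it proves the two
        # modules cooperate, not just that each works on its own.
        self.assertEqual(repo.find("A-1"), 250)

if __name__ == "__main__":
    unittest.main()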



System testing - Testing the integrated system with a predetermined combination of tests that, when executed successfully, shows that the requirements are met.



Acceptance testing - Testing to ensure that the system meets the needs of the organization and the end user or customer (i.e., validates that the right system was built).





Alpha and beta testing have different meanings to different people. Alpha testing is acceptance testing done at the development site. Some organizations have a different view of alpha testing: they consider it testing conducted on early, unstable versions of the software. Beta testing, in contrast, is acceptance testing conducted at the customer's end.



In short, the difference between beta testing and alpha testing is the location where the tests are done.

In some projects there are scenarios where we need to do boundary value testing. For instance, let's say for a bank application you can withdraw a maximum of 25,000 and a minimum of 100. In boundary value testing we test only at the exact boundaries rather than values in the middle of the range: the minimum, the maximum, and the values just below and just above them. This covers all the scenarios. In the bank example, the test cases at the boundaries (TC1 and TC2) are sufficient to test all conditions; TC3 and TC4 are just duplicate/redundant test cases which do not add any value to the testing. By applying proper boundary value fundamentals we can avoid such redundant test cases.
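A minimal sketch of the idea in Python, assuming a hypothetical validation rule for the 100 minimum and 25,000 maximum described above:

MIN_WITHDRAWAL = 100
MAX_WITHDRAWAL = 25000

def is_valid_withdrawal(amount):
    # Hypothetical rule matching the bank example above.
    return MIN_WITHDRAWAL <= amount <= MAX_WITHDRAWAL

def test_boundary_values():
    # Test at the boundaries and just outside them, not in the middle.
    assert is_valid_withdrawal(100) is True      # lower boundary
    assert is_valid_withdrawal(99) is False      # just below lower boundary
    assert is_valid_withdrawal(25000) is True    # upper boundary
    assert is_valid_withdrawal(25001) is False   # just above upper boundary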
In equivalence partitioning we identify inputs which are treated by the system in the same way and produce the same results. For example, TC1 and TC2 both give the same result (Result1), and TC3 and TC4 both give the same result (Result2). In short, we have two redundant test cases. By applying equivalence partitioning we minimize the redundant test cases (a small sketch follows the checklist below).


So apply the checks below to see whether a set of test cases forms an equivalence class:

  • All the test cases should test the same thing.
  • They should produce the same results.
  • If one test case catches a bug, then the other should also catch it.
  • If one of them does not catch the defect, then the other should not catch it.
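Here is a small sketch of equivalence partitioning, reusing the hypothetical withdrawal rule from the boundary value example (minimum 100, maximum 25,000); one representative value per partition is enough:

def is_valid_withdrawal(amount):
    # Same hypothetical rule as in the boundary value sketch above.
    return 100 <= amount <= 25000

# One representative per equivalence class; any other value from the
# same class would be treated identically by the system.
partitions = {
    "below minimum": 50,      # behaves like every value < 100
    "valid range": 5000,      # behaves like every value from 100 to 25000
    "above maximum": 30000,   # behaves like every value > 25000
}

def test_one_representative_per_partition():
    assert is_valid_withdrawal(partitions["below minimum"]) is False
    assert is_valid_withdrawal(partitions["valid range"]) is True
    assert is_valid_withdrawal(partitions["above maximum"]) is False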
Random testing is sometimes called monkey testing. In random testing, data is generated randomly, often using a tool or some automated mechanism, and is sent to the system.
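A small sketch of random (monkey) testing in Python; the randomly generated amounts stand in for the tool-generated data mentioned above, and the withdrawal rule is the same hypothetical one used earlier:

import random

def is_valid_withdrawal(amount):
    return 100 <= amount <= 25000

def test_random_amounts():
    random.seed(42)  # fixed seed so any failure is reproducible
    for _ in range(1000):
        amount = random.randint(-1000, 50000)
        result = is_valid_withdrawal(amount)
        # Oracle: the system must never crash and must agree with the rule.
        assert result == (100 <= amount <= 25000)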

A negative test is when you put in an invalid input and expect the application to reject it with an appropriate error.


A positive test is when you put in a valid input and expect some action to be completed in accordance with the specification.
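As a sketch, a positive and a negative test against a hypothetical withdraw function:

def withdraw(balance, amount):
    # Hypothetical implementation, used only to illustrate the two kinds of test.
    if amount <= 0 or amount > balance:
        raise ValueError("invalid withdrawal amount")
    return balance - amount

def test_withdraw_positive_case():
    # Positive test: valid input, expect the specified behaviour.
    assert withdraw(1000, 400) == 600

def test_withdraw_negative_case():
    # Negative test: invalid input, expect a clear error rather than a
    # silent wrong result.
    try:
        withdraw(1000, -5)
        assert False, "expected ValueError for a negative amount"
    except ValueError:
        pass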



Exploratory testing is also called ad hoc testing, but in reality it is not completely ad hoc. Ad hoc testing is an unplanned, unstructured, maybe even impulsive journey through the system with the intent of finding bugs. Exploratory testing is simultaneous learning, test design, and test execution. In other words, exploratory testing is any testing done to the extent that the tester proactively controls the design of the tests as those tests are performed and uses information gained while testing to design better tests. Exploratory testers are not merely keying in random data, but rather testing areas that their experience (or imagination) tells them are important and then going where those tests take them.

Regression testing is used for regression defects. Regression defects are defects that occur when functionality which was once working normally has stopped working, probably because of changes made in the program or the environment. To uncover such defects, regression testing is conducted.
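A small sketch of the idea: a regression test pins down behaviour that once worked so that a later change which breaks it is caught immediately (the discount rule below is hypothetical):

def apply_discount(total):
    # Hypothetical rule that was working in an earlier release:
    # orders of 1000 or more get a 10% discount.
    if total >= 1000:
        return total * 0.9
    return total

def test_discount_threshold_regression():
    # Kept in the suite permanently; if a later change accidentally
    # alters this previously working behaviour, the test fails and
    # exposes the regression.
    assert apply_discount(999) == 999      # below threshold: no discount
    assert apply_discount(1000) == 900.0   # at threshold: 10% off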

Test Cases for an ATM:

TC 1:- Successful card insertion.

TC 2:- Unsuccessful operation due to the card being inserted at the wrong angle.

TC 3:- Unsuccessful operation due to an invalid account card.

TC 4:- Successful entry of the PIN.

TC 5:- Unsuccessful operation due to the wrong PIN being entered 3 times.

TC 6:- Successful selection of language.

TC 7:- Successful selection of account type.

TC 8:- Unsuccessful operation due to the wrong account type being selected with respect to the inserted card.

TC 9:- Successful selection of the withdrawal option.

TC 10:- Successful selection of amount.

TC 11:- Unsuccessful operation due to wrong denominations.

TC 12:- Successful withdrawal operation.

TC 13:- Unsuccessful withdrawal operation due to the amount being greater than the available balance.

TC 14:- Unsuccessful operation due to insufficient cash in the ATM.

TC 15:- Unsuccessful operation due to the amount being greater than the daily limit.

TC 16:- Unsuccessful operation due to the server being down.

TC 17:- Unsuccessful operation due to cancel being clicked after inserting the card.

TC 18:- Unsuccessful operation due to cancel being clicked after inserting the card and entering the PIN.

TC 19:- Unsuccessful operation due to cancel being clicked after language selection, account type selection, withdrawal selection, and amount entry.
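The list above is a manual test case catalogue. Purely as a sketch, a few of these cases (TC 1, TC 4, and TC 5) could be automated against a simplified, hypothetical ATM model; none of the names below come from a real ATM interface:

class ATM:
    # Simplified, hypothetical ATM model used only for illustration.
    def __init__(self, valid_pin="1234"):
        self._valid_pin = valid_pin
        self._card_inserted = False
        self._failed_attempts = 0
        self.card_blocked = False

    def insert_card(self, readable=True):
        self._card_inserted = readable
        return self._card_inserted

    def enter_pin(self, pin):
        if not self._card_inserted or self.card_blocked:
            return False
        if pin == self._valid_pin:
            self._failed_attempts = 0
            return True
        self._failed_attempts += 1
        if self._failed_attempts >= 3:
            self.card_blocked = True   # card retained after 3 wrong PINs
        return False

def test_tc1_successful_card_insertion():
    atm = ATM()
    assert atm.insert_card(readable=True) is True

def test_tc4_successful_pin_entry():
    atm = ATM()
    atm.insert_card()
    assert atm.enter_pin("1234") is True

def test_tc5_unsuccessful_after_three_wrong_pins():
    atm = ATM()
    atm.insert_card()
    for _ in range(3):
        assert atm.enter_pin("0000") is False
    assert atm.card_blocked is True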