Sunday, 29 May 2011

Testing Tools - Manual Testing


TESTING TOOLS

SOFTWARE TESTING:
Testing is conducted to ensure the application is working as per the customer requirements.
Example: When the user enters a valid user name and password, the user is able to log in.
Example: Verify that with a valid PIN number the user is able to log in.
Testing is conducted to find the defects in the application.
Defect: Any functionality that is not working as per the requirements.
Testing is conducted to assess the quality of the software.
Quality: If the product satisfies the customer's needs, it is a quality product.
*SDLC: Software Development Life Cycle.
Any project development has to follow the below phases:
1)      Analysis.
2)      Design.
3)      Code.
4)      Testing.
5)      Release and Maintenance.
1. Analysis: It is the responsibility of the Project Manager and Business Analyst to discuss with the business users (or) SMEs [Subject Matter Experts] and collect all the project requirements.
The requirements collected are of two types.
A. Functional Requirements: Behavior of the application.
Ex: The login page has a User ID and Password; when valid details are entered the user should log in, and when invalid details are entered the user should not log in.
B. Non-Functional Requirements: Characteristics and features.
Ex: Performance, Usability, Compatibility, etc.
All the requirements collected should be documented in the FRS, SRS, etc.
FRS [Functional Requirements Specification]
SRS [System Requirements Specification]
Responsible                                                                                          Docs
Project Manager                                                                       FRS, SRS, Project Plan
Business Analyst
Business User
Q. What are the different approaches to collect requirements from the customers?
1. One-to-one discussion
2. Brainstorming session
3. JAD session [Joint Application Development]
2. Design:
After the requirements are collected, the developers or system architects prepare the design for the UI [User Interface] and the Business Logic.
Responsible                                                                             Docs
Developers / System Architects                               UI Design -> Prototype (sample design)
Project Manager                                                         BL Design -> HLD (High Level Design)
                                                                                                         LLD (Low Level Design)
Business Users                                                            Database -> ER diagram
Note: As testers we need to understand the FRS and SRS documents and validate the application as per the requirements; the design documents (HLD, LLD) are required only by the developers to write the code.
3. Coding:
It is the responsibility of the developers to write the code as per the design.
4. Testing:
After the coding, testing is conducted to validate the application as per the requirements.
Responsible                                                                 Docs
Developers - Unit Testing                                           Test Plan
Dev/Testers - Integration Testing                                Test Case Design Docs
Testers - System Testing                                             Test Log
BU - UAT (User Acceptance Testing)                        Review Reports
(Business Users)                                                        RTM (Requirements Traceability Matrix)
                                                                                 Defect Reports
                                                                                 Test Summary Reports
                                                                                 Metrics, etc.
5. Release and Maintenance:
After testing is completed, the application is released into production; any issues in the application are handled by the maintenance team or post-production team (this team also has Dev/Testers).
Project Development Environment:
Any project development has at minimum the below three environments.
1. Development Environment.
2. Test Environment.
3. Production Environment (live Env.).
Build - Any developed software.
Deploy - Install the software.
Application:
Software developed based on a specific customer's requirements.
Ex: eSeva application, IRCTC, etc.
Product:
A product is developed with generic requirements and can be used by any customer.
Ex: Windows OS, Tally, McAfee antivirus, etc.
Important Questions:
1. Explain SDLC.
2. Testing is conducted in which environment?
3. What is the difference between Product and Application?
*Software Development Models:       
Any project developed will follow the SDLC phases; based on the size of the project, budget, resources, etc., we can implement (follow) different development models.
I. Waterfall Model (can be used for small projects)
This model is easy to understand and easy to implement.
->It is a step-by-step model; after the completion of one phase, the next phase is started.
->This model is suitable if the requirements are finalized at the beginning only.
->The model is suitable for small and medium projects.
II. Spiral Model (Incremental Development Model)
1. This model is suitable for large and complex projects.
2. In this approach the high priority requirements are developed first.
3. In the next phase a few more requirements are developed and integrated into the existing application.
III. V-Model

1. Verification:
This is conducted throughout the project life cycle to ensure whether we are developing the project right or not.
Example: Reviews, Static Testing (testing conducted without executing or running the application).

2. Validation:
Conducted after coding to ensure the developed project is working right or not, as per the requirements.
Example: System testing, UAT, Integration testing, etc.
->Dynamic testing: Testing conducted by executing the application, giving input data and validating the output data.
Testing includes both verification and validation.
As per the V-Model, testing should be conducted from the beginning of the project (from the requirements phase).
Advantages of V-Model:
1. As testing is conducted at every level, defects are found at every level, so the final product is a quality product.
2. As testing is conducted at every stage, if defects are found at each stage the cost of fixing them is less.
3. As testing is conducted from the beginning, there is a better understanding of the requirements.
Levels Of Testing:
1. Requirements Reviews:
            After the requirements are prepared, reviews are conducted to ensure the correctness and completeness of the requirements.
Note: If there is any mistake in the requirements, or if any requirements are incomplete, it is communicated to the project manager.
2. Design Reviews:
            After the design is prepared, reviews are conducted to ensure the design is prepared as per the requirements, and to ensure the correctness and completeness of the design.
Design reviews are conducted by the technical manager or project manager.
3. Code Review (or) Walkthrough:
            During coding, reviews are conducted to verify the code is developed as per the design, to verify the logic of the code, to verify the coding standards, etc.
Usually code reviews are conducted by the project manager or technical manager.
4. Unit testing (or) Component testing:
->Testing is conducted on a single program (or) a single component.
->Testing is conducted by developers or white box testers.
->It is a white box testing technique.
->To perform this test, programming knowledge is required.
Following are the techniques in unit testing:
I. Statement Coverage Testing (or) Basic Path Testing:
Testing is conducted such that each statement in the program is tested at least one time.
II. Conditional statement testing:   
Verify the conditional statements in the program are working correctly.
Example:
If <condition> Then
    ______          (True block)
Else If <condition> Then
    ______
Else
    ______          (False block)
End If
III. Branch Coverage Testing:
Verify all the branches are working correctly; verify all the combinations.
IV. Path Coverage Testing:
->Verify the program execution.
->The Path of execution of the program needs to be tested.
Example: Call1 ---> call2 --->  call3 ---> call4.
V. Error Based Testing:
Intentionally keep some errors in the program and conduct testing to verify the impact of the error on the application.
VI. Mutation Testing:
Testing is conducted by making small changes (mutants) in the code and re-running the tests to verify that the existing tests detect the changes.
VII. Functional Testing:
Verify the program is working correctly and the output results are as per the customer requirements (FRS).
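The coverage techniques above can be illustrated with a small, hypothetical function; the test lines below are written so that every statement and both branches execute at least once (the function and its rule are invented purely for illustration):

```python
# Hypothetical example: a function with two branches plus an error path,
# and unit tests chosen so each statement and branch is covered.

def classify_age(age):
    """Return a category for an age value (illustrative rule only)."""
    if age < 0:
        raise ValueError("age cannot be negative")
    if age < 18:
        return "minor"   # branch 1
    return "adult"       # branch 2

# Statement/branch coverage: one test per branch, plus the error path.
assert classify_age(10) == "minor"
assert classify_age(30) == "adult"
try:
    classify_age(-1)
except ValueError:
    pass   # error path covered
else:
    raise AssertionError("negative age should raise")
```

With these three calls, every line of `classify_age` has been executed at least once, which is the goal of statement coverage.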
5. Integration Testing:
Testing is conducted by integrating two or more components within the same system (or) by integrating two different systems.
->During this test verify the data communication between the components.

->Check the performance.
->Verify the functionality.
Following are the approaches in integration testing:
1. Big Bang Integration:                                            (all at a time)
->The multiple components are tested together.
->In this approach detailed testing is not possible, and if any defect is found it takes time to analyze the reason for the defect.
2. Top Down Integration:

->In this approach testing is conducted from the main module to the sub modules.
->Verify the changes in the main module are reflected in the sub modules.
3. Bottom Up Integration:

->In this approach testing is conducted from the sub modules to the main module.
->During this test verify the changes in the sub modules are reflected in the main module.
4. Mixed Integration (or) Sandwich Integration:
It is a combination of both top down and bottom up integration.
STUB AND DRIVER:
These are temporary components used as alternatives to the sub module and main module.
Stub: In top down integration, this is a substitute for a sub module.
Driver: In bottom up integration, this is a substitute for the main module.
Note:    ->The stub and driver environment is created by the developers.
->In incremental development, bottom up integration is followed during integration.
6. Data Migration Testing:
Data migration is the transferring of data from one source to another destination; during the process we verify the source data and destination data, and verify the data transferred to the destination is converted as per the requirements.
7. Systems Testing:
->Testing is conducted by testers.
->It is a black box testing technique.
->Testing is conducted after the complete code is developed.
->During this testing we conduct both functional and non-functional testing.

I. System Functional Testing
Testing the behavior or correctness of the application as per the customer requirements.
1. User Interface Testing:
Verify whether all the required objects are available or not on the page or window.
Ex: Verify the login page has User ID, Password, Sign in, and Forgot password.
2. Object Properties Testing:
Verify the properties of an object like enabled, disabled, visibility, focus, etc.
Ex: In the email inbox, on the first page of mails the "Previous" link is disabled.
3. Input Domain Testing:
Testing is conducted to validate the input data (for example, a field that must accept a minimum of 6 and a maximum of 32 characters).
To validate the input data we can use the below techniques.
                                                                                                                                        
                                                                                                                                               
I, BVA - Boundary Value Analysis
The technique specifies validating the input data with the below inputs:
            Min                              Max
            Min-1                           Max-1
            Min+1                          Max+1
Ex: 1. To validate the password field, i.e., 6 to 32 characters, the BVA conditions are as below:
            6                                  32
            5                                  31
            7                                  33
2. The age field should accept 20 to 45
BVA Condition
            20                                45
            19                                44
            21                                46
3. The test field should accept a 3-digit number whose first digit is one.
Min - 100, Max - 199
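A minimal sketch of BVA in Python, assuming the hypothetical 6-to-32-character password rule from the first example (both function names are made up):

```python
def bva_values(minimum, maximum):
    """Return the boundary values BVA tells us to test:
    min, min-1, min+1 and max, max-1, max+1."""
    return [minimum, minimum - 1, minimum + 1,
            maximum, maximum - 1, maximum + 1]

def is_valid_length(text, minimum=6, maximum=32):
    """Hypothetical rule under test: 6 to 32 characters are valid."""
    return minimum <= len(text) <= maximum

# Exercise the rule at every boundary value.
for n in bva_values(6, 32):
    expected = 6 <= n <= 32           # 5 and 33 should be rejected
    assert is_valid_length("x" * n) == expected
```

The same helper applies directly to the age example: `bva_values(20, 45)` yields 20, 19, 21, 45, 44, 46.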
II. ECP - Equivalence Class Partitions (to validate the edit box)
Use this technique to validate the input field for valid and invalid partitions of data.
Example:
1. A Yahoo ID should accept only alphabets and numbers.
            Valid                Invalid
            0-9                   special characters
            a - z
            A - Z
2. A field should accept 1000 to 1500.
            Valid                              Invalid
            >=1000                        <1000
           <=1500                       >1500                                                                                                                                                                                                                                                 
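The Yahoo ID rule above can be sketched as follows; under ECP, one representative value from each partition is enough (the validation function is hypothetical, written only to show the partitions):

```python
import string

def is_valid_yahoo_id(candidate):
    """Hypothetical rule: only alphabets (a-z, A-Z) and digits (0-9) are valid."""
    allowed = set(string.ascii_letters + string.digits)
    return len(candidate) > 0 and all(ch in allowed for ch in candidate)

# One representative per partition:
assert is_valid_yahoo_id("User123")        # valid partition: a-z, A-Z, 0-9
assert not is_valid_yahoo_id("User@123")   # invalid partition: special characters
```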
Error Guessing
When invalid operations are performed, verify the system handles the conditions, and verify the error message displayed is meaningful and easy to understand.
DATA BASE TESTING
Verify the data is updated in the database with respect to the data transactions performed in the frontend application.

To verify the database with respect to frontend transactions:
1. Connect to the database, write the SQL query and verify the data, (or) we can verify the data in the frontend application only, in some reports.
2. In database testing we will verify:
            Data in tables,              Security,
            Data integrity,              Replication,
            Procedures,                  Data migrations,
            Triggers,                    Locks,
            Transactions,
            Performance,
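A minimal sketch of the backend check described in step 1, using an in-memory SQLite database (the table, column names, and deposit function are made up for illustration):

```python
import sqlite3

# Toy database standing in for the application's backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
conn.execute("INSERT INTO accounts (id, balance) VALUES (1, 100.0)")

def frontend_deposit(account_id, amount):
    """Stand-in for the frontend transaction under test."""
    conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                 (amount, account_id))
    conn.commit()

# Perform the frontend transaction...
frontend_deposit(1, 50.0)

# ...then query the table directly and compare with the expected value.
(balance,) = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()
assert balance == 150.0
```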
LINKS AND URL:
This test is conducted for web applications: verify the links are working correctly and navigating to the correct page or not.
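The first step of link testing, collecting every link on a page, can be sketched with the standard-library HTML parser; the follow-up step of requesting each URL and checking its HTTP status is omitted here, and the sample page is made up:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect every href on a page, so each link can then be
    requested and its HTTP status and target page verified."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = '<a href="/home">Home</a> <a href="/contact">Contact</a>'
collector = LinkCollector()
collector.feed(page)
assert collector.links == ["/home", "/contact"]
```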
OVER ALL FUNCTIONALITY
Verify the complete functionality of the application as per the requirements, for both valid combinations of data and invalid combinations of data.
II. Non-Functional Testing:
Testing the characteristics and features of an application.
1. Usability Testing:
Verify the user-friendliness of the application: easy to use, easy to operate, easy to understand, navigation (number of steps required to complete a task).
Verify the help provided in the application.
2. Performance testing:
Testing is conducted to calculate the time taken to get the response from the server to the client.
To calculate the performance, the below tests are conducted.
I. Load Testing (or) Scalability Testing:
Performance testing is conducted by gradually increasing the users at regular intervals of time.
Example: An application is accessed by 5000 users; all the users log in within 45 minutes, on average 100 users log in every 60 seconds, so the test is conducted by gradually increasing by 100 users every 60 seconds.
II. Stress Testing:
Testing is conducted by increasing the users beyond the expected load.
This test is conducted to verify the maximum users supported.
III. Soak Testing (or) Endurance Testing:
Testing is conducted to calculate the performance when the application is accessed continuously for a large number of hours, like 24, 48, etc.
Note: The above tests are not possible to conduct manually; they are conducted using tools like:
LoadRunner
JMeter
WebLOAD
WAPT              (Web Application Performance Tool)
RPT                 (Rational Performance Tester)
Silk Performer
IV. Memory Testing (or) Memory Leakage Testing:
Verify the usage of memory when accessing the application.
To verify the memory usage we can use the built-in tools available in the Windows OS like Task Manager and Performance Monitor (Start -> Run -> perfmon).
V. Volume Testing:
Testing is conducted by increasing the amount of data for processing; it is usually conducted on the database.
5 MB - time (how much time it takes to transfer)
10 MB - time (how much time it takes to transfer)
Performance of the application depends on
(a)    Configuration of the systems.
(b)   Network speed.
(c)    Load on the server.
(d)   Application Architecture.
During performance testing we will calculate:
1. Response Time:
The time taken to get the response from the server to the client.
2. Hits/Second:
The number of requests received by the server in one second of time.
3. Throughput:
The amount of data processed by the server (in one second).
4. Elapsed Time:
The total time taken to complete the transaction.
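These four metrics can be computed from a request log; the log entries below are made-up numbers used only to show the arithmetic:

```python
# Hypothetical request log; each entry is
# (start_time_sec, response_time_sec, bytes_processed).
request_log = [
    (0.0, 0.8, 2048),
    (0.2, 1.1, 4096),
    (0.9, 0.7, 1024),
]

# 1. Response time (average over all requests).
response_times = [r for _, r, _ in request_log]
avg_response_time = sum(response_times) / len(response_times)

# 4. Elapsed time: from the first request start to the last response end.
elapsed_time = max(s + r for s, r, _ in request_log) - min(s for s, _, _ in request_log)

# 2. Hits/second: requests handled per second of elapsed time.
hits_per_second = len(request_log) / elapsed_time

# 3. Throughput: bytes processed per second of elapsed time.
throughput = sum(b for _, _, b in request_log) / elapsed_time
```

With the sample numbers, elapsed time is 1.6 s, so hits/second works out to 3 / 1.6 = 1.875.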
3. Compatibility Testing:
Testing the application's behavior in different environments.
Example:
1.      Testing the web application on different browsers, i.e.,
Firefox, IE9, Google Chrome, etc.
2.      Testing the application on different OS, i.e.,
Windows XP, Windows 2000, Vista, Windows 7, etc.
There are two types of compatibility test
1. Forward Compatibility:
The application is supported on higher versions.
2. Backward Compatibility:
The application is supported on previous versions.
4. Security Testing:
This test is conducted to ensure only authorized users are able to access the system.
To verify the security of the system:
(a) Authentication:
A valid user name and password are required to access the application.
(b) Authorization:
Verify the permissions for the different users accessing the application.

To conduct this test, log in each time as a different user and verify the permissions.
(c). Cookies:
This is a temporary file created in our local system when we connect to a website; usually these files should expire when we disconnect from the website or sign out.
Note: If the cookie file has not expired, next time the same website is opened it will log in directly, without asking for user details (ID and Password).
To test the cookies:                                                                  Expected
1. After sign out / disconnecting from the website,
log in next time                                                                   Should ask for login details
2. After login, copy the URL and paste it in another browser                  -do-
(d) Session:
A session is created on the server for each user connection; it has the user's transaction details. The session automatically expires when the user disconnects from the website (or) if the connection is idle for some time.
To test the session:
1. Keep the connection idle for some time and verify the session is expired.
2. After sign out, click on the back button and verify it is not navigating to the previous page.
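The idle-timeout behavior in test 1 can be sketched with a toy in-memory session store; the class, timeout value, and user names are all invented for illustration (real servers use timeouts of minutes, not fractions of a second):

```python
import time

SESSION_TIMEOUT = 0.1  # seconds; shortened so the example runs quickly

class SessionStore:
    """Toy session store: a session expires after being idle too long."""
    def __init__(self):
        self.sessions = {}

    def create(self, user):
        self.sessions[user] = time.monotonic()

    def is_active(self, user):
        started = self.sessions.get(user)
        if started is None:
            return False
        if time.monotonic() - started > SESSION_TIMEOUT:
            del self.sessions[user]   # idle too long: expire the session
            return False
        return True

store = SessionStore()
store.create("alice")
assert store.is_active("alice")       # fresh session is active
time.sleep(0.2)                       # keep the connection idle
assert not store.is_active("alice")   # session expired, as the test expects
```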
SQL Injection
Testing is conducted by giving special characters like ', ?, --, %, etc. in the login screen's User ID field.
We expect these types of characters are not accepted.
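On the development side, the usual defense against SQL injection is a parameterized query; a sketch with SQLite showing the classic injection string being treated as plain data (the table and credentials are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

def login(name, password):
    """Parameterized query: user input is bound as data, never
    spliced into the SQL text, so injection strings stay inert."""
    row = conn.execute(
        "SELECT 1 FROM users WHERE name = ? AND password = ?",
        (name, password)).fetchone()
    return row is not None

assert login("alice", "secret")            # normal login works
assert not login("alice", "' OR '1'='1")   # injection attempt is rejected
```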
Encryption and Decryption:
Encryption: Converting the data into a non-understandable format.
Decryption: Converting the encrypted data back into an understandable format.
Verify that sensitive input details like passwords, PIN numbers, etc. are encrypted.
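In practice, stored passwords are usually hashed (a one-way transformation) rather than reversibly encrypted; a minimal sketch with `hashlib` (a real system would also add a per-user salt, omitted here for brevity):

```python
import hashlib

def store_password(password):
    """Hash the password before storage so the plain text is never kept."""
    return hashlib.sha256(password.encode()).hexdigest()

stored = store_password("MyPin1234")
assert stored != "MyPin1234"                 # not readable in storage
assert len(stored) == 64                     # SHA-256 hex digest length
assert store_password("MyPin1234") == stored # verification: re-hash and compare
```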
Penetration testing (or) Pen testing:
Try to access the system with different combinations of input to hack the system and get the confidential data.
Apart from the above tests, verifications are conducted like checking the server, who is given access, the firewalls, and whether a process is installed for scanning the incoming and outgoing data, etc.
Note: Based on the security standards implemented, the company can apply for BS 7799 certification, which is given for security standards.
Recovery Testing:
Testing is conducted after a system failure is resolved, to verify whether it is able to recover the data or not.
To perform this test, intentionally make the system fail, for example:
->Restart the system.
->Unplug the network connection during a process.
Ex: The MS Office document recovery pane (at the left side) and Gmail auto-saving of mail come under recovery testing.
Compliance (or) Conformance Testing:
Testing the application as per the standards.
Example: Verify the keyboard is manufactured as per the standards.
Installation Testing:
During installation verify:
a.       The steps followed in the installation.
b.      It should ask for the location to install the software.
c.       Before installation it should check for the available space and required space.
d.      Verify the complete application is installed properly without any errors. (If required, install the software on different environments also.)
e.       After installation and uninstallation, verify the existing system is not affected.
f.       Also verify the updates (installing the patches).
g.       After installing the software, if it has to communicate with other software, verify the communication (Interoperability Testing).
Note: For any type of project, functional testing is required; the non-functional tests may or may not be required depending on the project.
Test Factors:
There are 15 test factors, based on our project we need to identify the factors required for testing.
1.      Correctness ------------------------------------------- Functional Testing
2.      Authentication ---------------------------------------- Security
3.      Authorization ----------------------------------------- Security
4.      Audit trail ------------------------------------------- Memory Testing
5.      Performance ------------------------------------------- Load, Stress, Soak, etc.
6.      Ease of use ------------------------------------------- Usability
7.      Ease of operation ------------------------------------- Usability
8.      Service levels ---------------------------------------- Usability (Navigation)
9.      Coupling ---------------------------------------------- Compatibility
10.  Reliability ---------------------------------------------- Recovery Testing
11.  Maintainability ------------------------------------------ Recovery Testing
12.  Continuity of processing --------------------------------- Recovery Testing
13.  File integrity ------------------------------------------- Interoperability Testing
14.  Compliance ----------------------------------------------- Compliance
15.  Portability ---------------------------------------------- Compatibility
UAT (USER ACCEPTANCE TESTING)
->After completion of system testing, before releasing the application into production, the business users validate the application as per their requirements.
There are two levels of UAT
1.      Alpha Test / Pre UAT
2.      Beta Test / Post UAT
Alpha Test                                                                                 Beta Test
1. Testing is conducted internally in the              1. Testing is conducted in the customer's
development company.                                                       company.
2. Testing is conducted in the test Env.               2. Testing is conducted in the real-time Env.
3. It is conducted for applications and products.  3. It is conducted for products.
4. During UAT, if any defects are found, these
defects are fixed by the developers and tested
once again; after all the tests pass, the application
is implemented in production.
Types of Reviews:
There are four types of reviews.
1. Formal Review: Any review conducted as per a planned schedule, like requirements reviews, test case reviews, management reviews, etc.
2. Informal Review: Any review conducted not as per a planned schedule.
3. Walkthrough: Going through the document, code, application, etc. and explaining it to the team members.
Example: Requirements walkthrough, code walkthrough.
4. Inspection: These are conducted by the senior management to verify whether the team members are following the process, standards, etc.
Q. What is non-conformance (NC)?
A. Any deviation from the process.
Testing Terminologies (or) Types of Testing
1. Adhoc testing:
->Testing the application functionalities randomly, with knowledge of and experience with the application.
->This type of test is performed on an already tested application to confirm the functionality.
Note:
Q. If less time is provided what is your testing approach?
A. Identify the high priority functions and conduct the testing based on priority, i.e., high priority functionality is tested first, next medium and then low priority.
Q. If Req. documents are not available, what is the testing approach?
A. Understand the existing application, discuss with the project manager and developers to understand the functionalities; when the build is given for testing, conduct Exploratory Testing.
2. Exploratory Testing:
Testing the application while learning its new functionalities at the same time.
3. Sanity Testing / BVT (Build Verification Testing)
->Whenever the build is given, this is the first test conducted, to check whether the build is working or not.
->During this test all the required basic functionality is tested (only positive testing, basic testing).
->During this test, if any show stopper defects are found we reject the build (because of the defect we cannot continue testing).
->Whether the sanity test passes or fails, it is informed to the developers.
3. Re-Testing:
Testing the same functionality with different combinations of input data.
Example: - Deposit Module
1.      Verify with cash deposit.
2.      Verify with cheque deposit.
3.      Verify with deposit amount in another bank.
4.      Verify to deposit > 50,000/-
*4. Regression Testing:
->If changes are done to the existing build, this test is conducted on the modified build to verify the changes are working correctly and that there are no side effects because of these changes.
->Testing is conducted on the changed functionalities and the dependent functionalities.
Example:
1.      Test the pen.
2.      Leakage of ink.
3.      Color.
4.      Clarity of writing.
5.      Writing on different objects.
6.      Comfortable to hold.
7.      Dimension of cap.
8.      Any breakage.
9.      Writing in different climates.
Defect found: there is leakage of ink. After the fix, regression testing is done on 1, 2, 4, 5.
->There are two types of regression:
1.      Partial Regression.
2.      Full Regression.
I. Partial Regression: Testing the changes and their dependencies.
II. Full Regression: Testing the complete functionality of the application, usually when the environment is changed.
Note: Maintenance projects include only regression testing.
5. White Box Testing: (Glass box testing / Open box testing / Clear Box testing)   
Testing the application functionality with knowledge of the internal code (programming skills required).
Example: - Unit Testing, Code Review Etc.,
6. Black Box Testing: (Functional Testing / Closed testing)
Testing the functionality of the application without any programming skills, verifying the application as per the customer requirements.
Example: - System Testing, UAT
*7. Gray Box Testing:
Testing the application functionality with knowledge of the technology/environment.
Example: We can verify the permissions for different users by moving the same user to different groups, without creating multiple users.
8. Parallel Testing:
Testing the same functionality on another environment or another version.
Example: When testing the new version, if any defect is found in the old functionality, we can verify the same functionality on the previous version.
9. Concurrent Testing: (Multiple)
Testing the functionality of the application when more than one user accesses the same functionality at the same time.
Note: When one request is processing the data, the other request should wait in the queue, i.e., two requests cannot update the same data at the same time.
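The queueing behavior in the note can be sketched with a lock, which forces concurrent requests to update the shared data one at a time (the balance and withdrawal amounts are made-up values):

```python
import threading

balance = 100
lock = threading.Lock()

def withdraw(amount):
    """Two concurrent requests must not update the same data at once;
    the lock makes each later request wait in the queue."""
    global balance
    with lock:
        current = balance        # read...
        balance = current - amount  # ...and write, atomically under the lock

# Ten concurrent "users" each withdraw 10 from the same account.
threads = [threading.Thread(target=withdraw, args=(10,)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert balance == 0   # every withdrawal applied exactly once
```

Without the lock, two threads could read the same `current` value and one withdrawal would be lost, which is exactly the defect concurrent testing looks for.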
10. End to End Testing:
Testing the complete scenario from the first module to the last module; also, if the application is integrated with another application, the integration also needs to be tested.
11. I18N Testing: (Internationalization Testing)
Testing the application for different international languages; during this, verify the content, time settings, currency, etc.
12. Localization Testing:
Similar to I18N testing; testing the application for the local language of a specific region.
13. Smoke Testing:
This is similar to a sanity test; after the build is created, this test is conducted to verify:
a)      All the requirements are included in the build or not.
b)      The build is properly created or not.
c)      The build can be given for testing or not.
If the build is created successfully and working, the build is released.

14. Risk Based Testing: (This is an approach)
In this approach, identify the requirements which are high risk, more complex, etc.; these requirements are tested first and then the low risk requirements are tested.
15. Bench Mark Testing:
->Testing approach based on expected goals and objectives.
->To specify the benchmark, take the specifications of existing applications; this is the benchmark for new versions.
The benchmark standards can be taken from an existing application, based on customer needs, (or) based on a competitor product.
Sign Off                       = Approved (or) agreed.
Kick-off meeting         = A short meeting before starting the project.
Workaround              = Alternative approach.
Monkey test                = Testing the application without any knowledge of it.
(This type of test is generally conducted to understand the usability of the application.)
16. Gorilla Test: Testing the same functionality thoroughly, in detail.
17. Buddy Test: Two or more testers testing the same functionality together.
Agile Development Models:
In Agile development there are different models:
1)      Scrum Model
2)      XP Model (extreme programming)
1. Scrum Model:
Objectives:
1)      All the team members work together to deliver a quality project.
2)      Daily stand-up meetings are conducted to update the status of the project.
3)      Management is involved continually throughout the project life cycle.
4)      Weekly releases are expected.
5)      Any large requirement is split into units so that development and testing can be completed in a week's time.
6)      Importance is given to the quality of the project, not to the process.
7)      Based on the current status, weekly plans are prepared.
8)      As all the team members are together, there is a better understanding of the requirements and any issues are openly discussed.
9)      Every team member is given responsibility as an ownership.
STLC: SOFTWARE TESTING LIFE CYCLE
Test Planning   ----------------------------------------------   Test Lead
Test Case Design
Test Execution                                                                        Testers
Defect Reporting & Tracking
Test Closure
The testing life cycle explains the phases involved from the beginning of the project.
1. Test Planning:  
When we get a project for testing, it is the responsibility of the test lead to prepare the test plan.
Test lead:
Scope of testing
Resources
Schedule
Approach
Risks
Test factors, etc.
2. Test case design:
Testers
1.      Understand the application functionality (FRS, SRS).
2.      Identify the scenarios.
3.      Design the test case.
4.      Review TCs
5.      Prepare RTM (Requirement Traceability Matrix).
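The RTM mentioned in step 5 can be kept as a simple mapping from requirements to the test cases that cover them, so uncovered requirements are easy to spot; the requirement and test case IDs below are made up:

```python
# Toy Requirement Traceability Matrix: requirement -> covering test cases.
rtm = {
    "REQ-001 login with valid credentials": ["TC-01", "TC-02"],
    "REQ-002 reject invalid credentials":   ["TC-03"],
    "REQ-003 forgot-password link":         [],   # no coverage yet
}

# Traceability check: every requirement should map to at least one test case.
uncovered = [req for req, cases in rtm.items() if not cases]
assert uncovered == ["REQ-003 forgot-password link"]
```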
3. Test Execution:       Testers
1.      Deploy the build in the test environment.
2.      Execute the test cases.
3.      Validate the application.
4. Defect Reporting and Tracking:
1.      Report the defects to the developers.
2.      Track the status of the defects.
5. Test closure:
Testers, Test lead
1.      Stop testing based on
a.       If all the functionality is tested.
b.      No new defects are found, etc.
Test Planning:
To prepare the test plan, the lead should have the following information:
A)    Application functionality.
B)     No. of requirements, complexity of requirements.
C)     Current status of a project.
D)    Developer schedules, project release dates.
E)     Services provided.
F)      Scope of testing.
G)    Environment Requirement.
H)    Responsibility of team members.
The test plan document has the below sections:
Introduction
Features to be tested
Features not to be tested
Pass/Fail criteria
Entry and Exit criteria
Approach
Test Factors
Resources
Test Schedule
Deliverables
Test Environment
Configuration Management
Training Plan
Risk and Mitigation Plan
Introduction
Specifies the purpose, the requirements in the project, the project architecture, etc.
Features to be tested (In scope or what to do)
Specify the requirements which are to be tested.
Features not to be tested (out of scope or what a 3rd party will do)
Specify the features which are not to be tested.
Pass/Fail criteria (conditions)
Specify the conditions to accept the build for testing or reject the build.
Example: - Suspension criteria.
1)      Sanity test fails.
2)      The wrong build is given.
3)      The build is unstable.
4)      The build deployed in the test environment is not working.
Note: The build is suspended; once the developers resolve the issue, the build is accepted and testing continues.
Entry and Exit criteria
1. Entry Criteria: The conditions to start testing.
2. Exit Criteria: The conditions to stop testing.
For all levels of testing like unit, integration, system, etc., there are entry and exit criteria.
Example: - System testing
Entry Criteria:
1)      Integration testing should be completed.
2)      The test environment should be ready, and the required access is provided.
3)      The defects found in Integration test should be fixed.
4)      All the requirements are developed.
Exit Criteria:
1)      All the functionality is tested.
2)      No new defects are found.
3)      All the defects are fixed.
4)      No risk in the project.
5)      Testing schedule is completed.
Approach:
Specify the testing approach followed in the project.
                        Planning           Design             Execution        Reporting
Responsible     ----                   ----                   ----                   ----
Activities         ----                   ----                   ----                   ----
Deliverables     ----                   ----                   ----                   ----
Test Factors:
Specify the test factors required for our project like compatibility, performance, usability, security, etc.
Resource: 
Specify all the project team members like developers, testers, project manager, etc., their responsibilities, and the contact persons for each team (SPOC - single point of contact).
Test Schedule:
The testing schedule is prepared based on development schedule.
S.No    Task                          Planned Start    Planned End    Actual Start    Actual End    Remarks
1          Planning                    ---                    ----
2          Understand Req.       ---                    ----
3          Design TCs               ---                    ----
4          System Test              ---                    ----
5          UAT                          ---                    ----

Deliverables:
Specify the documents to be delivered to the customer during and after testing:
Test plan
Test cases
Defects Reports
Metrics (to measure quality)
Test log reports
Test summary reports
Test Environment
Specify the environment required to access the application and conduct testing, like the required hardware, additional software, OS, etc.
Configuration Management:
Specifies how to manage all the project-related documents, code, etc., so that all of these are accessible to all the team members in the project.
For managing all these documents, configuration management tools like
VSS - Visual SourceSafe
CVS - Concurrent Versions System can be used.
The advantage of using a tool is that only authorized users can access the documents; the tool also provides version control - each time a file is updated, it maintains a new version.
Training Plan:
Specify if any training is required for the team members.
Risks and Mitigation: 
Specify the possible risks in the project.
Following are the different risks in the project
1)      Req. document are not available.
2)      Estimations are not present.
3)      Lack of skilled resources.
4)      Lack of coordination in the team.
5)      Frequent changes in the req.
Approvals:
After the review of the test plan, the manager has to approve the plan.
During test planning, the testing team is formed based on the skills required for the project.
Test Case Design:
It is the responsibility of the testers to design the test cases for the functionalities to be tested.
To design the test cases, the following approach is used:
1)      Understand the application functionality (study the FRS & SRS).
2)      Identify the scenarios.
3)      Design the test cases for the scenarios.
Q, Test Conditions to create a new file in notepad
1)      File menu -> New.
2)      Ctrl + N.
3)      Alt + F + N.
4)      Verify creating a new file without saving the existing data.
Test Case:
1)      These are the conditions required to validate the functionality of the application.
2)      By preparing the test cases:
a)      We will not miss any conditions.
b)      Reporting is easy, like the number of passed and failed conditions.
c)      It is easy for new team members to continue.
Test case design format:
Project Name:
Module:
Date:
Author:
Pre Conditions:
Test Case ID    Test Name    Priority   Steps   Description   Expected   Actual   Status   Test Data
Note: The above fields in the test case format can change from company to company.
Example: In a bank, the deposit form format is different from one bank to another bank.
Project Name: The name of the project we are working on.
Module: The name of the module we are testing.
Date: The date on which we test.
Author: The name of the person who tests the application (or) product.
Pre Conditions: The conditions to be satisfied to test the requirement.  
Example: To test the inbox
1)      User should be able to login successfully.
2)      A few mails should exist in the inbox.
Example: To test the deposit module
1)      The cashier user should be able to login.
2)      The customer A/c should exist and be active.
Test case ID: A unique number to identify the test case.
Example: A test case ID in the below format:
TC_Project_Module_001
TC_Gmail_Inbox_001
TC_Gmail_Inbox_002
TC_Gmail_Inbox_003
Test Name: The conditions to be tested (What to test).
Priority: Importance of the test cases.
P0 = High
P1 = Medium
P2 = Minor
Note: During execution, if less time is provided, first we will validate P0 cases, then P1, and then P2 cases.
Description: Specify the navigation steps how to test the functionality.
Expected: Specify the output result as per the requirements.
Actual: The output results in the application.
Status: Pass / Fail is based on expected and actual; if expected and actual are the same, the status is Pass.
Test Data: The input data required for testing.
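As an illustrative sketch only (the field names mirror the format above; nothing here is a standard library), the test case record and its derived Pass/Fail status could be modeled like this:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    # Fields mirror the manual test case format described above.
    test_case_id: str     # e.g. TC_Gmail_Inbox_001
    test_name: str        # the condition to be tested
    priority: str         # P0 (high), P1 (medium), P2 (minor)
    steps: str            # navigation steps
    expected: str         # result as per the requirements
    actual: str = ""      # result observed in the application
    test_data: str = ""   # input data required for testing

    @property
    def status(self) -> str:
        # Status is derived by comparing expected and actual results.
        if not self.actual:
            return "No Run"
        return "Pass" if self.expected == self.actual else "Fail"

tc = TestCase("TC_Gmail_Inbox_001", "Verify login with valid details",
              "P0", "Enter user ID and password; click Sign In",
              expected="User lands on inbox", actual="User lands on inbox")
print(tc.status)  # Pass
```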
General Test Cases for any GUI application:
1. Verify the objects display in the page or window.
Example: Verify the login page display i.e. ID, PW and Sign In
2. If any combo boxes or list boxes
a)      Verify the values.
b)      Verify the values are not duplicated.
c)      Verify the selection of values.
d)      Verify the functionality based on the selection of values.
3. If any edit boxes
a)      Verify the input data as per the requirement, using BVA, ECP, etc.
4. If any check boxes or radio buttons, verify the selection
a)      Based on the selection, verify the functionality.
5. Verify the functionality of the application by giving valid data and invalid data.
6. If any date field
a)      Verify the format of the date.
b)      Verify for future and past dates, etc.
Example:-
1. Test case for railways passenger reservation (www.indianrail.gov.in)
1)      Verify the objects in the page, i.e., source and destination.
2)      Verify the selection of values in source and destination.
3)      Verify the values in the class.
4)      Verify to select the values in the class.
5)      Verify the data format.
6)      Verify the values in departure and arrival time.
7)      Verify the values in train type.
8)      Verify the selection on train type.
9)      Verify one way journey.
10)  Verify the return journey.
11)  Verify to get details by giving all valid details.
12)  Verify to get details for the same source and destination stations.
13)  Verify to get details for past dates.
14)  Verify to get details for future dates.
15)  Verify to get details for one ways, return journey.
16)  Verify to clear the details.
17)  Verify the calendar control.
18)  Verify to get details without any required data.
2. Write test case for yahoo registration page.
3. Write a case to create a triangle for the given three sides.
1)      Enter 3 different sides (5,9,8)                                        Scalene triangle
2)      Enter 2 sides the same (5,5,8)                                       Isosceles triangle
3)      Enter 3 sides the same (5,5,5)                                       Equilateral triangle
4)      Enter sides where the square of the 3rd side = the sum of the squares of the other 2 sides (3,4,5)   Right-angle triangle
5)      Enter 3 sides where the sum of any two sides <= the 3rd side (1,2,3)  Does not create a triangle
6)      Enter negative numbers                                                Does not create a triangle
7)      Enter decimal numbers                                                Does not create a triangle
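The triangle conditions above can be sketched as a small classification function; this is a hypothetical implementation for illustration, matching the expected results listed:

```python
def classify_triangle(a: float, b: float, c: float) -> str:
    """Classify a triangle from its three sides."""
    # Non-positive sides cannot form a triangle.
    if a <= 0 or b <= 0 or c <= 0:
        return "Not a triangle"
    # Triangle inequality: the sum of any two sides must exceed the third.
    if a + b <= c or b + c <= a or a + c <= b:
        return "Not a triangle"
    if a == b == c:
        return "Equilateral"
    if a == b or b == c or a == c:
        return "Isosceles"
    # Pythagorean check for a right-angled triangle.
    x, y, z = sorted([a, b, c])
    if abs(x * x + y * y - z * z) < 1e-9:
        return "Right angle"
    return "Scalene"

print(classify_triangle(5, 9, 8))   # Scalene
print(classify_triangle(5, 5, 8))   # Isosceles
print(classify_triangle(5, 5, 5))   # Equilateral
print(classify_triangle(3, 4, 5))   # Right angle
print(classify_triangle(1, 2, 3))   # Not a triangle
```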
4. Write a test case for a calculator (2 +ve, 2 –ve)
Positive case
1)      Verifying the functionality of all keys
2)      Verifying the basic calculations (+,-,*,/)
Negative case
1)      Verifying divide with zero
2)      Verifying multiply with negative value
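A minimal sketch of these calculator test cases in code, assuming a hypothetical `calc` function as the application under test:

```python
def calc(a, b, op):
    # Minimal calculator under test (illustrative only).
    if op == "+": return a + b
    if op == "-": return a - b
    if op == "*": return a * b
    if op == "/":
        if b == 0:
            raise ZeroDivisionError("cannot divide by zero")
        return a / b
    raise ValueError(f"unknown operator: {op}")

# Positive cases: verify the basic calculations.
assert calc(2, 3, "+") == 5
assert calc(7, 4, "-") == 3
assert calc(6, 5, "*") == 30
assert calc(8, 2, "/") == 4

# Negative cases: divide by zero must raise an error;
# negative operands must still produce correct results.
try:
    calc(1, 0, "/")
    raise AssertionError("expected ZeroDivisionError")
except ZeroDivisionError:
    pass
assert calc(3, -2, "*") == -6
print("all calculator test cases passed")
```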
5. Write stress cases to perform stress testing on mobile phone
1)      Perform more than one transaction at a same time.
2)      Send MSG to all contacts.
3)      Try to download large size of data.
6, Write test cases for Coffee Machine, Electronic Doors, Mobile Phone, ATM.
Test Case Review
After the test cases are prepared, reviews are conducted to ensure the completeness of the test cases.
First the tester has to do reviews (Self review) and finally the test lead reviews the test cases and approve.
Peer Review. (Co-employee review)
In few projects before test lead review, the reviews are conducted within the team.
Traceability Matrix
In this document the test cases are mapped to the corresponding requirements; this document helps to ensure
a)      Coverage of test cases.
b)      Identify any gaps between requirements and test cases
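A minimal sketch of a traceability matrix with a coverage and gap check; the requirement and test case IDs are illustrative:

```python
# Each requirement is mapped to the test cases that cover it.
rtm = {
    "REQ_Login":   ["TC_Gmail_Login_001", "TC_Gmail_Login_002"],
    "REQ_Inbox":   ["TC_Gmail_Inbox_001"],
    "REQ_Compose": [],   # a gap: requirement with no test cases yet
}

# Coverage check: report requirements not covered by any test case.
gaps = [req for req, cases in rtm.items() if not cases]
coverage = (len(rtm) - len(gaps)) / len(rtm) * 100
print(f"Coverage: {coverage:.0f}%, gaps: {gaps}")
```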
Test Execution:
→ Before receiving the build, as per the schedule, we have to complete the test design.
→ After the build is given, we conduct the test execution, validating all the functionalities.
Build Release:
1)      A developer creates a build.
2)      The build is copied on to the server so that it is accessible to all the project team.
3)      The developer will create the below document and give it to the concerned team responsible for deploying the build in the test environment.
4)      Below document is given when the build is released
SRN (Software Release Notes) / DD (Deployment Document)
Build Location:                       (path of the build)
Build Version:                         (build version number)
Requirements / Modules:         (what functionalities are developed)
Preconditions:                         (system requirements, SW/HW)
Known Issues:               (any functionality the developers know is not working)
Defect Fixes:              (IDs of the defects fixed)
Date: ------------
Prepared by: ---------------
1)      The build is deployed in the test environment as per the instructions in the release notes.
Sanity Testing:
After the build is deployed successfully, this test is conducted to verify the basic functionalities and confirm whether the build is stable or not.
For sanity test we have to prepare list of test cases, any tester conducting sanity test will check the same test cases. 
Note: For each requirement, identify the important test cases from the already existing test cases and prepare the sanity test list.
Example: Sanity test cases per module
Registration    1
Login page      1
Inbox             3
Compose       1
Settings         1
Total              7      (7 test cases for sanity testing)
Test Execution:
After sanity test is passed continue testing and validate the functionalities.
If we conduct the functional testing manually, it is called manual testing.
If the same functionality is validated using tools, it is called automation testing.
1. During execution validate the application as per expected and specify actual and status.
2. The test case status can be
Passed  -        Expected (req.) and Actual (app.) are the same.
Failed   -        Expected (req.) and Actual (app.) are not the same.
NA       -        The test case is invalid (e.g., when testing a new version, a previous version's test case becomes Not Applicable).
No Run -        Pending / not able to run (these TCs might depend on previous TCs which failed).
Incomplete -  A few steps are done and the remaining are pending.
Defect Reporting: If any test case fails, or if any functionality is not working as expected, we report a defect. The defect report has the below fields:
Defect Id   Summary   Date Found   Detected By   Assigned to    Severity   Status   Builds   Module  

Defect type   Environment   Reproducibility Repro Steps   Expected   Actual   Comments

1. Defect ID: Unique number for each defect (i.e. 1, 2, 3, 4, 5….)
2. Summary: Description of the defect
Ex: - Delete mail is not working, attachment >20 is not working.
3. Detected By: Name of the tester (or) Business user who found the defect.  
4. Assigned to: Currently the defect is assigned to whom.
5. Severity: Specifies the seriousness (or) impact of the defect on the application; it has the values High, Medium, Low.
6. Priority: Specifies the importance of the defect, it has values high, medium, low (or) 1 to 100%
Example                                                                                  Severity           Priority
1, Login not working                                                               High                 High
2, Logo of the company not correct                                        Low                 High
3, Change password not working                                           High                 Low
4, Deleting one mail works, but deleting multiple mails fails     Medium           High
5, UI defects (spelling, alignment)                                           Low                 Medium/Low
The priority is given by developers, the defects are fixed by developers.
Note: If the functionality is important, if any defect found priority is high.
7. Status: - Specify the status of the defect.
1)      New:    A new defect is reported.
2)      Open:   The developer has accepted the defect and is working on it.
3)      Fixed:  The developer has fixed the defect.
4)      Deferred:   The defect is postponed (will be fixed later).
5)      Duplicate: The defect is the same as one of the previous defects.
6)      Rejected:      The developer is not accepting this as a defect.
7)      Closed:      After regression testing, the functionality is working correctly.
8)      Reopen:    After regression testing, the defect still exists.
Note: If a developer rejects a defect, check the reason and recheck once again; if the developer is correct, close the defect, else reopen it after discussion.
Note: If the developer says he won't fix the defect, discuss with the test lead and project manager.
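The status flow above can be sketched as a simple state machine; the transition table below is one illustrative reading of the life cycle, not a standard:

```python
# Allowed defect status transitions (an illustrative reading of the
# life cycle described above).
ALLOWED = {
    "New":       {"Open", "Rejected", "Duplicate", "Deferred"},
    "Open":      {"Fixed", "Deferred"},
    "Fixed":     {"Closed", "Reopen"},   # decided after regression testing
    "Reopen":    {"Fixed"},
    "Rejected":  {"Closed", "Reopen"},   # recheck, then close or reopen
    "Deferred":  {"Open"},
    "Duplicate": {"Closed"},
    "Closed":    set(),                  # terminal state
}

def transition(current: str, new: str) -> str:
    # Reject any move the life cycle does not allow.
    if new not in ALLOWED.get(current, set()):
        raise ValueError(f"invalid transition: {current} -> {new}")
    return new

# A defect that is fixed, fails regression once, then is fixed and closed.
status = "New"
for step in ["Open", "Fixed", "Reopen", "Fixed", "Closed"]:
    status = transition(status, step)
print(status)  # Closed
```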
8. Build Version: - Specify the build version the defect is found.
9. Module: - The module in which the defect is found.
10. Defect type: - specify the type of defect like
Functionality Defect                 Data Base Defect
User Interface Defect              Environment Defect
Performance Defect
11. Environment: - In which Environment the defect is found like test, UAT, Production.
12. Reproducibility: - It has the values Yes (or) No.
            Yes: The defect occurs every time.
            No: It works correctly many times; only in a few cases does it fail.
13. Repro Steps: - It specifies the navigation steps how to reproduce the defect.
Note: Usually defect reporting is done using tools like Quality Center, Bugzilla, Jira, BugTracker, etc.
Regression Testing: After the developers fix the defects, a modified build is given; conduct the regression test, and if the fix works correctly, close the defect, else reopen it.
Test Closure: After the testing is completed we can plan to stop the testing.
→ We can stop testing if no new defects are found.
→ No risks in the project.
→ The schedule is completed, etc.
→ When the exit criteria conditions are satisfied.
Defect (or) Bug Life Cycle:
I. Defect, bug, issue, ticket - all are the same: any functionality not working as expected.
II. Error: - Any mistake in the program, there are three types of errors.
1)      Syntax error: Any mistake in the program not as per the programming language.
2)      Runtime error: During execution error is displayed.
3)      Logical Errors: The program executed successfully, the output is not as per expected.
III. Failure: - It is the terminology used by the customer if it is not satisfying customer needs.
IV. Defect Injection: - Additional defects introduced into the application, usually because one defect fix has side effects.
V. Defect leakage: - The ratio of defects escaped from one level of testing to next level.
                        Unit---------- Integration -----------system testing
VI. Defect Triage: - A meeting conducted when there are many open defects and less time for fixing them; it identifies the defects which are possible to fix within the time and those which are not.
Note: The defects identified for fixing are approved by the customer.
VII. Defect Clustering: - Defects are found in groups.
VIII. Defect Masking: - The defects are hidden, i.e., a show-stopper defect hides the other defects in the application.
Quality Control and Quality Assurance:-
1)      Testing the software, finding the defects, reporting the defects, and ensuring the application works as per the requirements is quality control.
2)      Defining the process in the organization, and redefining the process if required, is the responsibility of QA.
3)      QC works at the project level; QA is responsible at the organization level.
4)      QA is responsible for preventing the defects; QC is responsible for finding the defects.
USE CASES:
It explains the functionality of the requirements.
There are two types of use cases
1. Diagrammatic Use Case: - The functionality is represented in a diagrammatic format.
Ex: - ATM administration (use case diagram not included here).
2. Technical Use case: - The technical use case document has the below details.
I. Use case ID: A unique number for every use case.
II. Use case name: Name of the requirement (use case).
III. Reference: The source of information to prepare these use case
IV. Actors: The users have permission to work with the requirement.
V. Goal: The object (or) Purpose of the requirement.
VI. Importance: Importance of the requirement - high, medium, or low.
VII. Frequency of use: How frequently the requirement is accessed.
VIII. Pre conditions: The conditions to be satisfied to work with the requirement.
IX. Description: Specify all the business rules, functionalities of the requirements.
X. Post Conditions: The expected output result.
XI. Scenario: The business functionalities in the requirement.
XII. Approvals: The name of the manager who reviews and approves the use case.
Metrics: Metrics are used for measuring the quality in the project.
1. Test efficiency: Test efficiency measures the quality of testing.
Test efficiency = A / (A + B) * 100%
A = No. of defects found by the testers.
B = No. of defects found by the customers.
Example: A = 10, B = 2 → 10 / (10 + 2) * 100% ≈ 83% (> 95% indicates good testing).
2. Schedule Variance: Actual schedule – planned schedule.
3. Test Coverage: No. of requirements covered / Total no. of requirements * 100%
(How much percent of the testing is completed.)
4. Defect Density: No. of defects found / Total no. of lines of code tested * 100%
(To calculate the quality of development.)
5. Cyclomatic complexity: V = E - N + 2, where N = nodes and E = edges in the control flow graph.
Example: E = 4, N = 4 → 4 - 4 + 2 = 2.
This metric is used to calculate the number of independent execution paths so that each statement is tested at least once.
Note: Whatever metrics are required in the project, we calculate those metrics and submit them to the manager.
Defects per size = Defects detected / system size
Test cost (in %) = Cost of testing / total cost *100
Cost to locate defect = Cost of testing / the number of defects located
Achieving Budget = Actual cost of testing / Budgeted cost of testing
Defects detected in testing = Defects detected in testing / total system defects
Defects detected in production = Defects detected in production/system size
Effectiveness of testing to business = Loss due to problems / total resources processed by the system.
System complaints = Number of third party complaints / number of transactions processed
Scale of Ten = Assessment of testing by giving rating in scale of 1 to 10
Source Code Analysis = Number of source code statements changed / total number of tests.
Effort Productivity = Test Planning Productivity = No of Test cases designed / Actual Effort for Design and Documentation
Test Execution Productivity = No of Test cycles executed / Actual Effort for testing
Project Architecture:
1. Single-tier Architecture: In single-tier architecture, the UI (User Interface), BL (Business Logic), and DB (Database) are all included in a single layer and cannot be separated.
Example:  UI + BL + DB.
2. Two-tier (or) Client-Server Architecture: In this architecture, the database is separated from the UI and BL. (This is a thick-client architecture.)
3. Three-tier Architecture: In this architecture, the business logic is also separated from the UI; the business logic and DB are common for all the users. (This is a thin-client architecture.)
Web Application check list
Check list for web application Testing.
1. Functional Testing.
            Check all the links
            Test forms in all pages
            Cookies testing
            Validate HTML / CSS (Cascading Style Sheets)
            Database Testing
2. Usability Testing.
            Test for navigation
            Content checking
            Other user information for user help
3. Interface Testing.
4. Compatibility Testing.
            Browser Compatibility
            Operating System Compatibility
            Mobile Browsing
            Printing options
5. Performance Testing.
            Web load Testing.
            Web Stress Testing
6. Security Testing.
Test Case Design Techniques:
We can use the below techniques to design the functional test cases.
1. Input Domain Technique:
This technique is used for validating the data in edit boxes; we can use ECP (Equivalence Class Partitioning) and BVA (Boundary Value Analysis) conditions.
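For example, for a hypothetical edit box accepting ages 18 to 60, BVA and ECP test values could be generated like this (the range is an assumption for illustration):

```python
LOW, HIGH = 18, 60  # assumed valid range for the "age" edit box

def is_valid_age(age: int) -> bool:
    return LOW <= age <= HIGH

# BVA: test at and just around each boundary.
bva_values = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]
print([(v, is_valid_age(v)) for v in bva_values])

# ECP: one representative value per equivalence class.
ecp = {"below range (invalid)": 5,
       "in range (valid)": 35,
       "above range (invalid)": 70}
for name, value in ecp.items():
    print(name, "->", is_valid_age(value))
```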
2. Decision Table:
Use this technique to identify all the test cases for functionality testing with all the combinations of conditions.
 Example: -
Sal       Age      Gender      Expected
>5L      >60      M              ----
>5L      >60      F               ----
>5L      <60      M              ----
>5L      <60      F               ----
<5L      >60      M              ----
<5L      >60      F               ----
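The combinations in a decision table can be enumerated programmatically; this sketch generates all salary/age/gender rules (the expected action per rule would come from the requirements):

```python
from itertools import product

# Condition values taken from the decision table above.
salaries = [">5L", "<5L"]
ages     = [">60", "<60"]
genders  = ["M", "F"]

# Every combination of condition values is one rule (test case).
rules = list(product(salaries, ages, genders))
for i, (salary, age, gender) in enumerate(rules, start=1):
    print(f"Rule {i}: Salary={salary}, Age={age}, Gender={gender}")
print(len(rules), "combinations")  # 2 * 2 * 2 = 8 test cases
```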
3. Cause-Effect Graph:
Use this technique to identify the possible causes which will affect the system and to identify their impact.
Ex: While an online transaction is in progress, a network problem occurs - what will happen?
Q. What is PDCA cycle?
A. This is the basic cycle implemented in project development.
P = Plan, D = Do, C= Check, A= Act
Q, W. Edwards Deming's Quality Principles
80:20 Principle (or) Pareto Principle: From this point of view, 80% of defects are caused by 20% of the issues.
Post Mortem Review: This is the final review meeting conducted in the project after the project is released into production.
This review meeting is conducted by the QA team to discuss the defects found by customers, the defects which were reopened multiple times, the challenges faced in the project, and the best practices implemented.
The objective of this review is to improve the process.
Audit: In a project, audits are conducted by QA to verify that the team members are following the process, to verify the standards, to verify the documents in the project, etc.
Coverage Analysis: This is a calculation to verify the testing is covered or not for all the requirements.
Criticality: The terminology used to specify the importance of the issue (critical Req.)
Test Log: It is the execution result report specifying the number of test cases executed, passed, or failed.
Test Item: The item (or) Req. to be tested
Qualification Testing: Conducted by developers in front of customers to prove that the software developed meets the customer's expectations.
URL: Uniform Resource Locator; specifies the address of a web page.
Internet & Intranet:
Intranet web applications are accessible only within the company network, not outside the company; it is a private network.
Internet web applications are accessible publicly.
TCP/IP: Transmission Control Protocol / Internet Protocol. Within the network, every system has a unique IP address; this protocol is used for communication between multiple systems in the network.
HTTP: Hyper text transfer protocol.
It is used to communicate with web applications.
FTP: File transfer protocol to send and receive file in the network.
SMTP: Simple Mail Transfer Protocol, used to send and receive mails.
HTTPS: Hypertext Transfer Protocol Secure.
Protocol used to connect to the web server; it uses a separate port which requires additional security authentication.
Cache: It is a buffer which remembers the connection information; the next time we connect to the web site, it connects faster.
Application Server: When working on a web application, the business logic (programs) and validations are all implemented in the application server.
Example: Websphere, web logic, EJB server, Oracle Apps server,
Web Server: On this server all the web pages are configured; when a user connects to the web application, the request is sent to the web server, which responds with all the required UI.
Proxy Server: This server is configured between the client and the web server; the requests go through the proxy server, which maintains a cache memory buffer to improve performance.
Client side scripting: The script which is executed on the client system.
Server Side Scripting: The script executed on the server and not available to the client user.
Note:
Usually before submitting the data to the server client side validation are conducted.
Client side scripting is clearly visible to the user.
(Right click on the webpage → View Source)
Dynamic Web Pages: The web pages which display dynamic content, i.e., the data continually changes. (Ex: newspaper web pages)
Static Web Pages: The web pages which display static content, i.e., the data does not change.
(Ex: Gmail login page, TCS web site, Infosys web site, etc.)
Pilot Project: (Model Project)
It is like a model project; after the model project is successful, the same model is implemented for the other projects. (In another sense, an estimation project is called a pilot project.)
Responsibilities of a Tester:
1)      Analyze the requirements.
2)      Identify the scenarios.
3)      Design the test cases and validate the application as per the requirements.
4)      Report and track defects.
5)      Prepare status reports.
6)      Discuss with developers.
7)      Retest the fixed defects.
8)      Prepare summary reports.
9)      Involve in test case reviews; collect and report metrics.