Tuesday, 27 March 2012

Smoke and Sanity Testing

SMOKE TESTING:


  • A Smoke test is designed to touch every part of the application in a cursory way. It’s shallow and wide.
  • Smoke testing is conducted to check whether the most crucial functions of a program work, without bothering with finer details (for example, as build verification).
  • Smoke testing is a normal health check-up of a build before taking it into in-depth testing; a minimal sketch follows this list.
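
Below is a minimal smoke-test sketch, assuming a pytest-plus-requests setup; the base URL and the endpoint paths are hypothetical placeholders for an application's crucial pages, not a real system.

```python
# Minimal smoke-test sketch using pytest + requests.
# BASE_URL and the endpoint paths are hypothetical placeholders.
import pytest
import requests

BASE_URL = "http://localhost:8080"

@pytest.mark.parametrize("path", [
    "/",            # home page loads
    "/login",       # login page is reachable
    "/search?q=a",  # search answers a trivial query
    "/cart",        # cart page is reachable
])
def test_crucial_page_responds(path):
    # Shallow and wide: just confirm each crucial page answers at all,
    # without checking the finer details of its content.
    response = requests.get(BASE_URL + path, timeout=5)
    assert response.status_code == 200
```

Running a suite like this against every new build gives a quick go / no-go signal before deeper testing starts.
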
SANITY TESTING:
  • A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep.
  • A sanity test is usually unscripted.
  • A sanity test is used to determine that a small section of the application still works after a minor change.
  • Sanity testing is cursory testing, performed whenever a quick check is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing.
  • Sanity testing verifies whether the requirements for the changed area are met; a narrow, deep sketch follows this list.
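
In contrast to the smoke suite above, a sanity check after a small fix drills into just one area. The sketch below assumes a hypothetical discounts module with an apply_discount() function that was just patched; both names are illustrative only.

```python
# Narrow-and-deep sanity sketch after a minor change.
# The discounts module and apply_discount() are hypothetical stand-ins
# for the one area that was just patched.
from discounts import apply_discount

def test_discount_still_applied():
    assert apply_discount(price=100.0, percent=10) == 90.0

def test_zero_discount_changes_nothing():
    assert apply_discount(price=100.0, percent=0) == 100.0

def test_full_discount_gives_zero():
    assert apply_discount(price=100.0, percent=100) == 0.0
```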

Sunday, 25 March 2012

Severity and priority

Severity and priority are two ways of thinking about software bugs and deciding which ones should get fixed and in what order. Severity tells us how bad the defect is; priority tells us how soon we want it fixed. The two are independent: a crash in a rarely used report can be high severity but low priority, while a misspelled product name on the home page can be low severity but high priority.

Thursday, 22 March 2012

Difference Between Load, Stress and Performance Testing

There is always confusion between Stress, Performance and Load testing.
Here is a clarification of these three techniques.



Stress Testing: This is usually done to find the breaking point of the application. The application is stretched beyond its normal limits to discover the point at which it fails.

Performance Testing: Here we check how the application performs under various loads. The main focus is on the response time the application takes to process requests under different load conditions.

Load Testing: The application is exercised with realistic, expected levels of load (concurrent users, transactions, data volume) to confirm that it keeps responding within acceptable limits. Unlike stress testing, the load stays within conditions that can actually occur in the real world rather than being pushed past the breaking point.
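
As a rough sketch of the difference, the snippet below fires concurrent requests at a hypothetical endpoint and records response times; the URL and the user counts are illustrative assumptions, not tuned values. At realistic user counts it behaves like a load/performance probe; raising the count until errors appear turns the same harness into a stress test.

```python
# Simple load/performance probe using requests and a thread pool.
# The URL and the user counts are illustrative assumptions.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://localhost:8080/search?q=a"  # hypothetical endpoint

def timed_request():
    start = time.perf_counter()
    requests.get(URL, timeout=10)
    return time.perf_counter() - start

def run_load(users, requests_per_user):
    # Fire users * requests_per_user requests with 'users' concurrent workers.
    with ThreadPoolExecutor(max_workers=users) as pool:
        timings = list(pool.map(lambda _: timed_request(),
                                range(users * requests_per_user)))
    return sum(timings) / len(timings), max(timings)

if __name__ == "__main__":
    for users in (1, 10, 50):  # realistic, expected load levels (load test)
        average, worst = run_load(users, requests_per_user=5)
        print(f"{users:>3} users: avg {average:.3f}s, worst {worst:.3f}s")
    # Raising 'users' far beyond realistic values until the application
    # starts failing is what turns this into a stress test.
```
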

Tuesday, 20 March 2012

Testing Types


Acceptance Testing: Formal tests (often performed by a customer) to determine whether or not a system has satisfied predetermined acceptance criteria. These tests are often used to enable the customer (either internal or external) to determine whether or not to accept a system.

Alpha Testing: Testing of a software product or system conducted at the developer's site by the customer.


Automated Testing: Software testing assisted by tools, requiring little or no operator (tester) input, analysis, or evaluation while the tests run.


Background Testing: The execution of normal functional testing while the system under test (SUT) is exercised by a realistic workload. This workload is processed "in the background" as far as the functional testing is concerned.


Beta Testing: Testing conducted at one or more customer sites by the end users of a delivered software product or system.


Black box testing: A testing method where the application under test is viewed as a black box and the internal behavior of the program is completely ignored. Testing occurs based upon the external specifications. Also known as behavioral testing, since only the external behaviors of the program are evaluated and analyzed.


Boundary Value Analysis (BVA): BVA differs from equivalence partitioning in that it focuses on "corner cases": values at the edges of the ranges defined by the specification, including values just outside them. This means that if a function expects all values in the range -100 to +1000, test inputs would include -101, -100, +1000 and +1001. Boundary values are also often used as inputs for stress, load or volume testing. This type of validation is usually performed after positive functional validation has completed successfully against the requirements specifications and user documentation.
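
Using the range from the example above (-100 to +1000), a boundary-value test might look like the sketch below; accepts_value() is a hypothetical validator written only to illustrate the technique.

```python
# Boundary Value Analysis sketch; accepts_value() is a hypothetical
# validator for the specified range of -100 to +1000.
import pytest

def accepts_value(n):
    # Stand-in implementation of the specified rule.
    return -100 <= n <= 1000

@pytest.mark.parametrize("value, expected", [
    (-101, False),  # just below the lower boundary
    (-100, True),   # on the lower boundary
    (-99,  True),   # just inside the lower boundary
    (999,  True),   # just inside the upper boundary
    (1000, True),   # on the upper boundary
    (1001, False),  # just above the upper boundary
])
def test_boundary_values(value, expected):
    assert accepts_value(value) is expected
```
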

Code Inspection: A manual [formal] testing [error detection] technique where the programmer reads source code, statement by statement, to a group which asks questions, analyzing the program logic, checking the code against a checklist of historically common programming errors, and checking its compliance with coding standards. Contrast with code audit, code review, code walkthrough. This technique can also be applied to other software and configuration items.


Code Walkthrough: A manual testing [error detection] technique where program [source code] logic [structure] is traced manually [mentally] by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.


Data-Driven Testing: An automation approach in which the navigation and functionality of the test script is directed through external data; this approach separates test and control data from the test script.
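
A small sketch of the idea: the test script stays fixed while inputs and expected results come from an external CSV file. The file name, its columns and the login() function are assumptions made for illustration.

```python
# Data-driven testing sketch: the test data lives outside the script.
# login_cases.csv and the login() function are hypothetical.
import csv

import pytest

from myapp import login  # hypothetical system under test

def load_cases(path="login_cases.csv"):
    # Expected columns: username, password, expected_result ("ok" or "fail")
    with open(path, newline="") as f:
        return [(row["username"], row["password"], row["expected_result"] == "ok")
                for row in csv.DictReader(f)]

@pytest.mark.parametrize("username, password, should_succeed", load_cases())
def test_login_from_data_file(username, password, should_succeed):
    # The test logic is identical for every row; only the data varies.
    assert login(username, password) == should_succeed
```
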


Data Flow Testing: Testing in which test cases are designed based on variable usage within the code.


Database Testing: Checks the integrity of database field values.
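
For example, a field-integrity check might query the database directly and assert that the stored values obey the expected constraints; the SQLite file, the orders/customers tables and their columns below are purely illustrative.

```python
# Database-testing sketch: integrity checks on an illustrative SQLite
# 'orders'/'customers' schema; the file name and columns are assumptions.
import sqlite3

def test_order_field_values_are_valid():
    conn = sqlite3.connect("app.db")
    try:
        cur = conn.cursor()
        # No order may have a negative total.
        cur.execute("SELECT COUNT(*) FROM orders WHERE total < 0")
        assert cur.fetchone()[0] == 0
        # Every order must reference an existing customer.
        cur.execute(
            "SELECT COUNT(*) FROM orders o "
            "LEFT JOIN customers c ON o.customer_id = c.id "
            "WHERE c.id IS NULL"
        )
        assert cur.fetchone()[0] == 0
    finally:
        conn.close()
```
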

Dirty Testing: Negative testing.

Regression Testing: Testing conducted to evaluate whether a change to the system (all CM items) has introduced a new failure. Regression testing is often accomplished through the construction, execution and analysis of product and system tests.


Range Testing: For each input, identifies the range over which the system behavior should be the same.

Smoke Test: An initial set of tests that determine whether a new version of the application performs well enough for further testing.


Stress / Load / Volume Testing: Tests that apply a high degree of activity, for example by using boundary conditions as inputs or by running multiple copies of a program in parallel.

When Should We Stop Testing?


You can never determine a single point at which to stop testing. Nowadays
software applications are so complex, and run in such interdependent
environments, that complete 100% testing can never be done. In practice,
testing is stopped based on a few criteria (a rough sketch of how they can be
checked follows the list):

1. When the bug-find rate falls to a certain level
2. When test cases are completed with a certain percentage passed
3. When the test budget is exhausted
4. When deadlines are reached (release deadlines or testing deadlines)
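
As a rough sketch, such exit criteria can even be checked mechanically; every threshold and counter below is an illustrative assumption, not a recommended value.

```python
# Sketch of mechanical stop-testing criteria; every threshold here is an
# illustrative assumption, not a recommendation.
def ready_to_stop(open_critical_bugs, bugs_found_last_week,
                  tests_executed, tests_passed, tests_planned):
    pass_rate = tests_passed / tests_executed if tests_executed else 0.0
    completion = tests_executed / tests_planned if tests_planned else 0.0
    return (open_critical_bugs == 0         # no critical bugs still open
            and bugs_found_last_week <= 2   # bug-find rate has tailed off
            and completion >= 0.95          # almost all planned tests run
            and pass_rate >= 0.90)          # acceptable pass percentage

if __name__ == "__main__":
    print(ready_to_stop(open_critical_bugs=0, bugs_found_last_week=1,
                        tests_executed=480, tests_passed=450,
                        tests_planned=500))  # True with these sample numbers
```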