Friday, October 26, 2007

Integration Testing: Why? What? & How?


Introduction:
As covered in earlier articles in this Testing series, there are several levels of testing:
Unit Testing, Integration Testing, System Testing

Each level of testing builds on the previous one. “Unit testing” focuses on testing a single unit of code. “Integration testing” is the next level: it focuses on testing the integration of those “units of code”, or components, with one another.

How does Integration Testing fit into the Software Development Life Cycle?
Even if a software component is successfully unit tested, in an enterprise n-tier distributed application it is of little or no value if the component cannot be successfully integrated with the rest of the application.

Once the unit-tested components are delivered, we integrate them together. These “integrated” components are tested to weed out errors and bugs caused by the integration. This is a very important step in the Software Development Life Cycle.

Different components are often developed by different programmers, or even by different teams, so a lot of bugs emerge during the integration step.

In most cases a dedicated testing team focuses on Integration Testing.

Prerequisites for Integration Testing:
Before we begin Integration Testing it is important that all the components have been successfully unit tested.

Integration Testing Steps:
Integration Testing typically involves the following steps:
Step 1: Create a Test Plan
Step 2: Create Test Cases and Test Data
Step 3: If applicable, create scripts to run the test cases (a sketch follows this list)
Step 4: Once the components have been integrated, execute the test cases
Step 5: Fix any bugs found and retest the code
Step 6: Repeat the test cycle until the components have been successfully integrated
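
For Step 3, here is a minimal sketch of such a driver script, written in the VBScript this blog's QTP work uses. The file name integration_cases.txt, its one-line-per-case "ID|Description|Expected" layout, and the commented-out RunCase call are all hypothetical, just to show the shape of an automated run:

' run_cases.vbs - hypothetical driver that reads test cases from a text
' file and runs them one by one. Assumed layout: ID|Description|Expected.
Option Explicit
Dim fso, ts, row, fields
Set fso = CreateObject("Scripting.FileSystemObject")
Set ts = fso.OpenTextFile("integration_cases.txt", 1)   ' 1 = ForReading
Do Until ts.AtEndOfStream
    row = ts.ReadLine
    If Len(Trim(row)) > 0 Then
        fields = Split(row, "|")
        WScript.Echo "Running " & fields(0) & ": " & fields(1)
        ' RunCase fields(0) would invoke the real test logic here
    End If
Loop
ts.Close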

What is an ‘Integration Test Plan’?
As you may have read in the other articles in the series, this document typically describes one or more of the following:
- How the tests will be carried out
- The list of things to be Tested
- Roles and Responsibilities
- Prerequisites to begin Testing
- Test Environment
- Assumptions
- What to do after a test is successfully carried out
- What to do if a test fails
- Glossary

How to write an Integration Test Case?
Simply put, a Test Case describes exactly how a test should be carried out. Integration test cases specifically focus on the flow of data/information/control from one component to another.

So integration test cases should typically focus on scenarios where one component is called from another. The overall application functionality should also be tested to make sure the application works when the different components are brought together.

The various integration test cases clubbed together form an Integration Test Suite. Each suite may have a particular focus; in other words, different test suites may be created to focus on different areas of the application.

As mentioned before, a dedicated testing team may be created to execute the integration test cases, so the test cases should be as detailed as possible.
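
As an illustration, here is a minimal sketch of an integration check between two components, again in VBScript. The OrderEntry and Inventory objects, their ProgIDs, and their methods are invented for the example; the point is that the test exercises a value flowing from one component into another:

' tc_int_001.vbs - hypothetical: verify that data written by one
' component is visible to the next component in the chain.
Option Explicit
Dim orderEntry, inventory, qtyBefore, qtyAfter
Set orderEntry = CreateObject("MyApp.OrderEntry")   ' hypothetical ProgID
Set inventory  = CreateObject("MyApp.Inventory")    ' hypothetical ProgID

qtyBefore = inventory.QuantityOnHand("SKU-1001")
orderEntry.PlaceOrder "SKU-1001", 5                 ' component A updates shared state
qtyAfter = inventory.QuantityOnHand("SKU-1001")     ' component B should see the change

If qtyAfter = qtyBefore - 5 Then
    WScript.Echo "TC-INT-001 Pass"
Else
    WScript.Echo "TC-INT-001 Fail: expected " & (qtyBefore - 5) & ", got " & qtyAfter
End If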

Sample Test Case Table:

Test Case ID | Test Case Description | Input Data | Expected Result | Actual Result | Pass/Fail | Remarks

Additionally the following information may also be captured:
a) Test Suite Name
b) Tested By
c) Date
d) Test Iteration (One or more iterations of Integration testing may be performed)

Working towards Effective Integration Testing:
There are various factors that affect Software Integration and hence Integration Testing:

1) Software Configuration Management: Since Integration Testing focuses on the integration of components, and components can be built by different developers and even different development teams, it is important that the right versions of the components are tested. This may sound very basic, but the biggest problem faced in n-tier development is integrating the right version of each component. Integration testing may run through several iterations, and components may change as bugs are fixed. Hence it is important that a good Software Configuration Management (SCM) policy is in place: we should be able to track the components and their versions, so that each time we integrate the application components we know exactly what versions go into the build.
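
As a small illustration, the build script can log the file version of every component before each integration build, so each iteration's inputs are on record. The component paths below are hypothetical; FileSystemObject's GetFileVersion is a standard scripting call:

' log_versions.vbs - record component versions before an integration build
Option Explicit
Dim fso, paths, p
Set fso = CreateObject("Scripting.FileSystemObject")
paths = Array("C:\build\OrderEntry.dll", "C:\build\Inventory.dll")  ' hypothetical paths
For Each p In paths
    If fso.FileExists(p) Then
        WScript.Echo p & " version " & fso.GetFileVersion(p)
    Else
        WScript.Echo "MISSING COMPONENT: " & p
    End If
Next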

2) Automate the Build Process Where Necessary: A lot of errors occur because the wrong versions of components were sent for the build, or because components are missing altogether. If possible, write a script to integrate and deploy the components; this helps reduce manual errors.
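
A minimal sketch of such a deployment step, again with hypothetical folder and file names: it copies every expected component into the deploy folder and stops loudly if one is missing, instead of letting a silent omission reach testing.

' deploy.vbs - copy components to the deploy folder; fail fast if any is missing
Option Explicit
Dim fso, comps, c, src, dst
Set fso = CreateObject("Scripting.FileSystemObject")
src = "C:\build\"                                   ' hypothetical source folder
dst = "C:\deploy\"                                  ' hypothetical target folder
comps = Array("OrderEntry.dll", "Inventory.dll", "Reports.dll")
For Each c In comps
    If Not fso.FileExists(src & c) Then
        WScript.Echo "Build aborted, missing component: " & c
        WScript.Quit 1
    End If
    fso.CopyFile src & c, dst & c, True             ' True = overwrite
Next
WScript.Echo "All components deployed."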

3) Document: Document the integration/build process to help eliminate errors of omission or oversight. It is possible that the person responsible for integrating the components forgets to run a required script, in which case Integration Testing will not yield correct results.

4) Defect Tracking: Integration Testing will lose its edge if the defects are not tracked correctly. Each defect should be documented and tracked. Information should be captured as to how the defect was fixed. This is valuable information. It can help in future integration and deployment processes.

Summary:
Integration testing is one of the most crucial steps in the Software Development Life Cycle. Different components are integrated together and tested, which can be a daunting task in enterprise applications where diverse teams build different modules and components. In this article you learned the steps needed to perform Integration Testing.

Source: http://www.exforsys.com/tutorials/testing/integration-testing-whywhathow.html

Thursday, October 25, 2007

Software Testing - Boundary Values Testing

WHAT IS BOUNDARY VALUE ANALYSIS IN SOFTWARE TESTING?

Concepts: Boundary value analysis is a methodology for designing test cases that concentrates software testing effort on cases near the limits of valid ranges. It is a method that refines equivalence partitioning, and it generates test cases that highlight errors better than equivalence partitioning does. The trick is to concentrate software testing effort at the extreme ends of the equivalence classes: at those points where input values change from valid to invalid, errors are most likely to occur. Boundary value analysis also broadens the portions of the business requirement document used to generate tests; unlike equivalence partitioning, it takes the output specifications into account when deriving test cases.

HOW DO YOU PERFORM BOUNDARY VALUE ANALYSIS?
Once again, you'll need to perform two steps:
1. Identify the equivalence classes.
2. Design test cases.
But the details vary. Let's examine each step.

STEP 1: IDENTIFY EQUIVALENCE CLASSES
Follow the same rules you used in equivalence partitioning, but consider the output specifications as well. For example, if the output specifications for the inventory system stated that a report on inventory should indicate a total quantity for all products no greater than 999,999, then you'd add the following classes to the ones you found previously:
6. The valid class (0 <= total quantity on hand <= 999,999)
7. The invalid class (total quantity on hand < 0)
8. The invalid class (total quantity on hand > 999,999)

STEP 2: DESIGN TEST CASES
In this step, you derive test cases from the equivalence classes. The process is similar to that of equivalence partitioning, but the rules for designing test cases differ. With equivalence partitioning, you may select any test case within a range and any on either side of it; with boundary analysis, you focus your attention on cases close to the edges of the range. The detailed rules for generating test cases follow:

RULES FOR TEST CASES
1. If the condition is a range of values, create valid test cases for each
end of the range and invalid test cases just beyond each end of the
range. For example, if a valid range of quantity on hand is -9,999
through 9,999, write test cases that include:
1. the valid test case: quantity on hand is -9,999,
2. the valid test case: quantity on hand is 9,999,
3. the invalid test case: quantity on hand is -10,000, and
4. the invalid test case: quantity on hand is 10,000

You may combine valid classes wherever possible, just as you did
with equivalence partitioning, and, once again, you may not combine
invalid classes. Don't forget to consider output conditions as well. In
our inventory example the output conditions generate the following test cases:
1. the valid test case: total quantity on hand is 0,
2. the valid test case: total quantity on hand is 999,999,
3. the invalid test case: total quantity on hand is -1, and
4. the invalid test case: total quantity on hand is 1,000,000

2. A similar rule applies where the condition states that the number of
values must lie within a certain range: select two valid test cases, one
for each boundary of the range, and two invalid test cases, one just
below and one just above the acceptable range.
3. Design tests that highlight the first and last records in an input or
output file.
4. Look for any other extreme input or output conditions, and generate a
test for each of them.
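
To make rule 1 concrete, here is a minimal sketch in VBScript (the language used for the QTP work elsewhere on this blog). The IsValidQuantity function is a hypothetical stand-in for the system under test; the four inputs are exactly the boundary cases derived above for the -9,999 through 9,999 range:

' bva_demo.vbs - exercise both boundaries of the valid range and
' the first invalid value just beyond each boundary.
Option Explicit

Function IsValidQuantity(qty)   ' stand-in for the real validation logic
    IsValidQuantity = (qty >= -9999 And qty <= 9999)
End Function

Dim inputs, expected, i
inputs   = Array(-9999, 9999, -10000, 10000)
expected = Array(True,  True,  False,  False)
For i = 0 To UBound(inputs)
    If IsValidQuantity(inputs(i)) = expected(i) Then
        WScript.Echo "quantity " & inputs(i) & ": Pass"
    Else
        WScript.Echo "quantity " & inputs(i) & ": Fail"
    End If
Next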

SUMMARY
In this lesson, you learned how boundary value analysis refines
equivalence-partitioning test cases and derives others by examining output
specifications as well as inputs. Using either of these techniques
(preferably the second) wherever possible, you'll be able to test your
system. But what if the system is complex? In that case, there are bound
to be many modules to test. How do you plan the order in which to test them?
That is the subject of the next lesson.
------------------------------------------------------------------------------------------
Definition of Boundary Value Analysis from our Software Testing Dictionary:
Boundary Value Analysis (BVA). BVA is different from equivalence
partitioning in that it focuses on "corner cases", values that are just
out of range as defined by the specification. This means that if a function
expects all values in the range of negative 100 to positive 1000, test inputs
would include negative 101 and positive 1001. BVA is often used as a technique
for stress, load, or volume testing. This type of validation is usually
performed after positive functional validation has completed (successfully)
using requirements specifications and user documentation.

A definition of Equivalence Partitioning from our software testing dictionary:
Equivalence Partitioning: An approach where classes of inputs are
categorized for product or function validation. This usually does not include
combinations of inputs, but rather a single value chosen per class. For
example, with a given function there may be several classes of input that may
be used for positive testing. If a function expects an integer and receives an
integer as input, this would be considered a positive test assertion. On the
other hand, if a character or any input class other than an integer is
provided, this would be considered a negative test assertion or condition.

Equivalence partitioning definition from Wikipedia, the free encyclopedia.

Equivalence partitioning is a black box testing technique designed to
minimize the number of test cases by dividing tests in such a way that the
system is expected to act the same way for all tests of each equivalence
partition. Test inputs would be selected from each partition.

Equivalence partitions are designed so that every possible input belongs
to one and only one equivalence partition.
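
A minimal sketch of the idea in VBScript, parallel to the boundary-value demo above: the partitions for the quantity-on-hand example are "below range", "within range", and "above range", and one representative input is drawn from each. The representatives chosen here are arbitrary, and IsValidQuantity again stands in for the function under test:

' ep_demo.vbs - pick one representative input per equivalence partition;
' every member of a partition is expected to behave the same way.
Option Explicit

Function IsValidQuantity(qty)   ' stand-in for the function under test
    IsValidQuantity = (qty >= -9999 And qty <= 9999)
End Function

Dim reps, expected, i
reps     = Array(-20000, 100, 20000)   ' below range / within range / above range
expected = Array(False,  True, False)
For i = 0 To UBound(reps)
    If IsValidQuantity(reps(i)) = expected(i) Then
        WScript.Echo "representative " & reps(i) & ": Pass"
    Else
        WScript.Echo "representative " & reps(i) & ": Fail"
    End If
Next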
Source: http://www.geocities.com/xtremetesting/BoundaryValues.html

Sunday, October 14, 2007

QuickTest Professional Q&A

QuickTest Professional (QTP) Questions and Answers Part # 1: http://softwareqatestings.com/content/view/188/38/
QuickTest Professional (QTP) Questions and Answers Part # 2: http://softwareqatestings.com/content/view/189/38/
QuickTest Professional (QTP) Questions and Answers Part # 3: http://softwareqatestings.com/content/view/190/38/
QuickTest Professional (QTP) Questions and Answers Part # 4: http://softwareqatestings.com/content/view/191/38/
QuickTest Professional (QTP) Questions and Answers Part # 5: http://softwareqatestings.com/content/view/192/38/

Source: QuickTestPro@yahoogroups.com

Friday, October 12, 2007

New added link in Related Links

See the newly added link in Related Links, located in the sidebar, regarding the most common interview questions in testing.

Tuesday, October 9, 2007

Tester/Developer Perceptions

Testing is a difficult effort. It is the task that’s both infinite and indefinite.
No matter what testers do, they can’t be sure they will find all the problems,
or even all the important ones.

It is hard to find individuals who are good at testing. It takes someone
who is a critical thinker motivated to produce a quality software product,
likes to evaluate software deliverables, and is not caught up in the assumption
held by many developers that testing has a lesser job status than development.
A good tester is a quick learner and eager to learn, is a good team
player, and can effectively communicate both verbally and in written form.

The output from development is something that is real and tangible. A
programmer can write code and display it to admiring customers who
assume it is correct. From a developer’s point of view, testing results in
nothing more tangible than an accurate, useful, and all-too-fleeting perspective
on quality. Given these perspectives, many developers and
testers often work together in an uncooperative, if not hostile, manner.

In many ways the tester and developer roles are in conflict. A developer
is committed to building something to be successful. A tester tries to minimize
the risk of failure and tries to improve the software by detecting
defects. Developers focus on technology, which takes a lot of time and
energy when producing software. A good tester, on the other hand, is motivated
to provide the user with the best software to solve a problem.

Testers are typically ignored until the end of the development cycle
when the application is “completed.” Testers are always interested in the
progress of development and realize that quality is only achievable when
they take a broad point of view and consider software quality from multiple
dimensions.

Source: Software Testing and Continuous Quality Improvement 2nd Edition

Wednesday, October 3, 2007

Project: Automation Framework

I am currently developing an Automation Framework using QTP for my company and want to share the process I undertook and will undertake in the next couple of months.

The BEST online resources I found are the following:
http://safsdev.sourceforge.net/FRAMESDataDrivenTestAutomationFrameworks.htm - This site will get you started in developing your very first Automation Framework.

http://www.sqaforums.com/showflat.php?Number=337920 - This is a forum thread for the QTP Framework Demo, with continuous updates from experts around the world. I got most of the coding ideas from this demo. You need QTP 9.2 to run it, though.

http://www.sqa-test.com/method.html - Provides a summary of how the different types of Automation Frameworks work.

Step 1: Define the Coding Standards first. I researched coding standards for VBScript since QTP uses that language. This will make the scripts easier to maintain and track, and if you want to assign specific tasks to other members of your team, your code will be much easier to read and comprehend.
Step 2: Choose the right framework for you. If you can't decide, I suggest you use the most flexible and common one, which is the hybrid of the Data-Driven and Keyword-Driven frameworks (see sources above; a sketch of the keyword-driven half appears at the end of this post). This might take longer to implement, but it's worth it since it's comparable to using the latest technique or technology available.
Step 3: Learn that framework and all its concepts.
Step 4: Understand the flow of the QTP Framework Demo.
Step 5: Design the pseudocode and all the process flows before coding. This will save you a lot of time in revisions. Make sure you have already visualized all possible scenarios and improvements before coding; that way you concentrate on making the code work instead of planning on the fly, which is error-prone and inefficient.
Things to consider:
- The data layout of your source document
- Things that need to be dynamic, like browsers, window titles, page titles, etc.
- Your framework must support running a single test case or a group of test cases
- Your test report should indicate the Step Name, Suite Name, or Group Name
- Be consistent. If you use .xls then use .xls all throughout.
- Visualize using the framework on an actual project so that you can formulate the features you may need
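
As promised in Step 2, here is a minimal sketch of the keyword-driven half of the hybrid approach, in plain VBScript so it runs outside QTP as well. The keyword names, the hard-coded step table, and the RunStep dispatcher are all hypothetical; in the real framework the steps would come from the .xls source document and the Case branches would call QTP actions:

' keyword_driven_demo.vbs - hypothetical keyword dispatcher.
' Each step is "Keyword|Argument"; a real framework would read these
' rows from the .xls source document instead of a hard-coded array.
Option Explicit
Dim steps, s, parts

steps = Array("OpenBrowser|http://example.com", _
              "EnterText|hello", _
              "VerifyTitle|Example Domain")

For Each s In steps
    parts = Split(s, "|")
    RunStep parts(0), parts(1)
Next

Sub RunStep(keyword, arg)
    Select Case keyword
        Case "OpenBrowser"
            WScript.Echo "Opening browser at " & arg   ' QTP would launch the app here
        Case "EnterText"
            WScript.Echo "Typing: " & arg              ' QTP would set an edit field here
        Case "VerifyTitle"
            WScript.Echo "Checking title = " & arg     ' QTP would assert on the title here
        Case Else
            WScript.Echo "Unknown keyword: " & keyword
    End Select
End Sub
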
Cont...