Monday, August 6, 2007

The Business Case for Outsourcing to an Independent Software Testing Company

How an independent software testing company can help you improve your quality and
time-to-market while reducing risk and cost

Introduction
Increasingly, companies are coming to realize that delivering high-quality software on time
and on budget requires that they partner with best-of-breed vendors for all
aspects of the software development process. These companies want to use more than
one vendor, or multi-source, allowing them to select vendors who are experts in
development as well as vendors who are experts in software testing.

This paper will explore both the single-source and the multi-source approaches to
application development, highlighting the advantages and disadvantages of each and
pointing out the distinct advantages that multi-sourcing has over single-sourcing. It
will then highlight LogiGear's strengths, expertise, and thought leadership as a
preferred multi-source partner for your software testing needs.

Single-Source Application Development
In single-source application development, a single vendor is hired to develop an
application or applications, as well as to perform all or most of the software testing and
quality assurance (QA) functions. The selection of a single-source vendor may be part of
a larger outsourcing effort or business process outsourcing (BPO) engagement. The goal
in selecting a single vendor is to achieve the efficiency of dealing with only one
vendor while, of course, driving down costs.

The advantages of having a single-source vendor for both software development and
testing are a single point of management, a single service level agreement
(SLA), and a single point of accountability. Many companies incorrectly assume that this
organizational simplicity will further help drive costs down while delivering quality
software on time.

Single-source application development is not necessarily the panacea it would
initially appear to be. There are many trade-offs and potential problems with single-sourcing
both application development and software testing. The potential problems with
single-sourcing can include:
• Compromising on best-of-breed in testing – When single sourcing, companies
usually focus on selecting a software development partner who also does software
testing. Typically, the primary focus of a vendor is on software development, with
software testing being an additional item for which they can bill. In many
instances, testing is an afterthought, or an adjunct to their primary development
business. It is important to remember that software testing is its own discipline,
with its own methodologies that are separate and distinct from software
development. Treating testing as an afterthought to software development can
lead to quality problems in the resulting software.
• Testing can be in the hands of less-experienced and less-interested engineers –
It is common practice for software development firms to assign their junior staff
to software testing. Testing is often the first step in the career path of new
software engineers who aspire to become software developers. As such, they
typically do not have a testing background, or a good understanding of the
fundamental concepts and practices that make up good software testing, nor do
they want to do testing. Further, they typically do not have, and do not receive,
any formal training in software testing. This can leave one of the most important
parts of the software development process, the testing, in the hands of relative
novices. To make matters worse, these junior testers may be hesitant to point out
problems in the work of their more-experienced co-workers.
• Lack of independence introduces risk into the development process – When
developers and testers are in the same organization, testers can come under
intense pressure to not delay or prevent delivery from occurring. This can be
especially true in an outsourced relationship where meeting or missing delivery
dates may have financial ramifications for the outsourcing vendor. Because of this
dynamic, the customer (outsourcer) may not get true insights into the quality of
the application under development. Even with an SLA, bugs may go unreported
to the customer so that the outsourcing firm can keep its delivery costs down and
meet its delivery commitments.

Multi-Source Application Development
With multi-source application development (multi-sourcing), multiple “best-of-breed”
outsourcing firms are hired, each to perform its own specialized tasks. Using this
approach, a firm would hire both an independent software development company, and an
independent software testing company. Each of these companies would be expert in its
particular discipline.

In such a multi-sourcing arrangement, onshore managers would manage the multiple
vendor relationships and the coordination between the various vendors and themselves. In
many cases, the relationships would be managed at the departmental level with a
company’s development department managing the software development vendor, and the
software testing or quality assurance department managing the testing relationship. Such
a structure would actually create beneficial independence between development and
testing.

There are many advantages to multi-sourcing. These include:
• Companies gain access to best-of-breed firms – By multi-sourcing, a company
can gain access to vendors who are specialized experts in their own discipline;
in this case, it would be outsourcing to both a firm expert in development and a
firm expert in software testing.
• Companies gain access to trained testing experts – From the standpoint of
software testing, hiring a dedicated vendor that is strictly focused on software
testing means that a company would be acquiring the services of engineers who
are trained software testing experts. Such engineers understand testing processes,
methodologies, and tools to a high degree of proficiency. This enables them to
perform a more thorough and complete job testing software. In software testing
companies, software testing is not a step towards becoming a developer, but rather
a chosen career path with a clear progression of professional growth.
• Companies lower risk – Risk to the development effort is reduced on many
fronts by multi-sourcing. Companies will be obtaining a high degree of software
testing expertise and professionalism. Companies will also be creating the
necessary and desired independence between the two disciplines, development
and testing, that can contribute to delivering higher quality software. Since the
testing vendor has no “vested interest” in the developed software, they can be
very honest about its quality and readiness for market. Such insight can be
invaluable to company management as they try to weigh quality/time-to-market
trade-offs. Having good information from a testing vendor that you trust can
enable you to make more informed decisions.
• Companies can save money by tapping smaller outsourcing markets –
Breaking up the development process into the components of development and
testing means that a company may be able to tap into smaller outsourcing markets
and vendors, enabling it to lower costs and potentially reducing its exposure
to the engineering turnover (churn) that exists in larger markets and firms.
The main disadvantages of multi-sourcing are that there are multiple vendor
relationships, no single point of contact, and the onshore employees will have to manage
and coordinate the work of multiple companies. This is mitigated by the desired effect of
creating independence between development and testing. Such independence can
inherently contribute to the goal of higher quality software while reducing the risk of
delays, unacceptable post-release support costs, and embarrassing software failures.
These disadvantages can be further minimized or eliminated through the use of effective
testing frameworks that facilitate global teams and good communication. A number of
these tools are commercially available, or they can be developed in-house.

Wednesday, August 1, 2007

Software Testing Hitches and Tips to Handle Them

This is an attempt to identify common software testing hitches that most testing projects face, along with a few tips to overcome them. Software testing is an integral part of the software development life cycle. Inadequately tested applications result in loss of credibility with customers, existing and new alike. It is therefore essential that effective testing be performed, with the intention of eliminating the common problems that might cause havoc, before releasing any software.

I. Poor Planning or Estimation
Effective planning is one of the most critical and challenging steps in a test project. Test planning and estimating indicate the order and way in which tasks are to be carried out and the resources required to execute them. Proper planning is not possible without a sound and reliable estimate.
1. Effort: Underestimating effort causes delays because there are not enough resources to perform the activities in the given time frame; overestimating leads to less efficient use of resources because too many are allocated
2. Schedule: The schedule is estimated after the effort. Developers often underestimate the effort and resources required for testing; as a consequence, deadlines are missed or the software is delivered to the end user only partially tested
3. Cost: When the budget is not correctly estimated, testing becomes relatively expensive, and some test activities may be cancelled, causing more uncertainty about the quality of the project

How to tackle?
Take a percentage of the total effort, employ standard ratios based on previous similar test processes, allow for overheads, estimate the hours for individual activities, and extrapolate the results (a rough sketch follows the list below). Inadequate testing caused by a lack of knowledgeable resources, such as using testers with little or no experience, also results in poor-quality testing. Do not forget to include:
1. The training time required to improve the knowledge level of the resources on the domain or technology
2. Buffer time required to resolve any risks that you foresee
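
As a rough illustration of combining a ratio-based figure with a bottom-up activity estimate, here is a minimal sketch in Python. All of the numbers (the 40% test-to-development ratio, the overhead percentage, and the activity hours) are hypothetical placeholders, not recommendations:

```python
# Hypothetical test-effort estimate: a ratio-based top-down figure
# cross-checked against a bottom-up sum of individual activities.
# Every number below is a placeholder for illustration only.

DEV_EFFORT_HOURS = 1200   # estimated development effort
TEST_RATIO = 0.40         # assumed ratio from previous similar projects
OVERHEAD = 0.15           # allowance for meetings, environment issues, etc.
TRAINING_HOURS = 40       # time to ramp up on the domain or technology
BUFFER_HOURS = 60         # buffer for foreseeable risks

# Top-down: percentage of the total development effort, plus overheads.
top_down = DEV_EFFORT_HOURS * TEST_RATIO * (1 + OVERHEAD)

# Bottom-up: estimate hours per activity and sum, including training and buffer.
activities = {"test planning": 40, "test case design": 160,
              "test execution": 240, "defect retesting": 80, "reporting": 40}
bottom_up = sum(activities.values()) + TRAINING_HOURS + BUFFER_HOURS

print(f"top-down estimate:  {top_down:.0f} hours")
print(f"bottom-up estimate: {bottom_up:.0f} hours")
```

Comparing the two figures is a useful sanity check: a large gap suggests that either the ratio or the activity breakdown needs revisiting.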

II. Ambiguous Requirements
Without adequate documentation, the testing effort usually takes longer and allows more errors to get through to the released version. Ambiguity in requirements makes the test design phase a tedious task. The cost of uncovering and correcting requirement deficiencies at this stage is significantly less than that incurred during the acceptance-testing phase. There may be numerous implied or underlying requirements that can be overlooked by testers who merely glance at the requirements. It is therefore essential that the requirements be understood thoroughly in the initial phase of testing.

How to tackle?
The testers can review the requirements, prepare a list of queries about them, and get those clarified even before preparing the test cases, enabling them to deliver a quality product. A report on the deficiencies in the requirements may also be prepared.


III. Insufficient Test Coverage
A good test suite will achieve high coverage. An inadequate number of test cases cannot exercise the functionality in its entirety. Test coverage is a measure of the quality of testing; if high test coverage is not achieved, the process needs to be strengthened. Another factor is inadequate test data that does not completely cover the possible input ranges.

How to tackle?
1. The associated test case identification number (say, a unique number for every case) can be marked against the requirements in an Excel sheet to ensure that test cases are written for all requirements. Low coverage indicates a process problem, which might require the test generation technique to be improved or training to be imparted to the tester. Many tools are available in the market to measure test coverage.
2. It is not possible to test all conditions in an application system. Test data with valid and invalid values can be designed to cover normal processing operations adequately. Techniques such as boundary-value analysis and equivalence partitioning can be applied while preparing test data (see the sketch after this list).
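
As an illustration of both tips, here is a minimal sketch in Python. The requirement IDs are hypothetical, and the numeric field with a valid range of 18 to 60 is an invented example:

```python
# Hypothetical requirement-to-test-case traceability check. The IDs are
# illustrative; in practice they might live in an Excel sheet or a tool.
requirements = {"REQ-01", "REQ-02", "REQ-03"}
test_cases = {"TC-001": "REQ-01", "TC-002": "REQ-01", "TC-003": "REQ-03"}

uncovered = requirements - set(test_cases.values())
print("requirements with no test case:", uncovered)  # {'REQ-02'}

# Boundary-value and equivalence-partition data for an assumed field that
# accepts integers in the valid range 18..60 (an invented example).
LOW, HIGH = 18, 60
boundary_values = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]
partitions = {"below range (invalid)": 10,
              "in range (valid)": 35,
              "above range (invalid)": 99}
print("boundary values to test:", boundary_values)
print("one representative per partition:", partitions)
```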

IV. Uncontrolled Test Environment
The more the test environment resembles the final production environment, the more reliable the testing. Lack of such an environment will produce unpredictable results once the software is in production.

How to tackle?
1. Testing should take place in a controlled environment, separated from the development and production environments. The testing team should own the test environment, and no change should happen in it without their permission.
2. Measures can be taken to set up the test environment in time and to ensure that it is well managed; that is, the test environment should be sufficiently representative for the tests to be performed, as close as possible to (or the same as) the production environment (a parity-check sketch follows this list). It is necessary that the test manager or a coordinator manages deliveries from the development team and is made responsible for the setup, version management, authorizations, etc. If an independent test team is established, it is ideal to have an independent configuration team as well.
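
One way to make "sufficiently representative" concrete is to compare component versions between the two environments. Here is a minimal sketch; the manifest contents and version strings are invented for illustration:

```python
# Hypothetical manifests of component versions in each environment.
production = {"os": "RHEL 5.0", "java": "1.5.0_11", "db": "Oracle 10g"}
test_env   = {"os": "RHEL 5.0", "java": "1.5.0_07", "db": "Oracle 10g"}

# Report every component whose version differs from production, so the
# test manager can decide whether the gap invalidates the test results.
for component in sorted(set(production) | set(test_env)):
    prod_v, test_v = production.get(component), test_env.get(component)
    if prod_v != test_v:
        print(f"MISMATCH {component}: test={test_v!r} vs production={prod_v!r}")
```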

V. Testing as an Afterthought
Underestimating the effort and resources required for testing results in testing activities starting at the tail end of the development cycle, when it is difficult to fix the major bugs unearthed by the testers. It also forces compromises on detail in the test documents owing to time constraints.

How to tackle?
Test planning can be initiated as soon as the requirements are defined, and test execution can proceed in parallel with application development.

VI. Inadequate Test documentation
Inadequate or improper test documents (test plans, test specifications, defect reports, etc.) result in lost time while analyzing what is to be tested or re-tested and which related areas to cover, which in turn can affect the delivery or the quality of the product.

How to tackle?
1. Adequate effort needs to be spent on documentation as well, since test documentation is an important task in all phases of testing.
2. Care should be taken that all documents related to testing are prepared right from the beginning of the SDLC and updated continuously.

What is the actual testing process in a practical, company environment?

Those who are just out of college and starting to search for jobs have this curiosity: what is the actual working environment in companies like?

Here I focus on the actual software testing process in companies. I will try to share it practically rather than theoretically.

Whenever we get a new project, there is an initial project familiarity meeting. In this meeting we basically discuss: Who is the client? What is the project duration, and when is delivery? Who is involved in the project, i.e., the manager, tech leads, QA leads, developers, testers, etc.?


From the SRS (software requirement specification), the project plan is developed. The testers' responsibility is to create the software test plan from this SRS and project plan. Developers start coding from the design. The project work is divided into different modules, and these modules are distributed among the developers. In the meantime, the testers create test scenarios and write test cases for their assigned modules. We try to cover almost all the functional test cases from the SRS. The data can be maintained manually in Excel test case templates or in bug tracking tools.

When developers finish individual modules, those modules are assigned to testers. Smoke testing is performed on these modules, and if a module fails, it is reassigned to the respective developer for a fix. For modules that pass, manual testing is carried out from the written test cases. Any bug found is assigned to the module's developer and logged in the bug tracking tool. Once the bug is fixed, the tester performs bug verification and regression testing of all related modules. If the bug passes verification, it is marked as verified and then closed; otherwise, the cycle above repeats. (I have already covered the bug life cycle in another post.)
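
The bug cycle described above can be pictured as a small state machine. Below is a minimal sketch in Python; the state names and transitions follow a common pattern, though individual bug tracking tools differ in their exact workflows:

```python
# A simplified bug life cycle as a transition table. Real bug tracking
# tools add more states; this mirrors the cycle described above.
TRANSITIONS = {
    "NEW":      ["ASSIGNED"],              # tester logs a bug, it is assigned
    "ASSIGNED": ["FIXED"],                 # developer fixes the module
    "FIXED":    ["VERIFIED", "REOPENED"],  # tester verifies or reopens
    "REOPENED": ["ASSIGNED"],              # the cycle repeats
    "VERIFIED": ["CLOSED"],                # verified bugs get closed
    "CLOSED":   [],
}

def advance(state: str, target: str) -> str:
    """Move a bug to `target` only if the transition is allowed."""
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

# Walk one bug through a fix, a reopened verification, and a final close.
state = "NEW"
for step in ["ASSIGNED", "FIXED", "REOPENED",
             "ASSIGNED", "FIXED", "VERIFIED", "CLOSED"]:
    state = advance(state, step)
    print(state)
```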

Different tests are performed on individual modules, and integration testing is performed on the module integration. These tests include compatibility testing, i.e., testing the application on different hardware, OS versions, software platforms, browsers, etc. Load and stress testing is also carried out according to the SRS. Finally, system testing is performed by creating a virtual client environment. When all the test cases pass, a test report is prepared and the decision is taken to release the product!

So this was a brief outline of the project life cycle process.

Here is a detailed view of each step, showing what testing is actually carried out in each phase of the software quality and testing life cycle specified by IEEE and ISO standards:

Review of the software requirement specifications

Objectives are set for the major releases

Target dates are planned for the releases

A detailed project plan is built. This includes the decision on design specifications

Develop Test Plan based on Design Specifications

Test Plan: This includes objectives, the methodology adopted while testing, features to be tested and not to be tested, risk criteria, the testing schedule, multi-platform support, and the resource allocation for testing.

Test Specifications
This document includes the technical details (software requirements) required prior to testing.

Writing of Test Cases (a tagging sketch follows this list)
Smoke (BVT) test cases
Sanity Test cases
Regression Test Cases
Negative Test Cases
Extended Test Cases
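
As one way to organize these categories in practice, here is a minimal sketch using pytest markers; the test names and bodies are made-up placeholders:

```python
# test_login.py -- hypothetical example of tagging test cases by suite.
import pytest

@pytest.mark.smoke       # BVT: must pass before deeper testing starts
def test_application_launches():
    assert True  # placeholder for a real check

@pytest.mark.regression  # re-run after every bug fix
def test_login_with_valid_credentials():
    assert True  # placeholder

@pytest.mark.negative    # invalid input should be rejected
def test_login_with_wrong_password():
    assert True  # placeholder
```

With the markers registered in pytest.ini, a command such as `pytest -m smoke` runs only the smoke (BVT) cases while all the suites live in one code base.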

Development - Modules developed one by one

Installers Binding: Installers are built around the individual product.

Build procedure:
A build includes installers of the available products for multiple platforms.

Testing
Smoke test (BVT): a basic application test used to decide whether further testing should proceed

Testing of new features
Cross-platform testing
Stress testing and memory leakage testing.

Bug Reporting
Bug report is created

Development - Code freezing
No more new features are added at this point.

Testing
Builds and regression testing.

Decision to release the product
Post-release Scenario for further objectives.

Testing Checklist

Are you going to start on a new project for testing? Don't forget to consult this testing checklist at each and every step of your project life cycle. The list is largely equivalent to a test plan; it covers all quality assurance and testing standards.

1 Create System and Acceptance Tests.
2 Start Acceptance test Creation.
3 Identify test team.
4 Create Workplan.
5 Create test Approach.
6 Link Acceptance Criteria and Requirements to form the basis of
acceptance test.
7 Use subset of system test cases to form requirements portion of
acceptance test.
8 Create scripts for use by the customer to demonstrate that the system meets
requirements.
9 Create test schedule. Include people and all other resources.
10 Conduct Acceptance Test.
11 Start System Test Creation.
12 Identify test team members.
13 Create Workplan.
14 Determine resource requirements.
15 Identify productivity tools for testing.
16 Determine data requirements.
17 Reach agreement with data center.
18 Create test Approach.
19 Identify any facilities that are needed.
20 Obtain and review existing test material.
21 Create inventory of test items.
22 Identify Design states, conditions, processes, and procedures.
23 Determine the need for Code based (white box) testing. Identify conditions.
24 Identify all functional requirements.
25 End inventory creation.
26 Start test case creation.
27 Create test cases based on inventory of test items.
28 Identify logical groups of business function for the new system.
29 Divide test cases into functional groups traced to test item inventory.
30 Design data sets to correspond to test cases.
31 End test case creation.
32 Review business functions, test cases, and data sets with users.
33 Get signoff on test design from Project leader and QA.
34 End Test Design.
35 Begin test Preparation.
36 Obtain test support resources.
37 Outline expected results for each test case.
38 Obtain test data. Validate and trace to test cases.
39 Prepare detailed test scripts for each test case.
40 Prepare & document environmental set up procedures. Include back up and
recovery plans.
41 End Test Preparation phase.
42 Conduct System Test.
43 Execute test scripts.
44 Compare actual result to expected.
45 Document discrepancies and create problem report.
46 Prepare maintenance phase input.
47 Re-execute test group after problem repairs.
48 Create final test report, include known bugs list.
49 Obtain formal signoff.