
Sunday, November 25, 2007

CMM-Capability Maturity Model

Capability Maturity Model: The Capability Maturity Model (CMM) is a methodology used to develop and refine an organization's software development process. The model describes a five-level evolutionary path of increasingly organized and systematically more mature processes. CMM was developed and is promoted by the Software Engineering Institute (SEI), a research and development center sponsored by the U.S. Department of Defense (DoD). SEI was founded in 1984 to address software engineering issues and, in a broad sense, to advance software engineering methodologies. More specifically, SEI was established to optimize the process of developing, acquiring, and maintaining heavily software-reliant systems for the DoD. Because the processes involved are equally applicable to the software industry as a whole, SEI advocates industry-wide adoption of the CMM.
The CMM is similar to ISO 9001, one of the ISO 9000 series of standards specified by the International Organization for Standardization (ISO). The ISO 9000 standards specify an effective quality system for manufacturing and service industries; ISO 9001 deals specifically with software development and maintenance. The main difference between the two systems lies in their respective purposes: ISO 9001 specifies a minimal acceptable quality level for software processes, while the CMM establishes a framework for continuous process improvement and is more explicit than the ISO standard in defining the means to be employed to that end.

What is CMM (SEI Capability Maturity Model)? The Capability Maturity Model for Software (CMM) is a framework that describes the key elements of an effective software process. There are CMMs for non-software processes as well, such as Business Process Management (BPM). The CMM describes an evolutionary improvement path from an ad hoc, immature process to a mature, disciplined process. The CMM covers practices for planning, engineering, and managing software development and maintenance. When followed, these key practices improve the ability of organizations to meet goals for cost, schedule, functionality, and product quality. The CMM establishes a yardstick against which it is possible to judge, in a repeatable way, the maturity of an organization's software process and compare it to the state of the practice of the industry. The CMM can also be used by an organization to plan improvements to its software process. It also reflects the needs of individuals performing software process improvement, software process assessments, or software capability evaluations; is documented; and is publicly available.

Testing Life Cycle Team Structure

  • An effective testing team includes a mixture of members with testing expertise, tools expertise, database expertise, and domain/technology expertise, as well as consultants and end users.
  • The testing team must be properly structured, with defined roles and responsibilities that allow the testers to perform their function with minimal overlap.
  • There should not be any uncertainty regarding which team member performs which duties.
  • The test manager facilitates any resources required by the testing team.

Iteration Model

Spiral Model

Prototype Model

Waterfall Model

The Waterfall Model is an engineering model designed to be applied to the development of software. The idea is the following: there are different stages of development, and the outputs of the first stage "flow" into the second stage, whose outputs in turn "flow" into the third stage, and so on. There are usually five stages in this model of software development.
Stages of the Waterfall Model
Requirement analysis and planning: In this stage the requirements of the software to be developed are established. These are usually the services it will provide, its constraints, and the goals of the software. Once these are established, they have to be defined in such a way that they are usable in the next stage. This stage is often preceded by a feasibility study, or a feasibility study is included in this stage. The feasibility study addresses questions such as: should we develop the software, and what are the alternatives? It could be called the conception of a software project and might be seen as the very beginning of the life cycle.

V & V PROCESS MODEL :

The V&V Model is the Verification and Validation Model. In this model, development and testing proceed simultaneously. One 'V' stands for Verification and the other for Validation: along the first 'V' we follow the SDLC (Software Development Life Cycle), and along the second 'V' we follow the STLC (Software Testing Life Cycle).
  • Testing of a large system is normally done in two parts: functional verification and validation against the requirement specification, and performance evaluation against the stated requirements.
  • Testing activity is involved right from the beginning of the project.
  • Use of the V&V process model increases a development organization's rate of success in delivering the application on time and improves cost effectiveness.
Testing Related Activities During Requirement Phase
  • Creation and finalization of testing template.
  • Creation of test plan and test strategy.
  • Capturing Acceptance criteria and preparation of acceptance test plan.
  • Capturing Performance Criteria of the software requirements.
Testing activities in Design Phase
  • Develop test cases to ensure that the product conforms to the Requirement Specification document.
  • Verify Test Cases & Test Scripts by peer reviews.
  • Preparation of traceability matrix from system requirements.
Testing activities in Unit Testing Phase
  • Unit testing is done to validate the product with respect to client requirements.
  • Testing can be in multiple rounds.
  • Defects found during testing should be logged into the defect tracking system for the purpose of resolving and tracking them.
  • Test logs and defects are captured and maintained.
  • Review of all test documents.
Testing activities in Integration Testing Phase
  • This testing is done in parallel with integration of various applications or components.
  • Testing the product with its external and internal interfaces without using drivers and stubs.
  • Incremental approach while integrating the interfaces.
Performance testing
  • This is done to validate the performance criteria of the product/ application. This is non-functional testing.

Business Cycle testing

  • This refers to end-to-end testing of realistic, real-life business scenarios.

Testing activities during Release phase

  • Acceptance testing is conducted at the customer location.
  • Resolve all defects reported by the customer during acceptance testing.
  • Conduct Root Cause Analysis (RCA) for those defects reported by customer during acceptance testing.

Software Testing Dictionary

Acceptance Testing
Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate the software meets a set of agreed acceptance criteria.

Accessibility Testing
Verifying that a product is accessible to people with disabilities (visual, hearing, cognitive, etc.).

Ad Hoc Testing
A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well. See also Monkey Testing.

Agile Testing
Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development.

Automated Software Quality (ASQ)
The use of software tools, such as automated testing tools, to improve software quality.

Basis Path Testing
A white box test case design technique that uses the algorithmic flow of the program to design tests.

Basis Set
The set of tests derived using basis path testing.

Beta Testing
Testing of a pre-release version of a software product, conducted by customers.

Binary Portability Testing
Testing an executable application for portability across system platforms and environments, usually for conformance to an Application Binary Interface (ABI) specification.

Black Box Testing
Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.

Bottom Up Testing
An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.

Boundary Testing
Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)

Bug
A fault in a program which causes the program to perform in an unintended or unanticipated manner.

Boundary Value Analysis
BVA is similar to Equivalence Partitioning but focuses on "corner cases," i.e. values at and just outside the boundaries defined by the specification. This means that if a function expects all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001.
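
As a rough sketch of how such boundary inputs might be enumerated for the -100 to +1000 example (the accept/reject check here is purely illustrative):

    #include <stdio.h>

    /* Boundary value analysis for an input range of -100 to +1000:
       exercise the values at, just inside, and just outside each boundary. */
    int main(void) {
        int lower = -100, upper = 1000;
        int candidates[] = { lower - 1, lower, lower + 1, upper - 1, upper, upper + 1 };
        int n = sizeof(candidates) / sizeof(candidates[0]);
        int i;
        for (i = 0; i < n; i++) {
            int v = candidates[i];
            printf("input %5d -> expected to be %s\n", v,
                   (v >= lower && v <= upper) ? "accepted" : "rejected");
        }
        return 0;
    }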

Branch Testing
Testing in which all branches in the program source code are tested at least once.

Breadth Testing
A test suite that exercises the full functionality of a product but does not test features in detail.

CAST
Computer Aided Software Testing.

Capture/Replay Tool
A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.

CMM
The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.

Cause Effect Graph
A graphical representation of inputs and their associated output effects, which can be used to design test cases.

Code Complete
Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.

Code Coverage
An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.

Code Inspection
A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.

Code Walkthrough
A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.

Compatibility Testing
Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.

Concurrency Testing
Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.

Conformance Testing
The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.

Context Driven Testing
The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.

Conversion Testing
Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

Cyclomatic Complexity
A measure of the logical complexity of an algorithm, used in white-box testing.

Data Driven Testing
Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.

Defect
Nonconformance to requirements or to the functional/program specification.

Dependency Testing
Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

Depth Testing
A test that exercises a feature of a product in full detail.

Dynamic Testing
Testing software through executing it. See also Static Testing.

Endurance Testing
Checks for memory leaks or other problems that may occur with prolonged execution.

End-to-End testing
Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Equivalence Class
A portion of a component's input or output domains for which the component's behavior is assumed to be the same from the component's specification.

Equivalence Partitioning
A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.

Exhaustive Testing
Testing which covers all combinations of input values and preconditions for an element of the software under test.

Functional Decomposition
A technique used during planning, analysis and design; creates a functional hierarchy for the software.

Functional Specification
A document that describes in detail the characteristics of the product with regard to its intended features.

Functional Testing
· Testing the features and operational behavior of a product to ensure they correspond to its specifications.
· Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.

Gorilla Testing
Testing one particular module or piece of functionality heavily.

Gray Box Testing
A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.

High Order Tests
Black-box tests conducted once the software has been integrated.

Independent Test Group (ITG)
A group of people whose primary responsibility is software testing.

Inspection
A group review quality improvement process for written material. It consists of two aspects: product improvement (of the document itself) and process improvement (of both document production and inspection).

Integration Testing
Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.

Installation Testing
Confirms that the application under test installs, upgrades, and uninstalls correctly on the supported platforms and configurations.

Localization Testing
Testing which verifies that software has been correctly adapted for a specific locale (language, regional formats, and conventions).

Loop Testing
A white box testing technique that exercises program loops.

Monkey Testing
Testing a system or application on the fly, i.e. running just a few tests here and there to ensure the system or application does not crash.

Negative Testing
Testing aimed at showing software does not work. Also known as "test to fail".

Path Testing
Testing in which all paths in the program source code are tested at least once.

Performance Testing
Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users.

Positive Testing
Testing aimed at showing software works. Also known as "test to pass".

Quality Assurance
All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.

Quality Audit
A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.

Quality Circle
A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.

Quality Control
The operational techniques and the activities used to fulfill and verify requirements of quality.

Ramp Testing
Continuously raising an input signal until the system breaks down.

Recovery Testing
Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.

Regression Testing
Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.

Release Candidate
A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).

Sanity Testing
Brief test of major functional elements of a piece of software to determine if it is basically operational.

Scalability Testing
Performance testing focused on ensuring the application under test gracefully handles increases in work load.

Security Testing
Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.

Smoke Testing
A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

Soak Testing
Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.

Software Testing
A set of activities conducted with the intent of finding errors in software.

Static Analysis
Analysis of a program carried out without executing the program.

Static Analyzer
A tool that carries out static analysis.

Static Testing
Analysis of a program carried out without executing the program.

Storage Testing
Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.

Stress Testing
Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.

Structural Testing
Testing based on an analysis of internal workings and structure of a piece of software.

System Testing
Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.

Testability
The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.

Testing
· The process of exercising software to verify that it satisfies specified requirements and to detect errors.
· The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).
· The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

Test Bed
An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The test plan for a project should enumerate the test bed(s) to be used.

Test Case
· Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as the requirement being tested, test steps, verification steps, prerequisites, outputs, test environment, etc.
· A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

Test Driven Development
Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, often roughly as many lines of test code as production code.

Test Driver
A program or test tool used to execute tests. Also known as a Test Harness.

Test Environment
The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test including stubs and test drivers.

Test First Design
Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.

Test Harness
A program or test tool used to execute tests.

Test Plan
A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.

Test Procedure
A document providing detailed instructions for the execution of one or more test cases.

Test Script
Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.

Test Specification
A document specifying the test approach for a software feature or combination of features, and the inputs, predicted results, and execution conditions for the associated tests.

Test Suite
A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization. There may be several Test Suites for a particular product for example. In most cases however a Test Suite is a high level concept, grouping together hundreds or thousands of tests related by what they are intended to test.

Test Tools
Computer programs used in the testing of a system, a component of the system, or its documentation.

Thread Testing
A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.

Top Down Testing
An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

Total Quality Management
A company commitment to develop a process that achieves high quality product and customer satisfaction.

Traceability Matrix
A document showing the relationship between Test Requirements and Test Cases.

Usability Testing
Testing the ease with which users can learn and use a product.

Use Case
The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities.

Unit Testing
Testing of individual software components.

Validation
The process of evaluating software at the end of the software development process to ensure compliance with software requirements. The techniques for validation are testing, inspection, and reviewing.

Verification
The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection, and reviewing.

Volume Testing
Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files), can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.

Walkthrough
A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review.

White Box Testing
Testing based on an analysis of internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing.

Workflow Testing
Scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end-user.

Functional Testing Vs Non-Functional Testing

Functional Testing: Testing the application against business requirements. Functional testing is done using the functional specifications provided by the client or by using the design specifications like use cases provided by the design team.

Functional Testing covers:

  • Unit Testing
  • Smoke testing / Sanity testing
  • Integration Testing (Top-Down and Bottom-Up Testing)
  • Interface & Usability Testing
  • System Testing
  • Regression Testing
  • Pre-User Acceptance Testing (Alpha & Beta)
  • User Acceptance Testing
  • White Box & Black Box Testing
  • Globalization & Localization Testing

Non-Functional Testing: Testing the application against the client's performance and other non-functional requirements. Non-functional testing is done based on the requirements and test scenarios defined by the client.

Non-Functional Testing covers:

  • Load and Performance Testing
  • Ergonomics Testing
  • Stress & Volume Testing
  • Compatibility & Migration Testing
  • Data Conversion Testing
  • Security / Penetration Testing
  • Operational Readiness Testing
  • Installation Testing
  • Security Testing (Application Security, Network Security, System Security)

Software Testing Metrics

- Cost of finding a defect in testing (CFDT):

  • Total effort spent on testing / defects found in testing

Note: Total effort spent on testing includes the time to create, review, rework, and execute the test cases and to record the defects. It should not include the time spent fixing the defects.
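
A small worked example of how CFDT could be computed (all figures are illustrative):

    #include <stdio.h>

    /* Cost of Finding a Defect in Testing: total testing effort (creation, review,
       rework, execution, defect recording - but not defect fixing) divided by the
       number of defects found in testing. All figures are illustrative. */
    int main(void) {
        double create = 40.0, review = 8.0, rework = 6.0, execute = 30.0, record = 4.0; /* person-hours */
        int defects_found = 22;
        double total_effort = create + review + rework + execute + record;              /* 88 person-hours */
        printf("CFDT = %.2f person-hours per defect\n", total_effort / defects_found);  /* 4.00 */
        return 0;
    }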

- Test Case Adequacy: This defines the number of actual test cases created versus the number of test cases estimated at the end of the test case preparation phase. It is calculated as

  • No. of actual test cases / No. of test cases estimated

- Test Case Effectiveness: This defines the effectiveness of the test cases, measured as the percentage of all detected defects that were found using the test cases. It is calculated as

  • (No. of defects detected using test cases * 100) / Total no. of defects detected
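
A quick sketch showing both ratios computed from illustrative counts:

    #include <stdio.h>

    /* Test Case Adequacy and Test Case Effectiveness (all counts are illustrative). */
    int main(void) {
        int actual_cases = 180, estimated_cases = 200;
        int defects_via_cases = 45, total_defects = 50;

        double adequacy = (double)actual_cases / estimated_cases;
        double effectiveness = (double)defects_via_cases * 100.0 / total_defects;

        printf("Test Case Adequacy      = %.2f\n", adequacy);        /* 0.90  */
        printf("Test Case Effectiveness = %.1f%%\n", effectiveness); /* 90.0% */
        return 0;
    }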

- Effort Variance:

  • {(Actual Efforts-Estimated Efforts) /Estimated Efforts} *100

- Schedule Variance:

  • {(Actual Duration - Estimated Duration)/Estimated Duration} *100

- Schedule Slippage: Slippage is defined as the amount of time a task has been delayed from its original baseline schedule. The slippage is the difference between the scheduled start or finish date for a task and the baseline start or finish date. It is calculated as

  • {(Actual End Date - Estimated End Date) / (Planned End Date - Planned Start Date)} * 100
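
The effort variance, schedule variance, and schedule slippage formulas above can be checked with a short calculation (the effort and date figures are illustrative):

    #include <stdio.h>

    /* Effort Variance, Schedule Variance, and Schedule Slippage,
       all expressed as percentages (figures are illustrative). */
    int main(void) {
        double actual_effort = 520.0, estimated_effort = 480.0;     /* person-hours */
        double actual_duration = 34.0, estimated_duration = 30.0;   /* days         */
        double actual_end = 64.0, estimated_end = 60.0;             /* day numbers  */
        double planned_start = 30.0, planned_end = 60.0;

        double effort_var   = (actual_effort - estimated_effort) / estimated_effort * 100.0;
        double schedule_var = (actual_duration - estimated_duration) / estimated_duration * 100.0;
        double slippage     = (actual_end - estimated_end) / (planned_end - planned_start) * 100.0;

        printf("Effort Variance   = %.1f%%\n", effort_var);    /*  8.3% */
        printf("Schedule Variance = %.1f%%\n", schedule_var);  /* 13.3% */
        printf("Schedule Slippage = %.1f%%\n", slippage);      /* 13.3% */
        return 0;
    }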

- Rework Effort Ratio:

  • {(Actual rework efforts spent in that phase / Total actual efforts spent in that phase)} * 100

- Review Effort Ratio:

  • (Actual review effort spent in that phase / Total actual efforts spent in that phase) * 100

- Requirements Stability Index:

  • {1 - (Total no. of changes / No. of initial requirements)}

- Requirements Creep:

  • (Total no. of requirements added / No. of initial requirements) * 100
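
A quick illustration of the stability index and creep calculations (requirement counts are illustrative):

    #include <stdio.h>

    /* Requirements Stability Index and Requirements Creep (illustrative counts). */
    int main(void) {
        int initial_reqs = 120;
        int changed_reqs = 18;   /* total changes (added + modified + deleted) */
        int added_reqs   = 10;   /* newly added requirements                   */

        double rsi   = 1.0 - (double)changed_reqs / initial_reqs;
        double creep = (double)added_reqs * 100.0 / initial_reqs;

        printf("Requirements Stability Index = %.2f\n", rsi);     /* 0.85 */
        printf("Requirements Creep           = %.1f%%\n", creep); /* 8.3% */
        return 0;
    }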

- Weighted Defect Density:

  • WDD = (5*Count of fatal defects)+(3*Count of Major defects)+(1*Count of minor defects)

Note: Here the Values 5, 3, 1 correspond to severities as mentioned below:

  • Fatal - 5
  • Major - 3
  • Minor - 1
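
For example, with the 5/3/1 weights above and illustrative defect counts:

    #include <stdio.h>

    /* Weighted Defect Density: severity-weighted defect count
       (weights 5/3/1 as listed above; defect counts are illustrative). */
    int main(void) {
        int fatal = 2, major = 7, minor = 15;
        int wdd = 5 * fatal + 3 * major + 1 * minor;
        printf("WDD = %d\n", wdd);   /* 5*2 + 3*7 + 1*15 = 46 */
        return 0;
    }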

Load Runner - Interview Questions

1. What is load testing?
Load testing is testing that an application works correctly under the loads that result from a large number of simultaneous users and transactions, and determining whether it can handle peak usage periods.

2. What is Performance testing?
Performance testing determines whether system functions are performed within an acceptable timeframe. Timing for both read and update transactions should be gathered, first standalone and then in a multi-user environment, to determine the effect of multiple transactions on the timing of a single transaction.

3. What is LoadRunner?
LoadRunner works by creating virtual users who take the place of real users operating client software, such as sending requests using the HTTP protocol to IIS or Apache web servers. Requests from many virtual user clients are generated by Load Generators in order to create a load on the various servers under test; these load generator agents are started and stopped by Mercury's Controller program. The Controller controls load test runs based on Scenarios invoking compiled Scripts and associated Run-time Settings. Scripts are crafted using Mercury's Virtual User Script Generator ("VuGen"), which generates C-language script code to be executed by virtual users by capturing network traffic between Internet application clients and servers. With Java clients, VuGen captures calls by hooking within the client JVM. During runs, the status of each machine is monitored by the Controller. At the end of each run, the Controller combines its monitoring logs with logs obtained from load generators, and makes them available to the Analysis program, which can then create run result reports and graphs for Microsoft Word, Crystal Reports, or an HTML web page.

Each HTML report page generated by Analysis includes a link to results in a text file which Microsoft Excel can open to perform additional analysis. Errors during each run are stored in a database file which can be read by Microsoft Access.

4. What are Virtual Users?
Unlike a WinRunner workstation, which emulates a single user's use of a client, LoadRunner can emulate thousands of Virtual Users. Load generators are controlled by VuGen scripts which issue non-GUI API calls using the same protocols as the client under test, whereas WinRunner GUI Vusers emulate keystrokes, mouse clicks, and other user-interface actions on the client being tested. Only one GUI Vuser can run from a machine unless LoadRunner Terminal Services Manager manages remote machines with the Terminal Server Agent enabled and logged into a Terminal Services Client session.

During run-time, threaded users share a common memory pool. So threading supports more Vusers per load generator.

The status of Vusers on all load generators starts from "Running", then goes to "Ready" after going through the init section of the script. Vusers finish with a passed or failed end status. Vusers are automatically "Stopped" when the load generator is overloaded.

To use Web Services Monitors for SOAP and XML, a separate license is needed, and vUsers require the Web Services add-in installed with Feature Pack (FP1). No additional license is needed for standard web (HTTP) server monitors Apache, IIS, and Netscape.

5. Explain the Load testing process?
Step 1: Planning the test. Here, we develop a clearly defined test plan to ensure the test scenarios we develop will accomplish load-testing objectives.

Step 2: Creating Vusers. Here, we create Vuser scripts that contain tasks performed by each Vuser, tasks performed by Vusers as a whole, and tasks measured as transactions.

Step 3: Creating the scenario. A scenario describes the events that occur during a testing session. It includes a list of machines, scripts, and Vusers that run during the scenario. We create scenarios using LoadRunner Controller. We can create manual scenarios as well as goal-oriented scenarios. In manual scenarios, we define the number of Vusers, the load generator machines, and percentage of Vusers to be assigned to each script. For web tests, we may create a goal-oriented scenario where we define the goal that our test has to achieve. LoadRunner automatically builds a scenario for us.

Step 4: Running the scenario. We emulate load on the server by instructing multiple Vusers to perform tasks simultaneously. Before the testing, we set the scenario configuration and scheduling. We can run the entire scenario, Vuser groups, or individual Vusers.

Step 5: Monitoring the scenario. We monitor scenario execution using the LoadRunner online runtime, transaction, system resource, Web resource, Web server resource, Web application server resource, database server resource, network delay, streaming media resource, firewall server resource, ERP server resource, and Java performance monitors.

Step 6: Analyzing test results. During scenario execution, LoadRunner records the performance of the application under different loads. We use LoadRunner's graphs and reports to analyze the application's performance.

6. When do you do load and performance Testing?
We perform load testing once we are done with interface (GUI) testing. Modern system architectures are large and complex. Whereas single-user testing focuses primarily on the functionality and user interface of a system component, application testing focuses on the performance and reliability of an entire system. For example, a typical application-testing scenario might depict 1000 users logging in simultaneously to a system. This gives rise to issues such as: what is the response time of the system, does it crash, will it work with different software applications and platforms, can it handle hundreds or thousands of users, etc. This is when we do load and performance testing.

7. What are the components of LoadRunner?
The components of LoadRunner are the Virtual User Generator (VuGen), the Controller and Agent process, LoadRunner Analysis and Monitoring, and LoadRunner Books Online.

8. What Component of LoadRunner would you use to record a Script?
The Virtual User Generator (VuGen) component is used to record a script. It enables you to develop Vuser scripts for a variety of application types and communication protocols.

9. When do you do load and performance Testing?
We perform load testing once we are done with interface (GUI) testing. Modern system architectures are large and complex. Whereas single-user testing focuses primarily on the functionality and user interface of a system component, application testing focuses on the performance and reliability of an entire system. For example, a typical application-testing scenario might depict 1000 users logging in simultaneously to a system. This gives rise to issues such as: what is the response time of the system, does it crash, will it work with different software applications and platforms, can it handle hundreds or thousands of users, etc. This is when we do load and performance testing.

10. What are the components of LoadRunner?
The components of LoadRunner are the Virtual User Generator (VuGen), the Controller and Agent process, LoadRunner Analysis and Monitoring, and LoadRunner Books Online. The Virtual User Generator (VuGen) component is used to record a script; it enables you to develop Vuser scripts for a variety of application types and communication protocols.

11. What Component of LoadRunner would you use to play Back the script in multi user mode?
The Controller component is used to playback the script in multi-user mode. This is done during a scenario run where a vuser script is executed by a number of vusers in a group.

12. What is a rendezvous point?
You insert rendezvous points into Vuser scripts to emulate heavy user load on the server. Rendezvous points instruct Vusers to wait during test execution for multiple Vusers to arrive at a certain point, in order that they may simultaneously perform a task. For example, to emulate peak load on the bank server, you can insert a rendezvous point instructing 100 Vusers to deposit cash into their accounts at the same time.
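
A minimal VuGen-style sketch of the bank example (it assumes the LoadRunner run-time environment; the URL and transaction name are made up for illustration):

    Action()
    {
        lr_rendezvous("deposit_cash");           /* each Vuser waits here until the Controller releases them together */

        lr_start_transaction("deposit");
        web_url("deposit",                       /* illustrative request to the server under test */
                "URL=http://bankserver/deposit?amount=100",
                LAST);
        lr_end_transaction("deposit", LR_AUTO);

        return 0;
    }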

13. What is a scenario?
A scenario defines the events that occur during each testing session. For example, a scenario defines and controls the number of users to emulate, the actions to be performed, and the machines on which the virtual users run their emulations.

14. Explain the recording mode for web Vuser script?
We use VuGen to develop a Vuser script by recording a user performing typical business processes on a client application. VuGen creates the script by recording the activity between the client and the server. For example, in web based applications, VuGen monitors the client end of the database and traces all the requests sent to, and received from, the database server. We use VuGen to: Monitor the communication between the application and the server; Generate the required function calls; and Insert the generated function calls into a Vuser script.

15. Why do you create parameters?
Parameters are like script variables. They are used to vary input to the server and to emulate real users: different sets of data are sent to the server each time the script is run. Parameterization also better simulates the usage model for more accurate testing from the Controller, since one script can emulate many different users on the system.
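
A small VuGen-style sketch of parameterization (the {Username} parameter and the URL are hypothetical; the parameter itself would be defined in VuGen and backed by a data file):

    Action()
    {
        /* {Username} is substituted from the parameter's data file, so each
           iteration or Vuser sends a different value to the server. */
        lr_output_message("Logging in as %s", lr_eval_string("{Username}"));

        web_url("login",
                "URL=http://server/login?user={Username}",
                LAST);

        return 0;
    }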

16. What is correlation?
Correlation is used to obtain data which are unique for each run of the script and which are generated by the server during each run. Correlation provides these values dynamically, to avoid errors arising out of duplicate or stale values and also to optimize the code (to avoid nested queries). Automatic correlation is where we set some rules for correlation; it can be application-server specific, and here recorded values are replaced by data created according to these rules. In manual correlation, the response is scanned for the value we want to correlate and Create Correlation is used to correlate it.

17. How do you find out where correlation is required?
Two ways: First, we can scan for correlations and see the list of values which can be correlated; from this we can pick a value to be correlated. Secondly, we can record two scripts and compare them; we can look at the difference file to see the values which need to be correlated.

18. Where do you set automatic correlation options?
Automatic correlation, from the web point of view, can be set in the Recording Options, Correlation tab. Here we can enable correlation for the entire script and choose either issue online messages or offline actions, where we can define rules for the correlation. Automatic correlation for a database can be done by using the Show Output window, scanning for correlations, picking the correlated query from the Correlate Query tab, and choosing which query value we want to correlate. If we know the specific value to be correlated, we just use Create Correlation for the value and specify how the value is to be created.

19. What is a function to capture dynamic values in the web Vuser script?
The web_reg_save_param function saves dynamic data from a server response to a parameter.
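
A hedged sketch of how the function is typically used for correlation (the boundaries, parameter name, and URLs are illustrative and must match the real server response):

    Action()
    {
        /* Register the capture BEFORE the request whose response contains the dynamic value. */
        web_reg_save_param("SessionID",
                           "LB=sessionid=",
                           "RB=\"",
                           LAST);

        web_url("login", "URL=http://server/login", LAST);

        /* Reuse the captured value instead of the hard-coded one that recording produced. */
        web_url("account",
                "URL=http://server/account?session={SessionID}",
                LAST);

        return 0;
    }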

20. Which is the VuGen recording and scripting language?
LoadRunner script code obtained from recording is in ANSI C language syntax, represented by icons in Icon view until you click Script View.

21. What are Scenarios?
Scenarios encapsulate the Vuser Groups and scripts to be executed on load generators at run-time.

Manual scenarios can distribute the total number of Vusers among scripts based on the analyst-specified percentage (evenly among load generators). Goal-oriented scenarios are automatically created based on a specified transaction response time or number of hits/transactions per second (TPS); for these, test analysts specify the percentage of the target assigned to each script.

22. When do you disable logging in the Virtual User Generator, and when do you choose standard and extended logs?
Once we debug our script and verify that it is functional, we can enable logging for errors only. When we add a script to a scenario, logging is automatically disabled. Standard Log option: when you select Standard log, it creates a standard log of the functions and messages sent during script execution, to use for debugging; disable this option for large load-testing scenarios. Extended Log option: select Extended log to create an extended log, including warnings and other messages; disable this option for large load-testing scenarios. In both cases, when you copy a script to a scenario, logging is automatically disabled. We can specify which additional information should be added to the extended log using the Extended Log options.

23. How do you debug a LoadRunner script?
VuGen contains two options to help debug Vuser scripts: the Run Step by Step command and breakpoints. The Debug settings in the Options dialog box allow us to determine the extent of the trace to be performed during scenario execution. The debug information is written to the Output window. We can manually set the message class within a script using the lr_set_debug_message function. This is useful if we want to receive debug information about a small section of the script only.

24. How do you write user defined functions in LR?
Before we create a user-defined function we need to create the external library (DLL) containing the function. We add this library to the VuGen bin directory. Once the library is added, the user-defined function can be used in the script, for example to assign a value to a parameter. The function should have the following format: __declspec(dllexport) char* <function name>(char*, char*)
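
A sketch of what such a DLL source could look like (the function name get_timestamp and its body are hypothetical; only the exported signature format comes from the documentation):

    /* my_utils.c - build as a DLL and copy it to the VuGen bin directory.
       The two char* parameters are required by the user-defined-function format
       even when the function does not use them. */
    #include <time.h>

    __declspec(dllexport) char *get_timestamp(char *p1, char *p2)
    {
        static char buf[32];
        time_t now = time(NULL);
        strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S", localtime(&now));
        return buf;
    }

Inside the script the DLL would then be loaded with lr_load_dll("my_utils.dll") before the function is called.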

25. What are the changes you can make in run-time settings?
The Run-Time Settings that we make are:
Pacing - contains the iteration count.
Log - under this we have Disable Logging, Standard Log, and Extended Log.
Think Time - here we have options such as Ignore think time and Replay think time.
General - under the General tab we can set the Vusers to run as a process or as multithreading, and whether to mark each step as a transaction.

26. Where do you set Iteration for Vuser testing?
We set Iterations in the Run Time Settings of the VuGen. The navigation for this is Run time settings, Pacing tab, set number of iterations.

27. How do you perform functional testing under load?
Functionality under load can be tested by running several Vusers concurrently. By increasing the amount of Vusers, we can determine how much load the server can sustain.

28. How to use network drive mappings?
If several load generators need to access the same physical files, rather than having to remember to copy the files each time they change, each load generator can reference a common folder using a mapped drive. But since drive mappings are associated with a specific user:

  • Logon the load generator as the user the load generator will use
  • Open Windows Explorer and under Tools select Map Network Drive and create a drive. It saves time and hassle to have consistent drive letters across load generators, so some organizations reserve certain drive letters for specific locations.
  • Open the LoadRunner service within Services (accessed from Control Panel, Administrative Tasks),
  • Click the "Login" tab.
  • Specify the username and password the load generator service will use. (A dot appears in front of the username if the userid is for the local domain).
  • Stop and start the service again.

29. What is Ramp up? How do you set this?
This option is used to gradually increase the number of Vusers (the load) on the server. An initial value is set, and a value to wait between intervals can be specified. To set Ramp Up, go to the Scenario Scheduling Options.

30. What is the advantage of running the Vuser as thread?
VuGen provides the facility to use multithreading. This enables more Vusers to be run per generator. If the Vuser is run as a process, the same driver program is loaded into memory for each Vuser, thus taking up a large amount of memory. This limits the number of Vusers that can be run on a single generator. If the Vuser is run as a thread, only one instance of the driver program is loaded into memory for the given number of Vusers (say 100). Each thread shares the memory of the parent driver program, thus enabling more Vusers to be run per generator.

31. If you want to stop the execution of your script on error, how do you do that?
The lr_abort function aborts the execution of a Vuser script. It instructs the Vuser to stop executing the Actions section, execute the vuser_end section and end the execution. This function is useful when you need to manually abort a script execution as a result of a specific error condition. When you end a script using this function, the Vuser is assigned the status "Stopped". For this to take effect, we have to first uncheck the Continue on error option in Run-Time Settings.
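
A minimal sketch of how lr_abort might be used inside an Action section (the guard condition and URL are purely illustrative):

    Action()
    {
        int login_ok = 0;    /* illustrative flag that earlier steps would set */

        web_url("home", "URL=http://server/home", LAST);

        if (!login_ok) {
            lr_error_message("Login step failed - aborting this Vuser");
            lr_abort();      /* skips the rest of Actions, runs vuser_end, status becomes Stopped */
        }

        return 0;
    }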

32. What is the relation between Response Time and Throughput?
The Throughput graph shows the amount of data in bytes that the Vusers received from the server in a second. When we compare this with the transaction response time, we will notice that as throughput decreased, the response time also decreased. Similarly, the peak throughput and highest response time would occur approximately at the same time.

33. Explain the Configuration of your systems?
The configuration of our systems refers to that of the client machines on which we run the Vusers. The configuration of any client machine includes its hardware settings, memory, operating system, software applications, development tools, etc. This system component configuration should match with the overall system configuration that would include the network infrastructure, the web server, the database server, and any other components that go with this larger system so as to achieve the load testing objectives.

34. How do you identify the performance bottlenecks?
Performance Bottlenecks can be detected by using monitors. These monitors might be application server monitors, web server monitors, database server monitors and network monitors. They help in finding out the troubled area in our scenario which causes increased response time. The measurements made are usually performance response time, throughput, hits/sec, network delay graphs, etc.

35. If the web server, database, and network are all fine, where could the problem be?
The problem could be in the system itself or in the application server or in the code written for the application.

36. How did you find web server related issues?
Using Web Resource monitors we can find the performance of web servers. Using these monitors we can analyze the throughput on the web server, the number of hits per second that occurred during the scenario, the number of HTTP responses per second, and the number of downloaded pages per second.

37. How did you find database related issues?
By running the Database monitor, and with the help of the Database Resource graph, we can find database-related issues. E.g. you can specify the resource you want to measure before running the Controller, and then you can see database-related issues.

38. What is the difference between Overlay graph and Correlate graph?
Overlay Graph: overlays the content of two graphs that share a common x-axis. The left y-axis of the merged graph shows the current graph's values, and the right y-axis shows the values of the graph that was merged. Correlate Graph: plots the y-axes of two graphs against each other. The active graph's y-axis becomes the x-axis of the merged graph, and the y-axis of the graph that was merged becomes the merged graph's y-axis.

39. How did you plan the Load?
A load test is planned to decide the number of users, what kind of machines we are going to use, and from where they are run. It is based on two important documents, the Task Distribution Diagram and the Transaction Profile. The Task Distribution Diagram gives us the information on the number of users for a particular transaction and the time of the load; the peak usage and off-usage are decided from this diagram. The Transaction Profile gives us the information about the transaction names and their priority levels with regard to the scenario we are designing.

40. What does vuser_init action contain?
The vuser_init action contains procedures to log in to a server.

41. What does vuser_end action contain?
Vuser_end section contains log off procedures.

42. What is think time?
Think time is the time that a real user waits between actions. Example: When a user receives data from a server, the user may wait several seconds to review the data before responding. This delay is known as the think time. Changing the Threshold: Threshold level is the level below which the recorded think time will be ignored. The default value is five (5) seconds. We can change the think time threshold in the Recording options of the Vugen.
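
For example, a VuGen-style sketch with an 8-second pause between two illustrative steps (a recorded think time below the 5-second threshold would simply not be generated):

    Action()
    {
        web_url("search", "URL=http://server/search?q=flights", LAST);

        lr_think_time(8);    /* pause 8 seconds, as a real user would while reading the results */

        web_url("book", "URL=http://server/book?flight=101", LAST);

        return 0;
    }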

43. What is the difference between standard log and extended log?
The standard log sends a subset of the functions and messages sent during script execution to a log; the subset depends on the Vuser type. The extended log sends detailed script execution messages to the output log. This is mainly used during debugging when we want information about:

  • Parameter substitution
  • Data returned by the server
  • Advanced trace

44. What is lr_debug_message?
The lr_debug_message function sends a debug message to the output log when the specified message class is set.

45. What is lr_output_message?
The lr_output_message function sends notifications to the Controller Output window and the Vuser log file.

46. What is lr_error_message?
The lr_error_message function sends an error message to the LoadRunner Output window.
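
One call of each of these three functions, purely to show where the output goes (the messages and the {Username} parameter are illustrative):

    Action()
    {
        /* Routine notification: Controller Output window and the Vuser log. */
        lr_output_message("Iteration started for %s", lr_eval_string("{Username}"));

        /* Error notification: flagged as an error in the Output window. */
        lr_error_message("Login failed for %s", lr_eval_string("{Username}"));

        /* Debug message: emitted only when the extended log message class is enabled. */
        lr_debug_message(LR_MSG_CLASS_EXTENDED_LOG, "Reached checkpoint after login");

        return 0;
    }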

47. What is lrd_stmt?
The lrd_stmt function associates a character string (usually a SQL statement) with a cursor. This function sets a SQL statement to be processed.

48. What is lrd_fetch?
The lrd_fetch function fetches the next row from the result set.

49. What is Throughput?
Throughput is the amount of data (in bytes) that the Vusers receive from the server per second. If the throughput scales upward as time progresses and the number of Vusers increases, this indicates that the bandwidth is sufficient. If the graph were to remain relatively flat as the number of Vusers increased, it would be reasonable to conclude that the bandwidth is constraining the volume of data delivered.

50. What are the various types of Goals in a Goal-Oriented Scenario?
LoadRunner provides you with five different types of goals in a goal-oriented scenario:

  • The number of concurrent Vusers
  • The number of hits per second
  • The number of transactions per second
  • The number of pages per minute
  • The transaction response time that you want your scenario to achieve

Analysis of a Scenario (Bottlenecks): In the Running Vusers graph correlated with the Response Time graph, you can see that as the number of Vusers increases, the average response time of the check-itinerary transaction very gradually increases. In other words, the average response time steadily increases as the load increases. At 56 Vusers there is a sudden, sharp increase in the average response time. We say that the test broke the server; that is the mean time before failure (MTBF). The response time clearly began to degrade when there were more than 56 Vusers running simultaneously.

Defect Removal Efficiency

The Defect Removal Efficiency (DRE) is the percentage of defects that have been removed during an activity, computed with the equation below:

DRE = (Number of Defects Removed / Number of Defects at Start of Process) * 100

The DRE can also be computed for each software development activity and plotted on a bar graph to show the relative defect removal efficiencies for each activity. Or, the DRE may be computed for a specific task or technique (e.g. design inspection, code walkthrough, unit test, 6 months of operation, etc.).

We can also calculate DRE as:

DRE = A / (A+B)

where A = defects raised by the testing team and B = defects raised by the customer

If DRE >= 0.8, the product is considered good; otherwise it is not.
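
A worked example with illustrative counts:

    #include <stdio.h>

    /* Defect Removal Efficiency: A = defects raised by the testing team,
       B = defects raised by the customer (both counts illustrative). */
    int main(void) {
        int a = 90, b = 10;
        double dre = (double)a / (a + b);
        printf("DRE = %.2f -> %s\n", dre,
               dre >= 0.8 ? "good product" : "needs improvement");   /* 0.90 -> good product */
        return 0;
    }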

Software Testing Estimation Process

The software testing estimation process is one of the most difficult and critical activities. When we say that a project will be completed in a particular time at a particular cost, it must happen. If it does not, it may result in peers' comments and senior management's warnings, or even in being fired, depending on the reasons for and seriousness of the failure.

Here are a few rules for effective software testing estimation:

- Estimation must be based on previous projects. All estimates should be based on experience from previous projects.

- Estimation must be recorded. All decisions should be recorded. It is very important because if requirements change for any reason, the records would help the testing team to estimate again.

- Estimation shall always be based on the software requirements. All estimation should be based on what would be tested. The software requirements shall be read and understood by the testing team as well as the development team. Without the testing team's participation, no serious estimation can be made.

- Estimation must be verified. All estimates should be verified. Two spreadsheets can be created for recording the estimates; at the end, compare both. If the estimates deviate from each other, a re-estimation should be made.

- Estimation must be supported by tools. Tools such as a spreadsheet containing the metrics can automatically calculate the cost and duration for each testing phase. Also, a document containing sections such as a cost table, risks, and free notes should be created. Showing this document to the customer can help the customer decide which kind of test he needs.

- Estimation shall be based on expert judgment. Experienced resources can estimate how long the testing would take.

- Classify the requirements into the following categories:

  • Critical: The development team has little knowledge in how to implement it.
  • High: The development team has good knowledge in how to implement it but it is not an easy task.
  • Normal: The development team has good knowledge of how to implement it.

What is the difference between Bug, Error and Defect?

Error: It is the deviation between the actual and the expected value.

Bug: It is a fault found in the development environment, before the product is shipped to the respective customer.

Defect: It is a fault found in the product itself, after it has been shipped to the respective customer.

Explain Peer Review in Software Testing

It is an alternative form of testing, where some colleagues are invited to examine your work products for defects and improvement opportunities.

Some Peer review approaches are,

Inspection – It is a more systematic and rigorous type of peer review. Inspections are more effective at finding defects than are informal reviews.
Ex: In Motorola's Iridium project nearly 80% of the defects were detected through inspections, whereas only 60% of the defects were detected through informal reviews.

Team Reviews – It is a planned and structured approach, but less formal and less rigorous compared to inspections.

Walkthrough – It is an informal review because the work product’s author describes it to some colleagues and asks for suggestions. Walkthroughs are informal because they typically do not follow a defined procedure, do not specify exit criteria, require no management reporting, and generate no metrics.

Pair Programming – In pair programming, two developers work together on the same program at a single workstation and continuously review each other's work.

Peer Deskcheck – In Peer Deskcheck only one person besides the author examines the work product. It is an informal review, where the reviewer can use defect checklists and some analysis methods to increase the effectiveness.

Passaround – It is a multiple, concurrent peer deskcheck where several people are invited to provide comments on the product.