The International Institute for Software Testing is giving away FIFTY days of free training. Up to TEN companies may win; each company will get FIVE days of free training.
Apply for the IIST's Software Testing Best Practice Award
at http://www.iist.org/bestpractice and receive the following benefits:
**** Get your company featured as an Award Winning Company in all IIST publicity channels and press releases
**** Tell everyone how great your test process is
**** Get 5 days of free training at the International Testing Certification Super Week (see details at http://iist.org/superweek)
**** Get your company and your test team recognized as an Award Winning Team during these remarkable events
The International Testing Certification Super Week will be held in the following cities:
Las Vegas, NV, November 26-30, 2007
Chicago, IL, March 24-28, 2008
During these events, IIST will offer 25 full-day, in-depth courses taught by leading industry experts in Software Testing & Quality.
**** Register by September 30th, and save 20% (Las Vegas)
See details at: http://www.iist.org/stpw/lasvegas/index.php
**** Register by December 31st, 2007 and save 30% (Chicago)
See details at: http://www.iist.org/stpw/chicago08/index.php
Saturday, February 23, 2008
Software Testing Best Practice Award
Reporting bugs - A how to guide
http://www.edgeofmyseat.com/articles/2007/07/08/reporting-bugs/
Full Text of the article
-----------------------------------------------------
When working with a developer or team of developers on an application – whether you are a designer working with developers or an end client hiring them – you all want the same end result: a slick, bug-free application. During the testing of any application it is likely that some bugs or issues will show up, and this article aims to explain how to report them effectively so that your developers don’t need to spend time working out what the problem is before they can fix it. This helps to ensure that projects stay on budget and that developers spend their time adding features to the application rather than chasing the details needed to reproduce and fix issues.
“It’s just not working!”
When you find a problem, it is very tempting to just fire off an email and presume that the developer will immediately be able to see the problem too. However, by taking a few minutes to describe the problem accurately, you can prevent confusion about what the problem actually is and save both your time and the developer’s: she won’t need to get back to you to find out what happened, or spend a long time trying to reproduce the issue.
A good report
A good bug report tells your developer three vital things:
- What you expected to happen
- What actually happened
- What you did/were doing when it happened
What you expected to happen
There are two kinds of ‘bugs’. The first is where something breaks – you see an error message, your uploaded data disappears, you submit a form and the change isn’t saved. These bugs are generally easy to report and identify: all your developer needs to know is exactly what you were doing or inputting at the time, and they should be able to reproduce and fix the issue.
The second kind of bug is where the application doesn’t function as you expected. This might be because the developer has misinterpreted part of the specification or it could be that what you expect just isn’t how something can work. In this case the developer believes that it is working fine – and in fact it is ‘working’ even if it is incorrect. If your bug report is that the feature is broken, the developer may then spend time looking for some error in this part of the application when what they need to realize is that it isn’t working as you expected. By giving the information about what you expected to happen the developer can think ‘ah … you wanted it to do x and it is doing y’ and a resolution can be sorted out quickly.
What actually happened
What actually happened is very rarely ‘nothing’, yet bug reports often contain the phrase ‘nothing happened’. If what happened was ‘nothing’ in terms of the intended result, then explain that in a few more words. For example, if you clicked the submit button on a form and it didn’t submit and move on to the next page, you could say:
“The form didn’t submit – it just remained on the same page.”
Or perhaps the form submitted but a blank page was displayed:
“After submitting the form a blank page loaded.”
If an error message displays on the screen then include that in the report. Just copy and paste the error message.
If you use Internet Explorer then your browser may not display the error message generated by the server, instead showing a generic error page. You can make IE display the real error message by going to Tools > Internet Options > Advanced, scrolling down to the Browsing section, and unchecking ‘Show friendly HTTP error messages’.
What you were doing when it happened
Your developer wants to know this information – not because they want to tell you that you were doing something wrong, but because it is highly likely that the bug occurs only when a certain path of actions is followed, or when a certain type of data is entered. The more information you can give your developer the easier it will be for them to reproduce the problem you saw and fix it. Things you should include:
The steps taken
List exactly what you did, in the order you did it if possible. If you can go back, try the same steps again and make the problem happen again, that is great – note down exactly how you made it occur. Your developer will be pleased, as you have just saved her time trying to reproduce the issue. Even if you can’t reproduce it, no one is going to doubt that the problem happened – just describe, as far as you can remember, how you got to the broken point.
Any data you were entering
If the problem happened after you added some data to a form, include the data with the bug report. If you were uploading something such as an image into the application then include that too.
It may also be helpful to copy and paste the URL out of the address bar of the browser so the developer knows exactly which page you were on at the time.
The browser and operating system you were using at the time
With web applications, problems may occur in only one browser. Let your developer know exactly what you are using – including the version number, for example ‘Internet Explorer 7 on Windows XP’ – so they can create the same environment to test the problem.
Effective bug reporting can make a huge difference in how quickly problems can be resolved, and prevent frustration on both sides of the process. Including the above information, even if it doesn’t seem relevant, will be appreciated by the developer. You don’t need to write an essay, just a few clear lines explaining the key information of:
- what you expected to happen
- what actually happened, and
- what you did/were doing when it happened.
This will be enough to isolate all but the most complicated of issues, and once an issue can be reproduced it is well on its way to being fixed.
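Putting the three points together, a complete report might look something like this (an invented example – the application, data and versions are made up for illustration):
“I expected that after clicking Save on the Edit Profile form, my new email address would be stored and shown on the profile page. What actually happened: the form submitted, but the profile page still shows the old address and no error message appeared. Steps: I logged in as a normal user, went to Edit Profile, changed the email field to test@example.com and clicked Save. It happens every time I repeat these steps. I was using Internet Explorer 7 on Windows XP, and the URL was http://www.example.com/profile/edit.”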
Requirements Testing
Testing software is an integral part of building a system. However, if the software is based on inaccurate requirements then, despite well-written code, the software will be unsatisfactory. Most of the defects in a system can be traced back to wrong, missing, vague or incomplete requirements.
Requirements seem to be ephemeral. They flit in and out of projects, they are capricious, intractable, unpredictable and sometimes invisible. When gathering requirements we are searching for all of the criteria for a system's success. We throw out a net and try to capture all these criteria.
The Quality Gateway
As soon as we have a single requirement in our net we can start testing. The aim is to trap requirements-related defects as early as they can be identified. We prevent incorrect requirements from being incorporated in the design and implementation where they will be more difficult and expensive to find and correct.
To pass through the quality gateway and be included in the requirements specification, a requirement must pass a number of tests. These tests are concerned with ensuring that the requirements are accurate, and do not cause problems by being unsuitable for the design and implementation stages later in the project.
Make The Requirement Measurable
In his work on specifying the requirements for buildings, Christopher Alexander describes setting up a quality measure for each requirement.
"The idea is for each requirement to have a quality measure that makes it possible to divide all solutions to the requirement into two classes: those for which we agree that they fit the requirement and those for which we agree that they do not fit the requirement."
In other words, if we specify a quality measure for a requirement, we mean that any solution that meets this measure will be acceptable. Of course it is also true to say that any solution that does not meet the measure will not be acceptable.
The quality measures will be used to test the new system against the requirements. The remainder of this paper describes how to arrive at a quality measure that is acceptable to all the stakeholders.
Quantifiable Requirements
Consider a requirement that says "The system must respond quickly to customer enquiries". First we need to find a property of this requirement that provides us with a scale for measurement within the context. Let's say that we agree that we will measure the response using minutes. To find the quality measure we ask: "under what circumstances would the system fail to meet this requirement?" The stakeholders review the context of the system and decide that they would consider it a failure if a customer has to wait longer than three minutes for a response to his enquiry. Thus "three minutes" becomes the quality measure for this requirement.
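To make this concrete, here is a minimal sketch, in Python, of how the three-minute quality measure could drive an automated check. The system hook is an invented, simulated stand-in; a real harness would submit an enquiry through the system's actual interface and measure the wait:

```python
# The agreed quality measure: no customer waits longer than three minutes.
RESPONSE_LIMIT_SECONDS = 3 * 60


def submit_enquiry_and_wait() -> float:
    """Invented stand-in for driving the real system.

    Returns the customer's measured wait in seconds; here it is simulated
    as a 90-second response so the example is runnable on its own.
    """
    return 90.0


def test_enquiry_response_time() -> None:
    elapsed = submit_enquiry_and_wait()
    assert elapsed <= RESPONSE_LIMIT_SECONDS, (
        f"Customer waited {elapsed:.0f}s; the quality measure "
        f"allows {RESPONSE_LIMIT_SECONDS}s"
    )


test_enquiry_response_time()
print("Solution fits the requirement: response within three minutes")
```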
Any solution to the requirement is tested against the quality measure. If the solution makes a customer wait for longer than three minutes then it does not fit the requirement. So far so good: we have defined a quantifiable quality measure. But specifying the quality measure is not always so straightforward. What about requirements that do not have an obvious scale?
Non-quantifiable Requirements
Suppose a requirement is "The automated interfaces of the system must be easy to learn". There is no obvious measurement scale for "easy to learn". However if we investigate the meaning of the requirement within the particular context, we can set communicable limits for measuring the requirement.
Again we can make use of the question: "What is considered a failure to meet this requirement?" Perhaps the stakeholders agree that there will often be novice users, and the stakeholders want novices to be productive within half an hour. We can define the quality measure to say "a novice user must be able to learn to successfully complete a customer order transaction within 30 minutes of first using the system". This becomes a quality measure provided a group of experts within this context is able to test whether the solution does or does not meet the requirement.
An attempt to define the quality measure for a requirement helps to rationalise fuzzy requirements. Something like "the system must provide good value" is an example of a requirement that everyone would agree with, but each person has his own meaning. By investigating the scale that must be used to measure "good value" we identify the diverse meanings.
Sometimes by causing the stakeholders to think about the requirement we can define an agreed quality measure. In other cases we discover that there is no agreement on a quality measure. Then we substitute this vague requirement with several requirements, each with its own quality measure.
Requirements Test 1
Does each requirement have a quality measure that can be used to test whether any solution meets the requirement?
By adding a quality measure to each requirement we have made the requirement visible. This is the first step to defining all the criteria for measuring the goodness of the solution. Now let's look at other aspects of the requirement that we can test before deciding to include it in the requirements specification.
Requirements Test 2
Does the specification contain a definition of the meaning of every essential subject matter term within the specification?
When the allowable values for each of the attributes are defined, this provides data that can be used to test the implementation.
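As an illustration, suppose the specification defined the allowable values of an (invented) order-status attribute; those definitions translate directly into test data and checks, as in this Python sketch:

```python
# Invented example: the specification defines the allowable values for an
# "order status" attribute, and those values become test data directly.
ALLOWED_ORDER_STATUSES = {"pending", "paid", "shipped", "cancelled"}


def accepts_status(status: str) -> bool:
    """Stand-in for the implementation's validation of the attribute."""
    return status in ALLOWED_ORDER_STATUSES


# Every value the specification permits must be accepted...
for status in ALLOWED_ORDER_STATUSES:
    assert accepts_status(status), f"Spec-defined value rejected: {status}"

# ...and values outside the defined set must be rejected.
assert not accepts_status("lost-in-transit")
print("Implementation agrees with the specified allowable values")
```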
Requirements Test 3
Is every reference to a defined term consistent with its definition?
Requirements Test 4
Is the context of the requirements wide enough to cover everything we need to understand?
Requirements Test 5
Have we asked the stakeholders about conscious, unconscious and undreamed of requirements?
Requirements Test 5 (enlarged)
Have we asked the stakeholders about conscious, unconscious and undreamed of requirements? Can you show that a modelling effort has taken place to discover the unconscious requirements? Can you demonstrate that brainstorming or similar efforts have taken place to find the undreamed of requirements?
Requirements Test 6
Is every requirement in the specification relevant to this system?
Requirements Test 7
Does the specification contain solutions posturing as requirements?
Requirements Test 8
Is the stakeholder value defined for each requirement?
Requirements Test 9
Is each requirement uniquely identifiable?
Requirements Test 10
Is each requirement tagged to all parts of the system where it is used? For any change to requirements, can you identify all parts of the system where this change has an effect?
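One lightweight way to achieve such tagging is a traceability table that maps each requirement identifier to the parts of the system where it is used. A minimal Python sketch (the requirement IDs and component names are invented):

```python
# Invented traceability table: each uniquely identified requirement is
# tagged with every part of the system where it is used.
TRACEABILITY = {
    "REQ-001": ["order_form", "order_service", "confirmation_email"],
    "REQ-002": ["order_service", "warehouse_feed"],
    "REQ-003": ["reporting_dashboard"],
}


def impact_of_change(requirement_id):
    """List every part of the system a change to this requirement touches."""
    return TRACEABILITY.get(requirement_id, [])


# If REQ-002 changes, these components need review and retesting:
print(impact_of_change("REQ-002"))  # ['order_service', 'warehouse_feed']
```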
Conclusions
The requirements specification must contain all the requirements that are to be solved by our system. The specification should objectively specify everything our system must do and the conditions under which it must perform. Management of the number and complexity of the requirements is one part of the task.
The most challenging aspect of requirements gathering is communicating with the people who are supplying the requirements. If we have a consistent way of recording requirements, we make it possible for the stakeholders to participate in the requirements process. As soon as we make a requirement visible we can start testing it and asking the stakeholders detailed questions. We can apply a variety of tests to ensure that each requirement is relevant, and that everyone has the same understanding of its meaning. We can ask the stakeholders to define the relative value of requirements. We can define a quality measure for each requirement, and we can use that quality measure to test the eventual solutions.
Testing starts at the beginning of the project, not at the end of the coding. We apply tests to assure the quality of the requirements. Then the later stages of the project can concentrate on testing for good design and good code. The advantages of this approach are that we minimise expensive rework by minimising requirements-related defects that could have been discovered, or prevented, early in the project's life.
References:
Suzanne Robertson, “An Early Start to Testing: How to Test Requirements”
Wednesday, February 6, 2008
Testac, a proprietary test design tool
Testac is a pioneering test design workbench that significantly enhances the quality of Maveric’s testing services and compresses test cases to a tenth of their original number. Testac brings rigour to the test design phase, focussing on “what to test” and “how much to test”. It is the only knowledge-embedded test design workbench that forces a detailed, systematic view of the application under test by following business processes and transaction flows to generate scenarios and cases. Moving forward, Maveric will continue to invest in this robust platform to further enrich its User Acceptance Testing (UAT) services.
“According to an IDC report on the software testing industry, the market opportunity for the Indian offshore testing companies is currently $2 billion, and is estimated to rise to $8 billion in 2008,” said Mr. Ranga Reddy, Chief Executive Officer, Maveric Systems. “Testac has led to a paradigm shift from a pure service model to an IP based services model thereby making us the only Indian company to have its proprietary intellectual property in the testing domain. The financial services domain expertise of our testing team coupled with our knowledge embedded design workbench have resulted in significant benefits to our customers,” he added.
Over the last six months, Maveric has deployed Testac for its clients in the Middle East, including Dubai Islamic Bank, Bank Muscat and Emirates Bank. Maveric was recently awarded the coveted ‘Product Innovation Award’ by Frost & Sullivan in the software testing category at the sixth ‘India ICT Awards’, hosted in Mumbai in December 2007. At present, Maveric plans to target corporates in the banking, finance, insurance and BPO sectors in India and the UK.
Maveric has strongly advocated the “model-based testing” approach to effectively test end-to-end business scenarios, and Testac is built on this concept. A functional framework of the related business segments is embedded into the test design workbench. Some of the key benefits of Testac include:
1. Helps deliver better test coverage
2. Significant reduction in timelines, up to 30%
3. Minimises test duplication
4. Carries deep domain knowledge
5. Permits prioritisation during execution
Testac Workbench has three integral components:
1. A Business Modeler that holds pre-defined standard models for specific verticals. Models capture business definitions and rules at the product, transaction and business process levels.
2. A Design Generator that applies powerful algorithms and design principles to the business model to come up with optimised test cases.
3. An Integrator that connects the workbench to upstream and downstream tools for defect and test management.
Maveric as an organisation constantly innovates and adapts to the ever-changing landscape of the testing domain. This new test design workbench will support Maveric’s commitment to providing an industry-leading software-testing platform for the future.
About Maveric Systems Ltd
Maveric Systems Ltd. has been one of India’s top three independent software-testing companies since 2000. Over the past eight years, Maveric has pioneered the outsourcing model in software testing. Maveric has successfully moved up the value chain by taking clients away from a resource-augmentation model towards value-added, IP-led services that significantly enhance the quality of the applications its clients deploy. As a leading pure-play, independent software testing company, Maveric today has multiple delivery locations in India, the UK, the Middle East and the USA. Maveric’s impressive client list includes a wide array of leading software product companies, system integrators and financial institutions. With an employee strength of 500, Maveric has been growing rapidly over the past five years.
For more information on Maveric Systems, please visit www.maveric-systems.com
Tuesday, January 29, 2008
Delivery Management
Responsibilities - The typical responsibilities of Delivery Management are to:
Ensure that all deliverable work products and services are delivered as required by the binding legal agreements.
Ensure that all deliverable work products and services are delivered according to the budget and schedule.
Eliminate risks due to poor delivery.
Preconditions - Delivery management typically may begin when the following preconditions hold:
Either the Endeavor has started or the Center exists.
The management team is adequately staffed and trained in delivery management.
Completion Criteria - Delivery management is typically complete when the following postconditions hold:
Either the Endeavor has been completed or the Center has been retired.
All deliverable work products and services have been successfully delivered to (and accepted by) the customer organization.
All commercial packages have been successfully delivered by the vendor organization(s) to the development organization.
All components have been successfully delivered by the subcontractor organization(s) to the development organization.
Steps - Delivery management typically involves the management team performing the following steps in an iterative, incremental, parallel, timeboxed, and ongoing manner:
Work with the development organization and customer organization to ensure that the deliverable work products and services are delivered as agreed.
Work with the vendor organization(s) to ensure that they successfully deliver acceptable commercial packages to the development organization.
Work with the subcontractor organization(s) to ensure that they successfully deliver acceptable components to the development organization.
Track deliveries against schedules and milestones.
Techniques - Delivery management typically involves the following techniques:
Close collaboration with external organizations.
Correspondence.
Phone conversations.
Face-to-face conversations.
Meetings.
Work Products - Delivery management results in the following work products:
Management Set:
Delivery Statement
Guidelines - Delivery management is critical because most endeavors fail to deliver all that was promised on time and within budget.
Relationship management and communication management are critical to successful delivery management.
Happy Testing !!!
Staging environment ???
The staging environment is any development environment that is primarily used to stage tested applications prior to their deployment to the production environment.
Objectives - The typical objectives of the staging environment are to provide a separate environment for the:
Content management team to incorporate approved content prior to publication to the content management production environment.
User experience team to perform system usability testing.
Customer representatives to evaluate the completeness, quality, and general status of the tested application prior to deployment.
Benefits - The typical benefits of the staging environment include:
Content problems can be found prior to publication in the production environment.
System usability testing can be performed in its own realistic environment without slowing down system and launch testing.
Customer representatives can evaluate the tested application without slowing down testing and deployment.
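As a rough sketch of how this separation might look in configuration (the environment names, URLs and settings below are invented, not prescriptive; note that staging deliberately mirrors production so evaluation happens under realistic conditions):

```python
# Invented per-environment configuration. Staging mirrors production's
# settings so usability testing and customer evaluation stay realistic.
ENVIRONMENTS = {
    "development": {"base_url": "http://dev.example.internal", "debug": True},
    "test": {"base_url": "http://test.example.internal", "debug": True},
    "staging": {"base_url": "https://staging.example.com", "debug": False},
    "production": {"base_url": "https://www.example.com", "debug": False},
}


def config_for(environment):
    """Fetch one environment's settings, refusing unknown names."""
    if environment not in ENVIRONMENTS:
        raise ValueError("Unknown environment: " + environment)
    return ENVIRONMENTS[environment]


print(config_for("staging"))
```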
Hope this explains the staging environment. Mail me if you need some more clarification on this.
Happy Testing !!!
Wednesday, January 16, 2008
Cricket Vs Software testing :)
1. (Un)Predictability: Experts say, ‘the unpredictability of cricket is its greatest charm’! It is very difficult to predict a win or loss before the last ball is bowled, and it is this unpredictability that makes cricket so interesting. Teams might look strong or weak on paper, but in cricket, unpredictability reigns, not statistics.
Coming to software testing, there could be testers who believe that it is easy and straightforward to expect some fixed output after executing a series of test steps, but I only wish testing software were as simple as that! Echoing some experts in software testing: trying to predict the result of a test in terms of PASS or FAIL criteria can be one of those dangerous traps in the software testing world where a tester can shoot himself in the foot! Irrespective of the number of test scripts (either manual test cases or automated test scripts) a tester has written, until the tester gets the application module to test, nothing can be said for sure about the state of the application and its behavior. Unpredictability is one of the things that make software testing so much fun.
2. Skills: Cricket is a game where only skillful players can make their team a winner. I am not sure about other countries, but if you have ever been to India, chances are high that you have seen cricket being played on a street behind your hotel! They say cricket is a fever here in India: people are so obsessed with the game that they not only play and watch cricket but also eat, sleep and even drink cricket! I played a fair amount of cricket in my school and college days. You might have too. But the reason players like Sachin Tendulkar, Brian Lara and Ricky Ponting are considered among the finest cricketers, and we are not, lies in the cricketing skills they possess.
Likewise, in software testing too, it is testing skill that differentiates a good tester from a mediocre one. The one who has the better testing skills in his arsenal can find more important defects, more quickly, in the software he is testing. Learning, practicing and applying are the three golden rules for acquiring any skill. With determination and strong will power, nothing is impossible to learn. Fortunately, software testing is no exception!
3. Game Planning (Knowing the Opponent): Professional cricket is all about knowing the strengths and weaknesses of the opponent team and devising a game plan in order to combat their strengths and to exploit their weaknesses.
In software testing, knowing the testing mission is the first step in determining the goal of the testing effort. Without being clear about the goal, it can prove fatal to go about testing straightaway. Once the tester is clear about the testing mission, he can analyze his chances of success or failure depending on the availability and expertise of the resources in hand and the complexity of the testing problem. For example, imagine a situation where the tester has to test an application to find out how robust it is against hackers and other malicious users. Knowing a fair amount about the kinds of attack that can be attempted against the application gives the tester a better chance of planning a strategy to emulate those attacks and to test how the application guards against them.
4. Handling Pressure: Winning a game of cricket is all about handling the pressure well. The team that is able to handle the tremendous amount of pressure of the game finally wins the game.
For those who think, or have been told, that software testing is an easy job that can be done by every Tom, Dick and Harry, let me warn you: you have been terribly misguided! Testing is a career which demands a lot of intellect and stability of mind to work under a variety of pressures (technical pressure, pressure due to workload, managerial pressure, deadline pressure, pressure arising from the nature of the job, etc.). As testers, the basic requirement of our job demands that we deliver bad news (the presence of defects, buggy modules that fail miserably on testability grounds, etc.) to the different stakeholders (the programmers, management staff, clients) of the product under development. Nobody likes to hear bad news. Fortunately, gone are the days when messengers were hanged just because they brought bad news to the king! Hence, unless the tester is quite good at handling the pressure arising as a byproduct of his work, and good at being diplomatic, he might find it tough to carry on with his job for long. On the other hand, the tester who has the capability to handle the pressure till the end has every chance of winning the Testing World Cup!
5. Adaptability: If you have been watching cricket for some time now, then you must be experienced enough to know that whenever a team visits a different continent to play cricket, it often finds it difficult to play up to its usual standards – unless, of course, we are talking about a team like Australia. Most Asian teams find it difficult to play down under, and vice versa. The reason lies mainly in the differences in pitch conditions across continents. Soil texture, clay quality, humidity, temperature, the amount of grass, dust etc. can all affect the behavior of a cricket pitch. The players who can adapt quickly to a new environment gain an edge in the game.
Just as no two cricket pitches can be exactly identical, no two AUTs (Applications Under Test) can be the same. Hence, I am often surprised when I see testers trying to wriggle out of testing assignments with excuses such as:
» “I have not tested anything like this before!”
» “This platform is completely new to me!”
» “I am new to this technology. I can not test it!”
» “I am a manual tester. I can not use this test automation tool!”
» “I need the domain knowledge before I can test this application!”
» “Hey, you don’t have any base documents (URS, SRS, BRS blah blah…)! How can I start testing your application?”
To me, this simply signals the bad attitude of a tester who is not ready to adapt and, more importantly, not ready to learn something new. For such testers, I have just one thing to say:
“Don’t forget that even big and powerful animals like the Dinosaurs and the Woolly Mammoth became extinct just because they FAILED to adapt to their changing environments; we are just testers!”
When I say so, I DON’T mean that I am a great tester who has been excellent at adaptability throughout his testing career. On the contrary, I have been through several instances when I was stupid enough to oppose such changes and was, in fact, hostile to them. And don’t be too surprised when I tell you that some of the excuses mentioned above came from my own mouth! However, I learnt my lesson about adaptability the hard, bitter way. So I just wanted to warn others who are new and passionate about testing, and who might commit the same mistake of inadequate adaptability that I committed in the earlier years of my testing career.
6. Patience: Cricket is a game of patience. At times, a batsman might be finding it tough to steal a run due to the sharp bowling attack, tough ground conditions etc. But if the batsman has got enough temperament and patience, he might soon find it easy to not only steal a single but also score boundaries against the same bowler under same pitch conditions. Similarly with patience and consistent good bowling, a bowler could turn around the fate of a match by picking up quick wickets.
Similarly, a lot of patience is required while testing an application. There might be days when nothing seems to work right for you and you end up without catching a single defect in the whole testing session. There might be times when you find yourself banging your head on the keyboard while trying to reproduce a hard-to-reproduce defect. But ask any expert in software testing, and he will advise you to have patience. Putting in extra effort and having patience can help you come out of such dry days of testing.
7. Team Game: Cricket is a team game: the team that displays an excellent exhibition of team play wins the match. And so it is with software testing. The members of a testing team can’t all have equal skill sets and equal levels of expertise. Having a team whose members have different specializations adds to the versatility of the team’s overall performance. I do understand that there can be testing geniuses who are jacks of all testing trades and all-rounders when it comes to testing. But finding such a tester can be a tough task for any test manager. So, as a workaround, a good test manager might look to build a team of testers specializing in different aspects of testing (functionality testing, performance/stress testing, database testing, usability testing, test automation, risk-based testing etc.).
8. Lack of Concentration can Doom You: Cricket requires a high level of concentration. As a batsman, lose your concentration and you could lose your wicket. As a bowler, lose your concentration and you could be hit for boundaries all over the ground! As a fielder, lose your concentration and you could drop a catch (remember: catches win matches!) or give away extra runs by misfielding.
Coming back to testing, imagine a tester who fails to spot a defect that happens right in front of his eyes because he was looking for some other defect at that moment. Experts call this phenomenon inattentional blindness. It can also happen when your mind is too tired from continuous testing: you start losing concentration and, in turn, start missing defects!
9. Safety Helmets and Testers: Batsmen (and even the close-in fielders) wear helmets to protect their heads from fast deliveries. In software testing, unfortunately, the testers are the safety helmet! They act as the representatives of the client/end users of the project, and as the last line of defense for the client/end users against the possible ways in which the software might fail. So as testers, start feeling proud of your profession: you are safeguarding somebody by taking the hits yourself.
10. Learning Lessons from Failures: Even world champions like Australia have been defeated in a cricket match by underdogs like Bangladesh. Success and failure are two integral parts of any game. The team that wins today can lose the next match, and vice versa. But what differentiates a world champion from an average team is the extent to which they learn lessons from their failures.
On a similar note, a software testing project can go terribly bad for various reasons. I have witnessed, and been part of, a few such testing projects that went nowhere near where they were intended to go! But each experience, good or bad, has some lessons in it. A good tester tries to learn lessons from his past failures to make sure they are not repeated in future. The failure might not have been the result of a mistake on the tester’s part, but still, there might be some lessons in that failure that could help the tester become better at testing in future.
If you have reached this paragraph after patiently reading through my post, then I must appreciate you for your patience. You have certainly shown a desirable skill of a good tester – patience. :) I would like to hear your ideas about my analogy between cricket and software testing. Can you think of more such points, which can link them together? Let me know your ideas by commenting. Howzat!!!
Happy Testing…
“If the software product has some defects, can it still be called a quality product?”
Software Product: A software product is typically a single application or suite of applications built by a software company to be used by *many* customers, businesses or consumers. The mass-market notion differs from custom software built for the use of a single customer by software development firms. – Wikipedia
Defects: A software defect (or "bug") is an error, flaw, mistake, failure, or fault in a computer program that prevents it from behaving as intended (e.g., producing an incorrect result). Most software defects arise from mistakes and errors made by people in either a program's source code or its design, and a few can also be caused by compilers producing incorrect code. – Wikipedia
Quality: Quality is value to some person, who matters. – Gerald M. Weinberg
Coming back to the original question, when someone asks, “if the software product has some defects, can it still be called a quality product?”, I think a possible answer is: “I can’t tell you for sure unless you are ready to answer a few of my questions first”!
If you are a software tester, I am almost sure that you are yet to see software which does not have any defects in it (even after thorough testing). Even if the testers are unable to identify any significant number of defects with further testing, still no tester with a stable brain on his shoulders would ever dare to state that “the software is defect-free after the testing effort”! In that sense, all software has some defects (either known defects or yet-to-be-found ones). Since no software is without defects, what is the point in asking something like “if the software product has some defect …”? What do you think?
I am not sure if the interviewer actually wanted to ask: “if the software product has some *known* defects *after its release*, can it still be called a quality product?”
This sounds to me like a more meaningful and tractable question, one that can actually be attempted! As a software tester, I understand that there can be situations when a product is known to have some unfixed defects and yet the management decides to release it. This can particularly happen when:
a) The known defects are of low severity and not devastating. Even if the end-user encounters such defects, it is expected that they will not cause much trouble or nuisance.
b) These are corner-case defects, and the chance of the end-user being hit by them is low.
c) The stakeholders of the project don’t feel that these defects are of much importance!
Whatever the reason behind the decision to release the product with those known defects, I am not sure we can tell whether the shipped product is of high quality simply on the basis of the defects shipped with it! Echoing some experts in software testing, the quality of software is much more than the presence or absence of defects. Imagine a software product which is shipped in the belief that it has no known defects and is quite stable. But wait! What if the end-user does not like it because it is difficult to learn and operate? What if it is difficult to train others to use the software? In a single sentence, what if the product fails severely on usability grounds? Think of a video game which is very robust (does not crash easily) and lacks any visible defect, and yet the people who play it don’t find it interesting enough! Can we still call it a quality piece of software or a quality video game? I guess not!
Coming back to our original question: even though a software product has defects, it could still be called a quality product. At the same time, even though a software product has fewer known defects, it might not be of good quality! The quality of software is multi-dimensional and can’t be decided by the presence or absence of defects alone! Hence, it might be a little tricky to answer the above question without being clear about how the particular organization understands quality. As a tester, do you think you can answer the above question? I am eager to hear your views on the subject.
Happy Testing…