
Important Full Forms of Computer Terminology

**********************************************************
1.) GOOGLE: Global Organization Of Oriented Group Language Of Earth
2.) YAHOO: Yet Another Hierarchical Officious Oracle
3.) WINDOW: Wide Interactive Network Development for Office work Solution
4.) COMPUTER: Common Oriented Machine Particularly United and used under Technical and Educational Research
5.) VIRUS: Vital Information Resources Under Siege
6.) UMTS: Universal Mobile Telecommunications System
7.) AMOLED: Active-Matrix Organic Light-Emitting Diode
8.) OLED: Organic Light-Emitting Diode
9.) IMEI: International Mobile Equipment Identity
10.) ESN: Electronic Serial Number
11.) UPS: Uninterruptible Power Supply
12.) HDMI: High-Definition Multimedia Interface
13.) VPN: Virtual Private Network
14.) APN: Access Point Name
15.) SIM: Subscriber Identity Module
16.) LED: Light-Emitting Diode
17.) DLNA: Digital Living Network Alliance
18.) RAM: Random Access Memory
19.) ROM: Read-Only Memory
20.) VGA: Video Graphics Array
21.) QVGA: Quarter Video Graphics Array
22.) WVGA: Wide Video Graphics Array
23.) WXGA: Widescreen Extended Graphics Array
24.) USB: Universal Serial Bus
25.) WLAN: Wireless Local Area Network
26.) PPI: Pixels Per Inch
27.) LCD: Liquid Crystal Display
28.) HSDPA: High-Speed Downlink Packet Access
29.) HSUPA: High-Speed Uplink Packet Access
30.) HSPA: High-Speed Packet Access
31.) GPRS: General Packet Radio Service
32.) EDGE: Enhanced Data rates for GSM Evolution
33.) NFC: Near Field Communication
34.) OTG: (USB) On-The-Go
35.) S-LCD: Super Liquid Crystal Display
36.) OS: Operating System
37.) SNS: Social Network Service
38.) HS: Hotspot
39.) POI: Point Of Interest
40.) GPS: Global Positioning System
41.) DVD: Digital Video Disc / Digital Versatile Disc
42.) DTP: Desktop Publishing
43.) DNSe: Digital Natural Sound Engine
44.) OVI: Ohio Video Intranet
45.) CDMA: Code Division Multiple Access
46.) WCDMA: Wideband Code Division Multiple Access
47.) GSM: Global System for Mobile Communications
48.) WI-FI: Wireless Fidelity
49.) DivX: Digital Internet Video Access
50.) APK: Android Package (Kit)
51.) J2ME: Java 2 Micro Edition
52.) DELL: Digital Electronic Link Library
53.) ACER: Acquisition, Collaboration, Experimentation, Reflection
54.) RSS: Really Simple Syndication
55.) TFT: Thin Film Transistor
56.) AMR: Adaptive Multi-Rate
57.) MPEG: Moving Picture Experts Group
58.) IVRS: Interactive Voice Response System
59.) HP: Hewlett-Packard

What is a Test Case?



A test case is a commonly used term for a specific test, and is usually the smallest unit of testing. A test case will consist of information such as the requirement being tested, test steps, verification steps, prerequisites, outputs, test environment, etc.

A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

Test-Driven Development (TDD)? A testing methodology associated with Agile programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, i.e. roughly as many lines of test code as production code.
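
To make the test-first idea concrete, here is a minimal sketch using Python's standard unittest module. The add_to_cart function and its expected behaviour are hypothetical examples, not taken from any particular product; in TDD the tests below would be written first, fail, and then drive the implementation until they all pass.

import unittest

def add_to_cart(cart, item, quantity):
    """Hypothetical function under test: add an item to a cart dictionary."""
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    cart[item] = cart.get(item, 0) + quantity
    return cart

class AddToCartTests(unittest.TestCase):
    # In TDD these tests exist before add_to_cart does; they fail first
    # and then drive the implementation.
    def test_adds_new_item(self):
        self.assertEqual(add_to_cart({}, "book", 2), {"book": 2})

    def test_accumulates_existing_item(self):
        self.assertEqual(add_to_cart({"book": 1}, "book", 2), {"book": 3})

    def test_rejects_zero_quantity(self):
        with self.assertRaises(ValueError):
            add_to_cart({}, "book", 0)

if __name__ == "__main__":
    unittest.main()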

What's the difference between load and stress testing?



One of the most common but unfortunate misuses of terminology is treating “load testing” and “stress testing” as synonymous. The consequence of this ignorant semantic abuse is usually that the system is neither properly “load tested” nor subjected to a meaningful stress test.

Stress testing is subjecting a system to an unreasonable load while denying it the resources (e.g., RAM, disc, mips, interrupts, etc.) needed to process that load. 

The idea is to stress a system to the breaking point in order to find bugs that will make that break potentially harmful. The system is not expected to process the overload without adequate resources, but to behave (e.g., fail) in a decent manner (e.g., not corrupting or losing data). Bugs and failure modes discovered under stress testing may or may not be repaired depending on the application, the failure mode, consequences, etc. The load (incoming transaction stream) in stress testing is often deliberately distorted so as to force the system into resource depletion.

Load testing is subjecting a system to a statistically representative (usually) load. The two main reasons for using such loads are to support software reliability testing and performance testing. 

The term 'load testing' by itself is too vague and imprecise to warrant use. For example, do you mean 'representative load', 'overload', 'high load', etc.? In performance testing, load is varied from a minimum (zero) to the maximum level the system can sustain without running out of resources or having transactions suffer (application-specific) excessive delay. A third use of the term is as a test whose objective is to determine the maximum sustainable load the system can handle. In this usage, 'load testing' is merely testing at the highest transaction arrival rate in performance testing.
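
A minimal sketch of the performance-testing usage described above, assuming a hypothetical local HTTP endpoint: it ramps the load up in steps and records response times, so the point at which latency or errors climb sharply (the maximum sustainable load) can be observed. A stress test, by contrast, would deliberately push past that point or starve the server of resources to see how it fails.

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/health"  # hypothetical endpoint under test

def timed_request(url):
    """Issue one request and return its response time in seconds, or None on error."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()
        return time.perf_counter() - start
    except Exception:
        return None

def run_load_step(concurrency, requests_per_worker=10):
    """Apply one load level and report average latency and error count."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        timings = list(pool.map(lambda _: timed_request(URL),
                                range(concurrency * requests_per_worker)))
    ok = [t for t in timings if t is not None]
    errors = len(timings) - len(ok)
    avg = sum(ok) / len(ok) if ok else float("inf")
    print(f"concurrency={concurrency:3d}  avg={avg:.3f}s  errors={errors}")

if __name__ == "__main__":
    # Ramp the load from light to heavy; the level at which latency or
    # errors climb sharply approximates the maximum sustainable load.
    for level in (1, 5, 10, 25, 50):
        run_load_step(level)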

What are 5 common solutions to software development problems?



1. Solid requirements - clear, complete, detailed, cohesive, attainable, testable requirements that are agreed to by all players. Use prototypes to help nail down requirements. 

2. Realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-testing, changes, and documentation; personnel should be able to complete the project without burning out. 


3. Adequate testing - start testing early on, re-test after fixes or changes, plan for adequate time for testing and bug-fixing. 


4. Stick to initial requirements as much as possible - be prepared to defend against changes and additions once development has begun, and be prepared to explain consequences. If changes are necessary, they should be adequately reflected in related schedule changes. If possible, use rapid prototyping during the design phase so that customers can see what to expect. This will provide them a higher comfort level with their requirements decisions and minimize changes later on. 


5. Communication - require walkthroughs and inspections when appropriate; make extensive use of group communication tools - e-mail, groupware, networked bug-tracking tools and change management tools, intranet capabilities, etc.; ensure that documentation is available and up-to-date - preferably electronic, not paper; promote teamwork and cooperation; use prototypes early on so that customers' expectations are clarified.


What steps are needed to develop and run software tests?



The following are some of the steps to consider:
- Obtain requirements, functional design, and internal design specifications, and other necessary documents
- Obtain budget and schedule requirements
- Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes (such as release processes, change processes, etc.)
- Identify the application's higher-risk aspects, set priorities, and determine scope and limitations of tests
- Determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc.
- Determine test environment requirements (hardware, software, communications, etc.)
- Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.)
- Determine test input data requirements
- Identify tasks, those responsible for tasks, and labor requirements
- Set schedule estimates, timelines, milestones
- Determine input equivalence classes, boundary value analyses, error classes (see the sketch after this list)
- Prepare test plan document and have needed reviews/approvals
- Write test cases
- Have needed reviews/inspections/approvals of test cases
- Prepare test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data
- Obtain and install software releases
- Perform tests
- Evaluate and report results
- Track problems/bugs and fixes
- Retest as needed
- Maintain and update test plans, test cases, test environment, and testware through the life cycle
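
As a small illustration of the 'equivalence classes and boundary value analyses' step, here is a sketch in Python; the validate_age function and its accepted range of 18 to 65 are hypothetical assumptions, used only to show how boundary values are chosen on each side of the range.

def validate_age(age):
    """Hypothetical rule: accept applicants aged 18 through 65 inclusive."""
    return 18 <= age <= 65

# Equivalence classes: below range, within range, above range.
# Boundary values sit at and immediately around each boundary.
boundary_cases = [
    (17, False),  # just below the lower boundary
    (18, True),   # on the lower boundary
    (19, True),   # just above the lower boundary
    (64, True),   # just below the upper boundary
    (65, True),   # on the upper boundary
    (66, False),  # just above the upper boundary
]

for value, expected in boundary_cases:
    actual = validate_age(value)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: validate_age({value}) -> {actual}, expected {expected}")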


What can be done if requirements are changing continuously?


A common problem and a major headache.
- Work with the project's stakeholders early on to understand how requirements might change so that alternate test plans and strategies can be worked out in advance, if possible.
- It's helpful if the application's initial design allows for some adaptability so that later changes do not require redoing the application from scratch.
- If the code is well-commented and well-documented, this makes changes easier for the developers.
- Use rapid prototyping whenever possible to help customers feel sure of their requirements and minimize changes.
- The project's initial schedule should allow for some extra time commensurate with the possibility of changes.
- Try to move new requirements to a 'Phase 2' version of an application, while using the original requirements for the 'Phase 1' version.
- Negotiate to allow only easily-implemented new requirements into the project, while moving more difficult new requirements into future versions of the application.
- Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. Then let management or the customers (not the developers or testers) decide if the changes are warranted - after all, that's their job.
- Balance the effort put into setting up automated testing with the expected effort required to re-do them to deal with changes.
- Try to design some flexibility into automated test scripts (see the data-driven sketch after this list).
- Focus initial automated testing on application aspects that are most likely to remain unchanged.
- Devote appropriate effort to risk analysis of changes to minimize regression testing needs.
- Design some flexibility into test cases (this is not easily done; the best bet might be to minimize the detail in the test cases, or set up only higher-level generic-type test plans).
- Focus less on detailed test plans and test cases and more on ad hoc testing (with an understanding of the added risk that this entails).
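
One common way to build the flexibility suggested above is to keep test data in a table separate from the test logic, so a requirements change means editing the table rather than rewriting scripts. A minimal sketch in Python follows; the discount rules and apply_discount function are hypothetical examples.

def apply_discount(order_total, customer_type):
    """Hypothetical business rule under test."""
    rates = {"regular": 0.00, "member": 0.05, "vip": 0.10}
    return round(order_total * (1 - rates.get(customer_type, 0.0)), 2)

# Data-driven cases: when requirements change, edit this table rather
# than the test code itself.
CASES = [
    # (order_total, customer_type, expected)
    (100.00, "regular", 100.00),
    (100.00, "member", 95.00),
    (100.00, "vip", 90.00),
    (100.00, "unknown", 100.00),
]

def run_cases():
    failures = 0
    for total, ctype, expected in CASES:
        actual = apply_discount(total, ctype)
        if actual != expected:
            failures += 1
            print(f"FAIL: {ctype} {total} -> {actual}, expected {expected}")
    print(f"{len(CASES) - failures}/{len(CASES)} cases passed")

if __name__ == "__main__":
    run_cases()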



What is 'good code'?



'Good code' is code that works, is bug free, and is readable and maintainable. Some organizations have coding 'standards' that all developers are supposed to adhere to, but everyone has different ideas about what's best, or what constitutes too many or too few rules. There are also various theories and metrics, such as McCabe Complexity metrics. It should be kept in mind that excessive use of standards and rules can stifle productivity and creativity. 'Peer reviews', 'buddy checks', code analysis tools, etc. can be used to check for problems and enforce standards.
For C and C++ coding, here are some typical ideas to consider in setting rules/standards; these may or may not apply to a particular situation (a brief illustration follows the list):
- minimize or eliminate use of global variables.
- use descriptive function and method names - use both upper and lower case, avoid abbreviations, use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions.
- use descriptive variable names - use both upper and lower case, avoid abbreviations, use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions.
- function and method sizes should be minimized; less than 100 lines of code is good, less than 50 lines is preferable.
- function descriptions should be clearly spelled out in comments preceding a function's code.
- organize code for readability.
- use whitespace generously - vertically and horizontally.
- each line of code should contain 70 characters max.
- one code statement per line.
- coding style should be consistent throughout a program (e.g., use of brackets, indentation, naming conventions, etc.).
- in adding comments, err on the side of too many rather than too few comments; a common rule of thumb is that there should be at least as many lines of comments (including header blocks) as lines of code.
- no matter how small, an application should include documentation of the overall program function and flow (even a few paragraphs is better than nothing); or if possible a separate flow chart and detailed program documentation.
- make extensive use of error handling procedures and status and error logging.
- for C++, to minimize complexity and increase maintainability, avoid too many levels of inheritance in class hierarchies (relative to the size and complexity of the application). Minimize use of multiple inheritance, and minimize use of operator overloading (note that the Java programming language eliminates multiple inheritance and operator overloading).
- for C++, keep class methods small; less than 50 lines of code per method is preferable.
- for C++, make liberal use of exception handlers.
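
The list above is aimed at C and C++, but most of the ideas (descriptive names, small well-commented functions, explicit error handling and logging) carry over to other languages. Below is a brief, hypothetical illustration written in Python purely to keep the example short; it is not meant as a C/C++ style guide.

import logging

logger = logging.getLogger(__name__)

MAX_LINE_LENGTH = 70  # mirrors the "70 characters per line" guideline

def calculate_average_response_time(response_times_in_seconds):
    """Return the average of the supplied response times.

    Raises ValueError if the list is empty, so callers cannot silently
    divide by zero; the error is also logged for later analysis.
    """
    if not response_times_in_seconds:
        logger.error("calculate_average_response_time called with no data")
        raise ValueError("at least one response time is required")
    total_time = sum(response_times_in_seconds)
    return total_time / len(response_times_in_seconds)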


How can World Wide Web sites be tested?



Web sites are essentially client/server applications - with web servers and 'browser' clients. Consideration should be given to the interactions between HTML pages, TCP/IP communications, Internet connections, firewalls, applications that run in web pages (such as applets, JavaScript, plug-in applications), and applications that run on the server side (such as CGI scripts, database interfaces, logging applications, dynamic page generators, ASP, etc.). Additionally, there are a wide variety of servers and browsers, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. 

The end result is that testing for web sites can become a major ongoing effort. Other considerations might include:
- What are the expected loads on the server (e.g., number of hits per unit time), and what kind of performance is required under such loads (such as web server response time, database query response times)? What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in house that can be adapted, web robot downloading tools, etc.)? 


- Who is the target audience? What kind of browsers will they be using? What kind of connection speeds will they be using? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?
- What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should animations, applets, etc. load and run)?
- Will down time for server and content maintenance/upgrades be allowed? How much?
- What kinds of security (firewalls, encryption, passwords, etc.) will be required, and what is it expected to do? How can it be tested?
- How reliable are the site's Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?
- What processes will be required to manage updates to the web site's content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?
- Which HTML specification will be adhered to? How strictly? What variations will be allowed for targeted browsers?
- Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?
- How will internal and external links be validated and updated? How often? (See the link-check sketch after this list.)
- Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, dial-up connection variabilities, and real-world internet 'traffic congestion' problems to be accounted for in testing?
- How extensive or customized are the server logging and reporting requirements; are they considered an integral part of the system and do they require testing?
- How are CGI programs, applets, JavaScript, ActiveX components, etc. to be maintained, tracked, controlled, and tested?
- Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger, provide internal links within the page.
- The page layouts and design elements should be consistent throughout a site, so that it's clear to the user that they're still within the site.
- Pages should be as browser-independent as possible, or pages should be provided or generated based on the browser type.
- All pages should have links external to the page; there should be no dead-end pages.
- The page owner, revision date, and a link to a contact person or organization should be included on each page.
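
As one small example of the link-validation point in the list above, here is a sketch in Python that extracts the links from a page and reports the HTTP status of each; the starting URL is a hypothetical placeholder, and a real checker would also need to handle redirects, robots.txt, login-protected pages, and crawl depth.

from html.parser import HTMLParser
from urllib.parse import urljoin
import urllib.request

class LinkExtractor(HTMLParser):
    """Collect the href targets of all anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_links(page_url):
    """Fetch a page, then request each linked URL and print its status."""
    with urllib.request.urlopen(page_url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = LinkExtractor()
    parser.feed(html)
    for link in parser.links:
        target = urljoin(page_url, link)
        try:
            with urllib.request.urlopen(target, timeout=10) as resp:
                print(f"{resp.status:3d}  {target}")
        except Exception as exc:
            print(f"ERR  {target}  ({exc})")

if __name__ == "__main__":
    check_links("http://localhost:8000/index.html")  # hypothetical site under test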


What is Extreme Programming and what's it got to do with testing?



Extreme Programming (XP) is a software development approach for small teams on risk-prone projects with unstable requirements. 
It was created by Kent Beck, who described the approach in his book 'Extreme Programming Explained'. Testing ('extreme testing') is a core aspect of Extreme Programming. Programmers are expected to write unit and functional test code first - before the application is developed. Test code is under source control along with the rest of the code. Customers are expected to be an integral part of the project team and to help develop scenarios for acceptance/black box testing. Acceptance tests are preferably automated, and are modified and rerun for each of the frequent development iterations. QA and test personnel are also required to be an integral part of the project team. Detailed requirements documentation is not used, and frequent re-scheduling, re-estimating, and re-prioritizing are expected.

Will automated testing tools make testing easier?



- Possibly. For small projects, the time needed to learn and implement them may not be worth it. For larger projects, or ongoing long-term projects, they can be valuable. 

- A common type of automated tool is the 'record/playback' type. For example, a tester could click through all combinations of menu choices, dialog box choices, buttons, etc. in an application GUI and have them 'recorded' and the results logged by a tool. The 'recording' is typically in the form of text based on a scripting language that is interpretable by the testing tool. If new buttons are added, or some underlying code in the application is changed, etc. the application can then be retested by just 'playing back' the 'recorded' actions, and comparing the logging results to check effects of the changes. The problem with such tools is that if there are continual changes to the system being tested, the 'recordings' may have to be changed so much that it becomes very time-consuming to continuously update the scripts. Additionally, interpretation of results (screens, data, logs, etc.) can be a difficult task. Note that there are record/playback tools for text-based interfaces also, and for all types of platforms.
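
To show roughly what a 'recording' can look like, here is a toy sketch in Python: the recorded actions are stored as plain-text script steps and replayed against a stand-in application object, with results compared to the expected values captured at record time. Real tools drive actual GUI widgets; the calculator-style actions here are purely hypothetical.

# A recorded script: one action per line, as a record/playback tool might store it.
RECORDED_SCRIPT = """\
enter 5
enter 7
press add
expect 12
"""

def play_back(script, app):
    """Replay recorded steps against an application object and report mismatches."""
    for line in script.strip().splitlines():
        command, _, argument = line.partition(" ")
        if command == "enter":
            app.enter(float(argument))
        elif command == "press" and argument == "add":
            app.press_add()
        elif command == "expect":
            actual = app.display()
            if actual != float(argument):
                print(f"MISMATCH: expected {argument}, got {actual}")
            else:
                print(f"OK: display shows {actual}")

class FakeCalculator:
    """Hypothetical application under test, standing in for a real GUI."""
    def __init__(self):
        self.operands = []
        self.result = 0.0
    def enter(self, value):
        self.operands.append(value)
    def press_add(self):
        self.result = sum(self.operands)
        self.operands = []
    def display(self):
        return self.result

if __name__ == "__main__":
    play_back(RECORDED_SCRIPT, FakeCalculator())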


- Other automated tools can include:
  - code analyzers - monitor code complexity, adherence to standards, etc.
  - coverage analyzers - check which parts of the code have been exercised by a test; may be oriented to code statement coverage, condition coverage, path coverage, etc.
  - memory analyzers - such as bounds-checkers and leak detectors.
  - load/performance test tools - for testing client/server and web applications under various load levels.
  - web test tools - to check that links are valid, HTML code usage is correct, client-side and server-side programs work, and a web site's interactions are secure.
  - other tools - for test case management, documentation management, bug reporting, and configuration management.


What's the difference between black box and white box testing?



Black-box and white-box are test design methods. Black-box test design treats the system as a “black-box”, so it doesn't explicitly use knowledge of the internal structure. Black-box test design is usually described as focusing on testing functional requirements. Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box. White-box test design allows one to peek inside the “box”, and it focuses specifically on using internal knowledge of the software to guide the selection of test data. Synonyms for white-box include: structural, glass-box and clear-box. 

While black-box and white-box are terms that are still in popular use, many people prefer the terms 'behavioral' and 'structural'. Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn't strictly forbidden, but it's still discouraged. In practice, it hasn't proven useful to use a single test design method. One has to use a mixture of different methods so that they aren't hindered by the limitations of a particular one. Some call this 'gray-box' or 'translucent-box' test design, but others wish we'd stop talking about boxes altogether.


It is important to understand that these methods are used during the test design phase, and their influence is hard to see in the tests once they're implemented. Note that any level of testing (unit testing, system testing, etc.) can use any test design method. Unit testing is usually associated with structural test design, but this is because testers usually don't have well-defined requirements at the unit level to validate.
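
A small hypothetical sketch in Python may make the distinction clearer: the black-box checks below are derived only from the stated behaviour of shipping_cost, while the white-box check is written after reading the code, specifically to exercise the express-shipping branch.

def shipping_cost(weight_kg, express=False):
    """Hypothetical function under test: base fee, per-kg charge, express branch."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    cost = 5.0 + 1.5 * weight_kg
    if express:  # internal branch a white-box test would target
        cost *= 2
    return round(cost, 2)

# Black-box tests: based only on the specification ("positive weight,
# base fee plus per-kilogram charge"), not on the code.
assert shipping_cost(2) == 8.0
try:
    shipping_cost(0)
    raise AssertionError("expected ValueError for non-positive weight")
except ValueError:
    pass

# White-box test: written after inspecting the code, specifically to
# cover the express branch and reach full branch coverage.
assert shipping_cost(2, express=True) == 16.0

print("all example tests passed")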


What's the difference between QA and testing?



QA is more of a preventive activity, ensuring quality in the company's processes and therefore in the product, rather than just testing the product for software bugs.

TESTING means 'quality control.' QUALITY CONTROL measures the quality of a product, while QUALITY ASSURANCE measures the quality of the processes used to create a quality product.

What do you mean by WBS? Why do we need to create a Work Breakdown Structure?

Defining WBS:
Work breakdown structure means breaking work down into individual components, on the basis of priorities and coherent sequences, so that each slice of work can be handled by a different individual at a different time.

  • It helps to easily identify the priorities of tasks.
  • WBS contains no overlapping activities.
  • When activities are identified, they are added to an existing branch if related, or added as a new task otherwise.
  • It helps to present the project structure.
Example of Work Breakdown Structure

Reasons for creating WBS:

  • Assists with accurate project organization
  • Helps with assigning responsibilities
  • Shows the control points and project milestones
  • Allows for more accurate estimation of cost, risk and time.
  • Helps explain the project scope to stakeholders
A WBS diagram expresses the project scope in simple graphic terms. The diagram starts with a single box or other graphic at the top to represent the entire project. The project is then divided into main, or disparate, components, with related activities listed under them. Generally, the upper components are the deliverables and the lower-level elements are the activities that create the deliverables. One common view is a Gantt chart. 
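
As a rough illustration, a WBS can be sketched as a simple tree, with deliverables at the upper levels and activities at the leaves, as described above. The 'Website Redesign' project below is a hypothetical example.

# Hypothetical WBS: keys are deliverables, nested values are the
# lower-level activities that produce them.
WBS = {
    "Website Redesign": {
        "Requirements": ["Interview stakeholders", "Write requirement spec"],
        "Design": ["Create wireframes", "Review wireframes"],
        "Implementation": ["Build pages", "Integrate CMS"],
        "Testing": ["Write test cases", "Execute test cases", "Report defects"],
    }
}

def print_wbs(node, indent=0):
    """Print the work breakdown structure as an indented outline."""
    prefix = "  " * indent
    if isinstance(node, dict):
        for name, children in node.items():
            print(f"{prefix}{name}")
            print_wbs(children, indent + 1)
    else:  # a list of leaf activities
        for activity in node:
            print(f"{prefix}- {activity}")

if __name__ == "__main__":
    print_wbs(WBS)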



What is Quality Assurance? What are its purposes?





Quality Assurance is based on finding flaws in products and reporting them to the right people so that they can be fixed. QA includes management of the quality of raw materials, assemblies, products and components, services related to production, and management, production and inspection processes.


Quality Assurance is a way of preventing mistakes or defects in manufactured products and avoiding problems when the product is delivered to the customer.

QA ensures:
  • quality in work
  • activities are effective
  • software or products meet the requirements

Principle of QA

There are two major principles of QA:
  1. Fit for purpose: the product should be suitable for its intended purpose
  2. Right first time: mistakes should be eliminated

Example:

There are different versions of the iPhone; making sure each version is better than the earlier one, and making sure any new application works according to requirements, are some of the tasks of QA. Examples include:
  • Cross-browser testing
  • Customer product acceptance testing
  • Beta testing, etc.

What kinds of testing should be considered during Quality Assurance in Software Engineering?



Black box testing - not based on any knowledge of internal design or code. Tests are based on requirements and functionality. 

White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions.


unit testing - the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses. 


incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers. 


integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

functional testing - black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing).

system testing - black-box type testing that is based on overall requirements specifications; covers all combined parts of a system. 


end-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
sanity testing or smoke testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
regression testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
acceptance testing - final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time. 


load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.

performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.


usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers. 


install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.

recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems. 


failover testing - typically used interchangeably with 'recovery testing'


security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.
compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment. 


exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.

ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it. 


context-driven testing - testing driven by an understanding of the environment, culture, and intended use of software. For example, the testing approach for life-critical medical equipment software would be completely different than that for a low-cost computer game. 


user acceptance testing - determining if software is satisfactory to an end-user or customer.

comparison testing - comparing software weaknesses and strengths to competing products.

alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers. 


beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.

mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
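
Of the types above, mutation testing is the most mechanical, so here is a small sketch of the idea in Python. The is_adult function, the single hand-written mutant, and the test data are hypothetical; real mutation tools generate many mutants automatically and rerun the whole suite against each one.

def is_adult(age):
    """Original function: adults are 18 or older."""
    return age >= 18

def is_adult_mutant(age):
    """Deliberately injected 'bug': boundary changed from >= to >."""
    return age > 18

TESTS = [
    (30, True),
    (10, False),
    (18, True),   # this boundary case is what 'kills' the mutant
]

def run_suite(func):
    """Return True if every test passes against the given implementation."""
    return all(func(value) == expected for value, expected in TESTS)

if __name__ == "__main__":
    assert run_suite(is_adult), "suite must pass on the original code"
    if run_suite(is_adult_mutant):
        print("Mutant survived: the test data is too weak")
    else:
        print("Mutant killed: the test data detected the injected bug")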


Why is it often hard for management to get serious about quality assurance?



Solving problems is a high-visibility process; preventing problems is low-visibility.
This is illustrated by an old parable:
In ancient China there was a family of healers, one of whom was known throughout the land and employed as a physician to a great lord. The physician was asked which of his family was the most skillful healer. He replied, "I tend to the sick and dying with drastic and dramatic treatments, and on occasion someone is cured and my name gets out among the lords." "My elder brother cures sickness when it just begins to take root, and his skills are known among the local peasants and neighbors." "My eldest brother is able to sense the spirit of sickness and eradicate it before it takes form. His name is unknown outside our home."


Why does software have bugs?


1. Miscommunication or no communication -

 as to specifics of what an application should or shouldn't do (the application's requirements).

2. Software complexity - 

the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Multi-tiered applications, client-server and distributed applications, data communications, enormous relational databases, and the sheer size of applications have all contributed to the exponential growth in software/system complexity.

3. Programming errors -

programmers, like anyone else, can make mistakes.

4. Changing requirements (whether documented or undocumented) - 

the end-user may not understand the effects of changes, or may understand and request them anyway - redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of coordinating changes may result in errors. Enthusiasm of engineering staff may be affected. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control.

5. Poorly documented code - 

it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs. In many organizations management provides no incentive for programmers to document their code or write clear, understandable, maintainable code. In fact, it's usually the opposite: they get points mostly for quickly turning out code, and there's job security if nobody else can understand it ('if it was hard to write, it should be hard to read').

6. Software development tools - 

visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.

How can new Software QA processes be introduced in an existing organization?

A lot depends on the size of the organization and the risks involved. For large organizations with high-risk (in terms of lives or property) projects, serious management buy-in is required and a formalized QA process is necessary.
Where the risk is lower, management and organizational buy-in and QA implementation may be a slower, step-at-a-time process. QA processes should be balanced with productivity so as to keep bureaucracy from getting out of hand.
For small groups or projects, a more ad-hoc process may be appropriate, depending on the type of customers and projects. A lot will depend on team leads or managers, feedback to developers, and ensuring adequate communications among customers, managers, developers, and testers.
 
The most value for effort will often be in
(a) requirements management processes, with a goal of clear, complete, testable requirement specifications embodied in requirements or design documentation or, in 'agile'-type environments, extensive continuous coordination with end-users, 

(b) design inspections and code inspections, and 

(c) post-mortems/retrospectives.