Tuesday, January 26, 2010

Test Management

Organisation - organisational structures for testing; team composition
Organisations may have different testing structures: testing may be the developer’s responsibility, or the team’s responsibility (buddy testing); one person on the team may be the tester; there may be a dedicated test team (who do no development); internal test consultants may provide advice to projects; or a separate organisation may do the testing.

A multi-disciplinary team with specialist skills is usually needed. Most of the following roles are required: test analysts to prepare strategies and plans, test automation experts, a database administrator or designer, user interface experts, test environment management, etc.

Configuration Management
Typical symptoms of poor CM include: being unable to match source and object code, being unable to identify which version of a compiler generated the object code, being unable to identify the source code changes made in a particular version of the software, simultaneous changes being made to the same source code by multiple developers (with changes lost), etc.

Configuration identification requires that all configuration items (CI) and their versions in the test system are known. Configuration control is maintenance of the CIs in a library and maintenance of records on how CIs change over time.
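Configuration identification and control can be pictured as a library of versioned items. The sketch below is illustrative only (the class and method names are assumptions, not any real CM tool's API): each configuration item is checked into a library with a full version history, so any version can be retrieved and its change history reported.

```python
# A minimal, illustrative sketch of a configuration library: configuration
# identification (each CI and version is known) plus configuration control
# (CIs kept in a library with records of how they change over time).

class ConfigurationLibrary:
    def __init__(self):
        self._items = {}  # CI name -> list of (version, content) records

    def check_in(self, name, content):
        """Store a new version of a CI and return its version number."""
        history = self._items.setdefault(name, [])
        version = len(history) + 1
        history.append((version, content))
        return version

    def check_out(self, name, version=None):
        """Retrieve a specific version (default: latest) of a CI."""
        history = self._items[name]
        if version is None:
            version = len(history)
        return history[version - 1][1]

    def history(self, name):
        """Status accounting: the versions a CI has gone through."""
        return [v for v, _ in self._items.get(name, [])]

lib = ConfigurationLibrary()
lib.check_in("login.c", "rev A source")
lib.check_in("login.c", "rev B source")
print(lib.check_out("login.c"))   # latest version checked in
print(lib.history("login.c"))     # every version is identifiable
```

With this record-keeping in place, the symptoms above disappear: any version of the source can be matched to what was built from it.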

Status accounting is the function of recording and tracking problem reports, change requests, etc.
Configuration auditing is the function of checking the contents of libraries, etc., for instance for standards compliance.

CM can be very complicated in environments where mixed hardware and software platforms are being used, but sophisticated cross-platform CM tools are increasingly available.

Test Estimation, Monitoring and Control

Test estimation – the effort required to perform the activities specified in the high-level test plan must be calculated in advance, and rework must be planned for.

Test monitoring – useful measures for tracking progress include the number of tests run, tests passed/failed, incidents raised and fixed, retests, etc. The test manager may have to report on deviations from the project/test plans, such as running out of time before the completion criteria are achieved.
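As a minimal sketch of those tracking measures (the function and field names are hypothetical, not a standard), a progress report can be derived from a list of test outcomes:

```python
# Illustrative test-monitoring sketch: summarise progress from a list of
# 'pass'/'fail' outcomes against the number of planned tests.

def progress_report(outcomes, planned_total):
    """Compute common tracking measures for a test manager's report."""
    run = len(outcomes)
    passed = outcomes.count("pass")
    failed = run - passed
    return {
        "run": run,
        "passed": passed,
        "failed": failed,
        "remaining": planned_total - run,
        "pass_rate": passed / run if run else 0.0,
    }

report = progress_report(["pass", "pass", "fail", "pass"], planned_total=10)
print(report)   # run=4, passed=3, failed=1, remaining=6, pass_rate=0.75
```

Numbers like `remaining` feed directly into the deviation reporting described above: if `remaining` is large as the deadline approaches, the plan is at risk.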

Test control – the re-allocation of resources may be necessary, such as changes to the test schedule, test environments, number of testers, etc.

Incident Management

An incident is any significant, unplanned event that occurs during testing that requires subsequent investigation and/or correction. Incidents are raised when expected and actual test results differ.

Incidents may be raised against documentation as well as code or a system under test.

Incidents may be analysed to monitor the test process and to aid in test process improvement.

Incidents should be logged when someone other than the author of the product under test performs the testing. Typically the information logged on an incident will include the expected and actual results, the test environment, the ID of the software under test, the name of the tester(s), severity, scope, priority, and any other information relevant to reproducing and fixing the potential fault.
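The fields listed above can be captured in a simple incident record. This is a sketch only; the field names are assumptions, not a standard schema such as IEEE 829's incident report.

```python
from dataclasses import dataclass

# Illustrative incident record; field names are hypothetical.
@dataclass
class Incident:
    summary: str
    expected_result: str
    actual_result: str
    software_under_test_id: str
    test_environment: str
    tester: str
    severity: str
    priority: str
    scope: str = ""
    status: str = "open"   # tracked from inception through to closure

incident = Incident(
    summary="Total mis-calculated on invoice screen",
    expected_result="Total = 110.00",
    actual_result="Total = 100.00",
    software_under_test_id="invoicing v2.3.1",
    test_environment="Windows test rig 4",
    tester="A. Tester",
    severity="major",
    priority="high",
)
print(incident.status)   # a new incident starts in the "open" state
```

Note the record holds both expected and actual results, since an incident is raised precisely because they differ.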

Incidents should be tracked from inception through various stages to resolution and eventual close-out.


Monday, January 18, 2010

Static Testing and Static Analysis

Why, when and what to review?

Any document can be reviewed: for instance, requirement specifications, design specifications, code, test plans, user guides, etc. Ideally, a document should be reviewed as soon as possible after it is produced.

Costs – ongoing review costs are typically around 15% of the development budget. The cost of reviews includes activities such as the review process itself, metrics analysis and process improvement.

Benefits – include areas such as development productivity improvements, reduced development time-scales, testing cost and time reductions, lifetime cost reductions, reduced fault levels, etc.

Types of Reviews

Walkthroughs – scenarios, dry runs, peer group, led by author.

Inspections – led by trained moderator (not author), defined roles, includes metrics, formal process based on rules and checklists with entry and exit criteria.

Informal reviews – undocumented, but useful, cheap, widely-used.

Technical reviews (also known as peer reviews) – documented, defined fault-detection process, includes peers and technical experts, no management participation.

Goals – validation and verification against specifications and standards, (and process improvement). Achieve consensus.

Activities – planning, overview meeting, preparation, review meeting, and follow-up (or similar).

Roles and responsibilities – moderators, authors, reviewers/inspectors and managers (planning activities).

Deliverables – product changes, source document changes, and improvements (both review and development).

Pitfalls – lack of training, lack of documentation, lack of management support (and failure to improve process).

Static Analysis

- compiler-generated information; dataflow analysis; control-flow graphing; complexity analysis

Static analysis involves no dynamic execution and can detect possible faults such as unreachable code, undeclared variables, parameter type mismatches, uncalled functions and procedures, possible array bound violations, etc.

Any fault found by a compiler is found by static analysis, since compilation is itself a form of static analysis. Compilers find faults in the syntax; many also provide information on variable use, which is useful during maintenance.

Data flow analysis considers the use of data on paths through the code, looking for possible anomalies such as a ‘definition’ with no intervening ‘use’ before redefinition, or ‘use’ of a variable after it is ‘killed’.
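Both anomalies can be shown in a small, hypothetical fragment that a data flow analyser could flag without ever running the code:

```python
# Hypothetical fragment containing two data-flow anomalies.

def order_total(prices):
    discount = 0.10      # 'definition' of discount ...
    discount = 0.15      # ... redefined with no intervening 'use': anomaly 1
    total = sum(prices) * (1 - discount)
    del discount         # the variable is 'killed' here
    # print(discount)    # 'use' after 'kill' (anomaly 2): would raise NameError
    return total

print(order_total([10.0, 10.0]))   # 17.0
```

The first anomaly (the dead assignment of `0.10`) is harmless but suggests a mistake; the second, if uncommented, is an outright fault.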

A control flow graph models the possible execution paths through a program: nodes represent statements or decisions, and edges represent transfers of control between them.

Complexity metrics, including McCabe’s cyclomatic complexity, can be derived from the control flow graph.
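As a sketch (the program and node numbering are hypothetical), a control flow graph can be represented as an adjacency list, and cyclomatic complexity computed from it using V(G) = E − N + 2P:

```python
# Control flow graph of a hypothetical if/else program, as an adjacency list:
#   node 1: if x > 0    (decision)
#   node 2:   y = x     (then branch)
#   node 3:   y = -x    (else branch)
#   node 4: return y    (join / exit)
cfg = {1: [2, 3], 2: [4], 3: [4], 4: []}

def cyclomatic_complexity(graph, components=1):
    """V(G) = E - N + 2P, with E edges, N nodes, P connected components."""
    nodes = len(graph)
    edges = sum(len(successors) for successors in graph.values())
    return edges - nodes + 2 * components

print(cyclomatic_complexity(cfg))   # 4 - 4 + 2 = 2: one decision gives V(G) = 2
```

V(G) also equals the number of independent paths through the graph, which is why it is used to gauge how many test cases a module needs.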


Monday, January 11, 2010

Types of Testing - Part II

Functional System Testing - functional requirements; requirements-based testing; business process-based testing

A functional requirement, per the IEEE definition, is “A requirement that specifies a function that a system or system component must perform”.

Requirements-based testing – where the user requirements specification and the system requirements specification (as used for contracts) may be used to derive test cases.

Business process-based testing – based on expected user profiles (e.g. scenarios, use cases, etc.).

Non-Functional System Testing - non-functional requirements; non-functional test types: load, performance and stress; security; usability; storage; volume; installability; documentation; recovery

Non-functional requirements are as important as functional requirements.

Integration Testing in the Large - testing the integration of systems and packages; testing interfaces to external organisations (e.g. Electronic Data Interchange, Internet)

Integration with other (complete) systems.

Identification of, and risk associated with, interfaces to these other systems.

Incremental/non-incremental approaches to integration.

Integration Testing in the Small - assembling components into sub-systems; sub-systems to systems; stubs and drivers; big-bang, top-down, bottom-up, other strategies

Integration testing tests interfaces and interaction of modules/subsystems.

Role of stubs and drivers.

Incremental strategies, to include: top-down, bottom-up and functional incrementation. Non-incremental approach (“big bang”).
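The role of stubs and drivers can be sketched as follows (all names are hypothetical): in top-down integration, a stub stands in for a lower-level component that is not yet ready, while a driver exercises the module under test.

```python
# Illustrative top-down integration: the module under test depends on a
# pricing component that does not exist yet, so a stub supplies canned
# answers, and a simple driver invokes the module and checks the result.

def price_lookup_stub(item_code):
    """Stub: replaces the real pricing service with canned answers."""
    return {"A1": 5.00, "B2": 2.50}.get(item_code, 0.0)

def order_total(item_codes, price_lookup):
    """Module under test: calls a lower-level pricing component."""
    return sum(price_lookup(code) for code in item_codes)

def driver():
    """Driver: exercises the module under test through its interface."""
    total = order_total(["A1", "B2", "A1"], price_lookup_stub)
    assert total == 12.50
    print("integration test passed, total =", total)

driver()
```

When the real pricing component is ready, it replaces the stub and the same driver re-runs the test against the assembled sub-system.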

Maintenance Testing - problems of maintenance; testing changes; risks of changes and regression testing

Testing old code – with poor/missing specifications.

Scope of testing with respect to changed code.

Impact analysis is difficult, so there is higher risk when making changes, and it is difficult to decide how much regression testing to do.


Wednesday, January 6, 2010

Types of Testing - Part I

There are many types of testing, but we will only cover those commonly used in real-life projects. Parts I and II give a brief introduction to them; the details of each type will be covered in later posts.

1) Regression Testing:

Whenever a fault is detected and fixed, the software should be re-tested to ensure that the original fault has been successfully removed. You should also consider testing for similar and related faults.

Tests should be repeatable, to allow re-testing / regression testing.

Regression testing attempts to verify that modifications have not caused unintended adverse side effects in the unchanged software (regression faults) and that the modified system still meets its requirements. It is performed whenever the software, or its environment, is changed.

Regression test suites are run many times and generally evolve slowly, so regression testing is ideal for automation. If automation is not possible, or the regression test suite is very large, it may be necessary to prune the suite: drop repetitive tests, reduce the number of tests on fixed faults, combine test cases, designate some tests for periodic running, etc. A subset of the regression test suite may also be used to verify emergency fixes.
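Because repeatability is what makes regression testing automatable, a minimal sketch using Python’s unittest might look like the following (the function under test is hypothetical):

```python
import unittest

def apply_discount(price, rate):
    """Hypothetical function under test: had a rounding fault, now fixed."""
    return round(price * (1 - rate), 2)

class RegressionSuite(unittest.TestCase):
    def test_original_fault_fixed(self):
        # re-test: the originally reported failure
        self.assertEqual(apply_discount(19.99, 0.10), 17.99)

    def test_related_cases_unchanged(self):
        # regression: similar inputs still behave as before the fix
        self.assertEqual(apply_discount(100.00, 0.25), 75.00)
        self.assertEqual(apply_discount(0.00, 0.10), 0.00)

# Run the whole suite; in practice it is re-run unchanged after every fix
# to catch regression faults in the unchanged software.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionSuite)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all passed:", result.wasSuccessful())
```

The first test re-checks the original fault; the second guards the surrounding behaviour, which is the essence of regression testing.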

2) Acceptance Testing:

Acceptance testing may be the only form of testing conducted by and visible to a customer when applied to a software package.

User acceptance testing – the final stage of validation. The customer should perform, or be closely involved in, this testing. Customers may choose to run any tests they wish, normally based on their usual business processes. A common approach is to set up a model office, where systems are tested in an environment as close to field use as is achievable.

Contract acceptance testing – A demonstration of the acceptance criteria, which would have been defined in the contract, being met.

Alpha & beta testing – in alpha and beta tests, once the software seems stable, people who represent your market use the product in the same way(s) they would if they bought the finished version, and give you their comments. Alpha tests are performed at the developer’s site, while beta tests are performed at customers’ sites.
