Lecture: Introduction to Software Engineering - Week 9: Software testing - Nguyễn Thị Minh Tuyền

Software Testing

1. What is it?

2. Who does it?

3. What are the steps?

4. What is the work product?

5. How do I ensure that I’ve done it right?

Topics covered

1. Development testing

2. Test-driven development

3. Release testing

4. User testing

Program testing

• Testing is intended
  ◦ to show that a program does what it is intended to do, and
  ◦ to discover program defects before it is put into use.
• When you test software, you execute a program using artificial data.
• You check the results of the test run for errors, anomalies or information about the program's non-functional attributes.
• Testing can reveal the presence of errors, NOT their absence.
• Testing is part of a more general verification and validation process, which also includes static validation techniques.

Remind: Weather station object interface

[Class diagram: WeatherStation, with attribute identifier and operations reportWeather(), reportStatus(), powerSave(instruments), remoteControl(commands), reconfigure(commands), restart(instruments), shutdown(instruments).]

Remind: Weather station state diagram

[State diagram: states Shutdown, Running, Testing, Transmitting, Configuring, Summarizing and Collecting. Transitions are triggered by the operations restart(), shutdown(), remoteControl(), reportStatus(), reconfigure(), powerSave() and reportWeather(), and by events such as test complete, transmission done, configuration done, collection done, weather summary complete and the clock.]

Weather station testing

• Using the state model, identify sequences of state transitions to be tested and the event sequences that cause these transitions.
• For example (the first sequence is sketched in code below):
  ◦ Shutdown → Running → Shutdown
  ◦ Configuring → Running → Testing → Transmitting → Running
  ◦ Running → Collecting → Running → Summarizing → Transmitting → Running

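A minimal sketch of such a state-transition test, assuming a hypothetical, much-simplified WeatherStation class with an observable state; the slides do not give the real implementation.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical, minimal WeatherStation: just enough state for the test.
class WeatherStation {
    enum State { SHUTDOWN, RUNNING }
    private State state = State.SHUTDOWN;
    State state() { return state; }
    void restart() { state = State.RUNNING; }    // restart(instruments), simplified
    void shutdown() { state = State.SHUTDOWN; }  // shutdown(instruments), simplified
}

public class WeatherStationStateTest {
    // Event sequence driving the transitions Shutdown -> Running -> Shutdown.
    @Test
    public void shutdownRunningShutdown() {
        WeatherStation station = new WeatherStation();
        assertEquals(WeatherStation.State.SHUTDOWN, station.state());
        station.restart();    // Shutdown -> Running
        assertEquals(WeatherStation.State.RUNNING, station.state());
        station.shutdown();   // Running -> Shutdown
        assertEquals(WeatherStation.State.SHUTDOWN, station.state());
    }
}
```

Each test drives the object through one identified transition sequence and checks the state reached after every event.
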
Automated testing

• Whenever possible, unit testing should be automated so that tests are run and checked without manual intervention.
• In automated unit testing, you use a test automation framework (such as JUnit) to write and run program tests.
  ◦ Unit testing frameworks provide generic test classes that you extend to create specific test cases.
  ◦ They can then run all of the tests that you have implemented and report, often through some GUI, on the success or otherwise of the tests.

Automated test components

• A setup part
  ◦ initializes the system with the test case, namely the inputs and expected outputs.
• A call part
  ◦ calls the object or method to be tested.
• An assertion part
  ◦ compares the result of the call with the expected result. If the assertion evaluates to true, the test has been successful; if false, it has failed.

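A minimal JUnit 4 sketch of these three parts; the Calculator class is a hypothetical example, not something from the slides.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Minimal class under test (hypothetical, for illustration only).
class Calculator {
    int add(int a, int b) { return a + b; }
}

public class CalculatorTest {
    @Test
    public void testAddition() {
        // Setup part: initialize the object under test, the inputs and the expected output.
        Calculator calc = new Calculator();
        int expected = 7;
        // Call part: call the method to be tested.
        int actual = calc.add(3, 4);
        // Assertion part: compare the result of the call with the expected result.
        assertEquals(expected, actual);
    }
}
```
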
Unit test effectiveness

• The test cases should show that the component that you are testing does what it is supposed to do.
• If there are defects in the component, these should be revealed by test cases.
• Two types of unit test case (both illustrated in the sketch below):
  ◦ The first type reflects normal operation of a program and should show that the component works as expected.
  ◦ The second type is based on testing experience of where common problems arise. It uses abnormal inputs to check that these are properly processed and do not crash the component.

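A sketch of both types of test case for a hypothetical parsing component, invented for the example.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical component: parses a non-negative integer, returning -1
// for input it cannot process rather than crashing.
class SafeParser {
    static int parseNonNegative(String s) {
        if (s == null) return -1;
        try {
            int value = Integer.parseInt(s.trim());
            return value >= 0 ? value : -1;
        } catch (NumberFormatException e) {
            return -1;
        }
    }
}

public class SafeParserTest {
    // First type: normal operation - the component works as expected.
    @Test
    public void parsesValidInput() {
        assertEquals(42, SafeParser.parseNonNegative("42"));
    }

    // Second type: abnormal inputs are processed properly and do not
    // crash the component.
    @Test
    public void handlesAbnormalInput() {
        assertEquals(-1, SafeParser.parseNonNegative(null));
        assertEquals(-1, SafeParser.parseNonNegative("not a number"));
        assertEquals(-1, SafeParser.parseNonNegative("-5"));
    }
}
```
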
Testing strategies

• Partition testing
  ◦ Identify groups of inputs that have common characteristics and should be processed in the same way.
  ◦ Choose tests from within each of these groups.
• Guideline-based testing
  ◦ Use testing guidelines to choose test cases.
  ◦ These guidelines reflect previous experience of the kinds of errors that programmers often make when developing components.

Partition testing

• Input data and output results often fall into different classes where all members of a class are related.
• Each of these classes is an equivalence partition or domain where the program behaves in an equivalent way for each class member.
• Test cases should be chosen from each partition.

Equivalence partitions

[Figure: Equivalence partitions. Number of input values: 'less than 4', 'between 4 and 10', 'more than 10', with sample test values 3, 4, 7, 10, 11. Input values: 'less than 10000', 'between 10000 and 99999', 'more than 99999', with sample test values 9999, 10000, 50000, 99999, 100000.]

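A sketch of partition-based test cases for the partitions in the figure, assuming a hypothetical validator that accepts between 4 and 10 values, each between 10000 and 99999. The tests pick a value from inside each partition and at the partition boundaries.

```java
import java.util.Arrays;
import org.junit.Test;
import static org.junit.Assert.*;

// Hypothetical validator matching the figure's partitions.
class InputValidator {
    static boolean accepts(int[] values) {
        if (values.length < 4 || values.length > 10) return false;
        for (int v : values) {
            if (v < 10000 || v > 99999) return false;
        }
        return true;
    }
}

public class PartitionTest {
    private int[] filled(int count) {
        int[] a = new int[count];
        Arrays.fill(a, 50000);  // a mid-partition value
        return a;
    }

    @Test
    public void numberOfInputValuesPartitions() {
        assertFalse(InputValidator.accepts(filled(3)));   // less than 4
        assertTrue(InputValidator.accepts(filled(7)));    // between 4 and 10
        assertFalse(InputValidator.accepts(filled(11)));  // more than 10
    }

    @Test
    public void inputValueBoundaries() {
        assertTrue(InputValidator.accepts(new int[] {10000, 99999, 50000, 50000}));   // on the boundaries
        assertFalse(InputValidator.accepts(new int[] {9999, 50000, 50000, 50000}));   // just below
        assertFalse(InputValidator.accepts(new int[] {100000, 50000, 50000, 50000})); // just above
    }
}
```
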
Equivalence partitioning

[Figure: Input equivalence partitions are subsets of the possible inputs; the system maps them to output partitions, where the correct outputs are a subset of the possible outputs.]

Example: Testing guidelines (sequences)

• Test software with sequences which have only a single value.
• Use sequences of different sizes in different tests.
• Derive tests so that the first, middle and last elements of the sequence are accessed.
• Test with sequences of zero length.

(These guidelines are applied in the sketch below.)

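A sketch applying the guidelines to a hypothetical max-of-sequence component, invented for the example.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import org.junit.Test;
import static org.junit.Assert.*;

// Hypothetical component: returns the largest element of a sequence,
// or null for a zero-length sequence.
class Sequences {
    static Integer max(List<Integer> seq) {
        Integer best = null;
        for (int v : seq) {
            if (best == null || v > best) best = v;
        }
        return best;
    }
}

public class SequenceGuidelineTest {
    @Test
    public void singleValueSequence() {
        assertEquals(Integer.valueOf(5), Sequences.max(Arrays.asList(5)));
    }

    @Test
    public void zeroLengthSequence() {
        assertNull(Sequences.max(Collections.emptyList()));
    }

    @Test
    public void firstMiddleAndLastElementsAccessed() {
        // The maximum placed first, in the middle and last, with different sizes.
        assertEquals(Integer.valueOf(9), Sequences.max(Arrays.asList(9, 1, 2)));
        assertEquals(Integer.valueOf(9), Sequences.max(Arrays.asList(1, 9, 2, 3)));
        assertEquals(Integer.valueOf(9), Sequences.max(Arrays.asList(1, 2, 9)));
    }
}
```
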
General testing guidelines

• Choose inputs that force the system to generate all error messages.
• Design inputs that cause input buffers to overflow.
• Repeat the same input or series of inputs numerous times.
• Force invalid outputs to be generated.
• Force computation results to be too large or too small.

Component testing

• Software components are often composite components that are made up of several interacting objects.
• You access the functionality of these objects through the defined component interface.
• Testing composite components should therefore focus on showing that the component interface behaves according to its specification.
  ◦ You can assume that unit tests on the individual objects within the component have been completed.

Interface testing

[Figure: test cases are applied to the interface of a composite component made up of interacting objects A, B and C.]

Interface testing

• Objectives: detect faults due to interface errors or invalid assumptions about interfaces.
• Interface types:
  ◦ Parameter interfaces: data passed from one method or procedure to another.
  ◦ Shared memory interfaces: a block of memory is shared between procedures or functions.
  ◦ Procedural interfaces: a sub-system encapsulates a set of procedures to be called by other sub-systems.
  ◦ Message passing interfaces: sub-systems request services from other sub-systems.

Interface errors

• Interface misuse
  ◦ A calling component calls another component and makes an error in its use of its interface, e.g. passes parameters in the wrong order.
• Interface misunderstanding
  ◦ A calling component embeds incorrect assumptions about the behaviour of the called component.
• Timing errors
  ◦ The called and the calling component operate at different speeds and out-of-date information is accessed.

Interface testing guidelines

• Design tests so that parameters to a called procedure are at the extreme ends of their ranges.
• Always test pointer parameters with null pointers.
• Design tests which cause the component to fail.
• Use stress testing in message passing systems.
• In shared memory systems, vary the order in which components are activated.

(The sketch below applies the first two guidelines.)

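A sketch of the first two guidelines against a hypothetical averaging procedure; in Java the null-pointer guideline applies to reference parameters.

```java
import org.junit.Test;
import static org.junit.Assert.*;

// Hypothetical procedure under test: the average of an array.
class Stats {
    static double average(double[] values) {
        if (values == null || values.length == 0) {
            throw new IllegalArgumentException("values must be a non-empty array");
        }
        double sum = 0;
        for (double v : values) sum += v;
        return sum / values.length;
    }
}

public class InterfaceGuidelineTest {
    // Always test pointer (reference) parameters with null.
    @Test(expected = IllegalArgumentException.class)
    public void rejectsNullParameter() {
        Stats.average(null);
    }

    // Parameters at the extreme ends of their ranges.
    @Test
    public void handlesExtremeValues() {
        double[] extremes = {Double.MAX_VALUE, -Double.MAX_VALUE};
        assertEquals(0.0, Stats.average(extremes), 0.0);
    }
}
```
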
System testing

• Involves integrating components to create a version of the system and then testing the integrated system.
• Focuses on testing the interactions between components.
• Checks that components are compatible, interact correctly and transfer the right data at the right time across their interfaces.

System and component testing

• During system testing, reusable components that have been separately developed and off-the-shelf systems may be integrated with newly developed components. The complete system is then tested.
• Components developed by different team members or sub-teams may be integrated at this stage. System testing is a collective rather than an individual process.
  ◦ In some companies, system testing may involve a separate testing team with no involvement from designers and programmers.

Use-case testing

• The use cases developed to identify system interactions can be used as a basis for system testing.
• Each use case usually involves several system components, so testing the use case forces these interactions to occur.
• The sequence diagram associated with the use case documents the components and interactions that are being tested.

Collect weather data sequence chart

[Sequence diagram with lifelines for the weather information system, SatComms, WeatherStation, Commslink and WeatherData. The information system sends request(report) to SatComms; the request is acknowledged and reportWeather() is passed via the Commslink to the WeatherStation, which gets and summarises the data held in WeatherData (get(summary), summarise()) and then returns the report with send(report). Each step is acknowledged, and the chart ends with reply(report) back to the information system.]

Testing policies

• Exhaustive system testing is impossible, so testing policies which define the required system test coverage may be developed.
• Examples of testing policies:
  ◦ All system functions that are accessed through menus should be tested.
  ◦ Combinations of functions that are accessed through the same menu must be tested.
  ◦ Where user input is provided, all functions must be tested with both correct and incorrect input.

Topics covered

• Development testing
• Test-driven development
• Release testing
• User testing

Test-driven development (TDD)

• An approach to program development in which you interleave testing and code development.
• Tests are written before code, and 'passing' the tests is the critical driver of development.
• You develop code incrementally, along with a test for that increment.
• Part of agile methods such as Extreme Programming.
  ◦ It can also be used in plan-driven development processes.

Test-driven development

[Figure: the TDD cycle. Identify new functionality → Write test → Run test. While the test fails, implement the functionality and refactor, then run the test again; once it passes, return to identifying new functionality.]

TDD process activities

• Start by identifying the increment of functionality that is required.
  ◦ This should normally be small and implementable in a few lines of code.
• Write a test for this functionality and implement it as an automated test.
• Run the test, along with all other tests that have been implemented.
  ◦ Initially, you have not implemented the functionality, so the new test will fail.
• Implement the functionality and re-run the test.
• Once all tests run successfully, move on to implementing the next chunk of functionality.

(One such increment is sketched below.)

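A sketch of a single TDD increment under an invented requirement (a 10% discount on orders of 100.0 or more); both the test written first and the code that makes it pass are shown.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Step 1: write the test for the new increment first. Before the
// functionality exists, running it fails (or it does not even compile).
public class DiscountTest {
    @Test
    public void tenPercentDiscountFromOneHundred() {
        assertEquals(90.0, Discount.apply(100.0), 0.001);
        assertEquals(50.0, Discount.apply(50.0), 0.001);
    }
}

// Step 2: implement just enough functionality to make the test pass,
// then refactor and move on to the next increment.
class Discount {
    static double apply(double price) {
        // Invented rule: 10% discount on orders of 100.0 or more.
        return price >= 100.0 ? price * 0.9 : price;
    }
}
```
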
Benefits of test-driven development

• Code coverage
  ◦ Every code segment that you write has at least one associated test, so all code written has at least one test.
• Regression testing
  ◦ A regression test suite is developed incrementally as a program is developed.
• Simplified debugging
  ◦ When a test fails, it should be obvious where the problem lies. The newly written code needs to be checked and modified.
• System documentation
  ◦ The tests themselves are a form of documentation that describe what the code should be doing.

Regression testing

• Regression testing is testing the system to check that changes have not 'broken' previously working code.
• In a manual testing process, regression testing is expensive but, with automated testing, it is simple and straightforward. All tests are rerun every time a change is made to the program (see the sketch below).
• Tests must run 'successfully' before the change is committed.

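In practice a build tool or continuous integration server reruns the suite automatically; as a minimal illustration, JUnit 4's JUnitCore can rerun a set of test classes programmatically. The class names here reuse the hypothetical sketches from earlier slides and would need to be on the classpath together.

```java
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;

// Rerun the whole regression suite after every change to the program.
public class RegressionRunner {
    public static void main(String[] args) {
        Result result = JUnitCore.runClasses(
                CalculatorTest.class,
                SafeParserTest.class,
                SequenceGuidelineTest.class);
        for (Failure failure : result.getFailures()) {
            System.out.println(failure.toString());
        }
        // The change should only be committed if every test passes.
        System.out.println(result.wasSuccessful()
                ? "All tests passed - safe to commit."
                : "Regression detected - do not commit.");
    }
}
```
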
Topics covered

• Development testing
• Test-driven development
• Release testing
• User testing

Release testing

• The process of testing a particular release of a system that is intended for use outside of the development team.
• Main goal: convince the supplier of the system that it is good enough for use.
• Usually a black-box testing process where tests are derived only from the system specification.

Release testing and system testing

• Release testing is a form of system testing.
• Important differences:
  ◦ A separate team that has not been involved in the system development should be responsible for release testing.
  ◦ System testing by the development team should focus on discovering bugs in the system (defect testing). The objective of release testing is to check that the system meets its requirements and is good enough for external use (validation testing).

Requirements-based testing

• Requirements-based testing involves examining each requirement and developing a test or tests for it.
• Mentcare system requirements:
  ◦ If a patient is known to be allergic to any particular medication, then prescription of that medication shall result in a warning message being issued to the system user.
  ◦ If a prescriber chooses to ignore an allergy warning, they shall provide a reason why this has been ignored.

Requirements tests

1. Set up a patient record with no known allergies. Prescribe medication for allergies that are known to exist. Check that a warning message is not issued by the system.
2. Set up a patient record with a known allergy. Prescribe the medication that the patient is allergic to, and check that the warning is issued by the system.
3. Set up a patient record in which allergies to two or more drugs are recorded. Prescribe both of these drugs separately and check that the correct warning for each drug is issued.
4. Prescribe two drugs that the patient is allergic to. Check that two warnings are correctly issued.
5. Prescribe a drug that issues a warning and overrule that warning. Check that the system requires the user to provide information explaining why the warning was overruled.

(Tests 1 and 2 are sketched in code below.)

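A hedged sketch of tests 1 and 2 against invented stand-ins for the Mentcare system; PatientRecord and Prescriber are illustrative names, not the real system's API.

```java
import java.util.HashSet;
import java.util.Set;
import org.junit.Test;
import static org.junit.Assert.*;

// Minimal stand-ins for the system under test (hypothetical).
class PatientRecord {
    private final Set<String> allergies = new HashSet<>();
    void addAllergy(String drug) { allergies.add(drug); }
    boolean isAllergicTo(String drug) { return allergies.contains(drug); }
}

class Prescriber {
    // Returns a warning message if the patient is allergic, else null.
    String prescribe(PatientRecord patient, String drug) {
        return patient.isAllergicTo(drug)
                ? "WARNING: patient is allergic to " + drug
                : null;
    }
}

public class AllergyWarningTest {
    // Test 1: no known allergies, so no warning message is issued.
    @Test
    public void noWarningWithoutKnownAllergy() {
        PatientRecord patient = new PatientRecord();
        assertNull(new Prescriber().prescribe(patient, "aspirin"));
    }

    // Test 2: known allergy, so prescribing that medication issues a warning.
    @Test
    public void warningIssuedForKnownAllergy() {
        PatientRecord patient = new PatientRecord();
        patient.addAllergy("penicillin");
        assertNotNull(new Prescriber().prescribe(patient, "penicillin"));
    }
}
```
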
Performance testing

• Part of release testing may involve testing the emergent properties of a system, such as performance and reliability.
• Tests should reflect the profile of use of the system.
• Performance tests usually involve planning a series of tests where the load is steadily increased until the system performance becomes unacceptable (see the sketch below).
• Stress testing is a form of performance testing where the system is deliberately overloaded to test its failure behaviour.

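A minimal sketch of the load-increase idea: the load is doubled on each step until the measured time exceeds an assumed budget. operationUnderTest is a placeholder for real requests to the deployed system, and a realistic test would follow the system's profile of use.

```java
// Hedged sketch of a performance test loop (all numbers are assumptions).
public class LoadTest {
    static final long BUDGET_MILLIS = 5_000;  // assumed acceptability budget

    static void operationUnderTest() {
        // Placeholder for one request against the system under test.
        try { Thread.sleep(1); } catch (InterruptedException ignored) { }
    }

    public static void main(String[] args) {
        for (int load = 100; load <= 1_000_000; load *= 2) {  // steadily increase the load
            long start = System.nanoTime();
            for (int i = 0; i < load; i++) {
                operationUnderTest();
            }
            long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
            System.out.println("load=" + load + " requests, elapsed=" + elapsedMillis + " ms");
            if (elapsedMillis > BUDGET_MILLIS) {
                System.out.println("Performance became unacceptable at load " + load);
                break;
            }
        }
    }
}
```
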
Topics covered

• Development testing
• Test-driven development
• Release testing
• User testing

User testing

• Users or customers provide input and advice on system testing.
• User testing is essential, even when comprehensive system and release testing have been carried out.
  ◦ The reason is that influences from the user's working environment have a major effect on the reliability, performance, usability and robustness of a system, and these cannot be replicated in a testing environment.

Types of user testing

• Alpha testing
  ◦ Users of the software work with the development team to test the software at the developer's site.
• Beta testing
  ◦ A release of the software is made available to users to allow them to experiment and to raise problems that they discover with the system developers.
• Acceptance testing
  ◦ Customers test a system to decide whether or not it is ready to be accepted from the system developers and deployed in the customer environment. Primarily for custom systems.

The acceptance testing process

[Figure: Define acceptance criteria → Plan acceptance testing → Derive acceptance tests → Run acceptance tests → Negotiate test results → Accept or reject system. The stages produce, in turn, the test criteria, the test plan, the tests, the test results and the testing report.]

Agile methods and acceptance testing

• The user/customer is part of the development team and is responsible for making decisions on the acceptability of the system.
• Tests are defined by the user/customer and are integrated with other tests in that they are run automatically when changes are made.
• There is no separate acceptance testing process.
• The main problem here is whether or not the embedded user is 'typical' and can represent the interests of all system stakeholders.

V-model

[Figure: the V-model of development and testing. The development activities (requirements specification, system specification, system design, detailed design) each feed a test plan: the acceptance test plan, the system integration test plan, the sub-system integration test plan, and module and unit code and test. The plans drive the corresponding testing activities (sub-system integration test, system integration test, acceptance test), after which the system goes into service.]

Questions?
