
ISTQB CTFL Certified Tester Foundation Level – 2018: Testing Throughout The Software Life Cycle Part 4

April 11, 2023

9. Test Levels:
Component Testing

In the V-model discussed earlier, there were different test stages. They are often called test levels. Test levels are groups of test activities that are organized and managed together. Each test level is an instance of the testing process, performed in relation to the software at a given level of development, from individual units or components to complete systems or, where applicable, systems of systems. Test levels are related to other activities within the software development lifecycle.

The typical levels of testing are unit or component testing, integration testing, system testing, and acceptance testing. Each test level has its own concerns and issues that we need to consider. Test levels are characterized by the following attributes: specific objectives; a test basis, which is the source we use to derive test cases for that level; a test object, that is, what is being tested; typical defects and failures; and specific approaches and responsibilities. To understand the whole idea better, think of building a car. It has a lot of components. Before assembling the car, you need to make sure that each component is working perfectly well by itself. So you run some tests on each component individually.

That’s component or unit testing. After making sure that each component is working fine, you will start adding the components one by one. When you add a new component, you want to make sure that it works quite well with the other components before moving along. So you want to make sure that the newly added component interacts and integrates well with the other components.

That’s integration testing. Now the whole car is built and looks great, but you must take it for a test drive and try as many scenarios as possible, like driving uphill or trying it on different kinds of roads, smooth or bumpy. That’s system testing. So the car looks great and beautiful and works fine, but the buyer still must try it first before purchasing it. That’s acceptance testing. Back to the software world: each of these test levels will include tests designed to uncover problems specifically at that stage of development.

These levels of testing can be applied to any development model, and they may vary depending on the development model. In addition, for every test level, a suitable test environment is required. In acceptance testing, for example, a production-like test environment is ideal, while in component testing, the developers typically use their own development environment for testing. Let’s talk about each test level one by one, starting with component testing. Component testing is the lowest level of testing. Also known as unit or module testing, it focuses on components that are usually tested in isolation. Unit testing is usually done by the programmers, as they are the people who know best the code that constructs the component. So developers usually alternate component development with finding and fixing defects. Unit testing is intended to ensure that the code written for the unit meets its specification prior to its integration with other units.

Objectives of component testing include: reducing the risk of delivering a bad component; verifying whether the functional and non-functional behaviors of the component are as designed and specified; building confidence in the component’s quality; finding defects in the component; and preventing defects from escaping to higher test levels. Test basis: as we have mentioned before, the test basis consists of the documents that we use as the basis or reference to perform the testing. Examples of work products that can be used as a test basis for component testing include the detailed design, the code, the data model, and component specifications. Test objects: so what do we test while we are doing component or unit testing? The answer is components.

Thus, test objects can be components, units or modules, code and data structures, classes, and database modules. Components are small pieces of code developed by developers. Unit testing requires access to the code being tested. Specific approaches and responsibilities: all we want to do here is make sure that the developer delivers a perfectly working unit and does not leave it to the tester to find the mistakes in the component. Component testing may cover functionality (for example, correctness of calculations), non-functional characteristics (for example, searching for memory leaks), and structural properties (for example, decision testing). We will know more about those in future videos. Component testing is often done in isolation from the rest of the system, depending on the software development lifecycle model and the system, which may require mock objects, service virtualization, harnesses, stubs, and drivers. Let me explain what stubs and drivers are using the car assembly example again. Some components of the car can be tested in isolation by themselves.

But some other components might need some sort of external tools, wires, and maybe other external parts to help test them. In software, we call those helpers stubs and drivers. Imagine a developer implemented a piece of code to calculate the salary of an employee. Don’t worry, you don’t need to know any programming languages to understand this; just follow along. In this unit, the developer needs to interact with another component called Calculate Bonus. But unfortunately, this unit is not ready yet. So what should the developer do? Well, he might write or create a fake piece of code and name it Calculate Bonus. It’s not the final version for sure, but it’s there only to help the developer test his Calculate Salary unit. So what should he write in that Calculate Bonus code? The bare minimum. We don’t want the new code to produce bugs by itself. So we might let the Calculate Bonus code just return $1,000. So while we test our Calculate Salary code, we will assume that all employees will receive a $1,000 bonus. In our example, the Calculate Bonus piece of code is called a stub. So a stub is a dummy piece of code used for testing in place of a called component.
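To make the idea concrete, here is a minimal sketch in Python. The lesson doesn’t show actual code, so the function names calculate_salary and calculate_bonus, their parameters, and the figures are only illustrative:

```python
# Stub: Calculate Bonus is not ready yet, so we stand in a dummy
# implementation that simply returns a fixed $1,000 bonus.
def calculate_bonus(employee_id):
    return 1000  # stub replaces the real, still unwritten component


# The unit under test: Calculate Salary calls the stubbed component.
def calculate_salary(base_salary, employee_id):
    return base_salary + calculate_bonus(employee_id)
```

While the stub is in place, every test of calculate_salary assumes a flat $1,000 bonus, so any failure we see must come from calculate_salary itself, not from the missing Calculate Bonus component.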

Now, if we want to test the Calculate Salary unit, we need to write code to run it; it cannot be executed by itself. In many languages, the first starting point of a program is a function called the main function. So in our case, we would write a main function to call our Calculate Salary unit. Now, this main function is written only for testing purposes, and it’s called a driver. So a driver is a dummy piece of code used for testing in place of a caller component.
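Continuing the same illustrative sketch, a driver is simply the extra code whose only job is to call the unit under test. A minimal main-style driver for the assumed calculate_salary function might look like this:

```python
# Driver: temporary code written only to exercise the unit under test.
def main():
    # Call calculate_salary with sample inputs and check the results
    # (the stubbed calculate_bonus always returns 1000).
    assert calculate_salary(3000, employee_id=42) == 4000
    assert calculate_salary(0, employee_id=7) == 1000
    print("calculate_salary unit tests passed")


if __name__ == "__main__":
    main()  # the driver stands in for the real caller of calculate_salary
```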
In some cases, especially in incremental and iterative development models (for example, Agile), where code changes are ongoing, we would like to save the developers the time of doing unit testing manually with every change. So automated component regression tests play a key role in building confidence that the changes have not broken existing components. Automated testing means we automate the way we do the testing, so the computer runs a script that executes the tests for us instead of us doing it manually. So far, so good. Now I want your full attention for what’s coming next.

As you know, developers will create the components, then they will test those components. So developers will often write and execute tests after having written the code for a component. However, especially in Agile development, writing automated component tests may precede writing the application code. This is a development approach called test-driven development, or TDD. So the developers here start with writing the automated test cases. Then they write the code to fulfill those automated tests. Of course, the first time the developer runs the tests, when there isn’t actually any code written yet, everything fails. Developers then write the code bit by bit and continuously run the automated test scripts, correcting mistakes, until finally all the scripts pass, which means coding is done. That’s why we call it test-driven development: the tests are the ones that drive our way through development. Later, the developers can refactor, refine, or clean up the code.
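As a sketch of one TDD cycle, again using the illustrative salary example and Python’s built-in unittest module (the course doesn’t prescribe any particular tool):

```python
import unittest


# Step 1: write the automated test first. Run it now and it fails,
# because calculate_salary has not been written yet.
class TestCalculateSalary(unittest.TestCase):
    def test_salary_includes_bonus(self):
        self.assertEqual(calculate_salary(3000, bonus=1000), 4000)


# Step 2: write just enough code to make the test pass.
def calculate_salary(base_salary, bonus):
    return base_salary + bonus


# Step 3: rerun the tests until they all pass, then refactor the code
# while keeping the tests green.
if __name__ == "__main__":
    unittest.main()
```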

Projects that use TDD are highly iterative. Test-driven development is an example of test-first development. While test-driven development originated in Extreme Programming (XP), which is one of the Agile methodologies, it has spread to other forms of Agile and also to sequential life cycles. You don’t need to know much about TDD, as it is explained in detail in the ISTQB Agile Tester extension certificate; here we only need to know that it exists. Typical defects and failures that can be found during component testing are incorrect functionality (for example, not as described in the design specification), data flow problems, and incorrect code and logic. Defects found during unit tests are often not recorded in formal defect management systems, and developers can fix them immediately. However, when developers do report defects found during component testing, it will be for the purpose of providing important information for root cause analysis and process improvement.

10. Test Levels:
Integration Testing

Now that each unit has been proven to be working correctly individually and is ready to go, the next level would be to put those units together to build the system. This is what we call integration. At this stage, testers concentrate solely on the integration itself. For example, if they are integrating module A with module B, they are interested in testing the communication between those modules, not the functionality of the individual modules, as that was done already during component testing. Test basis: examples of work products that can be used as a test basis for integration testing include software and system design, sequence diagrams, interface and communication protocol specifications, use cases, architecture at the component or system level, workflows, and external interface definitions. The test objects typically are subsystems, databases, infrastructure, interfaces, APIs, and microservices.

Objectives of integration testing: integration testing focuses on interactions between components or systems. Objectives of integration testing include: reducing risk; verifying whether the functional and non-functional behaviors of the interfaces are as designed and specified; building confidence in the quality of the interfaces; finding defects, which may be in the interfaces themselves or within the components or systems; and preventing defects from escaping to higher test levels. As with component testing, in some cases automated integration regression tests provide confidence that the changes have not broken existing interfaces, components, or systems.

There are two different levels of integration testing described in this syllabus. Component integration testing focuses on the interactions and interfaces between integrated components and is performed after component testing. This type of integration testing is usually carried out by developers and is generally automated. In iterative and incremental development, component integration tests are usually part of the continuous integration process.
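Picking up the earlier salary sketch, a component integration test would now wire Calculate Salary to the real Calculate Bonus instead of the stub, and check that the two components work together. A minimal, illustrative example (the 10% bonus rule is invented for the sketch):

```python
# The real component is finished now: a hypothetical 10% bonus rule.
def calculate_bonus(base_salary):
    return base_salary * 0.10


def calculate_salary(base_salary):
    # No stub involved anymore: the real calculate_bonus is called.
    return base_salary + calculate_bonus(base_salary)


def test_salary_bonus_integration():
    # The focus is the interaction: does calculate_salary pass the right
    # data to calculate_bonus and combine the result correctly?
    assert calculate_salary(3000) == 3300
```

Notice that the test exercises the interface between the two components rather than re-testing each component’s internal logic.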
System integration testing focuses on the interactions and interfaces between systems, packages, and microservices, so we are talking about bigger test objects here: systems. System integration testing can also cover interactions with, and interfaces provided by, external organizations (for example, web services). To use the example of the car, system integration is done when the whole car is already assembled and tested and you want to try the car with another system, say a camper.

The car is working perfectly fine by itself and the camper is working fine by itself, and now we want to try the two together, especially the hitch, which connects the car to the camper. This is system integration testing. To have an example from the software industry, a trading system of an investment bank will interact with the stock exchange to get the latest prices for its stocks and shares on the international market. This type of integration testing is usually carried out by testers. In this case, the developing organization doesn’t control the external interfaces, which can create various challenges for testing, for example, ensuring that test-blocking defects in the external organization’s code are resolved, arranging for test environments, and so on.
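As a sketch of what a system integration test for such a trading system might look like, assuming a hypothetical price-feed endpoint (the real exchange interface, URL, and data format are not described in the lesson):

```python
import json
from urllib import request

# Hypothetical external stock-exchange web service; the URL and the
# response format are assumptions made up for this sketch.
EXCHANGE_URL = "https://exchange.example.com/api/prices/{symbol}"


def fetch_latest_price(symbol):
    with request.urlopen(EXCHANGE_URL.format(symbol=symbol), timeout=5) as resp:
        payload = json.loads(resp.read().decode("utf-8"))
    return payload["price"]


def test_exchange_system_integration():
    # System integration test: exercise the real external interface and
    # check our assumptions about the data coming back (type, boundaries).
    price = fetch_latest_price("ACME")
    assert isinstance(price, (int, float))
    assert price > 0
```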

System integration testing may be done after system testing or in parallel with ongoing system test activities, in both sequential and iterative and incremental development. Ideally, testers should understand the architecture and influence integration planning. If integration tests are planned before components or systems are built, those components can be built in the order required for the most efficient testing.

Typical defects and failures: for component integration testing, typical defects and failures include incorrect data, missing data, or incorrect data encoding; incorrect sequencing or timing of interface calls; interface mismatches; failures in communication between components; unhandled or improperly handled communication failures between components; and incorrect assumptions about the meaning, units, or boundaries of the data being passed between components. Examples of typical defects and failures for system integration testing include inconsistent message structures between systems; incorrect data, missing data, or incorrect data encoding; interface mismatches; failures in communication between systems; unhandled or improperly handled communication failures between systems; incorrect assumptions about the meaning, units, or boundaries of the data being passed between systems; and failure to comply with mandatory security regulations.

Specific approaches and responsibilities for integration testing: functional, non-functional, and structural test types are applicable. We will explain those later. In addition, there are various integration strategies. The first one is big bang integration, where we integrate a bunch of units together in one single step, resulting in a complete system. The problem with this kind of integration is that it looks like it builds the system faster, but in real life it is much more time consuming, either because we would have to wait until we have a bunch of units ready to integrate, or because, when testing of the integrated system is conducted, it is difficult to isolate any errors found. When an error appears after several units have been added at once, it is hard to know which unit caused it.

But on the other hand, there are more systematic integration strategies that are usually based on the system structure, for example top-down integration or bottom-up integration, or integration based on functional tasks, transaction processing sequences, or some other aspect of the system or its components. In such systematic strategies, where we usually integrate the components one by one, it is much easier to isolate and detect defects early. Integration should normally be incremental, meaning a small number of additional components or systems at a time, rather than big bang.

The greater the scope of integration, the more difficult it becomes to isolate defects to a specific component or system, which may lead to increased risk and additional time for troubleshooting. This is one reason that continuous integration, where software is integrated on a component-by-component basis (that is, functional integration), has become common practice. Such continuous integration often includes automated regression testing, ideally at multiple test levels. Finally, risk analysis of the most complex interfaces can help to focus the integration testing.