Short Answer and Discussion (400 words total) - Reading
Finish within 2 days. Read the articles carefully, or I will dispute.
1. Short Answer
To read
• [required] Hong Zhu, P.A.V. Hall, and J.H.R. May. Software unit test
coverage and adequacy. ACM Computing Surveys, 29(4): 366-427, Dec.
1997. Read Sections 1-2.1 and 3.1.
• [required] Roger S. Pressman. Chapter 17: Software Testing
Strategies. Software Engineering: A Practitioner's Approach, 7th ed.,
McGraw-Hill, 2010, pp. 449-480 [can also use 8th edition].
To turn in (250 words total)
Prepare a brief (no more than one page) written answer to the following two
questions. Write up your answer using MS Word. One well-presented paragraph
for each question is sufficient.
1. What role should measurement play in a good testing strategy?
2. If you were involved in a development group that already had used a
testing strategy for its products, what is the best way to decide how
that strategy can be improved?
2. Discussion
To read
• [required] A First Look at the Effectiveness of Personality Dimensions
in Promoting Users’ Satisfaction With the System
To turn in (150 words total)
Do you think UI personalization is a good idea?
Given a chance, would you choose personalized UI or common standard UI?
Why?
Have you heard about the filter bubble? Do you think personalization of the UI
can lead to a similar circumstance?
CHAPTER 17: SOFTWARE TESTING STRATEGIES

QUICK LOOK

What is it? Software is tested to uncover errors that were made inadvertently as it was designed and constructed. But how do you conduct the tests? Should you develop a formal plan for your tests? Should you test the entire program as a whole or run tests only on a small part of it? Should you rerun tests you've already conducted as you add new components to a large system? When should you involve the customer? These and many other questions are answered when you develop a software testing strategy.

Who does it? A strategy for software testing is developed by the project manager, software engineers, and testing specialists.

Why is it important? Testing often accounts for more project effort than any other software engineering action. If it is conducted haphazardly, time is wasted, unnecessary effort is expended, and even worse, errors sneak through undetected. It would therefore seem reasonable to establish a systematic strategy for testing software.

What are the steps? Testing begins "in the small" and progresses "to the large." By this I mean that early testing focuses on a single component or a small group of related components and applies tests to uncover errors in the data and processing logic that have been encapsulated by the component(s). After components are tested they must be integrated until the complete system is constructed. At this point, a series of high-order tests are executed to uncover errors in meeting customer requirements. As errors are uncovered, they must be diagnosed and corrected using a process that is called debugging.

What is the work product? A Test Specification documents the software team's approach to testing by defining a plan that describes an overall strategy and a procedure that defines specific testing steps and the types of tests that will be conducted.

How do I ensure that I've done it right? By reviewing the Test Specification prior to testing, you can assess the completeness of test cases and testing tasks. An effective test plan and procedure will lead to the orderly construction of the software and the discovery of errors at each stage in the construction process.

KEY CONCEPTS: alpha test, beta test, class testing, configuration review, debugging, deployment testing, independent test group, integration testing, regression testing, system testing, unit testing, validation testing, V&V

A strategy for software testing provides a road map that describes the steps to be conducted as part of testing, when these steps are planned and then undertaken, and how much effort, time, and resources will be required. Therefore, any testing strategy must incorporate test planning, test case design, test execution, and resultant data collection and evaluation.

A software testing strategy should be flexible enough to promote a customized testing approach. At the same time, it must be rigid enough to encourage reasonable planning and management tracking as the project progresses. Shooman [Sho83] discusses these issues:

In many ways, testing is an individualistic process, and the number of different types of tests varies as much as the different development approaches. For many years, our only defense against programming errors was careful design and the native intelligence of the programmer. We are now in an era in which modern design techniques [and technical reviews] are helping us to reduce the number of initial errors that are inherent in the code. Similarly, different test methods are beginning to cluster themselves into several distinct approaches and philosophies.
These "approaches and philosophies" are what I call strategy—the topic to be presented in this chapter. In Chapters 18 through 20, the testing methods and techniques that implement the strategy are presented.
17.1 A STRATEGIC APPROACH TO SOFTWARE TESTING
Testing is a set of activities that can be planned in advance and conducted systematically. For this reason a template for software testing—a set of steps into which you can place specific test case design techniques and testing methods—should be defined for the software process.
A number of software testing strategies have been proposed in the literature.
All provide you with a template for testing and all have the following generic
characteristics:
• To perform effective testing, you should conduct effective technical reviews
(Chapter 15). By doing this, many errors will be eliminated before testing
commences.
• Testing begins at the component level and works “outward” toward the
integration of the entire computer-based system.
• Different testing techniques are appropriate for different software engineering approaches and at different points in time.
• Testing is conducted by the developer of the software and (for large projects)
an independent test group.
• Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.
A strategy for software testing must accommodate low-level tests that are necessary to verify that a small source code segment has been correctly implemented as well as high-level tests that validate major system functions against customer requirements. A strategy should provide guidance for the practitioner and a set of milestones for the manager. Because the steps of the test strategy occur at a time when deadline pressure begins to rise, progress must be measurable and problems should surface as early as possible.
17.1.1 Verification and Validation
WebRef: Useful resources for software testing can be found at www.mtsu.edu/~storm/.

Software testing is one element of a broader topic that is often referred to as verification and validation (V&V). Verification refers to the set of tasks that ensure that software correctly implements a specific function. Validation refers to a different set of tasks that ensure that the software that has been built is traceable to customer requirements. Boehm [Boe81] states this another way:
Verification: “Are we building the product right?”
Validation: “Are we building the right product?”
The definition of V&V encompasses many software quality assurance activities (Chapter 16).1

Verification and validation include a wide array of SQA activities: technical reviews, quality and configuration audits, performance monitoring, simulation, feasibility study, documentation review, database review, algorithm analysis, development testing, usability testing, qualification testing, acceptance testing, and installation testing. Although testing plays an extremely important role in V&V, many other activities are also necessary.
Testing does provide the last bastion from which quality can be assessed and, more pragmatically, errors can be uncovered. But testing should not be viewed as a safety net. As they say, "You can't test in quality. If it's not there before you begin testing, it won't be there when you're finished testing." Quality is incorporated into software throughout the process of software engineering. Proper application of methods and tools, effective technical reviews, and solid management and measurement all lead to quality that is confirmed during testing.

Miller [Mil77] relates software testing to quality assurance by stating that "the underlying motivation of program testing is to affirm software quality with methods that can be economically and effectively applied to both large-scale and small-scale systems."
17.1.2 Organizing for Software Testing
For every software project, there is an inherent conflict of interest that occurs as testing begins. The people who have built the software are now asked to test the software. This seems harmless in itself; after all, who knows the program better than its developers? Unfortunately, these same developers have a vested interest in demonstrating that the program is error-free, that it works according to customer requirements, and that it will be completed on schedule and within budget. Each of these interests mitigates against thorough testing.
Quote: "Testing is the unavoidable part of any responsible effort to develop a software system." — William Howden

Quote: "Optimism is the occupational hazard of programming; testing is the treatment." — Kent Beck

Note: Don't get sloppy and view testing as a "safety net" that will catch all errors that occurred because of weak software engineering practices. It won't. Stress quality and error detection throughout the software process.

From a psychological point of view, software analysis and design (along with coding) are constructive tasks. The software engineer analyzes, models, and then creates a computer program and its documentation. Like any builder, the software engineer is proud of the edifice that has been built and looks askance at anyone who attempts to tear it down. When testing commences, there is a subtle, yet definite, attempt to "break" the thing that the software engineer has built. From the point of view of the builder, testing can be considered to be (psychologically) destructive. So the builder treads lightly, designing and executing tests that will demonstrate that the program works, rather than uncovering errors. Unfortunately, errors will be present. And, if the software engineer doesn't find them, the customer will!

Footnote 1: It should be noted that there is a strong divergence of opinion about what types of testing constitute "validation." Some people believe that all testing is verification and that validation is conducted when requirements are reviewed and approved, and later, by the user when the system is operational. Other people view unit and integration testing (Sections 17.3.1 and 17.3.2) as verification and higher-order testing (Sections 17.6 and 17.7) as validation.
There are a number of misconceptions that you might infer erroneously from the preceding discussion: (1) that the developer of software should do no testing at all, (2) that the software should be "tossed over the wall" to strangers who will test it mercilessly, and (3) that testers get involved with the project only when the testing steps are about to begin. Each of these statements is incorrect.
The software developer is always responsible for testing the individual units
(components) of the program, ensuring that each performs the function or exhibits
the behavior for which it was designed. In many cases, the developer also conducts
integration testing—a testing step that leads to the construction (and test) of the
complete software architecture. Only after the software architecture is complete
does an independent test group become involved.
The role of an independent test group (ITG) is to remove the inherent problems
associated with letting the builder test the thing that has been built. Independent
testing removes the conflict of interest that may otherwise be present. After all, ITG
personnel are paid to find errors.
However, you don’t turn the program over to ITG and walk away. The developer
and the ITG work closely throughout a software project to ensure that thorough tests
will be conducted. While testing is conducted, the developer must be available to
correct errors that are uncovered.
The ITG is part of the software development project team in the sense that it becomes involved during analysis and design and stays involved (planning and specifying test procedures) throughout a large project. However, in many cases the ITG reports to the software quality assurance organization, thereby achieving a degree of independence that might not be possible if it were a part of the software engineering organization.
17.1.3 Software Testing Strategy—The Big Picture
The software process may be viewed as the spiral illustrated in Figure 17.1. Initially, system engineering defines the role of software and leads to software requirements analysis, where the information domain, function, behavior, performance, constraints, and validation criteria for software are established. Moving inward along the spiral, you come to design and finally to coding. To develop computer software, you spiral inward (counterclockwise) along streamlines that decrease the level of abstraction on each turn.
Note: An independent test group does not have the "conflict of interest" that builders of the software might experience.

Quote: "The first mistake that people make is thinking that the testing team is responsible for assuring quality." — Brian Marick
A strategy for software testing may also be viewed in the context of the spiral (Figure 17.1). Unit testing begins at the vortex of the spiral and concentrates on each unit (e.g., component, class, or WebApp content object) of the software as implemented in source code. Testing progresses by moving outward along the spiral to integration testing, where the focus is on design and the construction of the software architecture. Taking another turn outward on the spiral, you encounter validation testing, where requirements established as part of requirements modeling are validated against the software that has been constructed. Finally, you arrive at system testing, where the software and other system elements are tested as a whole. To test computer software, you spiral out in a clockwise direction along streamlines that broaden the scope of testing with each turn.
[Figure 17.1: Testing strategy — a spiral from system engineering through requirements, design, and code, with unit, integration, validation, and system testing layered outward.]

[Figure 17.2: Software testing steps — unit tests, integration tests, and high-order tests, with the testing "direction" moving from code back toward requirements.]

Question: What is the overall strategy for software testing?

WebRef: Useful resources for software testers can be found at www.SQAtester.com.

Considering the process from a procedural point of view, testing within the context of software engineering is actually a series of four steps that are implemented sequentially. The steps are shown in Figure 17.2. Initially, tests focus on each component individually, ensuring that it functions properly as a unit. Hence, the name unit testing. Unit testing makes heavy use of testing techniques that exercise specific paths in a component's control structure to ensure complete coverage and maximum error detection. Next, components must be assembled or integrated to form the complete software package. Integration testing addresses the issues associated with the dual problems of verification and program construction. Test case design techniques that focus on inputs and outputs are more prevalent during integration, although techniques that exercise specific program paths may be used to ensure coverage of major control paths. After the software has been integrated (constructed), a set of high-order tests is conducted. Validation criteria (established during requirements analysis) must be evaluated. Validation testing provides final assurance that software meets all informational, functional, behavioral, and performance requirements.
The last high-order testing step falls outside the boundary of software engineering and into the broader context of computer system engineering. Software, once validated, must be combined with other system elements (e.g., hardware, people, databases). System testing verifies that all elements mesh properly and that overall system function/performance is achieved.
SAFEHOME: Preparing for Testing

The scene: Doug Miller's office, as component-level design continues and construction of certain components begins.

The players: Doug Miller, software engineering manager; Vinod, Jamie, Ed, and Shakira—members of the SafeHome software engineering team.

The conversation:

Doug: It seems to me that we haven't spent enough time talking about testing.

Vinod: True, but we've all been just a little busy. And besides, we have been thinking about it . . . in fact, more than thinking.

Doug (smiling): I know . . . we're all overloaded, but we've still got to think down the line.

Shakira: I like the idea of designing unit tests before I begin coding any of my components, so that's what I've been trying to do. I have a pretty big file of tests to run once code for my components is complete.

Doug: That's an Extreme Programming [an agile software development process, see Chapter 3] concept, no?

Ed: It is. Even though we're not using Extreme Programming per se, we decided that it'd be a good idea to design unit tests before we build the component—the design gives us all of the information we need.

Jamie: I've been doing the same thing.

Vinod: And I've taken on the role of the integrator, so every time one of the guys passes a component to me, I'll integrate it and run a series of regression tests on the partially integrated program. I've been working to design a set of appropriate tests for each function in the system.

Doug (to Vinod): How often will you run the tests?

Vinod: Every day . . . until the system is integrated . . . well, I mean until the software increment we plan to deliver is integrated.

Doug: You guys are way ahead of me!

Vinod (laughing): Anticipation is everything in the software biz, Boss.
17.1.4 Criteria for Completion of Testing
A classic question arises every time software testing is discussed: "When are we done testing—how do we know that we've tested enough?" Sadly, there is no definitive answer to this question, but there are a few pragmatic responses and early attempts at empirical guidance.
One response to the question is: "You're never done testing; the burden simply shifts from you (the software engineer) to the end user." Every time the user executes a computer program, the program is being tested. This sobering fact underlines the importance of other software quality assurance activities. Another response (somewhat cynical but nonetheless accurate) is: "You're done testing when you run out of time or you run out of money."
Although few practitioners would argue with these responses, you need more rigorous criteria for determining when sufficient testing has been conducted. The cleanroom software engineering approach (Chapter 21) suggests statistical use techniques [Kel00] that execute a series of tests derived from a statistical sample of all possible program executions by all users from a targeted population. Others (e.g., [Sin99]) advocate using statistical modeling and software reliability theory to predict the completeness of testing.
By collecting metrics during software testing and making use of existing software
reliability models, it is possible to develop meaningful guidelines for answering the
question: “When are we done testing?” There is little debate that further work
remains to be done before quantitative rules for testing can be established, but the
empirical approaches that currently exist are considerably better than raw intuition.
17.2 STRATEGIC ISSUES
Later in this chapter, I present a systematic strategy for software testing. But even the best strategy will fail if a series of overriding issues are not addressed. Tom Gilb [Gil95] argues that a software testing strategy will succeed when software testers:

Specify product requirements in a quantifiable manner long before testing commences. Although the overriding objective of testing is to find errors, a good testing strategy also assesses other quality characteristics such as portability, maintainability, and usability (Chapter 14). These should be specified in a way that is measurable so that testing results are unambiguous.
State testing objectives explicitly. The specific objectives of testing should be stated in measurable terms. For example, test effectiveness, test coverage, mean-time-to-failure, the cost to find and fix defects, remaining defect density or frequency of occurrence, and test work-hours should be stated within the test plan.
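Two of the measurable objectives named above can be sketched directly: defect removal effectiveness (the share of defects caught before release) and the cost to find and fix a defect. The formulas are standard; all the counts below are hypothetical illustrations, not data from the text.

```python
# Sketch: stating testing objectives in measurable terms, per the list above.
# All counts and costs are hypothetical illustrations.

def defect_removal_effectiveness(found_in_test, found_after_release):
    """Share of total known defects caught before release."""
    total = found_in_test + found_after_release
    return found_in_test / total if total else 1.0

def cost_per_defect(total_test_cost, defects_found):
    """Average cost to find and fix one defect."""
    return total_test_cost / defects_found

print(defect_removal_effectiveness(90, 10))  # → 0.9
print(cost_per_defect(45000, 90))            # → 500.0
```

Numbers like these only become objectives when the test plan states a target (e.g., effectiveness above some agreed value) before testing begins.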
Understand the users of the software and develop a profile for each user category.
Use cases that describe the interaction scenario for each class of user can reduce
overall testing effort by focusing testing on actual use of the product.
Question: When are we finished testing?

WebRef: A comprehensive glossary of testing terms can be found at www.testingstandards.co.uk/living_glossary.htm.

Question: What guidelines lead to a successful software testing strategy?

WebRef: An excellent list of testing resources can be found at www.io.com/~wazmo/qa/.
Develop a testing plan that emphasizes "rapid cycle testing." Gilb [Gil95] recommends that a software team "learn to test in rapid cycles (2 percent of project effort) of customer-useful, at least field 'trialable,' increments of functionality and/or quality improvement." The feedback generated from these rapid cycle tests can be used to control quality levels and the corresponding test strategies.
Build “robust” software that is designed to test itself. Software should be designed
in a manner that uses antibugging (Section 17.3.1) techniques. That is, software
should be capable of diagnosing certain classes of errors. In addition, the design
should accommodate automated testing and regression testing.
Use effective technical reviews as a filter prior to testing. Technical reviews (Chapter 15) can be as effective as testing in uncovering errors. For this reason, reviews can reduce the amount of testing effort that is required to produce high-quality software.
Conduct technical reviews to assess the test strategy and test cases themselves.
Technical reviews can uncover inconsistencies, omissions, and outright errors in
the testing approach. This saves time and also improves product quality.
Develop a continuous improvement approach for the testing process. The test strategy should be measured. The metrics collected during testing should be used as part of a statistical process control approach for software testing.
17.3 TEST STRATEGIES FOR CONVENTIONAL SOFTWARE [2]
There are many strategies that can be used to test software. At one extreme, you can wait until the system is fully constructed and then conduct tests on the overall system in hopes of finding errors. This approach, although appealing, simply does not work. It will result in buggy software that disappoints all stakeholders. At the other extreme, you could conduct tests on a daily basis, whenever any part of the system is constructed. This approach, although less appealing to many, can be very effective. Unfortunately, some software developers hesitate to use it. What to do?
A testing strategy that is chosen by most software teams falls between the two extremes. It takes an incremental view of testing, beginning with the testing of individual program units, moving to tests designed to facilitate the integration of the units, and culminating with tests that exercise the constructed system. Each of these classes of tests is described in the sections that follow.
17.3.1 Unit Testing
Unit testing focuses verification effort on the smallest unit of software design—the software component or module. Using the component-level design description as a guide, important control paths are tested to uncover errors within the boundary of the module. The relative complexity of tests and the errors those tests uncover is limited by the constrained scope established for unit testing. The unit test focuses on the internal processing logic and data structures within the boundaries of a component. This type of testing can be conducted in parallel for multiple components.

Quote: "Testing only to end-user requirements is like inspecting a building based on the work done by the interior designer at the expense of the foundations, girders, and plumbing." — Boris Beizer

Footnote 2: Throughout this book, I use the terms conventional software or traditional software to refer to common hierarchical or call-and-return software architectures that are frequently encountered in a variety of application domains. Traditional software architectures are not object-oriented and do not encompass WebApps.
Unit-test considerations. Unit tests are illustrated schematically in Figure 17.3.
The module interface is tested to ensure that information properly flows into and out
of the program unit under test. Local data structures are examined to ensure that
data stored temporarily maintains its integrity during all steps in an algorithm’s
execution. All independent paths through the control structure are exercised to ensure
that all statements in a module have been executed at least once. Boundary conditions
are tested to ensure that the module operates properly at boundaries established to
limit or restrict processing. And finally, all error-handling paths are tested.
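The five concerns above — interface, local data, independent paths, boundaries, and error handling — map naturally onto distinct unit-test cases. As a minimal sketch, the hypothetical `bounded_average()` component below (not from the text) gets one test per category:

```python
# Sketch: unit-test cases organized around the concerns above, for a
# hypothetical bounded_average() component (the function is illustrative).

def bounded_average(values, lo, hi):
    """Average of the values that fall inside [lo, hi]."""
    if lo > hi:
        raise ValueError("lo must not exceed hi")
    kept = [v for v in values if lo <= v <= hi]
    if not kept:
        raise ValueError("no values in range")
    return sum(kept) / len(kept)

# Module interface: data flows in and out properly.
assert bounded_average([1, 2, 3], 0, 10) == 2.0

# Boundary conditions: values exactly at lo and hi are kept.
assert bounded_average([0, 10], 0, 10) == 5.0

# Independent paths: the filtering branch is actually exercised.
assert bounded_average([1, 100], 0, 10) == 1.0

# Error-handling paths: bad inputs raise, rather than pass silently.
try:
    bounded_average([50], 0, 10)
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for out-of-range data")

print("all unit-test categories exercised")
```

In practice these cases would live in a test framework rather than inline asserts, but the grouping by category is the point.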
Data flow across a component interface is tested before any other testing is initiated. If data do not enter and exit properly, all other tests are moot. In addition, local data structures should be exercised and the local impact on global data should be ascertained (if possible) during unit testing.
Selective testing of execution paths is an essential task during the unit test. Test cases should be designed to uncover errors due to erroneous computations, incorrect comparisons, or improper control flow.
Boundary testing is one of the most important unit testing tasks. Software often fails at its boundaries. That is, errors often occur when the nth element of an n-dimensional array is processed, when the ith repetition of a loop with i passes is invoked, or when the maximum or minimum allowable value is encountered. Test cases that exercise data structure, control flow, and data values just below, at, and just above maxima and minima are very likely to uncover errors.
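The "just below, at, and just above" rule can be sketched as a small case generator. The component and its limits below are illustrative assumptions, not from the text:

```python
# Sketch: boundary-value cases "just below, at, and just above" a limit,
# for a hypothetical component that accepts batches of 1..100 items.

MAX_ITEMS = 100  # illustrative maximum, not from the text

def accepts(n_items):
    """Does the hypothetical component accept a batch of this size?"""
    return 1 <= n_items <= MAX_ITEMS

def boundary_cases(limit):
    """Values just below, at, and just above the given limit."""
    return [limit - 1, limit, limit + 1]

for n in boundary_cases(1) + boundary_cases(MAX_ITEMS):
    print(n, accepts(n))
# Rejections cluster at 0 and 101 — exactly where boundary testing looks.
```

Six cheap cases around the two limits find the classic off-by-one errors that random mid-range values almost never expose.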
[Figure 17.3: Unit test — test cases exercise the module interface, local data structures, boundary conditions, independent paths, and error-handling paths.]

Note: It's not a bad idea to design unit test cases before you develop code for a component. It helps ensure that you'll develop code that will pass the tests.

Question: What errors are commonly found during unit testing?
A good design anticipates error conditions and establishes error-handling paths to reroute or cleanly terminate processing when an error does occur. Yourdon [You75] calls this approach antibugging. Unfortunately, there is a tendency to incorporate error handling into software and then never test it. A true story may serve to illustrate:
A computer-aided design system was developed under contract. In one transaction processing module, a practical joker placed the following error handling message after a series of conditional tests that invoked various control flow branches: ERROR! THERE IS NO WAY YOU CAN GET HERE. This "error message" was uncovered by a customer during user training!
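The antibugging idea behind that story can be sketched as a "supposedly unreachable" branch that diagnoses itself with real context instead of a joke message, and is then exercised by a test rather than left untested. The command names here are hypothetical:

```python
# Sketch: "antibugging" — an unreachable branch that diagnoses itself.
# The command set is an illustrative assumption, not from the text.

def dispatch(command):
    """Route a command; the final branch should be unreachable by design."""
    if command == "open":
        return "opening file"
    elif command == "save":
        return "saving file"
    elif command == "close":
        return "closing file"
    else:
        # Antibugging: if control ever gets here, report enough context
        # to locate the cause — and make sure this path is tested too.
        raise AssertionError(f"dispatch(): unexpected command {command!r}")

print(dispatch("open"))
try:
    dispatch("frobnicate")  # the error path is tested, not just written
except AssertionError as diag:
    print("diagnosed:", diag)
```

The difference from the story above is twofold: the message carries the offending value, and the "impossible" path is deliberately driven in a test instead of waiting for a customer to find it.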
Among the potential errors that should be tested when error handling is evaluated are: (1) error description is unintelligible, (2) error noted does not correspond to error encountered, (3) error condition causes system intervention prior to error handling, (4) exception-condition processing is incorrect, or (5) error description does not provide enough information to assist in the location of the cause of the error.
Unit-test procedures. Unit testing is normally considered as an adjunct to the coding step. The design of unit tests can occur before coding begins or after source code has been generated. A review of design information provides guidance for establishing test cases that are likely to uncover errors in each of the categories discussed earlier. Each test case should be coupled with a set of expected results.
Because a component is not a stand-alone program, driver and/or stub software
must often be developed for each unit test. The unit test environment is illustrated in
Figure 17.4. In most applications a driver is nothing more than a “main program” that
accepts test case data, passes such data to the component (to …
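The driver/stub arrangement described above can be sketched in a few lines: a stub stands in for a subordinate module the component depends on, and a driver acts as the "main program" that feeds test cases and checks results. All the names here are hypothetical illustrations:

```python
# Sketch: a minimal driver and stub for unit-testing a component that
# depends on a subordinate module. All names here are hypothetical.

def lookup_price_stub(item_id):
    """Stub: stands in for the real pricing module, returning canned data."""
    return {"A1": 10.0, "B2": 2.5}[item_id]

def order_total(item_ids, lookup_price):
    """Component under test: totals an order via a pricing dependency."""
    return sum(lookup_price(i) for i in item_ids)

def driver():
    """Driver: a 'main program' that feeds test cases and checks results."""
    cases = [(["A1"], 10.0), (["A1", "B2"], 12.5), ([], 0.0)]
    for item_ids, expected in cases:
        got = order_total(item_ids, lookup_price_stub)
        assert got == expected, (item_ids, got, expected)
    print("driver: all cases passed")

driver()
```

Passing the dependency in as a parameter is what makes the stub substitutable; in languages without first-class functions the same effect is achieved with interfaces or link-time substitution.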
https://doi.org/10.1177/2158244018769125
SAGE Open, April-June 2018: 1-12
© The Author(s) 2018
DOI: 10.1177/2158244018769125
journals.sagepub.com/home/sgo
Introduction

User experience (UX) encompasses the concepts of usability and affective engineering. It broadly explains major interaction aspects between a user and a product, such as the interface. Thus, to create a better interface experience, several methods have been proposed, and the use of individuals' personality characteristics is one of them. Users' personality features can be a strategic advantage in the design of adaptive and personalized user interfaces (UIs; Al-Samarraie, Eldenfria, & Dawoud, 2017; de Oliveira, Karatzoglou, Concejero Cerezo, Armenta Lopez de Vicuña, & Oliver, 2011). This can be seen clearly in interface design elements such as color, and previous studies such as Marcus and Gould (2000); Brazier, Deakin, Cooke, Russell, and Jones (2001); and Reinecke and Bernstein (2013) highlighted the significant role of color in the interface. Moreover, many studies have been conducted to clarify general theories that characterize its psychological impact (Al-Samarraie, Sarsam, Alzahrani, Alalwan, & Masood, 2016; Karsvall, 2002), in which personality has been linked with technology in various ways (Svendsen, Johnsen, Almås-Sørensen, & Vittersø, 2013). Specifically, from a personality perspective, users' differences in personality dimensions may result in certain preferences and tendencies to adopt particular habits or patterns when learning (Butt & Phillips, 2008). Nunes, Cerri, and Blanc (2008) noted that interface designers leverage users' personality in the design of interactive environments with the aim of improving the interaction between users and the environment. Thus, we explored the association between personality profile and mobile user interface design elements (MUIDEs) to provide an effective experience for learners.
Previous studies also showed how classical concept of
usability (Rudy, 1997) has been extended to involve user
satisfaction in certain context. This is because satisfaction
with a service or technology in general can be obtained
through tailoring the objects that an individual prefers to
use. A study by Oliveira, Cherubini, and Oliver (2013)
addressed the importance of studying users’ different per-
sonalities for promoting user satisfaction with mobile phone
services. This is because individual differences have a con-
siderable impact on user’s overall feelings (Ziemkiewicz
et al., 2011). This led us to say that understanding how to
provide a better UI in a mobile context can help to increase
our satisfaction in a way that objects of presentation are con-
figured to reflect certain usage behavior (mental model; Sun
769125 SGOXXX10.1177/2158244018769125SAGE OpenSarsam and Al-Samarraie
research-article20182018
1Universiti Sains Malaysia, Penang, Malaysia
Corresponding Author:
Hosam Al-Samarraie, Centre for Instructional Technology & Multimedia,
Universiti Sains Malaysia, 11800 Penang, Malaysia.
Email: [email protected]
A First Look at the Effectiveness of
Personality Dimensions in Promoting
Users’ Satisfaction With the System
Samer Muthana Sarsam1 and Hosam Al-Samarraie1
Abstract
Personalization of the user interface (UI) to certain individuals’ characteristics is crucial for ensuring satisfaction with
the service. Unfortunately, the attention in most UI personalization methods have been shifted from being behavioral-
personalization to self-personalization. Practically, we explored the potential of linking users’ personality dimensions with
their design preferences to shape the design of an interface. It is assumed that such design may effectively promote users’
satisfaction with the service. A total of 87 participants were used to design the UI for certain personality types, and 50
students were used to evaluate their satisfaction with the UI. The results that UI designed based on the users’ personality
characteristics helped to stimulate their satisfaction in a mobile learning context. This study offers a new way for customizing
the design of the interface based on the correlational link between individuals’ preferences and the structure of personality
characteristics.
Keywords
personality, satisfaction, mobile UI, HCI, UX
https://journals.sagepub.com/home/sgo
http://crossmark.crossref.org/dialog/?doi=10.1177\%2F2158244018769125&domain=pdf&date_stamp=2018-04-11
2 SAGE Open
& May, 2013). Lan, Jianjun, and Qizhi (2013) have pointed out that personalized interface design is commonly associated with user-centered design, in that it provides the user with distinctive visual satisfaction and interaction. Later studies, such as Viveros, Rubio, and Ceballos (2014), asserted that users' personality and cognitive abilities could influence the way users perceive the design of activities in mobile applications. From this, we assumed that a person's personality can play a significant role in his or her learning experience with mobile applications. Moreover, satisfaction is the aggregate of an individual's feelings about, or behavior toward, the issues that shape a certain circumstance (Liaw & Huang, 2013). When users browse information, it is important to understand the process of designing the interface to accommodate the cognitive demands of the task; this helps to attract users' attention and get them involved in the task (Bose, Singhai, Patankar, & Kumar, 2016).
Prior studies in human–computer interaction (HCI) considered the use of psychology in the design of UIs; hence, research on UI personalization was established under various frameworks such as adaptive UIs, user modeling, and intelligent UIs. Maybury and Wahlster (1998) defined adaptive UIs as "human-machine interfaces that aim to improve the efficiency, effectiveness and naturalness of human-machine interaction by representing, reasoning and acting on models of the user, domain, task, discourse and media (e.g., graphics, natural language, gesture)" (p. 3).
However, despite this significant role of psychology in shaping the design of technologies, evidence from the literature (e.g., Zhou & Lu, 2011) indicates that the effects of personality traits have seldom been examined; according to Agarwal and Prasad (1999), personality differences were previously ignored. A clear understanding of personality differences is therefore necessary, as various personalities are expected to interact differently with a UI design, owing to personal factors such as motivation. Arazy, Nov, and Kumar (2015) stated that UI personalization methods have been divorced from psychological theories of personality, and that the user profiles derived from existing personalization approaches may not relate to the personality traits tested in prior work in psychology. Nevertheless, current information visualization systems still apply a one-standard-design format to accommodate the perceptual needs of all users without considering their different demands (Steichen, Carenini, & Conati, 2013). This can negatively affect how learners interact with the display. Therefore, in this study we designed a UI based on users' personality types in a mobile learning context.
Assessment of Personality and Design
Preferences
Addressing user preferences is a fundamental issue in devel-
oping successful learning applications (Chen, Conner, &
Yannou, 2015). According to Kujala, Roto, Väänänen-Vainio-Mattila, Karapanos, and Sinnelä (2011), the purpose of UX is to produce a generally positive experience for the user: simplicity of use and pleasure obtained through active interaction with the display, which determines the level of satisfaction with its utility. Hence, creating a positive experience becomes a necessary demand for retaining a competitive edge (Djamasbi et al., 2014), especially in the design of mobile UIs.
In contrast, UX with a device or service may vary from one application to another. UI design preferences appear both in mobile phone UIs and in websites. From the literature on designing mobile device UIs, it can be observed that a particular design format may influence one's experience depending on one's familiarity with the displayed objects. For instance, Welch and Kim (2013) found that increasing the size of menu elements results in a significant increase in users' performance. In web page design, however, user behavior can differ: after exploring object placement on different types of websites (online shops, online newspapers, and company web pages), Roth, Schmutz, Pauwels, Bargas-Avila, and Opwis (2010) and Roth, Tuch, Mekler, Bargas-Avila, and Opwis (2013) observed that some users expect web objects such as search, the home button, and navigation to appear at particular locations on the page. Researchers found that placing web objects at expected locations and designing their display according to user expectations facilitates orientation, which benefits first impressions and the overall UX. Meanwhile, de Barros, Leitão, and Ribeiro (2014) asserted the potential of different types of navigation (Panorama, Panorama with Pivot controls, and a home screen menu) for regulating UX. They recommended displaying all of an application's main functionalities on the start screen to offer more control over the screen contents.
Method
The process of incorporating the personality features of indi-
viduals into the design of an interface was fully explained in
the work of Sarsam and Al-Samarraie (2018).
Participants
A total of 87 undergraduate students (37 male and 50 female) took part in shaping the design of the UI for certain personality characteristics. They were screened in the initial phase to ensure that they had an acceptable level of experience and familiarity with mobile applications. Their ages ranged from 18 to 23 years.
Design Features
The design phases of the UI are represented in Figure 1: we first assessed learners' personality characteristics to identify the design preferences for each personality type. The Big Five model of personality developed by Goldberg (1981) and Norman (1963) was used to build the main dimensions for articulating one's personality (McCrae & Costa, 1985, 1987); these were Neuroticism, Extraversion, Openness,
Agreeableness, and Conscientiousness. The IPIP-NEO (International Personality Item Pool Representation of the NEO PI-R™) designed by Goldberg (1999) was used in this study to measure these "Big Five" traits of a person: extraversion, agreeableness, conscientiousness, neuroticism, and openness to experience. The IPIP-NEO scale includes 120 items, which can be found at http://www.personal.psu.edu/~j5j/IPIP/ipipneo120.htm. Learners were asked to provide their name, sex, age, and country before starting to answer the personality questions. The items of this instrument were designed to cover different personal aspects, and a 5-point Likert-type scale was used (very inaccurate, moderately inaccurate, neither accurate nor inaccurate, moderately accurate, and very accurate).
Then, we administered the second instrument to gain further insight into learners' preferences for certain MUIDEs. The MUIDE instrument consists of multiscale questions with graphs (see Supplementary). It was based on a 10-point Likert-type scale (low preference to high preference). The main MUIDEs were as follows:
1. Information structure: It refers to the organization of
the data. It consists of linear structure, hierarchical
structure, network structure, and matrix structure.
2. Navigation: It refers to the process of controlling movement from one page to another. In this study, six types of navigation were considered: drill-down navigation, list navigation, segmented control, stepping, scroll thumb, and slidable top navigation.
3. Layout: It refers to the arrangement of the interface
components. Linear layout, relative layout, and web
view layout were used in this study.
4. Font style: It refers to the properties applied to change
the appearance of the text. We used the commonly
used font styles of Arial, Times New Roman, Georgia,
and Verdana.
5. Font size: It refers to the size of the text. Four font
sizes were provided (40, 51, 53, and 75 points).
6. Buttons: It refers to the action script for performing an action. In this study, three types of buttons were used: buttons with a name, buttons with an image, and buttons with both a name and an image.
7. Color: Different types of color schemes were used.
The selection of colors was in accordance to hue,
saturation, and brightness.
8. List: It refers to the way items are listed on a page. It helps to divide complex information into chunks. Three types of lists were used: expanding list, infinite list, and thumbnail list.
9. Information density: It denotes the volume of graphical and textual elements in the display. In the present study, three levels of information density were used (low, medium, and high).
10. Support: It indicates the hints that are usually embed-
ded within the design. Two types of support items in
terms of iconic button and short help tips were used
in this study.
11. Alignment: It refers to the arrangement of informa-
tion (i.e., justify, left and center).
Participants’ viewpoints about various design principles
were also determined. This was essential to indicate any pos-
sible differences in users’ familiarity with design principles
(quantity, clarity, simplicity, and affordance of the general
design) with regard to the MUIDEs. These principles were
formed based on recommendations of Hewitt and Scardamalia
(1998) and Al-Samarraie, Selim, and Zaqout (2016). To prepare the content for each design cluster, the book Fundamentals of Multimedia by Li, Drew, and Liu (2004) was used; its material addresses various learning aspects related to the design of effective multimedia content.
Clustering of Personality Characteristics
Clustering is a technique that can be used when there is no class attribute to be predicted. In clustering, instances are divided into natural groups, "clusters," that reflect a certain pattern or profile in accordance with the source of the instances (Ian, Frank, & Hall, 2011). These instances are grouped according to their similarities or distances (Das, Sau, & Panigrahi, 2015). According to Das et al. (2015), there are two types of clustering: (a) hierarchical clustering methods and (b) nonhierarchical clustering methods. Because the number of clusters was not yet known in our study, we used a hierarchical clustering algorithm to identify the number of clusters to be used in the K-means algorithm.

Figure 1. UI design phases.
Note. UI = user interface; MUIDEs = mobile user interface design elements.
1. Hierarchical clustering: This technique creates a hierarchical decomposition of the data set (Han, Kamber, & Pei, 2011). According to Han et al. (2011), hierarchical methods can be categorized as either agglomerative or divisive, based on how the hierarchical decomposition is formed. Several studies have applied hierarchical clustering because of its role in producing a classification tree and generating similarity scores from distances of ratio-level variables (Swobodzinski & Jankowski, 2015). Hence, in this study, hierarchical clustering with Ward's method was applied to identify the patterns associating learners' personality with their MUIDE preferences. The clustering yielded a two-cluster solution at a coefficient value of r2 = .45. The personality facets for each group are presented in Figure 2. After obtaining the number of clusters, the K-means algorithm was invoked to identify the instances of the two clusters.
Figure 2. Personality facets for each group.
2. K-means clustering algorithm: This is one of the most popular unsupervised algorithms for assigning instances to clusters (Wang, Wang, Ke, Zeng, & Li, 2015). It arranges the objects of a set into K partitions that shape the clusters, based on a distance function capturing similarity and dissimilarity (Han et al., 2011), using the centroid-based partitioning approach. The difference between an instance and a cluster's representative was measured using Euclidean distance, and the quality of a cluster was measured by the within-cluster variation. Here, we applied the K-means algorithm to the two-cluster solution from the previous phase to match the personality profiles in each cluster with their associated MUIDE preferences.
For the first cluster, we noted that learners scored highest in neuroticism (M = 62.75, SD = 20.47), followed by agreeableness (M = 34.06, SD = 25.60), extraversion (M = 31.00, SD = 15.22), conscientiousness (M = 29.72, SD = 17.76), and openness to experience (M = 20.31, SD = 13.44), respectively (see Table 1). Thus, for simplicity, we labeled this cluster "Neuroticism," as that trait has the highest mean. Participants in the second cluster were found to score high in both extraversion (M = 67.66, SD = 16.49) and conscientiousness (M = 66.04, SD = 20.87). It is also notable that agreeableness (M = 50.57, SD = 22.35) scored higher than openness to experience (M = 44.23, SD = 19.15) and neuroticism (M = 39.61, SD = 22.83), respectively. As extraversion had the highest mean, followed by conscientiousness, we labeled this cluster "Extra-conscientiousness."
Figure 3 shows a three-dimensional (3D) graphical representation of the mean personality nuances for the first and second clusters, complementing Table 1. To validate that participants' personality traits in the two clusters were distinct, an ANOVA was used, which showed a significant difference (p < .05) in all personality traits between the two clusters. This in turn confirms that the instances of personality traits in one cluster differ from those in the other cluster. After identifying the personality groups, an association rules method was used to identify the design preferences for each personality type, as described in the following section.
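The cluster-validation step can be illustrated with the arithmetic behind a one-way ANOVA. This is a hedged sketch, not the authors' SPSS analysis, and it stops at the F statistic (the p-value would additionally require the F distribution): F is the between-group mean square divided by the within-group mean square.

```python
def one_way_anova_f(groups):
    # groups: one list of trait scores per cluster
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # between-group and within-group sums of squares
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    # F = MS_between / MS_within
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F on a trait indicates that the two clusters' scores differ far more between groups than within them.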
Association Rules Technique
An association rules method was used to predict the associations between MUIDEs in each personality group. Association rule mining is a well-known data mining method fundamentally used to extract meaningful and valuable information from large data sets (Kamsu-Foguem, Rigal, & Mauget, 2013; Singh, Ram, & Sodhi, 2013). For this purpose, the Waikato Environment for Knowledge Analysis (Weka; Chauhan & Chauhan, 2014; Lekha, Srikrishna, & Vinod, 2013) was used in this study. The Apriori algorithm is the most common algorithm for generating and defining patterns within sets of items or selections (Ian et al., 2011). It generates association rules that satisfy minimum support and confidence thresholds. We configured the Apriori parameters by setting the delta value to 0.05 and the lower bound for minimum support to 0.1. Table 2 illustrates the constructed rules of MUIDEs in both clusters.
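The two Apriori thresholds mentioned above (minimum support and confidence) are simple ratios, which a small sketch makes concrete. The transactions below are invented for illustration only; the actual rule mining was performed in Weka.

```python
def support(transactions, itemset):
    # fraction of transactions (participants' preference sets) containing every item
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(transactions, antecedent, consequent):
    # conf(A -> B) = support(A ∪ B) / support(A)
    return support(transactions, antecedent | consequent) / support(transactions, antecedent)
```

A rule such as "Alignment center → Network structure" with 100% confidence means that every preference set containing the antecedent also contains the consequent; Apriori keeps only rules clearing the configured lower bounds.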
Table 1. Results of K-Means Algorithm.
Personality traits
Cluster 1 Cluster 2
M SD M SD
Extraversion 31.00 15.22 67.66 16.49
Agreeableness 34.06 25.60 50.57 22.35
Conscientiousness 29.72 17.76 66.04 20.87
Neuroticism 62.75 20.47 39.61 22.83
Openness to experience 20.31 13.44 44.23 19.15
Figure 3. 3D graphical representation for the two personality groups.
Note. 3D = three-dimensional.
Table 2. Association Rules Results.
No. Rules for the neuroticism group Confidence (%)
1 Alignment center → Network structure 100
2 Network structure → Layout relative layout 98
3 Low information density → Buttons photo 100
4 Verdana header 53-point → Segmented control 100
5 Scroll thumb → Layout relative layout 99
6 Font text size 40-point → Layout relative layout 100
7 Buttons photo → Expanding list 100
8 Font header 53-point → Verdana font type 96
9 Font text size 40-point = Verdana font type → Color hue 99
10 Stepping → Expanding list 100
Rules of the extra-conscientiousness group
1 Slidable top navigation → Network structure 100
2 Font size 14 point → layout relative layout 100
3 High information density → Buttons photo 100
4 Slidable top navigation → Scroll thumb 97
5 Buttons name and photo → Scroll thumb 100
6 Font header 75-point → Arial font type 100
7 Font text size 51-point → Arial font type 99
8 Scroll thumb → Color hue 100
9 Align text left → Font text size 51-point 100
10 Scroll thumb → Segmented control 99
11 Relative layout = Font text size 51-point → Align text left 100
Figure 4. Storyboard design for the two personality types.
A storyboard is the initial design of major design elements such as navigation, the interface standards guide, and so on. In this study, there were two mobile UIs based on the identified personality groups. Figure 4 shows the storyboard design for the two personality types.
Then, all the design elements for each personality type were assembled in Java (see Figure 5).
Evaluation
Fifty undergraduate students (15 male and 35 female) were recruited for this study. All participants were enrolled in a design course. Their ages ranged from 18 to 22 years (M = 21.66, SD = 0.47), and they were familiar with using mobile devices for learning.
Assessing Satisfaction
According to Briggs and Sindhav (2015), “satisfaction is a
key indicator of the system’s success, and so it has been the
subject of much Information System (IS) research” (p. 5). It
is the aggregate of individual’s feelings or behavior to the
issues that inspire a certain circumstance (Liaw & Huang, 2013).

Figure 5. Mobile UI for the two personality types.
Note. UI = user interface.

Liaw and Huang (2013) stated that enhancing individuals' satisfaction with environmental conditions would significantly increase positive learning behavior. Satisfaction
can be considered as a measure of a learner’s reaction toward
a particular learning context. From the literature, we can see
that considering users’ satisfaction has become a crucial
aspect of the design. This led later studies to consider the
satisfaction when using an application as an important
usability dimension (Long, Karpinsky, Döner, & Still, 2016).
Thus, we examined learners' level of satisfaction when using the proposed interface with the "User Interface Satisfaction" (UIS) questionnaire developed by Chin, Diehl, and Norman (1988).
Results
Participants' responses to the UIS questionnaire were analyzed using SPSS. Every participant in both groups responded to all the questions after using the two designs. This was assumed to provide a better understanding of participants' behavior when using the design shaped according to their own personality characteristics versus the one shaped according to the other personality type. Tables 3 and 4 show the UIS results for the neuroticism and the extra-conscientiousness groups. The results showed that the satisfaction of participants in the neuroticism group was higher when using their preferred interface (M = 6.39, SD = 2.60) than when using the design of the extra-conscientiousness group (M = 5.92, SD = 2.57; see Table 3). Likewise, the extra-conscientiousness group reported higher satisfaction with the design shaped on the basis of their personality profile (M = 6.45, SD = 2.61) than with the neuroticism design (M = 6.10, SD = 2.24; see Table 4). Thus, it can be stated that participants' level of satisfaction was associated with an interface designed according to their personality characteristics in a mobile learning context.

Table 3. UIS Questionnaire Result for the Neuroticism Group.
Neuroticism group Extra-conscientiousness group
M SD M SD
6.39 2.60 6.10 2.24
Element M SD Median Mode M SD Median Mode
Overall reaction to the software
Q1 6.23 1.73 6 7 5.92 2.53 7 8
Q2 7.53 1.95 8 9 6.37 2.3 7 9
Q3 6.52 1.51 7 8 6.45 1.32 7 7
Q4 5.81 1.97 5 5 6.69 1.73 7 8
Q5 5.15 2.58 6 6 6.53 1.57 7 8
Q6 5.84 1.9 6 8 6.38 1.65 7 8
Screen
Q1 7.70 1.33 8 9 3.68 1.90 8 9
Q2 7.23 2.32 8 8 5.77 1.45 8 8
Q3 7.62 1.31 8 9 4.78 1.51 7 9
Q4 7.05 2.32 7 9 5.97 1.14 7 8
Terminology and system information
Q1 7.30 1.25 8 9 7.92 1.16 8 9
Q2 6.36 2.07 7 7 5.50 2.16 8 9
Q3 6.40 4.95 7 7 7.07 1.80 7 9
Q4 4.40 3.53 6 1 4.84 3.15 7 8
Q5 3.21 2.67 1 1 4.76 3.26 6 1
Q6 4.13 3.29 5 1 3.15 2.93 1 1
Learning
Q1 7.30 1.80 7 7 6.76 2.12 7 8
Q2 7.31 3.70 8 9 9.83 2.59 8 8
Q3 6.80 1.44 7 7 6.80 1.27 7 5
Q4 7.57 1.10 8 8 9.59 1.36 8 8
Q5 5.10 3.44 6 1 5.50 3.10 7 1
Q6 5.52 3.40 7 1 5.92 3.42 8 9
System capabilities
Q1 8.07 5.17 9 9 7.30 2.13 8 7
Q2 8.23 4.14 9 9 7.60 3.51 8 8
Q3 7 2.53 9 9 7.53 3 9 9
Q4 1.92 2.39 1 1 1.71 2.21 1 1
Q5 7.13 4.77 8 9 6.46 4.80 6 6
Note. UIS = User Interface Satisfaction.
Discussion
Our results showed that participants had a high satisfaction level when learning with a mobile UI design associated with their preferences for design elements. Participants' level of satisfaction was reduced when they used a UI design that did not fit their personality profile. This means that UI design based on personality has the potential to offer mobile device users an experience that meets their preferences. This finding adds to the prior work of Huntsinger (2013), who stated that low motivational intensity (e.g., satisfaction) is related to goal and task completion and, as a result, leads to a broader attention span. We also noticed that aspects related to the regions a user visits could provide the necessary knowledge about user behavior in behavior-aware contexts. This means that both the design structure and the elements' locations on the UI can in some fashion influence (either negatively or positively) learners' satisfaction.

Table 4. UIS Questionnaire Result for the Extra-Conscientiousness Group.
Neuroticism group Extra-conscientiousness group
M SD M SD
5.92 2.57 6.45 2.61
Element M SD Median Mode M SD Median Mode
Overall reaction to the software
Q1 3.63 5.43 7 7 6.30 1.78 7 9
Q2 2.43 1.80 7 7 6.72 2.63 7 9
Q3 2.80 3.71 6 7 6.63 2.15 8 8
Q4 6.34 2.35 5 5 5.43 2.94 5 5
Q5 5.62 1.51 6 7 6.27 1.67 5 5
Q6 6.18 1.94 6 6 6.45 1.91 6 9
Screen
Q1 7.10 1.41 7 8 7.51 1.12 7 7
Q2 6.71 1.13 7 7 7 1.60 7 5
Q3 7.27 2.10 7 7 7.29 2.10 8 8
Q4 7.72 1.12 8 8 7.20 2.02 8 8
Terminology and system information
Q1 7.36 2.28 7 7 7.27 1.27 7 7
Q2 7.15 2.60 7 7 7.18 2.21 7 7
Q3 7.36 2.50 7 7 7.34 3.26 7 7
Q4 5.54 3.75 6 7 3.60 3.12 5 1
Q5 4.72 2.10 6 6 5.09 3.30 6 1
Q6 3.63 3.27 1 1 3.72 3.22 1 1
Learning
Q1 7.45 3.48 8 8 7.52 1.64 8 8
Q2 3.70 2.36 8 8 6.27 3.72 8 8
Q3 4.70 2.34 7 7 6.81 1.83 7 7
Q4 6.70 1.13 8 8 7.63 2.70 8 8
Q5 5.80 3.30 7 7 5.9 3.26 7 7
Q6 4.17 3.11 5 1 4.09 3.11 5 1
System capabilities
Q1 8.17 2.47 9 9 8.25 3.15 9 9
Q2 7.90 3.13 8 8 8.18 4.24 9 9
Q3 9 6 9 9 9 3 9 9
Q4 1.72 2.21 1 1 1.72 4.71 1 1
Q5 5.81 2.44 7 7 6 3.5 8 8
Note. UIS = User Interface Satisfaction.

This is considered reasonable because
it is possible that learners obtain an essential clue about an
element’s functionality (in a learning system) when learning
from a UI design tailored to their personality. In addition, in
our opinion, if learners are in general satisfied with the distri-
bution of elements in the design of a UI, this increases their
interaction, as the focus of their mental model remains on the
task itself, whereas, if items are displayed such that their
location …
Software Unit Test Coverage and Adequacy
HONG ZHU
Nanjing University
PATRICK A. V. HALL AND JOHN H. R. MAY
The Open University, Milton Keynes, UK
Objective measurement of test quality is one of the key issues in software testing.
It has been a major research focus for the last two decades. Many test criteria have
been proposed and studied for this purpose. Various kinds of rationales have been
presented in support of one criterion or another. We survey the research work in
this area. The notion of adequacy criteria is examined together with its role in
software dynamic testing. A review of criteria classification is followed by a
summary of the methods for comparison and assessment of criteria.
Categories and Subject Descriptors: D.2.5 [Software Engineering]: Testing and
Debugging
General Terms: Measurement, Performance, Reliability, Verification
Additional Key Words and Phrases: Comparing testing effectiveness, fault-
detection, software unit test, test adequacy criteria, test coverage, testing methods
1. INTRODUCTION
In 1972, Dijkstra claimed that “program
testing can be used to show the presence
of bugs, but never their absence” to per-
suade us that a testing approach is not
acceptable [Dijkstra 1972]. However, the
last two decades have seen rapid growth
of research in software testing as well as
intensive practice and experiments. It
has been developed into a validation and
verification technique indispensable to
software engineering discipline. Then,
where are we today? What can we claim
about software testing?
In the mid-’70s, in an examination of
the capability of testing for demonstrat-
ing the absence of errors in a program,
Goodenough and Gerhart [1975, 1977]
made an early breakthrough in research
on software testing by pointing out that
the central question of software testing
is “what is a test criterion?”, that is, the
criterion that defines what constitutes
an adequate test. Since then, test crite-
ria have been a major research focus. A
great number of such criteria have been
proposed and investigated. Consider-
able research effort has attempted to
provide support for the use of one crite-
rion or another. How should we under-
stand these different criteria? What are
the future directions for the subject?
In contrast to the constant attention
given to test adequacy criteria by academics, the software industry has been slow to accept test adequacy measurement. Few software development standards require or even recommend the use of test adequacy criteria [Wichmann 1993; Wichmann and Cox 1992]. Are test adequacy criteria worth the cost for practical use?
Addressing these questions, we survey research on software test criteria in the past two decades and attempt to put it into a uniform framework.

Authors' addresses: H. Zhu, Institute of Computer Software, Nanjing University, Nanjing, 210093, P.R. of China; email: [email protected]; P.A.V. Hall and J.H.R. May, Department of Computing, The Open University, Walton Hall, Milton Keynes, MK7 6AA, UK.
© 1997 ACM 0360-0300/97/1200–0366 $03.50
ACM Computing Surveys, Vol. 29, No. 4, December 1997
1.1 The Notion of Test Adequacy
Let us start with some examples. Here
we seek to illustrate the basic notions
underlying adequacy criteria. Precise
definitions will be given later.
—Statement coverage. In software test-
ing practice, testers are often re-
quired to generate test cases to exe-
cute every statement in the program
at least once. A test case is an input
on which the program under test is
executed during testing. A test set is a
set of test cases for testing a program.
The requirement of executing all the
statements in the program under test
is an adequacy criterion. A test set
that satisfies this requirement is con-
sidered to be adequate according to
the statement coverage criterion.
Sometimes the percentage of executed
statements is calculated to indicate
how adequately the testing has been
performed. The percentage of the
statements exercised by testing is a
measurement of the adequacy.
—Branch coverage. Similarly, the branch
coverage criterion requires that all
control transfers in the program un-
der test are exercised during testing.
The percentage of the control trans-
fers executed during testing is a mea-
surement of test adequacy.
—Path coverage. The path coverage cri-
terion requires that all the execution
paths from the program’s entry to its
exit are executed during testing.
—Mutation adequacy. Software testing
is often aimed at detecting faults in
software. A way to measure how well
this objective has been achieved is to
plant some artificial faults into the
program and check if they are de-
tected by the test. A program with a
planted fault is called a mutant of the
original program. If a mutant and the
original program produce different
outputs on at least one test case, the
fault is detected. In this case, we say
that the mutant is dead or killed by
the test set. Otherwise, the mutant is
still alive. The percentage of dead mu-
tants compared to the mutants that
are not equivalent to the original pro-
gram is an adequacy measurement,
called the mutation score or mutation
adequacy [Budd et al. 1978; DeMillo
et al. 1978; Hamlet 1977].
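The coverage and mutation measurements above are simple ratios. A minimal sketch in Python (the helper names and inputs are illustrative, not from the survey):

```python
def statement_coverage(executed, all_statements):
    """Fraction of statements exercised by the test set (statement coverage measurement)."""
    return len(set(executed) & set(all_statements)) / len(all_statements)

def mutation_score(killed, mutants, equivalent):
    """Dead mutants over mutants not equivalent to the original program (mutation adequacy)."""
    non_equivalent = set(mutants) - set(equivalent)
    return len(set(killed) & non_equivalent) / len(non_equivalent)

# A test set executing 8 of 10 statements is 80% adequate by statement coverage.
print(statement_coverage(range(8), range(10)))  # 0.8

# 2 of 3 non-equivalent mutants killed: mutation score 2/3.
print(mutation_score({"m1", "m2"}, {"m1", "m2", "m3", "m4"}, {"m4"}))
```

Note that, as the text states, equivalent mutants are excluded from the denominator before the score is computed.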
From Goodenough and Gerhart’s
[1975, 1977] point of view, a software
test adequacy criterion is a predicate
that defines “what properties of a pro-
gram must be exercised to constitute a
‘thorough’ test, i.e., one whose success-
ful execution implies no errors in a
tested program.” To guarantee the cor-
rectness of adequately tested programs,
they proposed reliability and validity
requirements of test criteria. Reliability
requires that a test criterion always
produce consistent test results; that is,
if the program tested successfully on
one test set that satisfies the criterion,
then the program also tested success-
fully on all test sets that satisfy the
criterion. Validity requires that the test
always produce a meaningful result;
that is, for every error in a program,
there exists a test set that satisfies the
criterion and is capable of revealing the
error. But it was soon recognized that
there is no computable criterion that
satisfies the two requirements, and
hence they are not practically applica-
ble [Howden 1976]. Moreover, these two
requirements are not independent since
a criterion is either reliable or valid for
any given software [Weyuker and Os-
trand 1980]. Since then, the focus of
research seems to have shifted from
seeking theoretically ideal criteria to
Test Coverage and Adequacy • 367
the search for practically applicable ap-
proximations.
Currently, the software testing litera-
ture contains two different, but closely
related, notions associated with the
term test data adequacy criteria. First,
an adequacy criterion is considered to
be a stopping rule that determines
whether sufficient testing has been
done and hence whether it can be stopped. For in-
stance, when using the statement cover-
age criterion, we can stop testing if all
the statements of the program have
been executed. Generally speaking,
since software testing involves the pro-
gram under test, the set of test cases,
and the specification of the software, an
adequacy criterion can be formalized as
a function C that takes a program p, a
specification s, and a test set t and gives
a truth value true or false. Formally, let
P be a set of programs, S be a set of
specifications, D be the set of inputs of
the programs in P, T be the class of test
sets, that is, T = 2^D, where 2^X denotes
the set of subsets of X.
Definition 1.1 (Test Data Adequacy
Criteria as Stopping Rules). A test
data adequacy criterion C is a function
C: P × S × T → {true, false}. C(p, s, t) =
true means that t is adequate for testing
program p against specification s accord-
ing to the criterion C, otherwise t is inad-
equate.
Second, test data adequacy criteria
provide measurements of test quality
when a degree of adequacy is associated
with each test set so that it is not sim-
ply classified as good or bad. In practice,
the percentage of code coverage is often
used as an adequacy measurement.
Thus, an adequacy criterion C can be
formally defined to be a function C from
a program p, a specification s, and a test
set t to a real number r = C(p, s, t),
the degree of adequacy [Zhu and Hall
1992]. Formally:
Definition 1.2 (Test Data Adequacy
Criteria as Measurements). A test data
adequacy criterion is a function C, C:
P × S × T → [0,1]. C(p, s, t) = r means
that the adequacy of testing the pro-
gram p by the test set t with respect to
the specification s is of degree r accord-
ing to the criterion C. The greater the
real number r, the more adequate the
testing.
These two notions of test data ade-
quacy criteria are closely related to one
another. A stopping rule is a special
case of measurement on the continuum
since the actual range of measurement
results is the set {0,1}, where 0 means
false and 1 means true. On the other
hand, given an adequacy measurement
M and a degree r of adequacy, one can
always construct a stopping rule M_r
such that a test set is adequate if and
only if the adequacy degree is greater
than or equal to r; that is, M_r(p, s, t) =
true ⟺ M(p, s, t) ≥ r. Since a stopping
rule asserts a test set to be either ade-
quate or inadequate, it is also called a
predicate rule in the literature.
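The correspondence between an adequacy measurement M and the derived stopping rule M_r can be sketched directly (the toy measurement below is an illustration, not a criterion from the survey):

```python
def make_stopping_rule(M, r):
    """Given an adequacy measurement M: (p, s, t) -> [0, 1] and a threshold r,
    return the stopping rule M_r: a test set t is adequate iff M(p, s, t) >= r."""
    def M_r(p, s, t):
        return M(p, s, t) >= r
    return M_r

# Toy measurement: fraction of the finite input domain {0..9} covered by test set t.
M = lambda p, s, t: len(set(t)) / 10
M_r = make_stopping_rule(M, 0.8)

print(M_r(None, None, [0, 1, 2, 3, 4, 5, 6, 7]))  # True: adequacy 0.8 >= 0.8
print(M_r(None, None, [0, 1]))                    # False: adequacy 0.2 < 0.8
```

The measurement with range {0, 1} mentioned in the text is then just the special case where M itself only takes the values 0 and 1.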
An adequacy criterion is an essential
part of any testing method. It plays two
fundamental roles. First, an adequacy
criterion specifies a particular software
testing requirement, and hence deter-
mines test cases to satisfy the require-
ment. It can be defined in one of the
following forms.
(1) It can be an explicit specification for
test case selection, such as a set of
guidelines for the selection of test
cases. Following such rules one can
produce a set of test cases, although
there may be some form of random
selection. Such a rule is usually
referred to as a test case selection
criterion. Using a test case selection
criterion, a testing method may be
defined constructively in the form of
an algorithm which generates a test
set from the software under test and
its own specification. This test set is
then considered adequate. It should
be noticed that for a given test case
selection criterion, there may exist a
number of test case generation algo-
rithms. Such an algorithm may also
involve random sampling among
many adequate test sets.
368 • Zhu et al.
(2) It can also be in the form of specify-
ing how to decide whether a given
test set is adequate or specifying
how to measure the adequacy of a
test set. A rule that determines
whether a test set is adequate (or
more generally, how adequate) is
usually referred to as a test data
adequacy criterion.
However, the fundamental concept
underlying both test case selection cri-
teria and test data adequacy criteria is
the same, that is, the notion of test
adequacy. In many cases they can be
easily transformed from one form to an-
other. Mathematically speaking, test
case selection criteria are generators,
that is, functions that produce a class of
test sets from the program under test
and the specification (see Definition
1.3). Any test set in this class is ade-
quate, so that we can use any of them
equally.1 Test data adequacy criteria
are acceptors that are functions from
the program under test, the specifica-
tion of the software and the test set to a
characteristic number as defined in Def-
inition 1.1. Generators and acceptors
are mathematically equivalent in the
sense of one-one correspondence. Hence,
we use “test adequacy criteria” to de-
note both of them.
Definition 1.3 (Test Data Adequacy
Criteria as Generators [Budd and An-
gluin 1982]). A test data adequacy cri-
terion C is a function C: P × S → 2^T. A
test set t ∈ C(p, s) means that t satis-
fies C with respect to p and s, and it is
said that t is adequate for (p, s) accord-
ing to C.
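The one-one correspondence between acceptors and generators can be made concrete for a finite input domain: the generator maps (p, s) to the class of all test sets the acceptor judges adequate. A sketch (the toy acceptor is hypothetical):

```python
from itertools import chain, combinations

def generator_from_acceptor(C, domain):
    """Given an acceptor C(p, s, t) -> bool (Definition 1.1) and a finite input
    domain, build the generator (Definition 1.3): (p, s) -> all adequate test sets."""
    def gen(p, s):
        subsets = chain.from_iterable(
            combinations(domain, k) for k in range(len(domain) + 1))
        return [set(t) for t in subsets if C(p, s, set(t))]
    return gen

# Toy acceptor: a test set is adequate iff it contains both inputs 0 and 1.
C = lambda p, s, t: {0, 1} <= t
gen = generator_from_acceptor(C, [0, 1, 2])
print(gen(None, None))  # [{0, 1}, {0, 1, 2}]
```

Any member of the generated class is adequate, which is why the text treats the two forms as interchangeable.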
The second role that an adequacy cri-
terion plays is to determine the observa-
tions that should be made during the
testing process. For example, statement
coverage requires that the tester, or the
testing system, observe whether each
statement is executed during the pro-
cess of software testing. If path cover-
age is used, then the observation of
whether statements have been executed
is insufficient; execution paths should
be observed and recorded. However, if
mutation score is used, it is unneces-
sary to observe whether a statement is
executed during testing. Instead, the
output of the original program and the
output of the mutants need to be re-
corded and compared.
Although, given an adequacy crite-
rion, different methods could be devel-
oped to generate test sets automatically
or to select test cases systematically
and efficiently, the main features of a
testing method are largely determined
by the adequacy criterion. For example,
as we show later, the adequacy criterion
is related to fault-detecting ability, the
dependability of the program that
passes a successful test and the number
of test cases required. Unfortunately,
the exact relationship between a partic-
ular adequacy criterion and the correct-
ness or reliability of the software that
passes the test remains unclear.
Due to the central role that adequacy
criteria play in software testing, soft-
ware testing methods are often com-
pared in terms of the underlying ade-
quacy criteria. Therefore, subsequently,
we use the name of an adequacy crite-
rion as a synonym of the corresponding
testing method when there is no possi-
bility of confusion.
1.2 The Uses of Test Adequacy Criteria
An important issue in the management
of software testing is to “ensure that
before any testing the objectives of that
testing are known and agreed and that
the objectives are set in terms that can
be measured.” Such objectives “should
be quantified, reasonable and achiev-
able” [Ould and Unwin 1986]. Almost
all test adequacy criteria proposed in
the literature explicitly specify particu-
lar requirements on software testing.
They are objective rules applicable by
project managers for this purpose.
1 Test data selection criteria as generators should
not be confused with test case generation software
tools, which may only generate one test set.

For example, branch coverage is a
test requirement that all branches of
the program should be exercised. The
objective of testing is to satisfy this
requirement. The degree to which this
objective is achieved can be measured
quantitatively by the percentage of
branches exercised. The mutation ade-
quacy criterion specifies the testing re-
quirement that a test set should be able
to rule out a particular set of software
faults, that is, those represented by mu-
tants. Mutation score is another kind of
quantitative measurement of test qual-
ity.
Test data adequacy criteria are also
very helpful tools for software testers.
There are two levels of software testing
processes. At the lower level, testing is
a process where a program is tested by
feeding more and more test cases to it.
Here, a test adequacy criterion can be
used as a stopping rule to decide when
this process can stop. Once the mea-
surement of test adequacy indicates
that the test objectives have been
achieved, then no further test case is
needed. Otherwise, when the measure-
ment of test adequacy shows that a test
has not achieved the objectives, more
tests must be made. In this case, the
adequacy criterion also provides a
guideline for the selection of the addi-
tional test cases. In this way, adequacy
criteria help testers to manage the soft-
ware testing process so that software
quality is ensured by performing suffi-
cient tests. At the same time, the cost of
testing is controlled by avoiding redun-
dant and unnecessary tests. This role of
adequacy criteria has been considered
by some computer scientists [Weyuker
1986] to be one of the most important.
At a higher level, the testing proce-
dure can be considered as repeated cy-
cles of testing, debugging, modifying
program code, and then testing again.
Ideally, this process should stop only
when the software has met the required
reliability requirements. Although test
data adequacy criteria do not play the
role of stopping rules at this level, they
make an important contribution to the
assessment of software dependability.
Generally speaking, there are two basic
aspects of software dependability as-
sessment. One is the dependability esti-
mation itself, such as a reliability fig-
ure. The other is the confidence in
estimation, such as the confidence or
the accuracy of the reliability estimate.
The role of test adequacy here is a con-
tributory factor in building confidence
in the integrity estimate. Recent re-
search has shown some positive results
with respect to this role [Tsoukalas
1993].
Although it is common in current soft-
ware testing practice that the test pro-
cesses at both the higher and lower
levels stop when money or time runs
out, there is a tendency towards the use
of systematic testing methods with the
application of test adequacy criteria.
1.3 Categories of Test Data Adequacy
Criteria
There are various ways to classify ade-
quacy criteria. One of the most common
is by the source of information used to
specify testing requirements and in the
measurement of test adequacy. Hence,
an adequacy criterion can be:
—specification-based, which specifies
the required testing in terms of iden-
tified features of the specification or
the requirements of the software, so
that a test set is adequate if all the
identified features have been fully ex-
ercised. In software testing literature
it is fairly common that no distinction
is made between specification and re-
quirements. This tradition is followed
in this article also;
—program-based, which specifies test-
ing requirements in terms of the pro-
gram under test and decides if a test
set is adequate according to whether
the program has been thoroughly ex-
ercised.
It should not be forgotten that for both
specification-based and program-based
testing, the correctness of program out-
puts must be checked against the speci-
fication or the requirements. However,
in both cases, the measurement of test
adequacy does not depend on the results
of this checking. Also, the definition of
specification-based criteria given previ-
ously does not presume the existence of
a formal specification.
It has been widely acknowledged that
software testing should use information
from both specification and program.
Combining these two approaches, we
have:
—combined specification- and program-
based criteria, which use the ideas of
both program-based and specification-
based criteria.
There are also test adequacy criteria
that specify testing requirements with-
out employing any internal information
from the specification or the program.
For example, test adequacy can be mea-
sured according to the prospective us-
age of the software by considering
whether the test cases cover the data
that are most likely to be frequently
used as input in the operation of the
software. Although few criteria are ex-
plicitly proposed in such a way, select-
ing test cases according to the usage of
the software is the idea underlying ran-
dom testing, or statistical testing. In
random testing, test cases are sampled
at random according to a probability
distribution over the input space. Such
a distribution can be the one represent-
ing the operation of the software, and
the random testing is called representa-
tive. It can also be any probability dis-
tribution, such as a uniform distribu-
tion, and the random testing is called
nonrepresentative. Generally speaking,
if a criterion employs only the “inter-
face” information—the type and valid
range for the software input—it can be
called an interface-based criterion:
—interface-based criteria, which specify
testing requirements only in terms of
the type and range of software input
without reference to any internal fea-
tures of the specification or the pro-
gram.
In the software testing literature, peo-
ple often talk about white-box testing
and black-box testing. Black-box testing
treats the program under test as a
“black box.” No knowledge about the
implementation is assumed. In white-
box testing, the tester has access to the
details of the program under test and
performs the testing according to such
details. Therefore, specification-based
criteria and interface-based criteria be-
long to black-box testing. Program-
based criteria and combined specifica-
tion and program-based criteria belong
to white-box testing.
Another classification of test ade-
quacy criteria is by the underlying test-
ing approach. There are three basic ap-
proaches to software testing:
(1) structural testing: specifies testing
requirements in terms of the cover-
age of a particular set of elements in
the structure of the program or the
specification;
(2) fault-based testing: focuses on de-
tecting faults (i.e., defects) in the
software. An adequacy criterion of
this approach is some measurement
of the fault detecting ability of test
sets.2
(3) error-based testing: requires test
cases to check the program on cer-
tain error-prone points according to
our knowledge about how programs
typically depart from their specifica-
tions.
The source of information used in the
adequacy measurement and the under-
lying approach to testing can be consid-
ered as two dimensions of the space of
software test adequacy criteria. A soft-
ware test adequacy criterion can be
classified by these two aspects. The re-
view of adequacy criteria is organized
according to the structure of this space.
2 We use the word fault to denote defects in soft-
ware and the word error to denote defects in the
outputs produced by a program. An execution that
produces an error is called a failure.
1.4 Organization of the Article
The remainder of the article consists of
two main parts. The first part surveys
various types of test data adequacy cri-
teria proposed in the literature. It in-
cludes three sections devoted to struc-
tural testing, fault-based testing, and
error-based testing. Each section con-
sists of several subsections covering the
principles of the testing method and
their application to program-based and
specification-based test criteria. The
second part is devoted to the rationale
presented in the literature in support of
the various criteria. It has two sections.
Section 5 discusses the methods of com-
paring adequacy criteria and surveys
the research results in the literature.
Section 6 discusses the axiomatic study
and assessment of adequacy criteria. Fi-
nally, Section 7 concludes the paper.
2. STRUCTURAL TESTING
This section is devoted to adequacy cri-
teria for structural testing. It consists of
two subsections, one for program-based
criteria and the other for specification-
based criteria.
2.1 Program-Based Structural Testing
There are two main groups of program-
based structural test adequacy criteria:
control-flow criteria and data-flow crite-
ria. These two types of adequacy crite-
ria are combined and extended to give
dependence coverage criteria. Most ade-
quacy criteria of these two groups are
based on the flow-graph model of pro-
gram structure. However, a few control-
flow criteria define test requirements in
terms of program text rather than using
an abstract model of software structure.
2.1.1 Control Flow Adequacy Crite-
ria. Before we formally define various
control-flow-based adequacy criteria, we
first give an introduction to the flow
graph model of program structure.
A. The flow graph model of program
structure. The control flow graph
stems from compiler work and has long
been used as a model of program struc-
ture. It is widely used in static analysis
of software [Fenton et al. 1985; Ko-
saraju 1974; McCabe 1976; Paige 1975].
It has also been used to define and
study program-based structural test ad-
equacy criteria [White 1981]. In this
section we give a brief introduction to
the flow-graph model of program struc-
ture. Although we use graph-theory ter-
minology in the following discussion,
readers are required to have only a pre-
liminary knowledge of graph theory. To
help understand the terminology and to
avoid confusion, a glossary is provided
in the Appendix.
A flow graph is a directed graph that
consists of a set N of nodes and a set
E ⊆ N × N of directed edges between
nodes. Each node represents a linear
sequence of computations. Each edge
representing transfer of control is an
ordered pair ⟨n1, n2⟩ of nodes, and is
associated with a predicate that repre-
sents the condition of control transfer
from node n1 to node n2. In a flow
graph, there is a begin node and an end
node where the computation starts and
finishes, respectively. The begin node
has no inward edges and the end node
has no outward edges. Every node in a
flow graph must be on a path from the
begin node to the end node. Figure 1 is
an example of a flow graph.
Example 2.1 The following program
computes the greatest common divisor
of two natural numbers by Euclid’s al-
gorithm. Figure 1 is the corresponding
flow graph.
begin
  input(x, y);
  while (x > 0 and y > 0) do
    if (x > y)
      then x := x - y
      else y := y - x
    endif
  endwhile;
  output(x + y);
end
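A direct transcription of Example 2.1 into runnable Python, with the same control structure as the corresponding flow graph:

```python
def gcd(x, y):
    """Greatest common divisor of two natural numbers by Euclid's algorithm,
    as in Example 2.1; outputs x + y when the loop exits (one of them is 0)."""
    while x > 0 and y > 0:
        if x > y:
            x = x - y
        else:
            y = y - x
    return x + y

print(gcd(12, 18))  # 6
print(gcd(7, 13))   # 1
```

Each iteration of the loop corresponds to one traversal of the loop body in the flow graph, taking either the then-branch or the else-branch.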
It should be noted that in the litera-
ture there are a number of conventions
of flow-graph models with subtle differ-
ences, such as whether a node is al-
lowed to be associated with an empty
sequence of statements, the number of
outward edges allowed for a node, and
the number of end nodes allowed in a
flow graph, and the like. Although most
adequacy criteria can be defined inde-
pendently of such conventions, using
different ones may result in different
measures of test adequacy. Moreover,
testing tools may be sensitive to such
conventions. In this article no restric-
tions on the conventions are made.
For programs written in a procedural
programming language, flow-graph
models can be generated automatically.
Figure 2 gives the correspondences be-
tween some structured statements and
their flow-graph structures. Using these
rules, a flow graph, shown in Figure 3,
can be derived from the program given
in Example 2.1. Generally, to construct
a flow graph for a given program, the
program code is decomposed into a set
of disjoint blocks of linear sequences of
statements. A block has the property
that whenever the first statement of the
block is executed, the other statements
are executed in the given order. Fur-
thermore, the first statement of the
block is the only statement that may be
executed directly after the execution of
a statement in another block. Each
block corresponds to a node in the flow
graph. A control transfer from one block
to another is represented by a directed
edge between the nodes such that the
condition of the control transfer is asso-
ciated with it.
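The construction above yields a graph of numbered blocks with guarded edges. A sketch of such a representation for the program of Example 2.1 (the node numbering is illustrative; the actual Figure 1 may number blocks differently):

```python
# Nodes: 1 = input; 2 = loop test; 3 = if test; 4 = "x := x - y";
# 5 = "y := y - x"; 6 = output (end node).  Each edge carries the
# predicate that is the condition of the control transfer.
flow_graph = {
    "begin": 1,
    "end": 6,
    "edges": {
        (1, 2): "true",
        (2, 3): "x > 0 and y > 0",
        (2, 6): "not (x > 0 and y > 0)",
        (3, 4): "x > y",
        (3, 5): "not (x > y)",
        (4, 2): "true",
        (5, 2): "true",
    },
}

def nodes(g):
    """Every node mentioned by some edge of the flow graph."""
    return {n for edge in g["edges"] for n in edge}

print(sorted(nodes(flow_graph)))  # [1, 2, 3, 4, 5, 6]
```

Every node here lies on some path from the begin node 1 to the end node 6, as the flow-graph definition requires.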
B. Control-flow adequacy criteria.
Now, given a flow-graph model of a pro-
gram and a set of test cases, how do we
measure the adequacy of testing for the
program on the test set? First of all,
recall that the execution of the program
on an input datum is modeled as a
traverse in the flow graph. Every execu-
tion corresponds to a path in the flow
graph from the begin node to the end
node. Such a path is called a complete
computation path, or simply a computa-
tion path or an execution path in soft-
ware testing literature.
A very basic requirement of adequate
testing is that all the statements in the
program are covered by test executions.
This is usually called statement cover-
age [Hetzel 1984]. But full statement
coverage cannot always be achieved be-
cause of the possible existence of infea-
sible statements, that is, dead code.
Whether a piece of code is dead code is
undecidable [Weyuker 1979a; Weyuker
1979b; White 1981]. Because state-
ments correspond to nodes in flow-
graph models, this criterion can be de-
fined in terms of flow graphs, as follows.
Figure 1. Flow graph for program in Example 2.1.

Definition 2.1 (Statement Coverage
Criterion). A set P of execution paths
satisfies the statement coverage crite-
rion if and only if for all nodes n in the
flow graph, there is at least one path p
in P such that node n is on the path p.
Notice that statement coverage is so
weak that even some control transfers
may be missed from an adequate test.
Hence, we have a slightly stronger re-
quirement of adequate test, called
branch coverage [Hetzel 1984], that all
control transfers must be checked. Since
control transfers correspond to edges in
flow graphs, the branch coverage crite-
rion can be defined as the coverage of
all edges in the flow graph.
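Both criteria can be checked mechanically over a set of execution paths, represented as node sequences. A sketch (the example paths use the illustrative node numbering for Example 2.1, one path per branch direction):

```python
def statement_coverage_satisfied(paths, node_set):
    """Definition 2.1: every node lies on at least one path in P."""
    covered = {n for p in paths for n in p}
    return node_set <= covered

def branch_coverage_satisfied(paths, edge_set):
    """Branch coverage: every edge is traversed by at least one path in P."""
    traversed = {(p[i], p[i + 1]) for p in paths for i in range(len(p) - 1)}
    return edge_set <= traversed

# Two complete computation paths through the flow graph of Example 2.1:
# one taking the then-branch (node 4), one taking the else-branch (node 5).
paths = [(1, 2, 3, 4, 2, 6), (1, 2, 3, 5, 2, 6)]
print(statement_coverage_satisfied(paths, {1, 2, 3, 4, 5, 6}))  # True
edges = {(1, 2), (2, 3), (2, 6), (3, 4), (3, 5), (4, 2), (5, 2)}
print(branch_coverage_satisfied(paths, edges))  # True
```

A single path through only one branch would satisfy neither criterion here, which illustrates why branch coverage is strictly stronger than statement coverage on graphs with untaken edges.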
Definition 2.2 (Branch Coverage …
Identify the type of research used in a chosen study
Compose a 1
Optics
effect relationship becomes more difficult—as the researcher cannot enact total control of another person even in an experimental environment. Social workers serve clients in highly complex real-world environments. Clients often implement recommended inte
I think knowing more about you will allow you to be able to choose the right resources
Be 4 pages in length
soft MB-920 dumps review and documentation and high-quality listing pdf MB-920 braindumps also recommended and approved by Microsoft experts. The practical test
g
One thing you will need to do in college is learn how to find and use references. References support your ideas. College-level work must be supported by research. You are expected to do that for this paper. You will research
Elaborate on any potential confounds or ethical concerns while participating in the psychological study 20.0\% Elaboration on any potential confounds or ethical concerns while participating in the psychological study is missing. Elaboration on any potenti
3 The first thing I would do in the family’s first session is develop a genogram of the family to get an idea of all the individuals who play a major role in Linda’s life. After establishing where each member is in relation to the family
A Health in All Policies approach
Note: The requirements outlined below correspond to the grading criteria in the scoring guide. At a minimum
Chen
Read Connecting Communities and Complexity: A Case Study in Creating the Conditions for Transformational Change
Read Reflections on Cultural Humility
Read A Basic Guide to ABCD Community Organizing
Use the bolded black section and sub-section titles below to organize your paper. For each section
Losinski forwarded the article on a priority basis to Mary Scott
Losinksi wanted details on use of the ED at CGH. He asked the administrative resident