Project Assignment (1,200-1,500 words) - Reading
Complete the assignment within 3 days. Read the articles carefully and follow the rubric.
Mitigating Quality Risks
Overview
This assignment is intended to solidify your understanding of how and when
some of the techniques we are learning about can address specific software
quality risks. You will compare and contrast how scenario-based usability
engineering and architecture-oriented design affect different quality
attributes. The background for this assignment will come from prior reading
assignments and one new resource. The prior readings that are most relevant
to this assignment include:
Pressman's Chapter 14
Chapter 3 in Scenario-Based Usability Engineering
The new resource is:
Air Traffic Control: A Case Study in Designing for High Availability
Assignment
Both Chapter 3 from Scenario-Based Usability Engineering and the case study
from Bass's book focus on the design of air traffic control systems. The first
focuses on the use of usability engineering techniques, including participatory
design and scenario-based design, as a requirements gathering technique,
explaining how that played out in an air traffic control project. The case study
from Bass instead focuses on the software architecture that was used in an air
traffic control system as an example of how software architecture choices can
be used to address challenges in complex systems.
The main task of your assignment is to evaluate both scenario-based usability
engineering and software architecture-based design in terms of how they
affect each software quality attribute, using McCall's list of software quality
attributes (see Section 14.2.2 in Pressman's Chapter 14). Systematically go through
each of the quality attributes in that list and consider whether (and how) both
scenario-based usability engineering and architecture-based design addressed
or helped achieve that specific quality attribute.
Write up your results in a short paper (about 4-5 pages) that you will turn in.
Be sure to include the following elements in your solution:
1. Summary: provide a brief description of the two methods being
compared, using your own words, and provide an introduction to
the remainder of your paper.
2. Comparison: Using a separate subsection for each of the quality
attributes listed in McCall's list, briefly but explicitly describe how
scenario-based usability engineering affects that attribute (or
whether it has no significant effect). You can use (but are not
limited to) the air traffic control examples as source material to
ground your argument. Your goal is to conclude whether scenario-
based usability engineering will help significantly increase the
chances of meeting a project's goals with respect to that quality
attribute. Identify what you believe are the important
characteristics/properties of a software project that make scenario-
based usability engineering an important consideration for that
specific quality attribute on the project.
Then perform the same analysis for architecture-driven design with
respect to that quality (i.e., handle both techniques for one quality
attribute in each subsection, with a separate subsection for each
quality attribute in McCall's list).
3. Conclusions: Look back over the analysis you have performed and
summarize which quality attributes are affected by scenario-based
usability engineering, and what characteristics of a software project
make scenario-based usability engineering relevant to meeting
desired quality goals. Similarly, summarize which quality attributes
are affected by architecture-driven design, and what characteristics
of a software project make architectural decisions relevant to
meeting desired quality goals.
Assessment
The following rubric will be used to assess your work:
Project Summary (10 points)
- Excellent (10/10): Provides a clear explanation of both techniques and gives an appropriate overview of the comparative analysis you are performing, including a summary of the quality attributes used as the basis for the comparison.
- Good (8/10): Provides an explanation of both techniques and lists the quality attributes used in the analysis, but there is clear room for improving the overview/introduction so it is more understandable.
- Satisfactory (6/10): Names the techniques to be compared, but does not provide a clear explanation of at least one of the techniques, or fails to clearly articulate the quality attributes used in the analysis.
- Poor (3/10): Provides an introductory summary that does not clearly describe either of the techniques analyzed.
- No attempt (0/10): Section is missing.

Comparison (45 points)
- Excellent (45/45): Provides a clear, specific evaluation of how each of the two techniques helps support (or does not support) each of the quality attributes in the required list. The analysis is clearly broken into sections based on the quality attribute list, and both techniques are directly addressed in every section. Identifies the properties of a software system that make each technique appropriate, when the technique can directly affect software quality.
- Good (36/45): Provides a clear evaluation of the two techniques for each quality attribute, with the analysis broken down into sections based on the quality attribute list indicated. A few sections may not provide a clear analysis of one of the techniques, or may fail to describe the properties of a software system that make the technique relevant.
- Satisfactory (27/45): All quality attributes are addressed, but many sections do not clearly describe the effects of both techniques on the attribute, mischaracterize the effects, or fail to clearly identify the properties of software systems that make the technique relevant.
- Poor (18/45): Simply restates basic concepts from the reading without contributing a useful discussion of when/why a technique addresses each quality attribute.
- No attempt (0/45): Section is missing.

Conclusions (35 points)
- Excellent (35/35): Explicitly identifies which quality attributes are significantly affected by each of the two techniques analyzed and what characteristics of a software project make the technique important to use, and generalizes from this a coherent description of when each of the two techniques should be applied (or at least considered). Also discusses what kind of project would be appropriate for applying both techniques.
- Good (28/35): Identifies which quality attributes are significantly affected by each of the two techniques analyzed, and what characteristics of a software project make the technique important to use.
- Satisfactory (22/35): Identifies which quality attributes are significantly affected by each of the two techniques analyzed. One or more of the attributes may be glossed over or omitted. Some important system characteristics necessary for a technique to be important may be identified, but some may be missing.
- Poor (16/35): Attempts to evaluate the two techniques, but without any clear connection to the presented evaluation criteria or any clear summary of strengths and weaknesses for each technique.
- No attempt (0/35): Section is missing.

Writing/Presentation (10 points)
- Excellent (10/10): All writing is clear and readable, without grammar errors, punctuation errors, or other writing problems. Figures or summary tables are used appropriately where they clarify presentation. Clear headings and a cohesive document organization are used. The whole document looks like a professionally prepared report.
- Good (8/10): All writing is clear and readable, with only occasional minor writing errors. An appropriate attempt is made to use figures or summary tables where they help clarify presentation. Clear headings and a cohesive document organization are used.
- Satisfactory (6/10): The document is readable, but some significant errors appear in the writing and/or organization. Some parts of the exposition may not communicate clearly. Summary representations of key portions of the work may be missing. Clear headings are used.
- Poor (3/10): The document is readable, but includes significant errors in both writing and organization that make portions of the document hard to follow.
- No attempt (0/10): Section is missing.

Total: 100 points
Chapter 14: Quality Concepts

The drumbeat for improved software quality began in earnest as software became increasingly integrated in every facet of our lives. By the 1990s, major corporations recognized that billions of dollars each year were being wasted on software that didn't deliver the features and functionality that were promised. Worse, both government and industry became increasingly concerned that a major software fault might cripple important infrastructure, costing tens of billions more. By the turn of the century, CIO Magazine [Lev01] trumpeted the headline, "Let's Stop Wasting $78 Billion a Year," lamenting the fact that "American businesses spend billions for software that doesn't do what it's supposed to do." InformationWeek [Ric01] echoed the same concern:
Despite good intentions, defective code remains the hobgoblin of the software industry, accounting for as much as 45% of computer-system downtime and costing U.S. companies about $100 billion last year in lost productivity and repairs, says the Standish Group, a market research firm. That doesn't include the cost of losing angry customers. Because IT shops write applications that rely on packaged infrastructure software, bad code can wreak havoc on custom apps as well. . . .

Just how bad is bad software? Definitions vary, but experts say it takes only three or four defects per 1,000 lines of code to make a program perform poorly. Factor in that most programmers inject about one error for every 10 lines of code they write, multiply that by the millions of lines of code in many commercial products, then figure it costs software vendors at least half their development budgets to fix errors while testing. Get the picture?
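To get a feel for the scale these figures imply, here is a back-of-the-envelope sketch in Python. The two rates come from the quote above; the product size and the mid-range 3.5 defects/KLOC threshold are illustrative assumptions, not data from the text.

```python
# Back-of-the-envelope arithmetic for the figures quoted above. The two
# rates come from the quote; the product size and the mid-range
# threshold (3-4 defects per KLOC, taken as 3.5) are assumptions.

LINES_OF_CODE = 2_000_000             # assumed size of a commercial product
INJECTION_RATE = 1 / 10               # ~1 error injected per 10 lines written
POOR_QUALITY_THRESHOLD = 3.5 / 1000   # residual defects per line -> "poor"

injected = LINES_OF_CODE * INJECTION_RATE
tolerable = LINES_OF_CODE * POOR_QUALITY_THRESHOLD

# Fraction of injected errors that must be found and fixed before release
# just to stay under the "performs poorly" threshold.
removal_needed = 1 - tolerable / injected

print(f"Errors injected:             {injected:,.0f}")       # 200,000
print(f"Residual defects tolerated:  {tolerable:,.0f}")      # 7,000
print(f"Required removal rate:       {removal_needed:.1%}")  # 96.5%
```

Under these assumptions, more than 96 percent of injected errors must be removed before release just to avoid "performing poorly," which makes the quote's claim about testing budgets less surprising.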
Quick Look

What is it? The answer isn't as easy as you might think. You know quality when you see it, and yet, it can be an elusive thing to define. But for computer software, quality is something that we must define, and that's what I'll do in this chapter.

Who does it? Everyone—software engineers, managers, all stakeholders—involved in the software process is responsible for quality.

Why is it important? You can do it right, or you can do it over again. If a software team stresses quality in all software engineering activities, it reduces the amount of rework that it must do. That results in lower costs and, more importantly, improved time-to-market.

What are the steps? To achieve high-quality software, four activities must occur: proven software engineering process and practice, solid project management, comprehensive quality control, and the presence of a quality assurance infrastructure.

What is the work product? Software that meets its customer's needs, performs accurately and reliably, and provides value to all who use it.

How do I ensure that I've done it right? Track quality by examining the results of all quality control activities, and measure quality by examining errors before delivery and defects released to the field.
In 2005, ComputerWorld [Hil05] lamented that "bad software plagues nearly every organization that uses computers, causing lost work hours during computer downtime, lost or corrupted data, missed sales opportunities, high IT support and maintenance costs, and low customer satisfaction." A year later, InfoWorld [Fos06] wrote about "the sorry state of software quality," reporting that the quality problem had not gotten any better.
Today, software quality remains an issue, but who is to blame? Customers blame developers, arguing that sloppy practices lead to low-quality software. Developers blame customers (and other stakeholders), arguing that irrational delivery dates and a continuing stream of changes force them to deliver software before it has been fully validated. Who's right? Both—and that's the problem. In this chapter, I consider software quality as a concept and examine why it's worthy of serious consideration whenever software engineering practices are applied.
14.1 What Is Quality?
In his mystical book, Zen and the Art of Motorcycle Maintenance, Robert Pirsig [Per74] commented on the thing we call quality:

Quality . . . you know what it is, yet you don't know what it is. But that's self-contradictory. But some things are better than others; that is, they have more quality. But when you try to say what the quality is, apart from the things that have it, it all goes poof! There's nothing to talk about. But if you can't say what Quality is, how do you know what it is, or how do you know that it even exists? If no one knows what it is, then for all practical purposes it doesn't exist at all. But for all practical purposes it really does exist. What else are the grades based on? Why else would people pay fortunes for some things and throw others in the trash pile? Obviously some things are better than others . . . but what's the betterness? . . . So round and round you go, spinning mental wheels and nowhere finding anyplace to get traction. What the hell is Quality? What is it?

Indeed—what is it?
At a somewhat more pragmatic level, David Garvin [Gar84] of the Harvard Business School suggests that "quality is a complex and multifaceted concept" that can be described from five different points of view. The transcendental view argues (like Pirsig) that quality is something that you immediately recognize, but cannot explicitly define. The user view sees quality in terms of an end user's specific goals. If a product meets those goals, it exhibits quality. The manufacturer's view defines quality in terms of the original specification of the product. If the product conforms to the spec, it exhibits quality. The product view suggests that quality can be tied to inherent characteristics (e.g., functions and features) of a product. Finally, the value-based view measures quality based on how much a customer is willing to pay for a product. In reality, quality encompasses all of these views and more.
Quality of design refers to the characteristics that designers specify for a product. The grade of materials, tolerances, and performance specifications all contribute to the quality of design. As higher-grade materials are used and tighter tolerances and greater levels of performance are specified, the design quality of a product increases, if the product is manufactured according to specifications.
In software development, quality of design encompasses the degree to which the
design meets the functions and features specified in the requirements model. Quality
of conformance focuses on the degree to which the implementation follows the
design and the resulting system meets its requirements and performance goals.
But are quality of design and quality of conformance the only issues that software engineers must consider? Robert Glass [Gla98] argues that a more "intuitive" relationship is in order:

user satisfaction = compliant product + good quality + delivery within budget and schedule
At the bottom line, Glass contends that quality is important, but if the user isn't satisfied, nothing else really matters. DeMarco [DeM98] reinforces this view when he states: "A product's quality is a function of how much it changes the world for the better." This view of quality contends that if a software product provides substantial benefit to its end users, they may be willing to tolerate occasional reliability or performance problems.
14.2 Software Quality
Even the most jaded software developers will agree that high-quality software is an important goal. But how do we define software quality? In the most general sense, software quality can be defined [1] as: An effective software process applied in a manner that creates a useful product that provides measurable value for those who produce it and those who use it.

There is little question that the preceding definition could be modified or extended and debated endlessly. For the purposes of this book, the definition serves to emphasize three important points:
1. An effective software process establishes the infrastructure that supports any
effort at building a high-quality software product. The management aspects
of process create the checks and balances that help avoid project chaos—a
key contributor to poor quality. Software engineering practices allow the
developer to analyze the problem and design a solid solution—both critical
to building high-quality software. Finally, umbrella activities such as change
management and technical reviews have as much to do with quality as any
other part of software engineering practice.
2. A useful product delivers the content, functions, and features that the end user desires, but as important, it delivers these assets in a reliable, error-free way. A useful product always satisfies those requirements that have been explicitly stated by stakeholders. In addition, it satisfies a set of implicit requirements (e.g., ease of use) that are expected of all high-quality software.
3. By adding value for both the producer and user of a software product, high-quality software provides benefits for the software organization and the end-user community. The software organization gains added value because high-quality software requires less maintenance effort, fewer bug fixes, and reduced customer support. This enables software engineers to spend more time creating new applications and less on rework. The user community gains added value because the application provides a useful capability in a way that expedites some business process. The end result is (1) greater software product revenue, (2) better profitability when an application supports a business process, and/or (3) improved availability of information that is crucial for the business.

Quote: "People forget how fast you did a job—but they always remember how well you did it." (Howard Newton)

[1] This definition has been adapted from [Bes04] and replaces a more manufacturing-oriented view presented in earlier editions of this book.
14.2.1 Garvin's Quality Dimensions

David Garvin [Gar87] suggests that quality should be considered by taking a multidimensional viewpoint that begins with an assessment of conformance and terminates with a transcendental (aesthetic) view. Although Garvin's eight dimensions of quality were not developed specifically for software, they can be applied when software quality is considered:
Performance quality. Does the software deliver all content, functions, and
features that are specified as part of the requirements model in a way that
provides value to the end user?
Feature quality. Does the software provide features that surprise and
delight first-time end users?
Reliability. Does the software deliver all features and capability without
failure? Is it available when it is needed? Does it deliver functionality that is
error-free?
Conformance. Does the software conform to local and external software standards that are relevant to the application? Does it conform to de facto design and coding conventions? For example, does the user interface conform to accepted design rules for menu selection or data input?

Durability. Can the software be maintained (changed) or corrected (debugged) without the inadvertent generation of unintended side effects? Will changes cause the error rate or reliability to degrade with time?

Serviceability. Can the software be maintained (changed) or corrected (debugged) in an acceptably short time period? Can support staff acquire all information they need to make changes or correct defects? Douglas Adams [Ada93] makes a wry comment that seems appropriate here: "The difference between something that can go wrong and something that can't possibly go wrong is that when something that can't possibly go wrong goes wrong it usually turns out to be impossible to get at or repair."
Aesthetics. There's no question that each of us has a different and very subjective vision of what is aesthetic. And yet, most of us would agree that an aesthetic entity has a certain elegance, a unique flow, and an obvious "presence" that are hard to quantify but are evident nonetheless. Aesthetic software has these characteristics.

Perception. In some situations, you have a set of prejudices that will influence your perception of quality. For example, if you are introduced to a software product that was built by a vendor who has produced poor quality in the past, your guard will be raised and your perception of the current software product quality might be influenced negatively. Similarly, if a vendor has an excellent reputation, you may perceive quality, even when it does not really exist.
Garvin's quality dimensions provide you with a "soft" look at software quality. Many (but not all) of these dimensions can only be considered subjectively. For this reason, you also need a set of "hard" quality factors that can be categorized in two broad groups: (1) factors that can be directly measured (e.g., defects uncovered during testing) and (2) factors that can be measured only indirectly (e.g., usability or maintainability). In each case measurement must occur. You should compare the software to some datum and arrive at an indication of quality.
14.2.2 McCall’s Quality Factors
McCall, Richards, and Walters [McC77] propose a useful categorization of factors that affect software quality. These software quality factors, shown in Figure 14.1, focus on three important aspects of a software product: its operational characteristics, its ability to undergo change, and its adaptability to new environments.

[Figure 14.1: McCall's software quality factors. Product operation: correctness, reliability, efficiency, integrity, usability. Product revision: maintainability, flexibility, testability. Product transition: portability, reusability, interoperability.]

Referring to the factors noted in Figure 14.1, McCall and his colleagues provide the following descriptions:

Correctness. The extent to which a program satisfies its specification and fulfills the customer's mission objectives.

Reliability. The extent to which a program can be expected to perform its intended function with required precision. [It should be noted that other, more complete definitions of reliability have been proposed (see Chapter 25).]

Efficiency. The amount of computing resources and code required by a program to perform its function.

Integrity. Extent to which access to software or data by unauthorized persons can be controlled.

Usability. Effort required to learn, operate, prepare input for, and interpret output of a program.
Maintainability. Effort required to locate and fix an error in a program. [This is a very limited definition.]

Flexibility. Effort required to modify an operational program.

Testability. Effort required to test a program to ensure that it performs its intended function.

Portability. Effort required to transfer the program from one hardware and/or software system environment to another.

Reusability. Extent to which a program [or parts of a program] can be reused in other applications—related to the packaging and scope of the functions that the program performs.

Interoperability. Effort required to couple one system to another.
It is difficult, and in some cases impossible, to develop direct measures [2] of these quality factors. In fact, many of the metrics defined by McCall et al. can be measured only indirectly. However, assessing the quality of an application using these factors will provide you with a solid indication of software quality.

Quote: "The bitterness of poor quality remains long after the sweetness of meeting the schedule has been forgotten." (Karl Wiegers, unattributed quote)

[2] A direct measure implies that there is a single countable value that provides a direct indication of the attribute being examined. For example, the "size" of a program can be measured directly by counting the number of lines of code.
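Because the factors can only be assessed indirectly, one practical first step is to treat McCall's list as a structured checklist and walk it systematically, which is essentially what the assignment above asks for. A minimal Python sketch; the grouping follows Figure 14.1, while the idea of generating one subsection per factor is an illustrative assumption.

```python
# McCall's quality factors, grouped as in Figure 14.1. The grouping is
# from the text; walking the list to produce one comparison subsection
# per factor is an illustrative assumption.

MCCALL_FACTORS = {
    "product operation":  ["correctness", "reliability", "efficiency",
                           "integrity", "usability"],
    "product revision":   ["maintainability", "flexibility", "testability"],
    "product transition": ["portability", "reusability", "interoperability"],
}

def subsections():
    """Yield one (category, factor) pair per required comparison subsection."""
    for category, factors in MCCALL_FACTORS.items():
        for factor in factors:
            yield category, factor

for category, factor in subsections():
    print(f"[{category}] {factor}: assess both techniques, record evidence")
```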
14.2.3 ISO 9126 Quality Factors
The ISO 9126 standard was developed in an attempt to identify the key quality
attributes for computer software. The standard identifies six key quality attributes:
Functionality. The degree to which the software satisfies stated needs as indicated by the following subattributes: suitability, accuracy, interoperability, compliance, and security.

Reliability. The amount of time that the software is available for use as indicated by the following subattributes: maturity, fault tolerance, recoverability.
Usability. The degree to which the software is easy to use as indicated by the following subattributes: understandability, learnability, operability.

Efficiency. The degree to which the software makes optimal use of system resources as indicated by the following subattributes: time behavior, resource behavior.

Maintainability. The ease with which repair may be made to the software as indicated by the following subattributes: analyzability, changeability, stability, testability.

Portability. The ease with which the software can be transposed from one environment to another as indicated by the following subattributes: adaptability, installability, conformance, replaceability.
Like other software quality factors discussed in the preceding subsections, the ISO
9126 factors do not necessarily lend themselves to direct measurement. However,
they do provide a worthwhile basis for indirect measures and an excellent checklist
for assessing the quality of a system.
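The ISO 9126 attributes decompose into named subattributes, which suggests a two-level checklist. A small sketch along those lines; the subattribute lists are copied from the descriptions above, and the evidence entries are hypothetical.

```python
# ISO 9126 quality attributes and their subattributes, as listed above.
# The "evidence" entries below are hypothetical examples.

ISO_9126 = {
    "functionality":   ["suitability", "accuracy", "interoperability",
                        "compliance", "security"],
    "reliability":     ["maturity", "fault tolerance", "recoverability"],
    "usability":       ["understandability", "learnability", "operability"],
    "efficiency":      ["time behavior", "resource behavior"],
    "maintainability": ["analyzability", "changeability", "stability",
                        "testability"],
    "portability":     ["adaptability", "installability", "conformance",
                        "replaceability"],
}

# An indirect "measure": the fraction of subattributes with recorded
# evidence, used as a coverage checklist rather than a direct metric.
evidence = {"maturity": "MTBF trend", "operability": "task-time study"}

for attribute, subs in ISO_9126.items():
    covered = sum(1 for s in subs if s in evidence)
    print(f"{attribute}: {covered}/{len(subs)} subattributes assessed")
```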
14.2.4 Targeted Quality Factors

The quality dimensions and factors presented in Sections 14.2.1 and 14.2.2 focus on the software as a whole and can be used as a generic indication of the quality of an application. A software team can develop a set of quality characteristics and associated questions that would probe [3] the degree to which each factor has been satisfied. For example, McCall identifies usability as an important quality factor. If you were asked to review a user interface and assess its usability, how would you proceed? You might start with the subattributes suggested by McCall—understandability, learnability, and operability—but what do these mean in a pragmatic sense?

To conduct your assessment, you'll need to address specific, measurable (or at least, recognizable) attributes of the interface. For example [Bro03]:
Intuitiveness. The degree to which the interface follows expected usage patterns
so that even a novice can use it without significant training.
• Is the interface layout conducive to easy understanding?
• Are interface operations easy to locate and initiate?
• Does the interface use a recognizable metaphor?
• Is input specified to economize key strokes or mouse clicks?
• Does the interface follow the three golden rules? (Chapter 11)
• Do aesthetics aid in understanding and usage?
Efficiency. The degree to which operations and information can be located or
initiated.
• Does the interface layout and style allow a user to locate operations and
information efficiently?
• Can a sequence of operations (or data input) be performed with an economy
of motion?
• Are output data or content presented so that it is understood immediately?
• Have hierarchical operations been organized in a way that minimizes the
depth to which a user must navigate to get something done?
Robustness. The degree to which the software handles bad input data or inappropriate user interaction.
• Will the software recognize the error if data at or just outside prescribed
boundaries is input? More importantly, will the software continue to operate
without failure or degradation?
• Will the interface recognize common cognitive or manipulative mistakes and
explicitly guide the user back on the right track?
• Does the interface provide useful diagnosis and guidance when an error
condition (associated with software functionality) is uncovered?
Richness. The degree to which the interface provides a rich feature set.
• Can the interface be customized to the specific needs of a user?
• Does the interface provide a macro capability that enables a user to identify a
sequence of common operations with a single action or command?
As the interface design is developed, the software team would review the design prototype and ask the questions noted. If the answer to most of these questions is "yes," it is likely that the user interface exhibits high quality. A collection of questions similar to these would be developed for each quality factor to be assessed.

Quote: "Any activity becomes creative when the doer cares about doing it right, or better." (John Updike)

Note: Although it's tempting to develop quantitative measures for the quality factors noted here, you can also create a simple checklist of attributes that provide a solid indication that the factor is present.

[3] These characteristics and questions would be addressed as part of a software review (Chapter 15).
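The review procedure described above (pose each question and count the "yes" answers) is simple enough to mechanize, in the spirit of the note's checklist suggestion. A minimal sketch; the questions are abbreviated from the lists above, and the two-thirds pass threshold is an assumption, since the text says only "most."

```python
# Checklist-style usability review, mechanizing the "if the answer to
# most of these questions is yes" rule. Questions are abbreviated from
# the text; the 2/3 pass threshold is an assumption.

CHECKLIST = {
    "intuitiveness": [
        "Layout conducive to easy understanding?",
        "Operations easy to locate and initiate?",
        "Recognizable metaphor used?",
    ],
    "efficiency": [
        "Operations and information located efficiently?",
        "Economy of motion for operation sequences?",
    ],
    "robustness": [
        "Boundary input handled without failure?",
        "Common mistakes detected and guided?",
    ],
}

def review(answers: dict[str, list[bool]], threshold: float = 2 / 3) -> None:
    """Report, per factor, whether 'most' answers were yes."""
    for factor, results in answers.items():
        share = sum(results) / len(results)
        verdict = "likely high quality" if share >= threshold else "needs work"
        print(f"{factor}: {share:.0%} yes -> {verdict}")

# Hypothetical answers from one design-prototype review session, ordered
# to match the question lists in CHECKLIST above.
review({
    "intuitiveness": [True, True, False],
    "efficiency":    [True, True],
    "robustness":    [False, True],
})
```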
14.2.5 The Transition to a Quantitative View
In the preceding subsections, I have presented a variety of qualitative factors for the
“measurement” of software quality. The software engineering community strives to
develop precise measures for software quality and is sometimes frustrated by the
subjective nature of the activity. Cavano and McCall [Cav78] discuss this situation:
The determination of quality is a key factor in every day events—wine tasting contests,
sporting events [e.g., gymnastics], talent contests, etc. In these situations, quality is
judged in the most fundamental and direct manner: side by side comparison of objects
under identical conditions and with predetermined concepts. The wine may be judged
according to clarity, color, bouquet, taste, etc. However, this type of judgment is very subjective; to have any value at all, it must be made by an expert.
Subjectivity and specialization also apply to determining software quality. To help solve this problem, a more precise definition of software quality is needed as well as a way to derive quantitative measurements of software quality for objective analysis. . . . Since there is no such thing as absolute knowledge, one should not expect to measure software quality exactly, for every measurement is partially imperfect. Jacob Bronowski described this paradox of knowledge in this way: "Year by year we devise more precise instruments with which to observe nature with more fineness. And when we look at the observations we are discomfited to see that they are still fuzzy, and we feel that they are as uncertain as ever."
In Chapter 23, I'll present a set of software metrics that can be applied to the quantitative assessment of software quality. In all cases, the metrics represent indirect measures; that is, we never really measure quality but rather some manifestation of quality. The complicating factor is the precise relationship between the variable that is measured and the quality of software.
14.3 The Software Quality Dilemma
In an interview [Ven03] published on the Web, Bertrand Meyer discusses what I call
the quality dilemma:
If you produce a software system that has terrible quality, you lose because no one will want to buy it. If on the other hand you spend infinite time, extremely large effort, and huge sums of money to build the absolutely perfect piece of software, then it's going to take so long to complete and it will be so expensive to produce that you'll be out of business anyway. Either you missed the market window, or you simply exhausted all your resources. So people in industry try to get to that magical middle ground where the product is good enough not to be rejected right away, such as during evaluation, but also not the object of so much perfectionism and so much work that it would take too long or cost too much to complete.
It’s fine to state that software engineers should strive to produce high-quality
systems. It’s even better to apply good practices in your attempt to do so. But the
situation discussed by Meyer is real life and represents a dilemma for even the best
software engineering organizations.
14.3.1 “Good Enough” Software
Stated bluntly, if we are to accept the argument made by Meyer, is it acceptable to produce "good enough" software? The answer to this question must be "yes," because major software companies do it every day. They create software with known bugs and deliver it to a broad population of end users. They recognize that some of the functions and features delivered in Version 1.0 may not be of the highest quality and plan for improvements in Version 2.0. They do this knowing that some customers will complain, but they recognize that time-to-market may trump better quality as long as the delivered product is "good enough."
Note: When you're faced with the quality dilemma (and everyone is faced with it at one time or another), try to achieve balance: enough effort to produce acceptable quality without burying the project.
Exactly what is "good enough"? Good enough software delivers high-quality functions and features that users desire, but at the same time it delivers other more obscure or specialized functions and features that contain known bugs. The software vendor hopes that the vast majority of end users will overlook the bugs because they are so happy with other application functionality.
This idea may resonate with many readers. If you’re one of them, I can only ask
you to consider some of the arguments against “good enough.”
It is true that “good enough” may work in some application domains and for a few
major software companies. After all, if a company has a large marketing budget and
can convince enough people to buy version 1.0, it has succeeded in locking them in.
As I noted earlier, it can argue that it will improve quality in subsequent versions. By
delivering a good enough version 1.0, it has cornered the market.
If you work for a small company, be wary of this philosophy. When you deliver a good enough (buggy) product, you risk permanent damage to your company's reputation. You may never get a chance to deliver version 2.0 because bad buzz may cause your sales to plummet and your company to fold.
If you work in certain application domains (e.g., real-time embedded software) or build application software that is integrated with hardware (e.g., automotive software, telecommunications software), delivering software with known bugs can be negligent and open your company to expensive litigation. In some cases, it can even be criminal. No one wants good enough aircraft avionics software!
So, proceed with caution if you believe that "good enough" is a shortcut that can solve your software quality problems. It can work, but only for a few and only in a limited set of application domains. [4]
14.3.2 The Cost of Quality
The argument goes something like this—we know that quality is important, but it costs us time and money—too much time and money to get the level of software quality we really want. On its face, this argument seems reasonable (see Meyer's comments earlier in this section). There is no question that quality has a cost, but lack of quality also has a cost—not only to end users who must live with buggy software, but also to the software organization that has built and must maintain it. The real question is this: which cost should we be worried about? To answer this question, you must understand both the cost of achieving quality and the cost of low-quality software.

The cost of quality includes all costs incurred in the pursuit of quality or in performing quality-related activities and the downstream costs of lack of quality. To understand these …
Scenario-Based Usability Engineering
Mary Beth Rosson and John M. Carroll
Department of Computer Science, Virginia Tech
Fall 1999

Copyright 1999 by Mary Beth Rosson and John M. Carroll
DRAFT: PLEASE DO NOT CITE OR CIRCULATE WITHOUT PERMISSION
Chapter 3
Analyzing Requirements
Making work visible. The end goal of requirements analysis can be elusive when work is not
understood in the same way by all participants. Blomberg, Suchman, and Trigg describe this
problem in their exploration of image-processing services for a law firm. Initial studies of
attorneys produced a rich analysis of their document processing needs—for any legal proceeding,
documents often numbering in the thousands are identified as “responsive” (relevant to the case) by
junior attorneys, in order to be submitted for review by the opposing side. Each page of these
documents is given a unique number for subsequent retrieval. An online retrieval index is created
by litigation support workers; the index encodes document attributes such as date, sender,
recipient, and type. The attorneys assumed that their job (making the subjective relevance
decisions) would be facilitated by image processing that encodes a document's objective attributes
(e.g., date, sender). However, studies of actual document processing revealed activities that were
not objective at all, but rather relied on the informed judgment of the support staff. Something as
simple as a document date was often ambiguous, because it might display the date it was written,
signed, and/or delivered; the date encoded required understanding the document’s content and role
in a case. Even determining what constituted a document required judgment, as papers came with
attachments and no indication of beginning or end. Taking the perspective of the support staff
revealed knowledge-based activities that were invisible to the attorneys, but that had critical limiting
implications for the role of image-processing technologies (see Blomberg, 1995).
What is Requirements Analysis?
The purpose of requirements analysis is to expose the needs of the current situation with
respect to a proposed system or technology. The analysis begins with a mission statement or
orienting goals, and produces a rich description of current activities that will motivate and guide
subsequent development. In the legal office case described above, the orienting mission was
possible applications of image processing technology; the rich description included a view of case
processing from both the lawyers’ and the support staffs’ perspectives. Usability engineers
contribute to this process by analyzing what and how features of workers’ tasks and their work
situation are contributing to problems or successes. [1] This analysis of the difficulties or
opportunities forms a central piece of the requirements for the system under development: at the
minimum, a project team expects to enhance existing work practices. Other requirements may arise
from issues unrelated to use, for example hardware cost, development schedule, or marketing
strategies. However these pragmatic issues are beyond the scope of this textbook. Our focus is on
analyzing the requirements of an existing work setting and of the workers who populate it.
Understanding Work
What is work? If you were to query a banker about her work, you would probably get a
list of things she does on a typical day, perhaps a description of relevant information or tools, and
maybe a summary of other individuals she answers to or makes requests of. At the least,
describing work means describing the activities, artifacts (data, documents, tools), and social
context (organization, roles, dependencies) of a workplace. No single observation or interview
technique will be sufficient to develop a complete analysis; different methods will be useful for
different purposes.
Tradeoff 3.1: Analyzing tasks into hierarchies of sub-tasks and decision rules brings order
to a problem domain, BUT tasks are meaningful only in light of organizational goals and
activities.
A popular approach to analyzing the complex activities that comprise work is to enumerate
and organize tasks and subtasks within a hierarchy (Johnson, 1995). A banker might indicate that
the task of “reviewing my accounts” consists of the subtasks “looking over the account list”,
“noting accounts with recent activity”, and “opening and reviewing active accounts”. Each of these
sub-tasks in turn can be decomposed more finely, perhaps to the level of individual actions such as
picking up or filing a particular document. Some of the tasks will include decision-making, such
[1] In this discussion we use "work" to refer broadly to the goal-directed activities that take place in the problem domain. In some cases, this may involve leisure or educational activities, but in general the same methods can be applied to any situation with established practices.
as when the banker decides whether or not to open up a specific account based on its level of
activity.
A strength of task analysis is its step-by-step transformation of a complex space of
activities into an organized set of choices and actions. This allows a requirements analyst to
examine the task’s structure for completeness, complexity, inconsistencies, and so on. However
the goal of systematic decomposition can also be problematic, if analysts become consumed by
representing task elements, step sequences, and decision rules. Individual tasks must be
understood within the larger context of work; over-emphasizing the steps of a task can cause
analysts to miss the forest for the trees. To truly understand the task of reviewing accounts a
usability engineer must learn who is responsible for ensuring that accounts are up to date, how
account access is authorized and managed, and so on.
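One way to see both the strength and the limits of task decomposition is to write the hierarchy down explicitly. A minimal sketch of the banker example; the nested-dict encoding is an illustrative assumption, not a standard task-analysis notation.

```python
# A minimal hierarchical task record for the banker example above.
# The nested-dict structure is an illustrative assumption, not a
# standard task-analysis notation.

REVIEW_ACCOUNTS = {
    "task": "review my accounts",
    "subtasks": [
        {"task": "look over the account list"},
        {"task": "note accounts with recent activity"},
        {"task": "open and review active accounts",
         "decision": "open an account only if its activity level warrants it"},
    ],
}

def walk(node: dict, depth: int = 0) -> None:
    """Print the task tree, flagging decision points."""
    flag = f"  [decision: {node['decision']}]" if "decision" in node else ""
    print("  " * depth + node["task"] + flag)
    for sub in node.get("subtasks", []):
        walk(sub, depth + 1)

walk(REVIEW_ACCOUNTS)
```

Note what such a tree cannot express: who authorizes account access, or why the review matters to the bank. That contextual information is exactly what the next paragraph turns to.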
The context of work includes the physical, organizational, social, and cultural relationships
that make up the work environment. Actions in a workplace do not take place in a vacuum;
individual tasks are motivated by goals, which in turn are part of larger activities motivated by the
organizations and cultures in which the work takes place (see Activities of a Health Care Center,
below). A banker may report that she is reviewing accounts, but from the perspective of the
banking organization she is “providing customer service” or perhaps “increasing return on
investment”. Many individuals — secretaries, data-entry personnel, database programmers,
executives — work with the banker to achieve these high-level objectives. They collaborate
through interactions with shared tools and information; this collaboration is shaped not only by the
tools that they use, but also by the participants’ shared understanding of the bank’s business
practice — its goals, policies, and procedures.
Tradeoff 3.2: Task information and procedures are externalized in artifacts, BUT the impact
of these artifacts on work is apparent only in studying their use.
A valuable source of information about work practices is the artifacts used to support task
goals (Carroll & Campbell, 1989). An artifact is simply a designed object — in an office setting, it
might be a paper form, a pencil, an in-basket, or a piece of computer software. It is simple and fun
to collect artifacts and analyze their characteristics (Norman, 1990). Consider the shape of a
pencil: it conveys a great deal about the size and grasping features of the humans who use it;
pencil designers will succeed to a great extent by giving their new designs the physical
characteristics of pencils that have been used for years. But artifacts are just part of the picture.
Even an object as simple as a pencil must be analyzed as part of a real world activity, an activity
that may introduce concerns such as erasability (elementary school use), sharpness (architecture
firm drawings), name-brands (pre-teen status brokering), cost (office supplies accounting), and so
on.
Usability engineers have adapted ethnographic techniques to analyze the diverse factors
influencing work. Ethnography refers to methods developed within anthropology for gaining
insights into the life experiences of individuals whose everyday reality is vastly different from the
analyst’s (Blomberg, 1990). Ethnographers typically become intensely involved in their study of a
group’s culture and activities, often to the point of becoming members themselves. As used by
HCI and system design communities, ethnography involves observations and interviews of work
groups in their natural setting, as well as collection and analysis of work artifacts (see Team Work
in Air Traffic Control, below). These studies are often carried out in an iterative fashion, where
the interpretation of one set of data raises questions or possibilities that may be pursued more
directly in follow-up observations and interviews.
Figure 3.1: Activity Theory Analysis of a Health Care Center (after Kuutti and Arvonen, 1992). [Figure content: Subject involved in activity: one physician in a health care unit. Object of activity: the complex, multi-dimensional problem of a patient. Activity outcome: patient problem resolved. Tools supporting activity: patient record, medicines, etc. Community sponsoring activity: all personnel of the health care unit. Mediating factors: rules of practice and division of labor.]
Activities of a Health Care Center: Activity Theory (AT) offers a view of individual
work that grounds it in the goals and practices of the community within which the work takes
place. Engeström (1987) describes how an individual (the subject) works on a problem (the object) to achieve a result (the outcome), but that the work on the problem is mediated by the tools available (see Figure 3.1). An individual's work is also mediated by the rules of practice shared within her community; the object of her work is mediated by that same community's division of labor.
Kuutti and Arvonen (1992; see also Engeström 1990; 1991; 1993) applied this framework
to their studies of a health care organization in Espoo, Finland. This organization wished to evolve
from a rather bureaucratic organization with strong separations between its various units (e.g.,
social work, clinics, hospital) to a more service-oriented organization. A key assumption in doing
this was that the different units shared a common general object of work—the “life processes” of
the town’s citizens. This high-level goal was acknowledged to be a complex problem requiring the
integrated services of complementary health care units.
The diagram in Figure 3.1 summarizes an AT analysis developed for one physician in a
clinic. The analysis records the shared object (the health conditions of a patient). At the same time
it shows this physician’s membership in a subcommunity, specifically the personnel at her clinic.
This clinic is both geographically and functionally separated from other health care units, such as
the hospital or the social work office. The tools that the physician uses in her work, the rules that
govern her actions, and her understanding of her goals are mediated by her clinic. As a result, she
has no way of analyzing or finding out about other dimensions of this patient’s problems, for
example the home life problems being followed by a social worker, or emotional problems under
treatment by psychiatric personnel. In AT such obstacles are identified as contradictions which
must be resolved before the activity can be successful.
In this case, a new view of community was developed for the activity. For each patient,
email or telephone was used to instantiate a new community, comprised of individuals as relevant
from different health units. Of course the creation of a more differentiated community required
negotiation concerning the division of labor (e.g. who will contact whom and for what purpose),
and rules of action (e.g., what should be done and in what order). Finally, new tools (composite
records, a “master plan”) were constructed that better supported the redefined activity.
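The components of Figure 3.1 map naturally onto a small record type, which also makes the resolution described above easy to state: keep the object and outcome, but swap in a per-patient community, renegotiated rules and labor, and new shared tools. A sketch, assuming a simple dataclass encoding; this is an illustration, not Engeström's formal method.

```python
# A minimal encoding of the activity-system components from Figure 3.1.
# The dataclass layout is an illustrative assumption, not Engestrom's
# formal method.
from dataclasses import dataclass, field

@dataclass
class ActivitySystem:
    subject: str                 # who carries out the activity
    object: str                  # the problem being worked on
    outcome: str                 # the intended result
    tools: list[str]             # artifacts mediating the work
    community: str               # who sponsors and shares the activity
    rules: list[str] = field(default_factory=list)
    division_of_labor: list[str] = field(default_factory=list)

clinic = ActivitySystem(
    subject="one physician in a health care unit",
    object="the complex, multi-dimensional problem of a patient",
    outcome="patient problem resolved",
    tools=["patient record", "medicines"],
    community="all personnel of the health care unit",
)

# The resolution described in the text: same object and outcome, but a
# per-patient community spanning the relevant units, with renegotiated
# rules and division of labor and new shared tools.
patient_team = ActivitySystem(
    subject=clinic.subject,
    object=clinic.object,
    outcome=clinic.outcome,
    tools=["composite records", "master plan"],
    community="individuals drawn from the different health units",
    rules=["what should be done, and in what order"],
    division_of_labor=["who will contact whom, and for what purpose"],
)
print(patient_team.community)
```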
[Figure 3.2 will appear here: a copy of the flight progress strip figure provided by Hughes et al. in their ethnographic report.]
Team Work in Air Traffic Control: An ethnographic study of British air traffic
control rooms by Hughes, Randall and Shapiro (CSCW’92) highlighted the central role played by
the paper strips used to chart the progress of individual flights. In this study the field workers
immersed themselves in the work of air traffic controllers for several months. During this time
they observed the activity in the control rooms and talked to the staff; they also discussed with the
staff the observations they were collecting and their interpretation of these data.
The general goal of the ethnography was to analyze the social organization of the work in
the air traffic control rooms. In this the researchers showed how the flight progress strips
supported “individuation”, such that each controller knew what their job was in any given
situation, but also how their tasks were interdependent with the tasks of others. The resulting
division of labor was accomplished in a smooth fashion because the controllers had shared
knowledge of what the strips indicated, and were able to take on and hand off tasks as needed, and
to recognize and address problems that arose.
Each strip displays an airplane’s ID and aircraft type; its current level, heading, and
airspeed; its planned flight path, navigation points on route, estimated arrival at these points; and
departure and destination airports (see Figure 3.2). However a strip is more than an information
display. The strips are work sites, used to initiate and perform control tasks. Strips are printed
from the online database, but then annotated as flight events transpire. This creates a public
history; any controller can use a strip to reconstruct a “trajectory” of what the team has done with a
flight. The strips are used in conjunction with the overview offered by radar to spot exceptions or
problems to standard ordering and arrangement of traffic. An individual strip gets “messy” to the
extent it has deviated from the norm, so a set of strips serves as a sort of proxy for the orderliness
of the skies.
The team interacts through the strips. Once a strip is printed and its initial data verified, it is
placed in a holder color-coded for its direction. It may then be marked up by different controllers,
each using a different ink color; problems or deviations are signaled by moving a strip out of
alignment, so that visual scanning detects problem flights. This has important social consequences
for the active controller responsible for a flight. She knows that other team members are aware of
the flight’s situation and can be consulted; who if anyone has noted specific issues with the flight;
if a particularly difficult problem arises it can be passed on to the team leader without a lot of
explanation; and so on.
The ethnographic analysis documented the complex tasks that revolved around the flight
control strips. At the same time it made clear the constraints of these manually-created and
maintained records. However a particularly compelling element of the situation was the
controllers’ trust in the information on the strips. This was due not to the strips’ physical
characteristics, but rather to the social process they enable—the strips are public, and staying on
top of each others’ problem flights, discussing them informally while working or during breaks, is
taken for granted. Any computerized replacement of the strips must support not just management
of flight information, but also the social fabric of the work that engenders confidence in the
information displayed.
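The strips combine flight data with social signals (a direction-coded holder, per-controller ink colors, out-of-alignment placement), and that combination is easy to see as a data model. A sketch, with field and method names as assumptions drawn from the description above.

```python
# A minimal model of a flight progress strip as described above. Field
# and method names are assumptions drawn from the text.
from dataclasses import dataclass, field

@dataclass
class Annotation:
    controller: str
    ink_color: str       # each controller marks in a distinct ink color
    note: str

@dataclass
class FlightStrip:
    flight_id: str
    aircraft_type: str
    level: int
    heading: int
    airspeed: int
    route: list[str]     # navigation points on the planned flight path
    departure: str
    destination: str
    holder_color: str    # color-coded for the flight's direction
    out_of_alignment: bool = False   # moved aside to flag a problem
    annotations: list[Annotation] = field(default_factory=list)

    def flag_problem(self, controller: str, ink: str, note: str) -> None:
        """Annotate and nudge the strip out of line so the team sees it."""
        self.annotations.append(Annotation(controller, ink, note))
        self.out_of_alignment = True

strip = FlightStrip("BA217", "B747", level=350, heading=270, airspeed=480,
                    route=["DVR", "LYD"], departure="LHR", destination="JFK",
                    holder_color="blue")
strip.flag_problem("sector controller", "red", "level request conflicts")
# A "messy" strip: many annotations plus misalignment signal deviation.
print(len(strip.annotations), strip.out_of_alignment)
```

Note that such a model captures the information on a strip but not the social process around it, which is the authors' point: trust in the strips comes from the public, mutually monitored work practice, not from the record itself.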
User Involvement
Who are a system’s target users? Clearly this is a critical question for a user-centered
development process. It first comes up during requirements analysis, when the team is seeking to
identify a target population(s), so as to focus in on the activities that will suggest problems and
concerns. Managers or corporation executives are a good source of high-level needs statements
(e.g., reduce data-processing errors, integrate billing and accounting). Such individuals also have a well-organized view of their subordinates' responsibilities, and of the conditions under which various tasks are completed. Because of the hierarchical nature of most organizations, such individuals are usually easy to identify and comprise a relatively small set. Unfortunately, if a requirements team accepts these requirements too readily, they may miss the more detailed and situation-specific needs of the individuals who will use a new system in their daily work.
Tradeoff 3.3: Management understands the high-level requirements for a system, BUT is
often unaware of workers’ detailed needs and preferences.
Every system development situation includes multiple stakeholders (Checkland, 1981).
Individuals in management positions may have authorized a system’s purchase or development;
workers with a range of job responsibilities will actually use the system; others may benefit only
indirectly from the tasks a system supports. Each set of stakeholders has its own set of
motivations and problems that the new system might address (e.g., productivity, satisfaction, ease
of learning). What’s more, none of them can adequately communicate the perspectives of the
others — despite the best of intentions, many details of a subordinate’s work activities and
concerns are invisible to those in supervisory roles. Clearly what is needed in requirements
analysis is a broad-based approach that incorporates diverse stakeholder groups into the
observation and interviewing activities.
Tradeoff 3.4: Workers can describe their tasks, BUT work is full of exceptions, and the
knowledge for managing exceptions is often tacit and difficult to externalize.
But do users really understand their own work? We made the point above that a narrow
focus on the steps of a task might cause analysts to miss important workplace context factors. An
analogous point holds with respect to interviews or discussions with users. Humans are remarkably good (and reliable) at "rationalizing" their behavior (Ericsson & Simon, 1992). Reports of work practices are no exception — when asked, workers will usually first describe a most-likely version of a task. If an established "procedures manual" or other policy document
exists, the activities described by experienced workers will mirror the official procedures and
policies. However this officially-blessed knowledge is only part of the picture. An experienced
worker will also have considerable “unofficial” knowledge acquired through years of encountering
and dealing with the specific needs of different situations, with exceptions, with particular
individuals who are part of the process, and so on. This expertise is often tacit, in that the
knowledgeable individuals often don’t even realize what they “know” until confronted with their
own behavior or interviewed with situation-specific probes (see Tacit Knowledge in Telephone
Trouble-Shooting, below). From the perspective of requirements analysis, however, tacit
knowledge about work can be critical, as it often contains the “fixes” or “enhancements” that have
developed informally to address the problems or opportunities of day-to-day work.
One effective technique for probing workers’ conscious and unconscious knowledge is
contextual inquiry (Beyer & Holtzblatt, 1994). This analysis method is similar to ethnography, in
that it involves the observation of individuals in the context of their normal work environment.
However it includes the prerogative to interrupt an observed activity at points that seem informative
(e.g., when a problematic situation arises) and to interview the affected individual(s) on the spot
concerning the events that have been observed, to better understand causal factors and options for
continuing the activity. For example, a usability engineer who saw a secretary stop working on a
memo to make a phone call to another secretary, might ask her afterwards to explain what had just
happened between her and her co-worker.
Tacit Knowledge in Telephone Trouble-Shooting: It is common for workers to
see their conversations and interactions with each other as a social aspect of work that is enjoyable
but unrelated to work goals. Sachs (199x) observed this in her case study of telephony workers in
a phone company. The study analyzed the work processes related to detecting, submitting, and
resolving problems on telephone lines; the focus of the study was the Trouble Ticketing System
(TTS), a large database used to record telephone line problems, assign problems (tickets) to
engineers for correction, and keep records of problems detected and resolved.
Sachs argues that TTS takes an organizational view of work, treating work tasks as
modular and well-defined: one worker finds a problem, submits it to the database, TTS assigns it
to the engineer at the relevant site, that engineer picks up the ticket, fixes the problem, and moves
on. The original worker is freed from the problem analysis task once the original ticket is
submitted, and the second worker can move on once the problem has been addressed. TTS replaced a manual system
in which workers contacted each other directly over the phone, often working together to resolve a
problem. TTS was designed to make work more efficient by eliminating unnecessary phone
conversations.
In her interviews with telephony veterans, Sachs discovered that the phone conversations
were far from unnecessary. The initiation, conduct, and consequences of these conversations
reflected a wealth of tacit knowledge on the part of the worker--selecting the right person to call
(one known to have relevant expertise for this apparent problem), the “filling in” on what the first
worker had or had not determined or tried to this point, sharing of hypotheses and testing methods,
iterating together through tests and results, and carrying the results of this informal analysis into
other possibly related problem areas. In fact, TTS had made work less efficient in many cases,
because in order to do a competent job, engineers developed “workarounds” wherein they used
phone conversations as they had in the past, then used TTS to document the process afterwards.
Of interest was that the telephony workers were not at first aware of how much knowledge
of trouble-shooting they were applying to their jobs. They described the tasks as they understood
them from company policy and procedures. Only after considerable data collection and discussion
did they recognize that their jobs included the skills to navigate and draw upon a rich organizational
network of colleagues. In further work Sachs helped the phone company to develop a fix for the
observed workarounds in the form of a new organizational role: a “turf coordinator”, a senior
engineer responsible for identifying and coordinating the temporary network of workers needed to
collaborate on trouble-shooting a problem. As a result of Sachs's analysis, work that had been tacit
and informal was elevated to an explicit business responsibility.
Requirements Analysis with Scenarios
As introduced in Chapter 2, requirements refers to the first phase of SBUE. As we also
have emphasized, requirements cannot be analyzed all at once in waterfall fashion. However some
analysis must happen early on to get the ball rolling. User interaction scenarios play an important
role in these early analysis activities. When analysts are observing workers in the world, they are
collecting observed scenarios, episodes of actual interaction among workers that may or may not
involve technology. The analysis goal is to produce a summary that captures the critical aspects of
the observed activities. A central piece of this summary analysis is a set of requirements scenarios.
The development of requirements scenarios begins with determining who the stakeholders
in a work situation are — what their roles and motivations are, what characteristics they
possess that might influence reactions to new technology. A description of these stakeholders’
work practice is then created, through a combination of workplace observation and generation of
hypothetical situations. These sources of data are summarized and combined to generate the
requirements scenarios. A final step is to call out the most critical features of the scenarios, along
with hypotheses about the positive or negative consequences that these features seem to be having
on the work setting.
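To make this flow of artifacts concrete, here is a minimal sketch in Python of how stakeholders, observed scenarios, and requirements scenarios (with their hypothesized consequences) might be recorded. The chapter prescribes no notation or tooling; every name and field below is our own hypothetical illustration.

    from dataclasses import dataclass, field

    # Hypothetical record types for the analysis artifacts described above.

    @dataclass
    class Stakeholder:
        role: str                   # e.g., "radar controller"
        motivations: list[str]      # why this person cares about the work
        characteristics: list[str]  # traits that may shape reactions to technology

    @dataclass
    class ObservedScenario:
        participants: list[Stakeholder]
        narrative: str              # an actual episode seen in the workplace

    @dataclass
    class RequirementsScenario:
        narrative: str              # summary synthesized from observations
        # Critical features of the scenario, each mapped to its hypothesized
        # positive (+) or negative (-) consequences for the work setting.
        critical_features: dict[str, list[str]] = field(default_factory=dict)

    strips = RequirementsScenario(
        narrative="Controllers annotate paper flight strips and discuss flights.",
        critical_features={"public paper strips": [
            "+ peers stay aware of each other's problem flights",
            "- manual annotation does not scale with traffic",
        ]},
    )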
Introducing the Virtual Science Fair Example Case
The methods of SBUE will be introduced with reference to a single open-ended example
problem, the design of a virtual science fair (VSF). The high-level concept is to use computer-
mediated communication technology (e.g., email, online chat, discussion forums,
videoconferencing) and online archives (e.g., databases, digital libraries) to supplement the
traditional physical science fairs. Such fairs typically involve student creation of science projects
over a period of months. The projects are then exhibited and judged at the science fair event. We
begin with a very loose concept of what a virtual version of such a fair might be — not a
replacement of current fairs, but rather a supplement that expands the boundaries of what might
constitute participation, project construction, project exhibits, judging, and so on.
Stakeholder Analysis
Checkland (1981) offers a mnemonic for guiding development of an early shared vision of
a system’s goals — CATWOE analysis. CATWOE elements include Clients (those people who
will benefit or suffer from the system), Actors (those who interact with the system), a
Transformation (the basic purpose of the system), a Weltanschauung (the world view promoted by
the system), Owners (the individuals commissioning or authorizing the system), and the
Environment (physical constraints on the system). SBUE adapts Checkland’s technique as an aid
in identifying and organizing the concerns of various stakeholders during requirements
analysis. The SBUE adaptation includes the development of thumbnail
scenarios for each element identified. The table below includes just one example for each VSF
element called out in the analysis; for a complex situation multiple thumbnails might be needed. Each
scenario sketch is a usage-oriented elaboration of the element itself; the sketch points to a future
situation in which a possible benefit, interaction, environmental constraint, etc., is realized. Thus
the client thumbnails emphasize hoped-for benefits of the VSF; the actor thumbnails suggest a few
interaction variations anticipated for different stakeholders. The thumbnail scenarios generated in
this analysis are not yet design scenarios; they simply allow the analyst to begin to explore the
space of user groups, motivations, and pragmatic constraints.
The CATWOE thumbnail scenarios begin the iterative process of identifying and analyzing
the background, motivations, and preferences that different user groups will bring to the use of the
target system. This initial picture will be elaborated throughout the development process, through
analysis of both existing and envisioned usage situations.
CATWOE Element   VSF Element            Thumbnail Scenarios
Clients          Students               A high school student learns about road-bed
                 Community members      coatings from a retired civil engineer.
                                        A busy housewife helps a middle school student
                                        organize her bibliographic information.
Actors           Students               A …
                 Teachers
                 Community members
6 Air Traffic Control: A Case Study in Designing for High Availability
The FAA has faced this problem [of complexity] throughout its decade-
old attempt to replace the nation’s increasingly obsolete air traffic
control system. The replacement, called Advanced Automation System,
combines all the challenges of computing in the 1990s. A program that
is more than a million lines in size is distributed across hundreds of
computers and embedded into new and sophisticated hardware, all of
which must respond around the clock to unpredictable real-time
events. Even a small glitch potentially threatens public safety.
— W. Wayt Gibbs [Gibbs 94]
Air traffic control (ATC) is among the most demanding of all software applications. It is
hard real time, meaning that timing deadlines must be met absolutely; it is safety critical,
meaning that human lives may be lost if the system does not perform correctly; and it is
highly distributed, requiring dozens of controllers to
States, whose skies are filled with more commercial, private, and military aircraft
than any other part of the world, ATC is an area of intense public scrutiny. Aside
from the obvious safety issues, building and maintaining a safe, reliable airways
system requires enormous expenditures of public money. ATC is a multibillion-
dollar undertaking.
This chapter is a case study of one part of a once-planned, next-generation
ATC system for the United States. We will see how its architecture—in particu-
lar, a set of carefully chosen views (as in Chapter 2) coupled with the right tactics
(as in Chapter 5)—held the key to achieving its demanding and wide-ranging
requirements. Although this system was never put into operation because of budgetary
constraints, it was implemented and demonstrated, showing that the system could
meet its quality goals.
In the United States, air traffic is controlled by the Federal Aviation Admin-
istration (FAA), a government agency responsible for aviation safety in general.
The FAA is the customer for the system we will describe. As a flight progresses
from its departure airport to its arrival airport, it deals with several ATC entities
that guide it safely through each portion of the airways (and ground facilities) it is
using.
Ground control coordinates the movement of aircraft on the ground at an airport. Towers
control aircraft flying within an airport’s terminal control area, a cylindrical section of
airspace centered at an airport. Finally, en route centers divide the skies over the country
into 22 large sections of responsibility.
Consider an airline flight from Key West, Florida, to Washington, D.C.’s
Dulles Airport. The crew of the flight will communicate with Key West ground
control to taxi from the gate to the end of the runway, Key West tower during
takeoff and climb-out, and then Miami Center (the en route center whose airspace
covers Key West) once it leaves the Key West terminal control area. From there
the flight will be handed off to Jacksonville Center, Atlanta Center, and so forth,
until it enters the airspace controlled by Washington Center. From Washington
Center, it will be handed off to the Dulles tower, which will guide its approach
and landing. When it leaves the runway, the flight will communicate with Dulles
ground control for its taxi to the gate. This is an oversimplified view of ATC in
the United States, but it suffices for our case study. Figure 6.1 shows the hand-off
process, and Figure 6.2 shows the 22 en route centers.
The system we will study is called the Initial Sector Suite System (ISSS),
which was intended to be an upgraded hardware and software system for the 22
en route centers in the United States. It was part of a much larger government
procurement that would have, in stages, installed similar upgraded systems in the
towers and ground control facilities, as well as the transoceanic ATC facilities.
[FIGURE 6.1: Flying from point A to point B in the U.S. air traffic control system. Courtesy of Ian Worpole/Scientific American, 1994.]
The fact that ISSS was to be procured as only one of a set of strongly related
systems had a profound effect on its architecture. In particular, there was great
incentive to adopt common designs and elements where possible because the
ISSS developer also intended to bid on the other systems. After all, these differ-
ent systems (en route center, tower, ground control) share many elements: inter-
faces to radio systems, interfaces to flight plan databases, interfaces to each other,
interpreting radar data, requirements for reliability and performance, and so on.
Thus, the ISSS design was influenced broadly by the requirements for all of the
upgraded systems, not just the ISSS-specific ones. The complete set of upgraded
systems was to be called the Advanced Automation System (AAS).
Ultimately, the AAS program was canceled in favor of a less ambitious, less
costly, more staged upgrade plan. Nevertheless, ISSS is still an illuminating case
study because, when the program was canceled, the design and most of the code
were actually already completed. Furthermore, the architecture of the system (as
well as most other aspects) was studied by an independent audit team and found
to be well suited to its requirements. Finally, the system that was deployed
instead of ISSS borrowed heavily from the ISSS architecture. For these reasons,
we will present the ISSS architecture as an actual solution to an extremely diffi-
cult problem.
[FIGURE 6.2: En route centers in the United States (a map of the 22 centers, identified by codes such as ZLA, ZNY, and ZMA).]
6.1 Relationship to the Architecture Business Cycle
Figure 6.3 shows how the air traffic control system relates to the Architecture
Business Cycle (ABC). The end users are federal air traffic controllers; the cus-
tomer is the Federal Aviation Administration; and the developing organization is
a large corporation that supplies many other important software-intensive sys-
tems to the U.S. government. Factors in the technical environment include the
mandated use of Ada as the language of implementation for large government
software systems and the emergence of distributed computing as a routine way to
build systems and approach fault tolerance.
6.2 Requirements and Qualities
Given that air traffic control is highly visible, with huge amounts of commercial,
government, and civilian interest, and given that it involves the potential loss of
human life if it fails, its two most important quality requirements are as follows:
1. Ultrahigh availability, meaning that the system is absolutely prohibited from being
inoperative for longer than very short periods. The actual availability requirement for ISSS
is targeted at 0.99999, meaning that the system should be unavailable for less than 5 minutes
a year. (However, if the system is able to recover from a failure and resume operating within
10 seconds, that failure is not counted as unavailable time.) A short arithmetic check of this
budget appears after this list.

2. High performance, meaning that the system has to be able to process large numbers of
aircraft—as many as 2,440—without “losing” any of them. Networks have to be able to carry
the communication loads, and the software has to be able to perform its computations quickly
and predictably.

[FIGURE 6.3: The ABC applied to the ATC system. Requirements (qualities): high availability, high performance, openness, subsets, interoperability. Architect’s influences: stakeholders (FAA and air traffic controllers), developing organization (government contractors), technical environment (Ada, distributed computing), and the architect’s experience (fault-tolerant message-based systems). These flow through the architect(s) to the ISSS architecture and system.]
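The downtime budget implied by requirement 1 is simple arithmetic. The sketch below makes it explicit in Python; the helper names and the encoding of the 10-second exemption are our own illustration, not part of any ISSS artifact.

    # Availability arithmetic for the ISSS requirement described above.
    MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

    def downtime_budget_minutes(availability: float) -> float:
        """Minutes of unavailability permitted per year at a given availability."""
        return MINUTES_PER_YEAR * (1.0 - availability)

    def counted_downtime_minutes(outage_durations_s: list[float]) -> float:
        """Total counted downtime in minutes; outages recovered within
        10 seconds are exempt under the counting rule quoted above."""
        return sum(d for d in outage_durations_s if d > 10.0) / 60.0

    print(downtime_budget_minutes(0.99999))       # ~5.26 minutes per year
    print(counted_downtime_minutes([8.0, 45.0]))  # only the 45 s outage counts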
In addition, the following requirements, although not as critical to the safety of
the aircraft and their passengers, are major drivers in the shape of the architecture
and the principles behind that shape:
- Openness, meaning that the system has to be able to incorporate commercially developed
software components, including ATC functions and basic computing services such as graphics
display packages
- The ability to field subsets of the system, to handle the case in which the billion-dollar
project falls victim to reductions in budget (and hence functionality)—as indeed happened
- The ability to make modifications to the functionality and handle upgrades in hardware and
software (new processors, new I/O devices and drivers, new versions of the Ada compiler)
- The ability to operate with and interface to a bewildering set of external systems, both
hardware and software, some decades old, others not yet implemented
Finally, this system is unusual in that it must satisfy a great many stakeholders,
particularly the controllers, who are the system’s end users. While this does not
sound unusual, the difference is that controllers have the ability to reject the system
if it is not to their liking, even if it meets all its operational requirements. The
implications of this situation were profound for the processes of determining
requirements and designing the system, and slowed it down substantially.
The term sector suite refers to a suite of controllers (each sitting at a control
console like the one in Figure 6.4) that together control all of the aircraft in a
particular sector of the en route center’s airspace. Our oversimplified view of ATC is
now enhanced by the fact that aircraft are handed off not only from center to center
but also from sector to sector within each center. Sectors are defined in ways
unique to each center. They may be defined to balance the load among the center’s
controllers; for instance, less-traveled sectors may be larger than densely
flown areas.
The ISSS design calls for flexibility in how many control stations are
assigned to each sector; anywhere from one to four are allowed, and the number
can be changed administratively while the system is in operation. Each sector is
required to have at least two controllers assigned to it. The first is the radar con-
troller, who monitors the radar surveillance data, communicates with the aircraft,
and is responsible for maintaining safe separations. This controller manages the
tactical situation in the sector. The second controller is the data
controller, who retrieves information (such as flight plans) about each aircraft
that is either in the sector or soon will be. The data controller provides the radar
controller with the information needed about the aircraft’s intentions in order to
safely and efficiently guide it through the sector.
ISSS is a large system. Here are some numbers to convey a sense of scale:
- ISSS is designed to support up to 210 consoles per en route center. Each console contains
its own workstation-class processor; the CPU is an IBM RS/6000.
- ISSS requirements call for a center to control from 400 to 2,440 aircraft tracks
simultaneously.
- There may be 16 to 40 radars to support a single facility.
- A center may have from 60 to 90 control positions (each with one or several consoles
devoted to it).
- The code to implement ISSS contains about 1 million lines of Ada.
In summary, the ISSS system must do the following:
- Acquire radar target reports that are stored in an existing ATC system called the Host
Computer System.
- Convert the radar reports for display and broadcast them to all of the consoles. Each
console chooses the reports that it needs to display; any console is capable of displaying
any area.
- Handle conflict alerts (potential aircraft collisions) or other data transmitted by the
host computer.
- Interface to the Host for input and retrieval of flight plans.
- Provide extensive monitoring and control information, such as network management, to allow
site administrators to reconfigure the installation in real time.
- Provide a recording capability for later playback.
- Provide graphical user interface facilities, such as windowing, on the consoles. Special
safety-related provisions are necessary, such as window transparency to keep potentially
crucial data from being obscured.
- Provide reduced backup capability in the event of failure of the Host, the primary
communications network, or the primary radar sensors.

[FIGURE 6.4: Controllers at a sector suite. Courtesy of William J. Hughes Technical Center; FAA public domain photo.]
In the next section, we will explore the architecture that fulfilled these
requirements.
6.3 Architectural Solution
Just as an architecture affects behavior, performance, fault tolerance, and main-
tainability, so it is shaped by stringent requirements in any of these areas. In the case
of ISSS, by far the most important driving force is the extraordinarily high require-
ment for system availability: less than 5 minutes per year of downtime. This
requirement, more than any other, motivated architectural decisions for ISSS.
We begin our depiction of the ISSS architecture by describing the physical
environment hosting the software. Then we give a number of software architec-
ture views (as described in Chapter 2), highlighting the tactics (as described in
Chapter 5) employed by each. During this discussion, we introduce a new view
not previously discussed: fault tolerance. After discussing the relationships
among views, we conclude the architecture picture for ISSS by introducing a
refinement of the “abstract common services” tactic for modifiability and extensi-
bility, namely, code templates.
ISSS PHYSICAL VIEW
ISSS is a distributed system, consisting of a number of elements connected by
local area networks. Figure 6.5 shows a physical view of the ISSS system. It does
not show any of the support systems or their interfaces to the ISSS equipment.
Neither does it show any structure of the software. The major elements of the
physical view and the roles its elements play are as follows:
- The Host Computer System is the heart of the en route automation system. At each en route
center there are two host computers, one primary and the other ready to take over should
there be some problem with the primary one. The Host provides processing of both surveillance
and flight plan data. Surveillance data is displayed on the en route display consoles used by
controllers. Flight data is printed as necessary on flight strip printers, and some flight
data elements are displayed on the data tags associated with the radar surveillance
information.
- Common consoles are the air traffic controller’s workstations. They provide displays of
aircraft position information and associated data tags in a plan view format (the radar
display), displays of flight plan data in the form of electronic flight strips,(1) and a
variety of other information displays. They also allow controllers to modify the flight data
and to control the information being displayed and its format. Common consoles are grouped in
sector suites of one to four consoles, with each sector suite serving the controller team for
one airspace control sector.

[FIGURE 6.5: ISSS physical view. The diagram shows the two Host computers (HCS A and HCS B) with dual LIU-H interface units, the EDARC backup and its ESI processor, central processors, the Test and Training subsystem, Monitor-and-Control consoles, bridges, multiple common-console access rings (up to 210 consoles), and the LCN and BCN networks. Key: UML.]
- The common consoles are connected to the Host computers by means of the Local
Communications Network (LCN), the primary network of ISSS. Each Host is interfaced to the LCN
via dual LCN interface units (each called a LIU-H), which act as a fault-tolerant redundant
pair.
- The LCN is composed of four parallel token ring networks for redundancy and for balancing
overall loading. One network supports the broadcast of surveillance data to all processors;
one is used for point-to-point communications between pairs of processors; one provides a
channel for display data to be sent from the common consoles to recording units for later
playback; and one is a spare. Bridges provide connections between the networks of the access
rings and those of the backbone. The bridges also provide the ability to substitute the spare
ring for a failed ring and to make other alternative routings (a toy sketch of this
spare-substitution idea appears at the end of this section).
- The Enhanced Direct Access Radar Channel (EDARC) provides a backup display of aircraft
position and limited flight data block information to the en route display consoles. EDARC is
used in the event of a loss of the display data provided by the Host. It provides essentially
raw unprocessed radar data and interfaces to an ESI (External System Interface) processor.
- The Backup Communications Network (BCN) is an Ethernet network using TCP/IP protocols. It
is used for other system functions besides the EDARC interface and is also used as a backup
network in some LCN failure conditions.
- Both the LCN and the BCN have associated Monitor-and-Control (M&C) consoles. These give
system maintenance personnel an overview of the state of the system and allow them to control
its operation. M&C consoles are ordinary consoles that contain special software to support
M&C functions and also provide the top-level or global availability management functions.
- The Test and Training subsystem provides the capability to test new hardware and software
and to train users without interfering with the ATC mission.
- The central processors are mainframe-class processors that provide the data recording and
playback functions for the system in an early version of ISSS.

(1) A flight strip is a strip of paper, printed by the system, that contains flight plan data
about an aircraft currently in or about to arrive in a sector. Before ISSS, these flight
strips were annotated by hand in pencil. ISSS was to provide the capability to manipulate
strips onscreen.
Each common console is connected to both the LCN and the BCN. Because
of the large number of common consoles that may be present at a facility (up to
210), multiple LCN access rings are used to support all of them. This, then, is the
physical view for ISSS, highlighting the hardware in which the software resides.
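As a small illustration of the redundancy idea in the LCN described above, here is a hedged sketch in Python. The ring names and the notion of a single "swap" operation are our own simplification of the bridges' behavior, not the actual ISSS bridge logic.

    # Illustrative model of the LCN's four parallel token rings, with a
    # bridge substituting the spare for a failed ring. Purely a sketch.
    rings = {
        "surveillance broadcast": "up",
        "point-to-point": "up",
        "display recording": "up",
        "spare": "idle",
    }

    def substitute_spare(failed_ring: str) -> None:
        """Model a bridge swapping the spare ring in for a failed one."""
        if rings["spare"] != "idle":
            raise RuntimeError("spare ring already in use")
        rings[failed_ring] = "failed"
        rings["spare"] = f"carrying {failed_ring} traffic"

    substitute_spare("point-to-point")
    # The spare now carries point-to-point traffic; the other rings are untouched.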
MODULE DECOMPOSITION VIEW
The module elements of the ISSS operational software are called Computer Soft-
ware Configuration Items (CSCIs), defined in the government software development
standard whose use was mandated by the customer. CSCIs correspond largely to
work assignments; large teams are devoted to designing, building, and testing
them. There is usually some coherent theme associated with each CSCI—some
rationale for grouping all of the small software elements (such as packages, pro-
cesses, etc.) that it contains.
There are five CSCIs in ISSS, as follows:

1. Display Management, responsible for producing and maintaining displays on the common
consoles.
2. Common System Services, responsible for providing utilities generally useful in air
traffic control software—recall that the developer was planning to build other systems under
the larger AAS program.
3. Recording, Analysis, and Playback, responsible for capturing ATC sessions for later
analysis.
4. National Airspace System Modification, entailing a modification of the software that
resides on the Host (outside the scope of this chapter).
5. The IBM AIX operating system, providing the underlying operating system environment for
the operational software.
These CSCIs form units of deliverable documentation and software; they appear in schedule
milestones, and each is responsible for a logically related segment of ISSS functionality.
The module decomposition view reflects several modifiability tactics, as dis-
cussed in Chapter 5. “Semantic coherence” is the overarching tactic for allocat-
ing well-defined and nonoverlapping responsibilities to each CSCI. The Common
System Services Module reflects the tactic of “abstract common services.” The
Recording, Analysis, and Playback CSCI reflects the “record/playback” tactic for
testability. The resources of each CSCI are made available through carefully designed
software interfaces, reflecting “anticipation of expected changes,” “generalizing the
module,” and “maintaining interface stability.”
PROCESS VIEW
The basis of concurrency in ISSS resides in elements called applications. An application
corresponds roughly to a process, in the sense of Dijkstra’s cooperating sequential
processes, and is at the core of the approach the ISSS designers adopted for fault tolerance.
An application is implemented as an Ada “main” unit (a process schedulable by the operating
system) and forms part of a CSCI (which helps us define a mapping between the module
decomposition view and this one). Applications communicate by message passing, which is the
connector in this component-and-connector view.
ISSS is constructed to operate on a plurality of processors. Processors (as described in the
physical view) are logically combined to form a processor group, the purpose of which is to
host separate copies of one or more applications. This concept is critical to fault tolerance
and (therefore) availability. One executing copy is primary, and the others are secondary;
hence, the different application copies are referred to as primary address space (PAS) or
standby address space (SAS). The collection of one primary address space and its attendant
standby address spaces is called an operational unit. A given operational unit resides
entirely within the processors of a single processor group, which can consist of up to four
processors. Those parts of the ISSS that are not constructed in this fault-tolerant manner
(i.e., of coexisting primary and standby versions) simply run independently on different
processors. These are called functional groups, and they are present on each processor as
needed, with each copy a separate instance of the program, maintaining its own state.
In summary, an application may be either an operational unit or a functional
group. The two differ in whether the application’s functionality is backed up by
one or more secondary copies, which keep up with the state and data of the primary
copy and wait to take over in case the primary copy fails. Operational units
have this fault-tolerant design; functional groups do not. An application is implemented
as an operational unit if its availability requirements dictate it; otherwise,
it is implemented as a functional group.
Applications interact in a client-server fashion. The client of the transaction sends the
server a service request message, and the server replies with an acknowledgment. (As in all
client-server schemes, a particular participant—or application in this case—can be the client
in one transaction and the server in another.) Within an operational unit, the PAS sends
state change notifications to each of its SASs, which look for time-outs or other signs that
they should take over and become primary if the PAS or its processor fails. Figure 6.6
summarizes how the primary and secondary address spaces of an application coordinate with
each other to provide backup capability, and shows their relationship to processor groups.
When a functional group receives a message, it need only respond and
update its own state as appropriate. Typically, the PAS of an operational unit
receives and responds to messages on behalf of the entire operational unit. It then
must update both its own state and the state of its SASs, which involves sending
the SASs additional messages.
In the event of a PAS failure, a switchover occurs as follows (a minimal sketch of these
steps follows the list):

1. A SAS is promoted to the new PAS.
2. The new PAS reconstitutes with the clients of that operational unit (a fixed list for each
operational unit) by sending them a message that means, essentially: The operational unit
that was serving you has had a failure. Were you waiting for anything from us at the time? It
then proceeds to service any requests received in response.
3. A new SAS is started to replace the previous PAS.
4. The newly started SAS announces itself to the new PAS, which starts sending it messages as
appropriate to keep it up to date.

If failure is detected within a SAS, a new one is started on some other processor. It
coordinates with its PAS and starts receiving state data.

[FIGURE 6.6: Functional groups (FG), operational units, processor groups, and primary/standby address spaces. The diagram shows a processor group of three processors, each hosting a functional group, with an operational unit's PAS on one processor and its SASs on the others. Key: UML.]
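The four switchover steps read like a small protocol. The sketch below is a minimal Python rendering of them; all class and message names are hypothetical (the real applications were Ada programs coordinated purely by message passing), and re-servicing of outstanding requests is elided.

    # Hypothetical sketch of the PAS/SAS switchover steps listed above.

    class AddressSpace:
        """Stub for one copy of an application (primary or standby)."""
        def __init__(self, name: str):
            self.name = name

        def send_state_updates_to(self, sas: "AddressSpace") -> None:
            print(f"{self.name}: streaming state updates to {sas.name}")

    class Client:
        """Stub client of the operational unit."""
        def __init__(self, name: str):
            self.name = name

        def send(self, message: str) -> None:
            print(f"to {self.name}: {message}")

    class OperationalUnit:
        def __init__(self, pas, sass, clients):
            self.pas = pas              # primary address space
            self.sass = list(sass)      # standby address spaces
            self.clients = clients      # fixed client list for this unit

        def on_pas_failure(self) -> None:
            # 1. Promote a SAS to be the new PAS.
            self.pas = self.sass.pop(0)
            # 2. Reconstitute: ask each client whether it was awaiting a
            #    reply at the time of the failure.
            for client in self.clients:
                client.send("we had a failure; were you waiting on anything?")
            # 3. Start a new SAS to replace the promoted one.
            new_sas = AddressSpace(f"{self.pas.name}-standby")
            # 4. The new SAS announces itself; the PAS keeps it up to date.
            self.sass.append(new_sas)
            self.pas.send_state_updates_to(new_sas)

    unit = OperationalUnit(AddressSpace("pas-1"),
                           [AddressSpace("sas-1")],
                           [Client("display-mgmt")])
    unit.on_pas_failure()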
To add a new operational unit, the following step-by-step process is employed (a sketch of
the acyclicity check appears after this list):

- Identify the necessary input data and where it resides.
- Identify which operational units require output data from the new operational unit.
- Fit this operational unit’s communication patterns into the systemwide communication graph
in such a way that the graph remains acyclic, so that deadlocks will not occur.
- Design the messages to achieve the required data flows.
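The acyclicity constraint in the third step can be checked mechanically. Here is a hedged sketch in Python using depth-first search; in ISSS this was a design-time activity, and the graph encoding and node names below are purely illustrative.

    def has_cycle(graph: dict[str, list[str]]) -> bool:
        """Depth-first search for a cycle in a directed graph given as an
        adjacency mapping from node name to successor names."""
        WHITE, GRAY, BLACK = 0, 1, 2
        color: dict[str, int] = {}

        def visit(node: str) -> bool:
            color[node] = GRAY
            for succ in graph.get(node, []):
                state = color.get(succ, WHITE)
                if state == GRAY:          # back edge: a cycle exists
                    return True
                if state == WHITE and visit(succ):
                    return True
            color[node] = BLACK
            return False

        return any(color.get(n, WHITE) == WHITE and visit(n) for n in graph)

    # Tentatively add the new operational unit's message flows, then check.
    comms = {"surveillance": ["display"], "display": ["recording"]}
    comms["new_unit"] = ["display"]       # new unit sends requests to display
    assert not has_cycle(comms)           # still acyclic: no deadlock risk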
…