Pedagogy - Engineering
Argue for the points you think make sense and against those you disagree with.
What were the clearest takeaways from the articles?
What were the most confusing parts of the articles?
What new concepts or theories did you encounter?
What was the most unexpected thing you learned from the week’s readings?
Read the 3 articles below and answer the questions above.
Note: please make sure to argue your points.
Inductive Teaching and Learning Methods:
Definitions, Comparisons, and Research Bases

MICHAEL J. PRINCE
Department of Chemical Engineering
Bucknell University

RICHARD M. FELDER
Department of Chemical Engineering
North Carolina State University

Journal of Engineering Education, April 2006

To state a theorem and then to show examples of it is literally to
teach backwards.
(E. Kim Nebeuts)
ABSTRACT
Traditional engineering instruction is deductive, beginning with
theories and progressing to the applications of those theories.
Alternative teaching approaches are more inductive. Topics are
introduced by presenting specific observations, case studies or
problems, and theories are taught or the students are helped to
discover them only after the need to know them has been estab-
lished. This study reviews several of the most commonly used
inductive teaching methods, including inquiry learning, problem-
based learning, project-based learning, case-based teaching, dis-
covery learning, and just-in-time teaching. The paper defines
each method, highlights commonalities and specific differences,
and reviews research on the effectiveness of the methods. While
the strength of the evidence varies from one method to another,
inductive methods are consistently found to be at least equal to,
and in general more effective than, traditional deductive methods
for achieving a broad range of learning outcomes.
Keywords: inductive, teaching, learning
I. INTRODUCTION
A. Two Approaches to Education
Engineering and science are traditionally taught deductively.
The instructor introduces a topic by lecturing on general princi-
ples, then uses the principles to derive mathematical models,
shows illustrative applications of the models, gives students prac-
tice in similar derivations and applications in homework, and
finally tests their ability to do the same sorts of things on exams.
Little or no attention is initially paid to the question of why any of
that is being done. What real-world phenomena can the models
explain? What practical problems can they be used to solve, and
why should the students care about any of it? The only motivation
that students get—if any—is that the material will be important
later in the curriculum or in their careers.
A well-established precept of educational psychology is that
people are most strongly motivated to learn things they clearly
perceive a need to know [1]. Simply telling students that they will
need certain knowledge and skills some day is not a particularly
effective motivator. A preferable alternative is inductive teaching
and learning. Instead of beginning with general principles and
eventually getting to applications, the instruction begins with
specifics—a set of observations or experimental data to interpret,
a case study to analyze, or a complex real-world problem to solve.
As the students attempt to analyze the data or scenario and solve
the problem, they generate a need for facts, rules, procedures, and
guiding principles, at which point they are either presented with
the needed information or helped to discover it for themselves.
Inductive teaching and learning is an umbrella term that en-
compasses a range of instructional methods, including inquiry
learning, problem-based learning, project-based learning, case-
based teaching, discovery learning, and just-in-time teaching.
These methods have many features in common, besides the fact
that they all qualify as inductive. They are all learner-centered (also
known as student-centered), meaning that they impose more re-
sponsibility on students for their own learning than the traditional
lecture-based deductive approach. They are all supported by re-
search findings that students learn by fitting new information into
existing cognitive structures and are unlikely to learn if the infor-
mation has few apparent connections to what they already know
and believe. They can all be characterized as constructivist meth-
ods, building on the widely accepted principle that students con-
struct their own versions of reality rather than simply absorbing
versions presented by their teachers. The methods almost always
involve students discussing questions and solving problems in
class (active learning), with much of the work in and out of class
being done by students working in groups (collaborative or cooper-
ative learning). The defining characteristics of the methods and
features that most of them share are summarized in Table 1.
There are also differences among the inductive methods. The
end product of a project-based assignment is typically a
formal written and/or oral report, while the end product of a
guided inquiry may simply be the answer to an interesting ques-
tion, such as why an egg takes longer to boil at a ski resort than at
the beach and how frost can form on a night when the tempera-
ture does not drop below freezing. Case-based instruction and
problem-based learning involve extensive analyses of real or hypo-
thetical scenarios while just-in-time teaching may simply call on
students to answer questions about readings prior to hearing
about the content of the readings in lectures. However, the
similarities trump the differences, and when variations in the im-
plementation of the methods are taken into account, many of the
differences disappear altogether.
Although we just claimed that inductive methods are essentially
variations on a theme, they do not appear that way in the litera-
ture. Each method has its own history, research base, guidebooks,
proponents, and detractors, and a great deal of confusion exists re-
garding what the methods are and how they are interrelated. Our
objective in this paper is to summarize the definitions, founda-
tions, similarities, and differences among inductive learning
methods and to review the existing research evidence regarding
their effectiveness.
Before we begin our review, we will attempt to clarify two
points of confusion that commonly arise in discussions of induc-
tive methods.
Is inductive learning really inductive? In practice, neither teach-
ing nor learning is ever purely inductive or deductive. Like the sci-
entific method, learning invariably involves movement in both di-
rections, with the student using new observations to infer rules
and theories (induction) and then testing the theories by using
them to deduce consequences and applications that can be verified
experimentally (deduction). Good teaching helps students learn
to do both. When we speak of inductive methods, we therefore do
not mean total avoidance of lecturing and complete reliance on
self-discovery, but simply teaching in which induction precedes
deduction. Except in the most extreme forms of discovery learn-
ing (which we do not advocate for undergraduate instruction), the
instructor still has important roles to play in facilitating learning—
guiding, encouraging, clarifying, mediating, and sometimes even
lecturing. We agree with Bransford: “There are times, usually
after people have first grappled with issues on their own, that
‘teaching by telling’ can work extremely well” [2, p. 11].
Are we talking about inductive learning or inductive teaching?
Is there a difference? A common point of semantic confusion associ-
ated with inductive methods has to do with the distinction between
teaching and learning. Thus, for example, one hears about prob-
lem-based learning but just-in-time teaching, and both inquiry
learning and inquiry-based teaching are commonly encountered in
the literature. There is, of course, a difference between learning
(what students do) and teaching (what teachers do), but in this
paper we will never examine one without explicitly or implicitly
considering the other. The reader should therefore understand that
when we refer to “inductive learning” or to an inductive instruction-
al method with either teaching or learning in its name, we are talk-
ing about both strategies that an instructor might use (teaching) and
experiences the students might subsequently undergo (learning).
II. FOUNDATIONS OF
INDUCTIVE TEACHING AND LEARNING
A. Constructivism
According to the model that has dominated higher education
for centuries (positivism), absolute knowledge (“objective reality”)
exists independently of human perception. The teacher’s job is to
transmit this knowledge to the students—lecturing being the nat-
ural method for doing so—and the students’ job is to absorb it. An
alternative model, constructivism, holds that whether or not there is
an objective reality (different constructivist theories take opposing
views on that issue), individuals actively construct and reconstruct
their own reality in an effort to make sense of their experience.
New information is filtered through mental structures (schemata)
that incorporate the student’s prior knowledge, beliefs, preconcep-
tions and misconceptions, prejudices, and fears. If the new infor-
mation is consistent with those structures it may be integrated into
Table 1. Features of common inductive instructional methods.
them, but if it is contradictory, it may be memorized for the exam
but is unlikely to be truly incorporated into the individual’s belief
system—which is to say, it will not be learned.
Constructivism has its roots in the eighteenth-century philoso-
phies of Immanuel Kant and Giambattista Vico, although some
have traced it as far back as the fourth to sixth centuries B.C. in the
works of Lao Tzu, Buddha, and Heraclitus. The constructivist view
of learning is reflected in the developmental theories of
Piaget [3], Dewey [4], Bruner [5], and Vygotsky [6], among others.
In cognitive constructivism, which originated primarily in the work of
Piaget, an individual’s reactions to experiences lead to (or fail to lead
to) learning. In social constructivism, whose principal proponent is
Vygotsky, language and interactions with others—family, peers,
teachers—play a primary role in the construction of meaning from
experience. Meaning is not simply constructed; it is co-constructed.
Proponents of constructivism (e.g., Biggs [7]) offer variations
of the following principles for effective instruction:
● Instruction should begin with content and experiences likely
to be familiar to the students, so they can make connections
to their existing knowledge structures. New material should
be presented in the context of its intended real-world appli-
cations and its relationship to other areas of knowledge,
rather than being taught abstractly and out of context.
● Material should not be presented in a manner that requires
students to alter their cognitive models abruptly and drasti-
cally. In Vygotsky’s terminology, the students should not be
forced outside their “zone of proximal development,” the re-
gion between what they are capable of doing independently
and what they have the potential to do under adult guidance
or in collaboration with more capable peers [6]. They should
also be directed to continually revisit critical concepts, im-
proving their cognitive models with each visit. As Bruner [5]
puts it, instruction should be “spirally organized.”
● Instruction should require students to fill in gaps and ex-
trapolate material presented by the instructor. The goal
should be to wean the students away from dependence on
instructors as primary sources of required information,
helping them to become self-learners.
● Instruction should involve students working together in
small groups. This attribute—which is considered desirable
in all forms of constructivism and essential in social
constructivism—supports the use of collaborative and
cooperative learning.
The traditional lecture-based teaching approach is incompati-
ble with all of these principles. If the constructivist model of learn-
ing is accepted—and compelling research evidence supports it—
then to be effective instruction must set up experiences that
induce students to construct knowledge for themselves, when
necessary adjusting or rejecting their prior beliefs and misconcep-
tions in light of the evidence provided by the experiences. This
description might serve as a definition of inductive learning.
B. Cognition Research
Bransford et al. [2] offer a comprehensive survey of neurologi-
cal and psychological research that provides strong support for
constructivism and inductive methods. Here are some of their
findings:
● “All new learning involves transfer of information based on
previous learning” [2, p. 53].
Traditional instruction in engineering and science frequently
treats new courses and new topics within courses as self-contained
bodies of knowledge, presenting theories and formulas with mini-
mal grounding in students’ prior knowledge and little or no
grounding in their experience. Inductive instruction, on the other
hand, presents new information in the context of situations, is-
sues, and problems to which students can relate, so there is a much
greater chance that the information can be linked to their existing
cognitive structures.
Since learning is strongly influenced by prior knowledge, if new
information is fully consistent with prior knowledge it may be
learned with relative ease, but if it involves a contradiction several
things may happen. If the contradiction is perceived and understood,
it may initially cause confusion but the resolution of the contradic-
tion can lead to elimination of misconceptions and greater under-
standing. However, if learners fail to understand the contradiction or
if they can construct coherent (to them) representations of the new
material based on existing misconceptions, deeper misunderstand-
ing may follow [2, p. 70]. Traditional teaching generally does little to
force students to identify and challenge their misconceptions, lead-
ing to the latter situation. The most effective implementations of in-
ductive learning involve diagnostic teaching, with lessons being de-
signed to “discover what students think in relation to the problems
on hand, discussing their misconceptions sensitively, and giving
them situations to go on thinking about which will enable them to
readjust their ideas” [2, p. 134]. The proper choice of focus questions
and problems in inquiry-based, problem-based, and discovery learn-
ing methods can serve this function.
● Motivation to learn affects the amount of time students are will-
ing to devote to learning. Learners are more motivated when they
can see the usefulness of what they are learning and when they can
use it to do something that has an impact on others [2, p. 61].
This finding supports techniques that use authentic (real-
world, professionally relevant) situations and problems to provide
contexts for learning the content and skills a course is intended to
teach. Inductive methods such as problem-based learning and
case-based teaching do this.
● The likelihood that knowledge and skills acquired in one course
will transfer to real work settings is a function of the similarity
of the two environments [2, p. 73].
School often emphasizes abstract reasoning while work focuses
almost exclusively on contextualized reasoning. Organizing learning
around authentic problems, projects, and cases helps to overcome
these disparities and improves the likelihood of subsequent transfer,
in addition to increasing motivation to learn as noted in the previous
item. Moreover, traditional schools differ from most work environ-
ments in that school heavily emphasizes individual work while most
work involves extensive collaboration. Assigning teams to perform
most required tasks (as most inductive methods do) thus further pro-
motes transfer, provided that the students are helped to develop
teamwork skills and the work is organized in a way that assures indi-
vidual accountability for all of the learning that takes place [8–12].
● Helping students develop metacognition—knowledge of how
they learn—improves the likelihood of their transferring infor-
mation learned in one context to another one [2, p. 67].
Methods that train students in systematic problem-solving
methods (generating and evaluating alternative solutions, periodi-
cally assessing progress toward the solution, extracting general
principles from specific solutions, etc.) and call on them to make
sense of new information, to raise questions when they cannot,
and to regularly assess their own knowledge and skill levels pro-
mote the development of metacognitive skills. Most variants of
problem-based learning include such steps.
C. Intellectual Development and Approaches to Learning
Most college students undergo a developmental progression from
a belief in the certainty of knowledge and the omniscience of author-
ities to an acknowledgment of the uncertainty and contextual nature
of knowledge, acceptance of personal responsibility for determining
truth, inclination and ability to gather supporting evidence for judg-
ments, and openness to change if new evidence is forthcoming [13,
14]. At the highest developmental level normally seen in college stu-
dents (termed “contextual relativism” by Perry [13]), individuals dis-
play thinking patterns resembling those of expert scientists and engi-
neers. A goal of science and engineering instruction should be to
advance students to that level by the time they graduate.
In their courses, students may be inclined to approach learning
in one of three ways [15]. Some take a surface approach, relying on
rote memorization and mechanical formula substitution, making
little or no effort to understand the material being taught. Others
may adopt a deep approach, probing and questioning and exploring
the limits of applicability of new material. Still others use a strate-
gic approach, doing whatever is necessary to get the highest grade
they can, taking a surface approach if that suffices and a deep ap-
proach when necessary. Another goal of instruction should be to
induce students to adopt a deep approach to subjects that are im-
portant for their professional or personal development.
Felder and Brent [16] observe that the characteristics of high
levels of intellectual development and of a deep approach to learn-
ing are essentially the same. Both contextual relativism and a deep
approach involve taking responsibility for one’s own learning,
questioning authorities rather than accepting their statements at
face value, and attempting to understand new knowledge in the
context of prior knowledge and experience. It is reasonable to as-
sume that instructional conditions that induce students to adopt a
deep approach should also promote intellectual growth.
Several conditions of instruction have been shown to promote
a deep approach, including interest and background knowledge of
the subject, use of teaching methods that foster active and long-
term engagement with learning tasks, and assessment that em-
phasizes conceptual understanding as opposed to recall or the ap-
plication of routine procedural knowledge [17]. Well-implemented
inductive teaching methods serve all of these functions.
Authentic problems and case studies can motivate students
by helping to make the subject matter relevant, and they also tend
to keep the students interested and actively engaged in their learn-
ing tasks. Having to analyze complex situations also promotes the
students’ adoption of a deep approach to learning, as rote memo-
rization and simple algorithmic substitution are clearly inadequate
strategies for dealing with such situations. Moreover, open-ended
problems that do not have unique well-defined solutions pose se-
rious challenges to students’ low-level beliefs in the certainty of
knowledge and the role of instructors as providers of knowledge.
Such challenges serve as precursors to intellectual growth [14].
D. Learning Cycle-Based Instruction
Several well-known instructional models involve learning cycles,
wherein students work through sequences of activities that involve
complementary thinking and problem-solving approaches. In
most of these cycles, the different activities are designed to appeal
to different learning style preferences (concrete and abstract, ac-
tive and reflective, etc.) [18]. When instructors teach around the
cycle in this manner, all students are taught partly in a manner they
prefer, which leads to an increased comfort level and willingness
to learn, and partly in a less preferred manner, which provides
practice and feedback in ways of thinking they might be inclined
to avoid but which they will have to use to be fully effective profes-
sionals. Teaching around the best known of such cycles—that
associated with Kolb’s experiential learning model [19]—involves:
(1) introducing a problem and providing motivation for solving it
by relating it to students’ interests and experience (the focal ques-
tion is why?); (2) presenting pertinent facts, experimental observa-
tions, principles and theories, problem-solving methods, etc., and
opportunities for the students to reflect on them (what?); (3) pro-
viding guided hands-on practice in the methods and types of
thinking the lessons are intended to teach (how?); and (4) allow-
ing and encouraging exploration of consequences and applications
of the newly learned material (what if?).
A learning cycle developed at the Vanderbilt University Learn-
ing Technology Center is the STAR Legacy module (Software
Technology for Action and Reflection) [20], which consists of the
following steps:
1. Students are presented with a challenge (problem, scenario,
case, news event, or common misconception presenting the
targeted content in a realistic context) that establishes a
need to know the content and master the skills included in
the learning objectives for the module.
2. The students then formulate their initial thoughts, reflecting
on what they already know and think about the context of
the challenge and generating ideas about how they might
address the challenge.
3. Perspectives and resources are provided next. Perspectives are
statements by experts that offer insights into various dimen-
sions of the challenge without providing a direct solution to
it, and resources may include lectures, reading materials,
videos, simulations, homework problems, links to websites,
and other materials relevant to the challenge.
4. Assessment activities are then carried out in which the stu-
dents apply what they know, and identify what they still
need to learn to address the challenge. The activities may
include engaging in self-assessments and discussions, com-
pleting homework assignments, writing essays or reports,
and taking on-line quizzes or exams. Multiple iterations be-
tween Steps 3 and 4 would normally be required to fully
meet the challenge.
5. In the final wrap-up, an expert may present a model solu-
tion to the challenge, or the students may present a report
and/or complete an examination showing that they have
met the challenge and demonstrated their mastery of the
knowledge and skills specified in the learning objectives.
The STAR Legacy module is a clear exemplar of an inductive ap-
proach to teaching and learning. Depending on the nature and
scope of the challenge, instruction based on such a module would
qualify as inquiry learning, project-based learning, or problem-
based learning. Similarly, learning cycles based on learning styles
that begin with the presentation of a realistic problem or a challenge
of some sort are inductive. Instruction based on learning cycles is
consistent with accepted principles of cognitive science [2] and its
effectiveness has been repeatedly demonstrated empirically [21].
In summary, inductive approaches to teaching and learning
have much in their favor. They are supported by the best research
on learning currently available, compatible with the currently
most widely accepted theories of learning, and promote problem-
solving skills and attitudes to learning that most instructors would
say they desire for their students. Following a brief section on as-
sessment, we will examine the individual inductive methods—
what they are, what they have in common and how they differ,
and what is known about how well they succeed in achieving de-
sired educational outcomes.
III. ASSESSMENT AND EVALUATION OF
INDUCTIVE METHODS
Rigorous comparisons of inductive methods with traditional
expository methods are not easy to design for several reasons [22].
● There are many varieties of inductive approaches, each of
which can be implemented in many ways—with greater or
lesser instructor involvement, with or without formal facili-
tation of teamwork, with most of the work being done in or
out of class, and so on. Two articles may claim to be studies
of, say, problem-based learning, but they could involve dra-
matically different forms of instruction and may well pro-
duce different learning outcomes.
● Instructors may have varying degrees of experience and skill
with whichever method they adopt. Two different instruc-
tors using the same method in the same class could get dif-
ferent results.
● Student populations also vary considerably in distributions of
gender and ethnicity, age, experience, motivation to learn,
learning styles, and levels of intellectual development (among
others) [21]. The same instructor could use the same method
in two different classes and get different outcomes.
● The conclusions drawn from a study may depend strongly
on the learning outcome investigated—acquisition of factual
knowledge, development of a problem-solving or interper-
sonal skill, retention in a curriculum, self-confidence level,
attitude, or any combination of these. An inductive method
may be superior with respect to one outcome and inferior
with respect to another. (We will shortly see an example of
this phenomenon in the case of problem-based learning,
which has frequently been found to lead to superior high-
level skills and attitudes but inferior short-term acquisition
of factual knowledge.) Moreover, reliable and valid assess-
ments of high-level skills such as critical or creative think-
ing or attributes such as lifelong learning skills are difficult
to obtain, and two studies that use different assessment
methods could arrive at different conclusions.
● Finally, as Prince [22] points out, implementations of in-
ductive approaches such as problem-based learning nor-
mally involve active and collaborative learning methods,
both of which are known to have positive effects on many
learning outcomes. If an inductive method is found to have
a positive effect, sorting out how much of it can be attributed
to the method itself and how much to other methods embedded
in it can be a formidable challenge.
Considering these difficulties, it is not surprising that pub-
lished studies report both positive and negative outcomes for in-
ductive learning relative to conventional instruction. Given the
difficulty (if not impossibility) of carrying out a clean and conclu-
sive comparative study, the best we can do is to look at results
from a number of studies with different instructors, implementa-
tions, learning outcomes, and student populations, to see if any
robust generalizations can be inferred. The sections that follow
summarize results of such meta-analyses.
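As a rough illustration of what the meta-analyses summarized in the following sections actually compute, here is a minimal fixed-effect pooling sketch. The study effect sizes and variances below are invented for illustration only; real reviews typically use random-effects models and far more studies.

```python
# Minimal fixed-effect meta-analysis sketch: pool standardized mean
# differences (Cohen's d) from several studies by inverse-variance
# weighting. The study data are made up for illustration.

studies = [
    # (effect size d, variance of d)
    (0.30, 0.02),
    (0.55, 0.04),
    (0.47, 0.01),
]

# Each study is weighted by the reciprocal of its variance, so more
# precise (usually larger) studies count more toward the pooled estimate.
weights = [1.0 / var for _, var in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5

print(f"pooled effect size: {pooled:.3f} (SE {pooled_se:.3f})")
```

The pooled estimate lands between the individual study values and is pulled toward the most precise study, which is exactly the behavior that makes a meta-analysis more robust than any single comparative study.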
IV. INQUIRY LEARNING
A. Definition and Applications
Inquiry learning begins when students are presented with
questions to be answered, problems to be solved, or a set of obser-
vations to be explained [23]. If the method is implemented effec-
tively, the students should learn to “formulate good questions,
identify and collect appropriate evidence, present results systemat-
ically, analyze and interpret results, formulate conclusions, and
evaluate the worth and importance of those conclusions” [24].
The same statements could also be made about problem-based
learning, project-based learning, discovery learning, certain forms
of case-based instruction, and student research, so that inquiry
learning may be considered an umbrella category that encompasses
several other inductive teaching methods. Lee makes this point,
observing that inquiry is also consistent with interactive lecture,
discussion, simulation, service learning, and independent study,
and in fact “probably the only strategy that is not consistent with
inquiry-guided learning is the exclusive use of traditional lectur-
ing” [24, p. 10]. In this paper we will use the term inquiry learning
to refer to instruction that uses questions and problems to provide
contexts for learning and does not fall into another more restric-
tive inductive learning category.
Besides overlapping with other inductive methods, inquiry
learning encompasses a variety of techniques that differ from one
another in significant ways. Staver and Bay [25] differentiate be-
tween structured inquiry (students are given a problem and an
outline for how to solve it), guided inquiry (students must also fig-
ure out the solution method) and open inquiry (students must for-
mulate the problem for themselves). …
Active learning increases student performance in
science, engineering, and mathematics
Scott Freeman, Sarah L. Eddy, Miles McDonough, Michelle K. Smith, Nnadozie Okoroafor, Hannah Jordt,
and Mary Pat Wenderoth
Department of Biology, University of Washington, Seattle, WA 98195; School of Biology and Ecology, University of Maine, Orono, ME 04469
Edited by Bruce Alberts, University of California, San Francisco, CA, and approved April 15, 2014 (received for review October 8, 2013)
To test the hypothesis that lecturing maximizes learning and
course performance, we metaanalyzed 225 studies that reported
data on examination scores or failure rates when comparing student
performance in undergraduate science, technology, engineer-
ing, and mathematics (STEM) courses under traditional lecturing
versus active learning. The effect sizes indicate that on average,
student performance on examinations and concept inventories in-
creased by 0.47 SDs under active learning (n = 158 studies), and
that the odds ratio for failing was 1.95 under traditional lecturing
(n = 67 studies). These results indicate that average examination
scores improved by about 6% in active learning sections, and that
students in classes with traditional lecturing were 1.5 times more
likely to fail than were students in classes with active learning.
Heterogeneity analyses indicated that both results hold across
the STEM disciplines, that active learning increases scores on con-
cept inventories more than on course examinations, and that ac-
tive learning appears effective across all class sizes—although the
greatest effects are in small (n ≤ 50) classes. Trim and fill analyses
and fail-safe n calculations suggest that the results are not due to
publication bias. The results also appear robust to variation in the
methodological rigor of the included studies, based on the quality
of controls over student quality and instructor identity. This is the
largest and most comprehensive metaanalysis of undergraduate
STEM education published to date. The results raise questions about
the continued use of traditional lecturing as a control in research
studies, and support active learning as the preferred, empirically
validated teaching practice in regular classrooms.
constructivism | undergraduate education | evidence-based teaching |
scientific teaching
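The abstract's step from an odds ratio of 1.95 for failing under lecturing to "1.5 times more likely to fail" can be checked with a short calculation. This is a minimal sketch; the baseline failure rate under active learning used below is an assumed, illustrative number, not one taken from the abstract.

```python
# Convert an odds ratio (OR) for failing under traditional lecturing
# into a risk ratio, given a baseline failure rate under active
# learning. The baseline here is an assumption for illustration.

odds_ratio = 1.95   # odds of failing, lecturing vs. active learning
p_active = 0.22     # assumed failure rate under active learning

odds_active = p_active / (1.0 - p_active)
odds_lecture = odds_ratio * odds_active
p_lecture = odds_lecture / (1.0 + odds_lecture)

risk_ratio = p_lecture / p_active
print(f"failure rate under lecturing: {p_lecture:.1%}")
print(f"risk ratio (lecturing / active): {risk_ratio:.2f}")
```

With this assumed baseline the risk ratio comes out near 1.6; an odds ratio always overstates the corresponding risk ratio somewhat when the outcome is common, and the paper's "1.5 times" figure reflects its actual pooled failure rates.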
Lecturing has been the predominant mode of instruction since universities were founded in Western Europe over 900 y ago
(1). Although theories of learning that emphasize the need for
students to construct their own understanding have challenged
the theoretical underpinnings of the traditional, instructor-
focused, “teaching by telling” approach (2, 3), to date there has
been no quantitative analysis of how constructivist versus expo-
sition-centered methods impact student performance in un-
dergraduate courses across the science, technology, engineering,
and mathematics (STEM) disciplines. In the STEM classroom,
should we ask or should we tell?
Addressing this question is essential if scientists are committed
to teaching based on evidence rather than tradition (4). The
answer could also be part of a solution to the “pipeline problem”
that some countries are experiencing in STEM education: For
example, the observation that less than 40% of US students who
enter university with an interest in STEM, and just 20% of
STEM-interested underrepresented minority students, finish with
a STEM degree (5).
To test the efficacy of constructivist versus exposition-centered
course designs, we focused on the design of class sessions—as
opposed to laboratories, homework assignments, or other exer-
cises. More specifically, we compared the results of experiments
that documented student performance in courses with at least
some active learning versus traditional lecturing, by metaanalyzing
225 studies in the published and unpublished literature. The active
learning interventions varied widely in intensity and implementa-
tion, and included approaches as diverse as occasional group
problem-solving, worksheets or tutorials completed during class,
use of personal response systems with or without peer instruction,
and studio or workshop course designs. We followed guidelines for
best practice in quantitative reviews (SI Materials and Methods),
and evaluated student performance using two outcome variables:
(i) scores on identical or formally equivalent examinations, concept
inventories, or other assessments; or (ii) failure rates, usually
measured as the percentage of students receiving a D or F grade
or withdrawing from the course in question (DFW rate).
The analysis, then, focused on two related questions. Does ac-
tive learning boost examination scores? Does it lower failure rates?
Results
The overall mean effect size for performance on identical or
equivalent examinations, concept inventories, and other assess-
ments was a weighted standardized mean difference of 0.47 (Z =
9.781, P << 0.001)—meaning that on average, student perfor-
mance increased by just under half a SD with active learning
compared with lecturing. The overall mean effect size for failure
rate was an odds ratio of 1.95 (Z = 10.4, P << 0.001). This odds
ratio is equivalent to a risk ratio of 1.5, meaning that on average,
students in traditional lecture courses are 1.5 times more likely to
fail than students in courses with active learning. Average failure
rates were 21.8% under active learning but 33.8% under traditional
lecturing—a difference that represents a 55% increase
(Fig. 1 and Fig. S1).
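These summary numbers can be reproduced from the raw pooled failure rates. Below is a quick sketch; note that the paper's odds ratio of 1.95 is a study-weighted meta-analytic estimate, so the odds ratio computed naively from the pooled rates comes out somewhat lower.

```python
# Raw pooled failure rates reported in the Results section
fail_lecture = 0.338   # traditional lecturing
fail_active = 0.218    # active learning

# Risk ratio: how much more likely failure is under lecturing
risk_ratio = fail_lecture / fail_active

# Odds ratio computed from the raw pooled rates (the paper's 1.95
# is a study-weighted meta-analytic estimate, so it differs)
odds_ratio = (fail_lecture / (1 - fail_lecture)) / (fail_active / (1 - fail_active))

# Relative increase in failure rate under lecturing
relative_increase = (fail_lecture - fail_active) / fail_active

print(round(risk_ratio, 2), round(odds_ratio, 2), round(relative_increase, 2))
# 1.55 1.83 0.55
```

The 1.55 risk ratio matches the statement that lectured students were about 1.5 times more likely to fail, and 0.55 is the 55% relative increase from 21.8% to 33.8%.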
Significance
The President’s Council of Advisors on Science and Technology
has called for a 33% increase in the number of science, tech-
nology, engineering, and mathematics (STEM) bachelor’s degrees
completed per year and recommended adoption of empirically
validated teaching practices as critical to achieving that goal. The
studies analyzed here document that active learning leads to
increases in examination performance that would raise average
grades by a half a letter, and that failure rates under traditional
lecturing increase by 55% over the rates observed under active
learning. The analysis supports theory claiming that calls to in-
crease the number of students receiving STEM degrees could be
answered, at least in part, by abandoning traditional lecturing in
favor of active learning.
Author contributions: S.F. and M.P.W. designed research; S.F., M.M., M.K.S., N.O., H.J.,
and M.P.W. performed research; S.F. and S.L.E. analyzed data; and S.F., S.L.E., M.M.,
M.K.S., N.O., H.J., and M.P.W. wrote the paper.
The authors declare no conflict of interest.
*This Direct Submission article had a prearranged editor.
Freely available online through the PNAS open access option.
See Commentary on page 8319.
To whom correspondence should be addressed. E-mail: [email protected]
This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.
1073/pnas.1319030111/-/DCSupplemental.
8410–8415 | PNAS | June 10, 2014 | vol. 111 | no. 23 www.pnas.org/cgi/doi/10.1073/pnas.1319030111
Heterogeneity analyses indicated no statistically significant
variation among experiments based on the STEM discipline of
the course in question, with respect to either examination scores
(Fig. 2A; Q = 10.537, df = 7, P = 0.160) or failure rates (Fig. 2B;
Q = 11.73, df = 6, P = 0.068). In every discipline with more than
10 experiments that met the admission criteria for the meta-
analysis, average effect sizes were statistically significant for
either examination scores or failure rates or both (Fig. 2, Figs.
S2 and S3, and Tables S1A and S2A). Thus, the data indicate
that active learning increases student performance across the
STEM disciplines.
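The Q statistics quoted in these heterogeneity analyses (and in the class-size analysis below) are chi-square distributed under the null hypothesis of homogeneity, so the reported P values can be sanity-checked without statistical software. A minimal sketch using the Wilson–Hilferty cube-root approximation to the chi-square tail; as an approximation, it agrees with the reported P values only approximately:

```python
from math import erf, sqrt

def chi2_upper_tail(q, df):
    """Approximate upper-tail P value for a chi-square statistic,
    via the Wilson-Hilferty cube-root normal approximation."""
    z = ((q / df) ** (1 / 3) - (1 - 2 / (9 * df))) / sqrt(2 / (9 * df))
    return 0.5 * (1 - erf(z / sqrt(2)))  # 1 - Phi(z)

# Failure-rate heterogeneity by discipline: Q = 11.73, df = 6
print(round(chi2_upper_tail(11.73, 6), 3))   # close to the reported P = 0.068

# Class-size heterogeneity: Q = 6.726, df = 2
print(round(chi2_upper_tail(6.726, 2), 3))   # close to the reported P = 0.035
```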
For the data on examinations and other assessments, a het-
erogeneity analysis indicated that average effect sizes were lower
when the outcome variable was an instructor-written course ex-
amination as opposed to performance on a concept inventory
(Fig. 3A and Table S1B; Q = 10.731, df = 1, P << 0.001). Al-
though student achievement was higher under active learning for
both types of assessments, we hypothesize that the difference in
gains for examinations versus concept inventories may be due to
the two types of assessments testing qualitatively different cogni-
tive skills. This explanation is consistent with previous research
indicating that active learning has a greater impact on student
mastery of higher- versus lower-level cognitive skills (6–9), and
the recognition that most concept inventories are designed to
diagnose known misconceptions, in contrast to course examinations
that emphasize content mastery or the ability to solve quantitative
problems (10). Most concept inventories also undergo testing for
validity, reliability, and readability.
Heterogeneity analyses indicated significant variation in terms
of course size, with active learning having the highest impact
on courses with 50 or fewer students (Fig. 3B and Table S1C;
Q = 6.726, df = 2, P = 0.035; Fig. S4). Effect sizes were sta-
tistically significant for all three categories of class size, how-
ever, indicating that active learning benefitted students in
medium (51–110 students) or large (>110 students) class sizes
as well.
When we metaanalyzed the data by course type and course
level, we found no statistically significant difference in active
learning’s effect size when comparing (i) courses for majors
versus nonmajors (Q = 0.045, df = 1, P = 0.883; Table S1D), or
(ii) introductory versus upper-division courses (Q = 0.046, df = 1,
P = 0.829; Tables S1E and S2D).
Fig. 1. Changes in failure rate. (A) Data plotted as percent change in failure rate in the same course, under active learning versus lecturing. The mean change
(12%) is indicated by the dashed vertical line. (B) Kernel density plots of failure rates under active learning and under lecturing. The mean failure rates under
each classroom type (21.8% and 33.8%) are shown by dashed vertical lines.
Fig. 2. Effect sizes by discipline. (A) Data on examination scores, concept inventories, or other assessments. (B) Data on failure rates. Numbers below data
points indicate the number of independent studies; horizontal lines are 95% confidence intervals.
To evaluate how confident practitioners can be about these
conclusions, we performed two types of analyses to assess
whether the results were compromised by publication bias, i.e.,
the tendency for studies with low effect sizes to remain un-
published. We calculated fail-safe numbers indicating how many
missing studies with an effect size of 0 would have to be pub-
lished to reduce the overall effect sizes of 0.47 for examination
performance and 1.95 for failure rate to preset levels that would
be considered small or moderate—in this case, 0.20 and 1.1, re-
spectively. The fail-safe numbers were high: 114 studies on exam-
ination performance and 438 studies on failure rate (SI Materials
and Methods). Analyses of funnel plots (Fig. S5) also support a
lack of publication bias (SI Materials and Methods).
To assess criticisms that the literature on undergraduate
STEM education is difficult to interpret because of methodo-
logical shortcomings (e.g., ref. 11), we looked for heterogeneity
in effect sizes for the examination score data, based on whether
experiments did or did not meet our most stringent criteria for
student and instructor equivalence. We created four categories
to characterize the quality of the controls over student equivalence
in the active learning versus lecture treatments (SI Materials and
Methods), and found that there was no heterogeneity based on
methodological quality (Q = 2.097, df = 3, P = 0.553): Experi-
ments where students were assigned to treatments at random
produced results that were indistinguishable from three types
of quasirandomized designs (Table 1). Analyzing variation with
respect to controls over instructor identity also produced no
evidence of heterogeneity (Q = 0.007, df = 1, P = 0.934): More
poorly controlled studies, with different instructors in the two
treatment groups or with no data provided on instructor equiv-
alence, gave equivalent results to studies with identical or ran-
domized instructors in the two treatments (Table 1). Thus, the
overall effect size for examination data appears robust to variation
in the methodological rigor of published studies.
Discussion
The data reported here indicate that active learning increases
examination performance by just under half a SD and that lecturing
increases failure rates by 55%. The heterogeneity analyses
indicate that (i) these increases in achievement hold across all of the
STEM disciplines and occur in all class sizes, course types, and
course levels; and (ii) active learning is particularly beneficial in
small classes and at increasing performance on concept inventories.
Although this is the largest and most comprehensive meta-
analysis of the undergraduate STEM education literature to
date, the weighted, grand mean effect size of 0.47 reported here
is almost identical to the weighted, grand-mean effect sizes of
0.50 and 0.51 published in earlier metaanalyses of how alter-
natives to traditional lecturing impact undergraduate course
performance in subsets of STEM disciplines (11, 12). Thus, our
results are consistent with previous work by other investigators.
The grand mean effect sizes reported here are subject to im-
portant qualifications, however. For example, because struggling
students are more likely to drop courses than high-achieving
students, the reductions in withdrawal rates under active learn-
ing that are documented here should depress average scores on
assessments—meaning that the effect size of 0.47 for examina-
tion and concept inventory scores may underestimate active
learning’s actual impact in the studies performed to date (SI
Materials and Methods). In contrast, it is not clear whether effect
sizes of this magnitude would be observed if active learning
approaches were to become universal. The instructors who
implemented active learning in these studies did so as volunteers.
It is an open question whether student performance would in-
crease as much if all faculty were required to implement active
learning approaches.
Assuming that other instructors implement active learning and
achieve the average effect size documented here, what would
Fig. 3. Heterogeneity analyses for data on examination scores, concept inventories, or other assessments. (A) By assessment type—concept inventories versus
examinations. (B) By class size. Numbers below data points indicate the number of independent studies; horizontal lines are 95% confidence intervals.
Table 1. Comparing effect sizes estimated from well-controlled versus less-well-controlled studies

                                                                        95% confidence interval
Type of control                                       n    Hedges’s g   SE      Lower limit   Upper limit
For student equivalence
  Quasirandom—no data on student equivalence          39   0.467        0.102   0.268         0.666
  Quasirandom—no statistical difference in prescores
    on assessment used for effect size                51   0.534        0.089   0.359         0.709
  Quasirandom—no statistical difference on metrics
    of academic ability/preparedness                  51   0.362        0.092   0.181         0.542
  Randomized assignment or crossover design           16   0.514        0.098   0.322         0.706
For instructor equivalence
  No data, or different instructors                   59   0.472        0.081   0.313         0.631
  Identical instructor, randomized assignment,
    or ≥3 instructors in each treatment               99   0.492        0.071   0.347         0.580
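Hedges's g, the effect-size measure tabulated in Table 1, is a standardized mean difference between treatment and control groups with a small-sample bias correction. A minimal sketch of the computation; the exam means, SDs, and class sizes below are hypothetical values chosen only to illustrate what an effect near the grand mean of 0.47 looks like:

```python
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference (Cohen's d) with Hedges's
    small-sample bias-correction factor J."""
    # Pooled standard deviation across the two groups
    sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled
    j = 1 - 3 / (4 * (n_t + n_c) - 9)  # bias-correction factor
    return j * d

# Hypothetical course: active-learning section averages 72 on the exam,
# lecture section averages 66, both with SD 13 and 100 students each
print(round(hedges_g(72, 66, 13, 13, 100, 100), 2))  # 0.46, near the grand mean of 0.47
```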
a shift of 0.47 SDs in examination and concept inventory scores
mean to their students?
i) Students performing in the 50th percentile of a class based on
traditional lecturing would, under active learning, move to
the 68th percentile of that class (13)—meaning that instead
of scoring better than 50% of the students in the class, the
same individual taught with active learning would score better
than 68% of the students being lectured to.
ii) According to an analysis of examination scores in three introductory
STEM courses (SI Materials and Methods), a change of
0.47 SDs would produce an increase of about 6% in average
examination scores and would translate to a 0.3 point increase
in average final grade. On a letter-based system, medians
in the courses analyzed would rise from a B− to a B or from
a B to a B+.
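Both consequences above follow from treating scores as approximately normal: the percentile shift is the standard normal CDF evaluated at the effect size, and the 6% figure corresponds to an exam-score SD of roughly 13 points out of 100 (a hypothetical value used here for illustration; the paper's own estimate comes from its SI analysis of three courses).

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

effect_size = 0.47  # grand mean effect, in SD units

# A median student under lecturing moves to this percentile under active learning
print(round(100 * normal_cdf(effect_size)))  # 68

# With a hypothetical exam SD of 13 points out of 100, 0.47 SDs is:
print(round(effect_size * 13, 1))  # about 6 points, i.e. roughly 6%
```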
The result for undergraduate STEM courses can also be
compared with the impact of educational interventions at the
precollege level. A recent review of educational interventions
in the K–12 literature reports a mean effect size of 0.39 when
impacts are measured with researcher-developed tests, analo-
gous to the examination scores analyzed here, and a mean effect
size of 0.24 for narrow-scope standardized tests, analogous to the
concept inventories analyzed here (14). Thus, the effect size of
active learning at the undergraduate level appears greater than
the effect sizes of educational innovations in the K–12 setting,
where effect sizes of 0.20 or even smaller may be considered of
policy interest (14).
There are also at least two ways to view an odds ratio of 1.95
for the risk of failing a STEM course:
i) If the experiments analyzed here had been conducted as ran-
domized controlled trials of medical interventions, they may
have been stopped for benefit—meaning that enrolling
patients in the control condition might be discontinued be-
cause the treatment being tested was clearly more beneficial.
For example, a recent analysis of 143 randomized controlled
medical trials that were stopped for benefit found that they
had a median relative risk of 0.52, with a range of 0.22 to 0.66
(15). In addition, best-practice directives suggest that data
management committees may allow such studies to stop for
benefit if interim analyses have large sample sizes and P val-
ues under 0.001 (16). Both criteria were met for failure rates
in the education studies we analyzed: The average relative
risk was 0.64 and the P value on the overall odds ratio
was << 0.001. Any analogy with biomedical trials is qual-
ified, however, by the lack of randomized designs in studies
that included data on failure rates.
ii) There were 29,300 students in the 67 lecturing treatments
with data on failure rates. Given that the raw failure rate in
this sample averaged 33.8% under traditional lecturing and
21.8% under active learning, the data suggest that 3,516 fewer
students would have failed these STEM courses under active
learning. Based on conservative assumptions (SI Materials and
Methods), this translates into over US$3,500,000 in saved tuition
dollars for the study population, had all students been exposed
to active learning. If active learning were implemented widely,
the total tuition dollars saved would be orders of magnitude
larger, given that there were 21 million students enrolled in
US colleges and universities alone in 2010, and that about a
third of these students intended to major in STEM fields as
entering freshmen (17, 18).
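The arithmetic in item ii can be checked directly; the per-course tuition figure used below is a rough hypothetical, since the paper's dollar estimate rests on assumptions detailed in its SI Materials and Methods.

```python
students_lectured = 29_300       # students in the 67 lecturing treatments
fail_lecture, fail_active = 0.338, 0.218

# Additional failures attributable to lecturing at these raw rates
extra_failures = students_lectured * (fail_lecture - fail_active)
print(round(extra_failures))  # 3516, matching the figure in the text

# At an assumed ~$1,000 of tuition per repeated course (hypothetical),
# the cost lands on the order of the paper's US$3,500,000 estimate
print(round(extra_failures * 1000))
```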
Finally, increased grades and fewer failures should make a
significant impact on the pipeline problem. For example, the
2012 President’s Council of Advisors on Science and Technology
report calls for an additional one million STEM majors in the
United States in the next decade—requiring a 33% increase
from the current annual total—and notes that simply increasing
the current STEM retention rate of 40% to 50% would meet
three-quarters of that goal (5). According to a recent cohort
study from the National Center for Education Statistics (19),
there are gaps of 0.5 and 0.4 in the STEM-course grade point
averages (GPAs) of first-year bachelor’s and associate’s degree
students, respectively, who end up leaving versus persisting in
STEM programs. A 0.3 “bump” in average grades with active
learning would get the “leavers” close to the current perfor-
mance level of “persisters.” Other analyses of students who leave
STEM majors indicate that increased passing rates, higher grades,
and increased engagement in courses all play a positive role in re-
tention (20–22).
In addition to providing evidence that active learning can
improve undergraduate STEM education, the results reported
here have important implications for future research. The studies
we metaanalyzed represent the first generation of work on undergraduate
STEM education, where researchers contrasted a
diverse array of active learning approaches and intensities with
traditional lecturing. Given our results, it is reasonable to raise
concerns about the continued use of traditional lecturing as a
control in future experiments. Instead, it may be more pro-
ductive to focus on what we call “second-generation research”:
using advances in educational psychology and cognitive science
to inspire changes in course design (23, 24), then testing hy-
potheses about which type of active learning is most appropriate
and efficient for certain topics or student populations (25).
Second-generation research could also explore which aspects of
instructor behavior are most important for achieving the greatest
gains with active learning, and elaborate on recent work in-
dicating that underprepared and underrepresented students may
benefit most from active methods. In addition, it will be impor-
tant to address questions about the intensity of active learning:
Is more always better? Although the time devoted to active
learning was highly variable in the studies analyzed here, ranging
from just 10–15% of class time being devoted to clicker questions
to lecture-free “studio” environments, we were not able to evaluate
the relationship between the intensity (or type) of active learning
and student performance, due to lack of data (SI Materials
and Methods).
As research continues, we predict that course designs inspired
by second-generation studies will result in additional gains in
student achievement, especially when the types of active learning
interventions analyzed here—which focused solely on in-class
innovations—are combined with required exercises that are
completed outside of formal class sessions (26).
Finally, the data suggest that STEM instructors may begin to
question the continued use of traditional lecturing in everyday
practice, especially in light of recent work indicating that active
learning confers disproportionate benefits for STEM students
from disadvantaged backgrounds and for female students in
male-dominated fields (27, 28). Although traditional lecturing
has dominated undergraduate instruction for most of a millen-
nium and continues to have strong advocates (29), current evi-
dence suggests that a constructivist “ask, don’t tell” approach
may lead to strong increases in student performance—amplifying
recent calls from policy makers and researchers to support faculty
who are transforming their undergraduate STEM courses (5, 30).
Materials and Methods
To create a working definition of active learning, we collected written defi-
nitions from 338 audience members, before biology departmental seminars
on active learning, at universities throughout the United States and Canada.
We then coded elements in the responses to create the following con-
sensus definition:
Active learning engages students in the process of learning through
activities and/or discussion in class, as opposed to passively listening
to an expert. It emphasizes higher-order thinking and often involves
group work. (See also ref. 31, p. iii).
Following Bligh (32), we defined traditional lecturing as “. . .continuous ex-
position by the teacher.” Under this definition, student activity was assumed
to be limited to taking notes and/or asking occasional and unprompted
questions of the instructor.
Literature Search. We searched the gray literature, primarily in the form of
unpublished dissertations and conference proceedings, in addition to peer-
reviewed sources (33, 34) for studies that compared student performance
in undergraduate STEM courses under traditional lecturing versus active
learning. We used four approaches (35) to find papers for consideration:
hand-searching every issue in 55 STEM education journals from June 1, 1998
to January 1, 2010 (Table S3), searching seven online databases using an
array of terms, mining reviews and bibliographies (SI Materials and Methods),
and “snowballing” from references in papers admitted to the study (SI
Materials and Methods). We had no starting time limit for admission to
the study; the ending cutoff for consideration was completion or publication
before January 1, 2010.
Criteria for Admission. As recommended (36), the criteria for admission to the
coding and final data analysis phases of the study were established at the
onset of the work and were not altered. We coded studies that (i) contrasted
traditional lecturing with any active learning intervention, with total class
time devoted to each approach not differing by more than 30 min/wk; (ii)
occurred in the context of a regularly scheduled course for undergraduates;
(iii) were largely or solely limited to changes in the conduct of the regularly
scheduled class or recitation sessions; (iv) involved a course in astronomy,
biology, chemistry, computer science, engineering, geology, mathematics,
natural resources or environmental science, nutrition or food science,
physics, psychology, or statistics; and (v) included data on some aspect of
student academic performance.
Note that criterion i yielded papers representing a wide array of active
learning activities, including vaguely defined “cooperative group activities
in class,” in-class worksheets, clickers, problem-based learning (PBL), and
studio classrooms, with intensities ranging from 10% to 100% of class time
(SI Materials and Methods). Thus, this study’s intent was to evaluate the
average …
Understanding by Design

Chapter 1

Backward Design
Design, v.: To have purposes and intentions; to plan and execute
-Oxford English Dictionary
The complexity of design work is often underestimated. Many people
believe they know a good deal about design. What they do not realize is
how much more they need to know to do design well, with
distinction, refinement, and grace.
-John McClean, 20 Considerations That Help a Project Run Smoothly, 2003
Teachers are designers. An essential act of our profession is the crafting of curriculum
and learning experiences to meet specified purposes. We are also
designers of assessments to diagnose student needs to guide our teaching and
to enable us, our students, and others (parents and administrators) to determine
whether we have achieved our goals.
Like people in other design professions, such as architecture, engineering,
and graphic arts, designers in education must be mindful of their audiences.
Professionals in these fields are strongly client-centered. The effectiveness of
their designs corresponds to whether they have accomplished explicit goals
for specific end-users. Clearly, students are our primary clients, given that the
effectiveness of curriculum, assessment, and instructional designs is ultimately
determined by their achievement of desired learnings. We can think of
our designs, then, as software. Our courseware is designed to make learning
more effective, just as computer software is intended to make its users more
productive.
As in all the design professions, standards inform and shape our work. The
software developer works to maximize user-friendliness and to reduce bugs
that impede results. The architect is guided by building codes, customer
budget, and neighborhood aesthetics. The teacher as designer is similarly constrained.
We are not free to teach any topic we choose by any means. Rather,
we are guided by national, state, district, or institutional standards that specify
what students should know and be able to do. These standards provide a
useful framework to help us identify teaching and learning priorities and guide
our design of curriculum and assessments. In addition to external standards,
we must also factor in the needs of our many and varied students when design-
ing learning experiences. For example, diverse student interests, developmen-
tal levels, large classes, and previous achievements must always shape our
thinking about the learning activities, assignments, and assessments.
Yet, as the old adage reminds us, in the best designs form follows function.
In other words, all the methods and materials we use are shaped by a clear
conception of the vision of desired results. That means that we must be able
to state with clarity what the student should understand and be able to do as
a result of any plan and irrespective of any constraints we face.
You probably know the saying, "If you don't know exactly where you are
headed, then any road will get you there." Alas, the point is a serious one in
education. We are quick to say what things we like to teach, what activities we
will do, and what kinds of resources we will use; but without clarifying the
desired results of our teaching, how will we ever know whether our designs are
appropriate or arbitrary? How will we distinguish merely interesting learning
from effective learning? More pointedly, how will we ever meet content stan-
dards or arrive at hard-won student understandings unless we think through
what those goals imply for the learner's activities and achievements?
Good design, then, is not so much about gaining a few new technical skills
as it is about learning to be more thoughtful and specific about our purposes
and what they imply.
Why backward is best
How do these general design considerations apply to curriculum planning?
Deliberate and focused instructional design requires us as teachers and cur-
riculum writers to make an important shift in our thinking about the nature of
our job. The shift involves thinking a great deal, first, about the specific learn-
ings sought, and the evidence of such learnings, before thinking about what
we, as the teacher, will do or provide in teaching and learning activities.
Though considerations about what to teach and how to teach it may dominate
our thinking as a matter of habit, the challenge is to focus first on the desired
learnings from which appropriate teaching will logically follow.
Our lessons, units, and courses should be logically inferred from the
results sought, not derived from the methods, books, and activities with which
we are most comfortable. Curriculum should lay out the most effective ways
of achieving specific results. It is analogous to travel planning. Our frameworks
should provide a set of itineraries deliberately designed to meet cultural goals
rather than a purposeless tour of all the major sites in a foreign country. In
short, the best designs derive backward from the learnings sought.
The appropriateness of this approach becomes clearer when we consider
the educational purpose that is the focus of this book: understanding. We can-
not say how to teach for understanding or wh ich material and activities to use
Backward Design
until we are quite clear about which specific understandings we are after and
what such understandings look like in practice. We can best decide, as guides,
what sites to have our student tourists visit and what specific culture
they should experience in their brief time there only if we are clear about the
particular understandings about the culture we want them to take home. Only
by having specified the desired results can we focus on the content, methods,
and activities most likely to achieve those results.
But many teachers begin with and remain focused on textbooks, favored lessons, and time-honored activities (the inputs) rather than deriving those means from what is implied in the desired results (the output). To put it in an
odd way, too many teachers focus on the teaching and not the learning. They
spend most of their time thinking, first, about what they will do, what materi-
als they will use, and what they will ask students to do rather than first con-
sidering what the learner will need in order to accomplish the learning goals.
Consider a typical episode of what might be called content-focused design
instead of results-focused design. The teacher might base a lesson on a par-
ticular topic (e.g., racial prejudice), select a resource (e.g., To Kill a Mockingbird), choose specific instructional methods based on the resource and topic
(e.g., Socratic seminar to discuss the book and cooperative groups to ana-
lyze stereotypical images in films and on television), and hope
thereby to cause learning (and meet a few English/language
arts standards). Finally, the teacher might think up a few essay
questions and quizzes for assessing student understanding of
the book.
This approach is so common that we may well be tempted to reply, "What could be wrong with such an approach?" The short answer lies in the basic questions of purpose: Why are we asking students to read this particular novel? In other words, what learnings will we seek from their having read it? Do the students grasp why and how the purpose should influence their studying? What should students be expected to understand and do upon reading the book, related to our goals beyond the book? Unless we begin our design work with a clear insight into larger purposes (whereby the book is properly thought of as a means to an educational end, not an end unto itself), it is unlikely that all students will understand the book (and their performance obligations). Without being self-conscious of the specific understandings about prejudice we seek, and how reading and discussing the book will help develop such insights, the goal is far too vague: The approach is more by hope than by design. Such an approach ends up unwittingly being one that could be described like this: "Throw some content and activities against the wall and hope some of it sticks."

Design Tip
Consider these questions that arise in the minds of all readers, the answers to which will frame the priorities of coached learning: How should I read the book? What am I looking for? What will we discuss? How should I prepare for those discussions? How do I know if my reading and discussions are effective? Toward what performance goals do this reading and these discussions head, so that I might focus and prioritize my studies and note taking? What big ideas, linked to other readings, are in play here? These are the students' proper questions about the learning, not the teaching, and any good educational design answers them from the start and throughout a course of study with the use of tools and strategies such as graphic organizers and written guidelines.
Answering the "why?" and "so what?" questions that older students always ask (or want to), and doing so in concrete terms as the focus of curriculum
Understanding by Design 2nd Edition
planning, is thus the essence of understanding by design. What is difficult for
many teachers to see (but easier for students to feel!) is that, without such
explicit and transparent priorities, many students find day-to-day work con-
fusing and frustrating.
The twin sins of traditional design
More generally, weak educational design involves two kinds of purposeless-
ness, visible throughout the educational world from kindergarten through
graduate school, as noted in the Introduction. We call these the twin sins
of traditional design. The error of activity-oriented design might be called "hands-on without being minds-on": engaging experiences that lead only accidentally, if at all, to insight or achievement. The activities, though fun and interesting, do not lead anywhere intellectually. As typified by the apples vignette in the Introduction, such activity-oriented curricula lack an explicit focus on important ideas and appropriate evidence of learning, especially in the minds of the learners. The learners come to think their job is merely to engage; they are led to think the learning is the activity instead of seeing that the learning comes from being asked to consider the meaning of the activity.
A second form of aimlessness goes by the name of "coverage," an approach in which students march through a textbook, page by page (or teachers through lecture notes) in a valiant attempt to traverse all the factual material within a prescribed time (as in the world history vignette in the Introduction). Coverage is thus like a whirlwind tour of Europe, perfectly summarized by the old movie title If It's Tuesday, This Must Be Belgium, which properly suggests that no overarching goals inform the tour.
As a broad generalization, the activity focus is more typical at the elemen-
tary and lower middle school levels, whereas coverage is a prevalent second-
ary school and college problem.
Yet, though the apples and world history classrooms look quite different, with lots of physical activity and chatter in the former versus lecturing and quiet note taking in the latter, the design result is the same in both cases: No guiding intellectual purpose or clear priorities frame the learning experience. In neither case can students see and answer such questions as these: What's the point? What's the big idea here? What does this help us understand or be able to do? To what does this relate? Why should we learn this? Hence, the students try to engage and follow as best they can, hoping that meaning will emerge.

MISCONCEPTION ALERT!
Coverage is not the same as purposeful survey. Providing students with an overview of a discipline or a field of study is not inherently wrong. The question has to do with the transparency of purpose. Coverage is a negative term (whereas introduction or survey is not) because when content is "covered" the student is led through unending facts, ideas, and readings with little or no sense of the overarching ideas, issues, and learning goals that might inform study. (See Chapter 10 for more on coverage versus uncoverage.)
Students will be unable to give satisfactory responses when the design
does not provide them with clear purposes and explicit performance goals
highlighted throughout their work. Similarly, teachers with an activity or cov-
erage orientation are less likely to have acceptable answers to the key design
questions: What should students understand as a result of the activities or
the content covered? What should the experiences or lectures
equip them to do? How, then, should the activities or class dis-
cussions be shaped and processed to achieve the desired
results? What would be evidence that learners are en route to
the desired abilities and insights? How, then, should all activi-
ties and resources be chosen and used to ensure that the learn-
ing goals are met and the most appropriate evidence produced?
How, in other words, will students be helped to see by design
the purpose of the activity or resource and its helpfulness in
meeting specific performance goals?
We are advocating the reverse of common practice, then. We ask designers to start with a much more careful statement of the desired results (the priority learnings) and to derive the curriculum from the performances called for or implied in the goals. Then, contrary to much common practice, we ask designers to consider the following questions after framing the goals: What would count as evidence of such achievement? What does it look like to meet these goals? What, then, are the implied performances that should make up the assessment, toward which all teaching and learning should point? Only after answering these questions can we logically derive the appropriate teaching and learning experiences so that students might perform successfully to meet the standard. The shift, therefore, is away from starting with such questions as "What book will we read?" or "What activities will we do?" or "What will we discuss?" to "What should they walk out the door able to understand, regardless of what activities or texts we use?" and "What is evidence of such ability?" and, therefore, "What texts, activities, and methods will best enable such a result?"

Design Tip
To test the merits of our claims about purposelessness, we encourage you to sidle up to a student in the middle of any class and ask the following questions:
• What are you doing?
• Why are you being asked to do it?
• What will it help you do?
• How does it fit with what you have previously done?
• How will you show that you have learned it?
In teaching students for understanding, we must grasp the key idea that we are
coaches of their ability to play the game of performing with understanding, not
tellers of our understanding to them on the sidelines.
The three stages of backward design
We call this three-stage approach to planning "backward design." Figure 1.1 depicts the three stages in the simplest terms.
Stage 1: Identify desired results
What should students know, understand, and be able to do? What content
is worthy of understanding? What enduring understandings are desired?
Figure 1.1
UbD: Stages of Backward Design
1. Identify desired results. → 2. Determine acceptable evidence. → 3. Plan learning experiences and instruction.
In Stage 1 we consider our goals, examine established content standards
(national, state, district), and review curriculum expectations. Because typically we have more content than we can reasonably address within the available time, we must make choices. This first stage in the design process calls
for clarity about priorities.
Stage 2: Determine acceptable evidence
How will we know if students have achieved the desired results? What will
we accept as evidence of student understanding and proficiency? The back-
ward design orientation suggests that we think about a unit or course in terms
of the collected assessment evidence needed to document and validate that
the desired learning has been achieved, not simply as content to be covered
or as a series of learning activities. This approach encourages teachers and
curriculum planners to first think like an assessor before designing specific
units and lessons, and thus to consider up front how they will determine if stu-
dents have attained the desired understandings.
Stage 3: Plan learning experiences and instruction
With clearly identified results and appropriate evidence of understanding
in mind, it is now time to fully think through the most appropriate instruc-
tional activities. Several key questions must be considered at this stage of
backward design: What enabling knowledge (facts, concepts, principles) and
skills (processes, procedures, strategies) will students need in order to per-
form effectively and achieve desired results? What activities will equip stu-
dents with the needed knowledge and skills? What will need to be taught and
coached, and how should it best be taught, in light of performance goals? What
materials and resources are best suited to accomplish these goals?
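For readers who think in code, the discipline the three stages impose can be caricatured as a data structure in which a Stage 3 activity is admitted only if it serves a stated Stage 1 result. This is purely an illustrative sketch, not UbD material: the class and field names are our own invention.

```python
from dataclasses import dataclass, field

@dataclass
class UnitPlan:
    """Toy model of a backward-designed unit (illustrative names, not UbD terminology)."""
    desired_results: list[str]      # Stage 1: identify desired results
    acceptable_evidence: list[str]  # Stage 2: determine acceptable evidence
    learning_activities: list[str] = field(default_factory=list)  # Stage 3

    def add_activity(self, activity: str, serves_result: str) -> None:
        """Admit a Stage 3 activity only if it serves a declared Stage 1 result."""
        if serves_result not in self.desired_results:
            raise ValueError(f"No desired result justifies {activity!r}")
        self.learning_activities.append(activity)

unit = UnitPlan(
    desired_results=["Prejudice persists through unexamined stereotypes"],
    acceptable_evidence=["Essay tracing how the novel exposes stereotypes"],
)
unit.add_activity(
    "Socratic seminar on To Kill a Mockingbird",
    serves_result="Prejudice persists through unexamined stereotypes",
)
# An activity chosen for its own sake would raise ValueError:
# unit.add_activity("Watch a fun film clip", serves_result="It's engaging")
```

The point of the sketch is only the ordering constraint: Stages 1 and 2 must be filled in before any Stage 3 entry can be justified.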
Note that the specifics of instructional planning (choices about teaching methods, sequence of lessons, and resource materials) can be successfully completed only after we identify desired results and assessments and consider what they imply. Teaching is a means to an end. Having a clear goal helps to focus our planning and guide purposeful action toward the intended results.

MISCONCEPTION ALERT!
When we speak of evidence of desired results, we are referring to evidence gathered through a variety of formal and informal assessments during a unit of study or a course. We are not alluding only to end-of-teaching tests or culminating tasks. Rather, the collected evidence we seek may well include traditional quizzes and tests, performance tasks and projects, observations and dialogues, as well as students' self-assessments gathered over time.

Backward design may be thought of, in other words, as purposeful task analysis: Given a worthy task to be accomplished, how do we best get everyone equipped? Or we might think of it as building a wise itinerary, using a map: Given a destination, what's the most effective and efficient route? Or we might think of it as planning for coaching, as suggested earlier: What must learners master if they are to effectively perform? What will count as evidence, on the field and not merely in drills, that they really get it and are ready to perform with understanding, knowledge, and skill on their own? How will the learning be designed so that learners' capacities are developed through use and feedback?
This is all quite logical when you come to understand it, but backward
from the perspective of much habit and tradition in our field. A major change
from common practice occurs as designers must begin to think about assess-
ment before deciding what and how they will teach. Rather than creating
assessments near the conclusion of a unit of study (or relying on the tests provided by textbook publishers, which may not completely or appropriately assess our standards and goals), backward design calls for us to make our goals or standards specific and concrete, in terms of assessment evidence, as
we begin to plan a unit or course.
The logic of backward design applies regardless of the learning goals. For
example, when starting from a state content standard, curriculum designers
need to determine the appropriate assessment evidence stated or implied in
the standard. Likewise, a staff developer should determine what evidence will
indicate that the adults have learned the intended knowledge or skill before
planning the various workshop activities.
The rubber meets the road with assessment. Three different teachers may
all be working toward the same content standards, but if their assessments vary
considerably, how are we to know which students have achieved what? Agree-
ment on needed evidence of learning leads to greater curricular coherence and
19
I
I
20
Understanding by Design 2nd Edition
more reliable evaluation by teachers. Equally important is the long-term gain in
teacher, student, and parent insight about what does and does not count as evi-
dence of meeting complex standards.
This view of focusing intently on the desired learning is hardly radical or
new. Tyler (1949) described the logic of backward design clearly and suc-
cinctly more than 50 years ago:
Educational objectives become the criteria by which materials are selected,
content is outlined, instructional procedures are developed, and tests and
examinations are prepared. . . .
The purpose of a statement of objectives is to indicate the kinds of changes
in the student to be brought about so that instructional activities can be
planned and developed in a way likely to attain these objectives. (pp. 1, 45)
And in his famous book, How to Solve It, originally published in 1945, Polya specifically discusses thinking backward as a problem-solving strategy going back to the Greeks:

There is a certain psychological difficulty in turning around, in going away from the goal, in working backwards. . . . Yet, it does not take a genius to solve
a concrete problem working backwards; anyone can do it with a little com-
mon sense. We concentrate on the desired end, we visualize the final position
in which we would like to be. From what foregoing position could we get
there? (p. 230)
These remarks are old. What is perhaps new is that we offer herein a help-
ful process, a template, a set of tools, and design standards to make the plan and
resultant student performance more likely to be successful by design than by
good fortune. As a 4th grade teacher from Alberta, Canada, put it, "Once I had a way of clearly defining the end in mind, the rest of the unit fell into place."
The twin sins of activity-based and coverage-based design reflect a failure
to think through purpose in this backward-design way. With this in mind, let's revisit the two fictitious vignettes from the Introduction. In the apples vignette, the unit seems to focus on a particular theme (harvest time), through a specific and familiar object (apples). But as the depiction reveals, the unit has no
real depth because there is no enduring learning for the students to derive.
The work is hands-on without being minds-on, because students do not need to
(and are not really challenged to) extract sophisticated ideas or connections.
They don't have to work at understanding; they need only engage in the activity. (Alas, it is common to reward students for mere engagement as opposed
to understanding; engagement is necessary, but not sufficient, as an end
result.)
Moreover, when you examine the apples unit it becomes clear that it has no overt priorities: the activities appear to be of equal value. The students' role is merely to participate in mostly enjoyable activities, without having to demonstrate that they understand any big ideas at the core of the subject (excuse the pun). All activity-based teaching, as opposed to results-based teaching, shares the weakness of the apples unit: Little in the design asks students to derive
intellectual fruit from the unit (sorry!). One might characterize this activity-
oriented approach as "faith in learning by osmosis." Is it likely that individual students will learn a few interesting things about apples? Of course. But, in the
absence of a learning plan with clear goals, how likely is it that students will
develop shared understandings on which future lessons might build? Not very.
In the world history vignette, the teacher covers vast amounts of content
during the last quarter of the year. However, in his harried march to get
through a textbook, he apparently does not consider what the students will
understand and apply from the material. What kind of intellectual scaffolding is provided to guide students through the important ideas? How are students expected to use those ideas to make meaning of the many facts? What performance goals would help students know how to take notes for maximally effective use by the course's end? Coverage-based instruction amounts to the teacher merely talking, checking off topics, and moving on, irrespective of whether students understand or are confused. This approach might be termed "teaching by mentioning it." Coverage-oriented teaching typically relies on a textbook, allowing it to define the content and sequence of instruction. In contrast, we propose that results-oriented teaching employ the textbook as a resource but not the syllabus.
A backward design template
Having described the backward design process, we now put it together in a
useful format-a template for teachers to use in the design of units that focus
on understanding.
Many educators have observed that backward design is common sense.
Yet when they first start to apply it, they discover that it feels unnatural. Work-
ing this way may seem a bit awkward and time-consuming until you get the
hang of it. But the effort is worth it, just as the learning curve on good software is worth it. We think of Understanding by Design as software, in fact: a set
of tools for making you ultimately more productive. Thus, a practical corner-
stone of Understanding by Design is a design template that is meant to rein-
force the appropriate habits of mind needed to complete designs for student
understanding and to avoid the habits that are at the heart of the twin sins of
activity-based and coverage-based design.
Figure 1.2 provides a preliminary look at the UbD Template in the form of
a one-page version with key planning questions included in the various fields.
This format guides the teacher to the various UbD elements while visually con-
veying the idea of backward design. Later chapters present a more complete
account of the template and each of its fields.
Although this one-page version of the template does not allow for great
detail, it has several virtues. First, it provides a gestalt, an overall view of back-
ward design, without appearing overwhelming. Second, it enables a quick
check of alignment-the extent to which the assessments (Stage 2) and learn-
ing activities (Stage 3) align with identified goals (Stage 1). Third, the template
Figure 1.2
1-Page Template with Design Questions for Teachers

Stage 1-Desired Results

Established Goals:
• What relevant goals (e.g., content standards, course or program objectives, learning outcomes) will this design address?

Understandings: Students will understand that . . .
• What are the big ideas?
• What specific understandings about them are desired?
• What misunderstandings are predictable?

Essential Questions:
• What provocative questions will foster inquiry, understanding, and transfer of learning?

Students will know . . . Students will be able to . . .
• What key knowledge and skills will students acquire as a result of this unit?
• What should they eventually be able to do as a result of such knowledge and skills?

Stage 2-Assessment Evidence

Performance Tasks:
• Through what authentic performance tasks will students demonstrate the desired understandings?
• By what criteria will performances of understanding be judged?

Other Evidence:
• Through what other evidence (e.g., quizzes, tests, academic prompts, observations, homework, journals) will students demonstrate achievement of the desired results?
• How will students reflect upon and self-assess their learning?

Stage 3-Learning Plan

Learning Activities:
What learning experiences and instruction will enable students to achieve the desired results? How will the design
W = Help the students know Where the unit is going and What is expected? Help the teacher know Where the students are coming from (prior knowledge, interests)?
H = Hook all students and Hold their interest?
E = Equip students, help them Experience the key ideas and Explore the issues?
R = Provide opportunities to Rethink and Revise their understandings and work?
E = Allow students to Evaluate their work and its implications?
T = Be Tailored (personalized) to the different needs, interests, and abilities of learners?
O = Be Organized to maximize initial and sustained engagement as well as effective learning?
can be used to review existing units that teachers or districts have developed. Finally, the one-page template provides an initial design frame. We also have a multipage version that allows for more detailed planning, including, for example, a Performance Task Blueprint and a day-by-day calendar for listing and sequencing key learning events. The …
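The alignment check the one-page template supports (do the assessments and learning activities trace back to the identified goals?) can likewise be sketched mechanically. A minimal, assumption-laden illustration: the function and its crude keyword-matching heuristic are ours, not part of UbD.

```python
def check_alignment(stage1_goals, stage2_evidence, stage3_activities):
    """Flag Stage 2/3 entries that mention no Stage 1 goal keyword (crude heuristic)."""
    def orphans(entries):
        return [e for e in entries
                if not any(goal.lower() in e.lower() for goal in stage1_goals)]
    return {"evidence": orphans(stage2_evidence),
            "activities": orphans(stage3_activities)}

report = check_alignment(
    stage1_goals=["persuasion"],
    stage2_evidence=["Speech applying persuasion techniques"],
    stage3_activities=["Draft a persuasion checklist", "Watch a fun video"],
)
# report["activities"] flags "Watch a fun video": no stated goal justifies it
```

A real alignment review is of course a matter of professional judgment, not string matching; the sketch only mirrors the template's logic that every entry in Stages 2 and 3 should answer to something in Stage 1.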
Elaborate on any potential confounds or ethical concerns while participating in the psychological study 20.0\% Elaboration on any potential confounds or ethical concerns while participating in the psychological study is missing. Elaboration on any potenti
3 The first thing I would do in the family’s first session is develop a genogram of the family to get an idea of all the individuals who play a major role in Linda’s life. After establishing where each member is in relation to the family
A Health in All Policies approach
Note: The requirements outlined below correspond to the grading criteria in the scoring guide. At a minimum
Chen
Read Connecting Communities and Complexity: A Case Study in Creating the Conditions for Transformational Change
Read Reflections on Cultural Humility
Read A Basic Guide to ABCD Community Organizing
Use the bolded black section and sub-section titles below to organize your paper. For each section
Losinski forwarded the article on a priority basis to Mary Scott
Losinksi wanted details on use of the ED at CGH. He asked the administrative resident