Sophie Miles - Psychology
May I get help, please?
ALL INSTRUCTIONS AND READINGS ARE ATTACHED. PLEASE FOLLOW THE DIRECTIONS WELL.
Week 5: Ethics
Imagine you are a human services professional for a child welfare program. You are working on a case involving a single mother who is suspected of substance abuse. You happen to know the mother personally and have always believed her to be a kind and loving person. Her children, if put into foster care, would likely be separated, put into different school systems, and live in much poorer conditions. How would you handle this situation?
Ethical dilemmas like this are very common in the human services field. This week, you will analyze ethical issues related to working with at-risk populations for human services delivery. You will also consider the existing codes of ethics created by human services professional organizations to guide actions and help alleviate the pressure of ethical decisions. The evaluation plan for a homeless population that you developed for your Final Project is due this week.
Objectives
By the end of this week, you should be able to:
· Analyze ethical concerns when working with at-risk populations
· Create an evaluation plan for human services for homeless populations
Learning Resources
Required Readings
Kettner, P. M., Moroney, R. M., & Martin, L. L. (2017). Designing and managing programs: An effectiveness-based approach (5th ed.). Thousand Oaks, CA: Sage.
· Chapter 8, “Designing Effective Programs”
· Chapter 9, “Designing Effectiveness-Based Information Systems” (pp. 161–181)
Holosko, M. J., Thyer, B. A., & Danner, J. E. H. (2009). Ethical guidelines for designing and conducting evaluations of social work practice. Journal of Evidence-Based Social Work, 6(4), 348–360.
Martin, J. I., & Meezan, W. (2003). Applying ethical standards to research and evaluations involving lesbian, gay, bisexual, and transgender populations. Journal of Gay & Lesbian Social Services, 15(1/2), 181–201.
Ray, M. (2006). Choosing a truly external evaluator. American Journal of Evaluation, 27(3), 372–377.
Blustein, J. (2005). Toward a more public discussion of the ethics of federal social program evaluation. Journal of Policy Analysis & Management, 24(4), 824–846.
National Organization for Human Services. (2009). Ethical standards for human service professionals. Retrieved from http://www.nationalhumanservices.org/ethical-standards-for-hs-professionals
THIS DISCUSSION IS DUE WEDNESDAY 8/11
Discussion: Ethical Concerns
In the human services field, ethical concerns and dilemmas are very common. Human services professionals interact with a diverse clientele; therefore, they must be aware of a variety of issues, including multicultural differences, consent and confidentiality procedures, appropriate relationship boundaries, and conflicts of interest. Each of these issues carries ethical considerations when working with any population, and those considerations are heightened when the target population (clients) for a program is considered at risk or vulnerable in any way. It is the responsibility of human services professionals to protect, inform, and otherwise respect their clients.
Because of the importance of addressing ethical concerns, the National Organization for Human Services (NOHS), the National Association of Social Workers (NASW), and other professional organizations have developed comprehensive codes of ethics. In this Discussion, you will explore ethical issues that may arise in the human services field, and you will consider how the NOHS code of ethics can help to guide the actions of human services professionals experiencing ethical dilemmas.
To prepare for this Discussion:
· Review the selected readings in your course text, focusing on the types of information that might be gathered for measuring needs. Reflect on the potential ethical issues that could arise when gathering information from at-risk populations and how you can guard against unethical behavior or processes.
· Review the article “Ethical Guidelines for Designing and Conducting Evaluations of Social Work Practice,” focusing on the difference between social science research activities and evaluations of social work practice. Think about the important role of human services professionals in designing ethical programs that not only assist clients but also protect them from unethical practices.
· Review the article “Applying Ethical Standards to Research and Evaluations Involving Lesbian, Gay, Bisexual, and Transgender Populations.” Think about how the population you are working with influences what is considered to be ethical.
· Review the article “Choosing a Truly External Evaluator,” focusing on the responsibility of the social service professional to ensure ethical behavior from all parties involved in a program.
· Review the article “Toward a More Public Discussion of the Ethics of Federal Social Program Evaluation.” Consider the potential consequences of failing to outline specific ethical standards.
· Review the information found on the NOHS website. Think about the importance of having such detailed standards and how you could use them in decision making.
· Select an at-risk population that is of interest to you. Reflect on the types of ethical decisions you might face when working with members of this population.
With these thoughts in mind:
By Day 4
Post a brief description of the at-risk population you selected. Then explain at least two ethical concerns you should be aware of when working with this population. Finally, explain how you might address these concerns. Justify your response using the NOHS code of ethics.
Be sure to support your postings and responses with specific references to the Learning Resources.
MY COMMUNITY IS NORFOLK VIRGINIA AND THIS IS DUE BY SATURDAY 8/14
Final Project
For your Final Project, you will apply the skills and approaches to program design and evaluation for human services programs to the homeless population in your community. If you are fortunate enough to live in a community without a pressing homelessness problem, select a major city near you, or another community with which you are familiar.
Throughout the course, you address the issue of homelessness by analyzing a single case study in your Discussions and Assignments:
· In the Week 1 Discussion, you describe the motivation for the program in the case study and examine whether the program meets the needs of clients.
· In the Week 2 Discussion, you identify the types of needs of the homeless and explain how the identification of those needs might influence responses to homelessness in the case study.
· In the Week 2 Assignment, you explain how perceptions of the needs of the homeless may differ from their real needs, and consider why it is important to understand the population’s actual needs.
· In the Week 3 Assignment, you develop a needs assessment for the case study.
· In the Week 4 Discussion, you consider evaluative tools to measure the effectiveness of human services delivery in the case study.
· In the Week 4 Assignment, you use a “logic model” to develop a program design and evaluation.
While the Discussion Questions and Assignments are not meant to provide you with the content of your Final Project, they are meant to provide you with an opportunity to practice each piece of a successful Program Design and Evaluation Narrative to address the social problem of homelessness in your community. There is no specific template for a Program Design and Evaluation Narrative, but you should use the guidelines below to craft a single document package that addresses all of the elements of program design and evaluation covered in this course.
With this in mind, to successfully complete this Final Project you must:
Reflect on the program design and evaluation concepts and practices discussed throughout the course to complete a Program Design and Evaluation Narrative for the homeless population in your community. Incorporate the feedback you received from your Instructor on the Assignments to refine and revise your Final Project before submitting it.
As you finalize your Program Design and Evaluation Narrative, please do not cut and paste from each week’s separate components. The Assignment is to write an actual narrative that could potentially serve the homeless population in your specific community. Your narrative should be edited and seamless, and it should pertain to the homeless population in your community—not that of the case study.
Your Program Design and Evaluation Narrative should include the logic model in chart form. The Final Project should be double-spaced and APA-formatted, use appropriate grammar and spelling, and include citations from the course Learning Resources as well as from your own research. Your Program Design and Evaluation Narrative must include the following components:
· A description of your community, including background and demographic information on the homeless population in your community
· A description of the currently available human services programs offered for the homeless population in your community
· A description of the needs assessment you would implement to understand the motivations and needs of the homeless population in your community (include questions you might ask as part of this assessment) and to understand the effectiveness of the services offered (if more than one, then select one to use for this project)
· An outline of the data collection procedure you might use
· An explanation of whether the answers to the questions you created would elicit quantitative or qualitative results
· An explanation of how the answers to these questions would inform an evaluation of the program
· The motivations and normative, expressed, perceived, and relative needs of the homeless population in your community (use newspapers written by the homeless where possible)
· A logic model and evaluation strategies to deal with the homelessness problem in your community
· An explanation of the mission, goals, outcomes, and purpose of the program you would implement
· An explanation of how the program you designed would address social change in the community
Note: As you write the narrative, think about how your ideas evolved throughout the course: What have you learned about identifying needs and creating effective programs and program evaluations? Using your Discussion and Assignment responses as a launching point, your narrative should include the motivations, needs, needs assessment, mission, goals, outcomes, purpose, logic model, and evaluation strategies to deal with the homelessness problem in your community.
The Final Project will be evaluated according to the indicators in the Final Project and Writing Rubric located in the Course Information area. Be sure that the Final Project is written using APA format.
Information on scholarly writing may be found in the APA Publication Manual and on the Walden Writing Center website. Also see the Walden University Policies and Information in the Guidelines and Policies area under Policies on Academic Honesty.
TOWARD A MORE PUBLIC DISCUSSION OF THE ETHICS OF FEDERAL SOCIAL PROGRAM EVALUATION
Jan Blustein
Abstract
Federal social program evaluation has blossomed over the past quarter century.
Despite this growth, there has been little accompanying public debate on research
ethics. This essay explores the origins and the implications of this relative silence
on ethical matters. It reviews the federal regulations that generally govern research
ethics, and recounts the history whereby the evaluation of federal programs was
specifically exempted from the purview of those regulations. Through a discussion
of a recent evaluation that raised ethical concerns, the essay poses—but does not
answer—three questions: (1) Are there good reasons to hold federal social program
evaluations to different standards than those that apply to other research?; (2) If
so, what ethical standards should be used to assess such evaluations?; and (3)
Should a formal mechanism be developed to ensure that federal social program
evaluations are conducted ethically? © 2005 by the Association for Public Policy
Analysis and Management
Why is there so little public discussion of ethical matters among social program
evaluators? This question first occurred to me five years ago, while I was teaching
a course in program evaluation here at the Wagner School. I wanted to include a
session on the ethics of evaluation, and was having a hard time finding good mate-
rial and good cases.
I knew that ethics were not entirely ignored in the field. A few social program
evaluators such as Robert Boruch have devoted substantial portions of their careers
to ethical issues in experimentation (for example, Boruch, 1997; Boruch & Cecil,
1983). Brief discussions of ethical issues have appeared in textbooks and edited vol-
umes (for instance, Gueron, 2002; Orr, 1999). Some of the relevant professional
associations have issued non-binding guidelines regarding good professional con-
duct, and these pertain in part to ethical matters (for example, American Evalua-
tion Association, 1994). But there has been surprisingly little debate. For instance,
an electronic search of the contents of back issues of JPAM yields very little on
ethics or the protection of research subjects.
I was intrigued by this paucity of public discussion because I am a physician, and
my medical training included a substantial component on research ethics (more on
this later). From the perspective of that training, applied social scientists tread on
ethically shaky ground. For example, most codes of medical ethics include special
protections for research on vulnerable and disempowered populations such as the
children and the poor. And the subjects of applied social research are often vulner-
able and disempowered. Ethical issues are often fiercely debated in the medical
community (for example, the debates on the testing of anti-HIV drugs in develop-
ing countries; see Lurie & Wolfe, 1997; Varmus & Satcher, 1997).
In the years since the question occurred to me, I have tried to understand this rel-
ative silence. The more I have learned, the more interesting and complex the story has
become. This essay distills what I have come to understand, beginning with some his-
tory of the debate on scientific research ethics in this country. I describe the so-called
Belmont Report (National Commission for the Protection of Human Subjects of Bio-
medical and Behavioral Research, 1979), which articulates the principles that have
informed researchers’ thinking about ethics for the past quarter-century. Shortly after
the Report was published, its principles were incorporated into regulations governing
federally funded biomedical and behavioral research. But, as I will describe, the eval-
uation of public social programs was explicitly excluded from the purview of Bel-
mont, and subsequently was largely exempted from federal ethical regulation. For
better or worse, the evaluation of federal social programs has therefore proceeded
under a kind of ethical “honor system.” This raises the question of whether self-regu-
lation has been effective, and whether evaluation rests on firm ethical ground.
In this essay, I explore these issues by using as an example a recent evaluation,
the National Job Corps Study (U.S. Department of Labor Employment and Train-
ing Administration, 2001). My analysis suggests that the study might not pass
muster under the criteria that generally govern research involving human subjects.
The essay raises—but does not answer—three questions: (1) Are there good rea-
sons to hold federal social program evaluations to different standards than those
that apply to other research?; (2) If so, what ethical standards should be used to
assess such evaluations?; and (3) Should a formal mechanism be developed to
ensure that federal social program evaluations are conducted ethically?
A CAVEAT
Before I start, I would like to say a bit more about the perspective, training, and
experience that I bring to this enquiry. While I have some formal training in ethics,
I am not a professional ethicist, nor do I have legal training. Perhaps most impor-
tantly, I am not an evaluator. My everyday research is in health policy, and it largely
involves the analysis of survey data. I am based in an academic setting. In other
words, my work experience is fairly distant from the world of the evaluation firms
and federal agencies that undertake much of the work that will be discussed here.
As an interested observer, I bring the handicaps and advantages of the outsider.
THE BELMONT REPORT
I began my enquiry by reviewing what I knew about medical research ethics. That
field blossomed in the middle of the last century, with the abuses of medical science
that occurred during World War II. At the war’s end, the world responded with the
Nuremberg Code. Later, in the U.S., the 1960s and 1970s brought public awareness
of the abuse of vulnerable subjects at the hands of science in such places as
Tuskegee and Willowbrook. Public outcry led to congressional hearings and the pas-
sage of the 1974 National Research Act (PL 93–348), which created the landmark
National Commission for the Protection of Human Subjects of Biomedical and
Behavioral Research. That Commission was charged with identifying the boundary
between practice and research, and with establishing basic ethical principles that
should underlie the conduct of biomedical and behavioral research. The Commis-
sion’s major written product was the Belmont Report.
The Report begins by making the distinction between the everyday private activ-
ity of providing treatment and the public activity of scientific research. It argues
that a fundamental distinguishing characteristic of research is generalizability. Sci-
entific research is conducted to acquire knowledge that is accessible to all. That
knowledge is a public good. Because research subjects assume risk in order to gen-
erate this good for all, different (and higher) ethical standards apply in science than
do in everyday life.
Belmont proposes three principles to guide the conduct of research: respect for
persons, beneficence, and justice. Because these principles are fundamental to fed-
eral regulation of research involving human subjects, they are described briefly
here.1 Also mentioned are applications of the principles to research, as well as areas
in which interpretation of the principles has proven problematic. Interpretation is
key; as the Belmont commissioners noted, the principles are general guidelines
rather than algorithmic solutions to ethical problems in research.
Respect for Persons
This principle holds that subjects should be treated as autonomous agents, with
their own perspectives, goals, values, and considered opinions. The notion of
informed consent is derived from this principle: The experimenter cannot know
whether a subject should choose to participate in a study; only the subject knows
his/her own set of preferences. The principle reflects the Kantian imperative to
avoid treating people as means to some end.
There has been much discussion of the application of this principle. For example,
what constitutes “informed consent”? Scholars have questioned the concept of
“informed” (What kind of information must be provided to the subject? How
detailed must the disclosure be? How should the facts be presented? How can the
investigator ensure that information has been transmitted?), and “consent” (What
are the elements of consent, and what constitutes undue influence or coercion?
What ethical constraints pertain to experiments with subjects who have limited
autonomy?). Resolution of these issues is of course beyond the scope of this paper.
However, it is notable that voluntary consent dictates that subjects are free to
refuse, and to revert to a state that might be called the “counterfactual”—that is, a
world with the same expected risk and benefit structure that prevailed before the
offer to participate was tendered. Thus, for example, potential subjects in medical
experiments are given assurances that refusal to participate will not jeopardize their
relationships with their physicians, their treating hospitals, and so on.
Beneficence
This is a two-part principle, which the Belmont committee articulated as “(1) do not
harm and (2) maximize possible benefits and minimize possible harms.” The com-
mittee noted that this is not an absolute injunction against exposing subjects to
risk—indeed, the scientific enterprise inevitably exposes subjects to risk—but that
risk is only justified in proportion to the expected benefits.
1 These rich and complex principles can be only summarized briefly here; interested readers may wish
to read the original report (see “References” for the URL), or consult a standard text in the field (the most
widely read is by Beauchamp and Childress [2001], now in its 5th edition).
The committee also noted that risks and benefits accrue both to individual sub-
jects and to society at-large. They enjoined researchers to view science as a pro-
gressive influence: “In the case of scientific research in general, members of the
larger society are obliged to recognize the longer term benefits and risks that may
result from the improvement of knowledge and from the development of novel med-
ical, psychotherapeutic, and social procedures.”
The principle of beneficence also provides guidance as to when it is ethical to
assign different groups of subjects to different treatments. For the investigator to
avoid harming either group of subjects, there must be uncertainty as to whether the
treatment or the control group will fare better. Medical ethicists use the term
“equipoise” for this uncertainty. The notion of equipoise is often invoked in deter-
mining when it is permissible to withhold treatment in the course of research.
Many hold that placebo-controlled experiments are ethical only when there is no
known effective treatment for the condition in question. In such cases, it is not clear
whether the treatment or placebo group will do better. But if there is a known effec-
tive treatment, an “equivalence” trial must be conducted. To give a medical exam-
ple: To evaluate a new drug for the treatment of early-stage breast cancer, a placebo-
controlled study would not be fair to subjects, because there is a known effective
treatment for early-stage breast cancer. However, it would be ethically justifiable to
evaluate the new drug in an equivalence study, in which one group of patients was
offered the known effective care plus the new drug, and the other group was offered
just the known effective care for cancer.
Some believe that there is an important exception to the rule that placebo-controlled
studies should be undertaken only when there is no known effective treatment. If the
harm (or the “disease”) is minor and short-lived, then placebo-controlled studies may
be undertaken even when known effective treatments exist. For instance, trials of
headache medicines or antihistamines may be placebo-controlled, even though there
are known effective treatments for headaches and allergies. A very few others would
go further and allow placebo-controlled trials even when potential harms are sub-
stantial, if there are large societal benefits at stake (for a sense of this debate, see Roth-
man, Michaels, & Baum, 2000).
As noted above, the principle of beneficence also requires the investigator to max-
imize possible benefits and minimize possible harms. Balancing harms and bene-
fits is one of the most problematic of ethical requirements. There are many kinds
of harms and benefits, and these can be expected to accrue to different parties (indi-
viduals, affected communities, society at-large), with various levels of certainty.
Trade-offs between individual and societal benefits are particularly problematic;
there is no formula for deciding where to strike the balance.
Justice
This refers to distributive justice, or the fair and equitable distribution of the bene-
fits and burdens of research. At the time that the Report was written, there was a
long history of poor patients serving as subjects while wealthy patients benefited
from medical research. Citing such abuses, the Commissioners emphasized the
need to avoid research in which “welfare patients, particular racial and ethnic
minorities, or persons confined to institutions are being systematically selected sim-
ply because of their easy availability, their compromised position, or their manipu-
lability, rather than for reasons directly related to the problem being studied.”
It bears emphasis that these principles were meant to apply to both biomedical and
behavioral research. On the biomedical side, applying the principle of justice has literally changed the face of clinical research. Disadvantaged subgroups no longer bear
a disproportionate burden of the risk of experimentation. While experiments can be
performed exclusively in vulnerable subpopulations, there is a significant burden on
the experimenter to demonstrate that this is justifiable (for example, when the disease
in question occurs exclusively in that subpopulation, when members of the popula-
tion and their advocates agree that the protocol is just and reasonable, and so on).
INCORPORATION OF THE REPORT INTO FEDERAL POLICY; EXCLUSION OF RESEARCH ON PUBLIC SOCIAL PROGRAMS
Consistent with the Commission’s recommendations, the Belmont Report was pub-
lished as policy by the Secretary of HEW in 1979. Around the same time, the Com-
mission issued a report recommending an institutional mechanism for ensuring the
protection of human subjects in scientific research. Institutional Review Boards
(IRBs)—composed of scientists working in the institution and “at least one mem-
ber not otherwise affiliated with the institution”—would review research to ensure
that the work was conducted in a fashion consistent with respect for persons, benef-
icence, and justice (Protection of Human Subjects: Institutional Review Board,
1978). Regulations governing research with human subjects, and describing the
nature and scope of IRB activities for HEW-sponsored research, were finalized in
early 1981, as the aforementioned 45 Code of Federal Regulations 46 (Federal Reg-
ister, 1981; for a detailed history of the era, see Gray, 1982). Nearly ten years later,
the regulations were tailored and adopted by 16 federal agencies as a single general
set of provisions known as the “Common Rule” (Federal Register, 1991).
The Common Rule requires that researchers submit detailed work plans to a
committee composed of peers and at least one outside member of the community.
The IRB reviews those plans to ensure that subjects are being treated in a fashion
consistent with the principles articulated in the Belmont Report. The requirement
for IRB review is time consuming and sometimes irksome, and many investigators
view it as no more than a bureaucratic hurdle. Some IRBs function poorly, some-
times in ways that seriously compromise their effectiveness (Department of Health
and Human Services [DHHS], Office of the Inspector General, 1998; Institute of
Medicine, 2002). Nonetheless, the presence of formal regulations with explicit cri-
teria, and the accompanying institutional structure, serves as a constant reminder
of ethical matters. I believe that this has been a critical factor in keeping awareness
of research ethics in the forefront, particularly in the medical research community.
Debate on Applicability to Public Social Program Evaluations
To understand how research on public social programs fits into this picture, we
must return to the congressional hearings of 1974. Among other things, those hear-
ings revealed abuses of subjects in Tuskegee, under the sponsorship of the Public
Health Service, administratively under HEW. As the hearings progressed, it became
clear that unless the Department took initiative, Congress would pass legislation to
govern HEW-sponsored research. In a colorful history of the debate, Richard A.
Tropp, a high-level staffer in HEW at the time, describes how the agency responded
by taking standing NIH human subjects regulations, and applying them to all
research conducted under its purview:
Under the gun of imminent congressional passage of the National Research Act, and in
order to preempt a possible Senate move to include in it mandatory ethical standards
governing federally sponsored research, the secretary of HEW on May 22, 1974, signed
a regulation on “Protection of Human Subjects” which essentially transformed the
predecessor NIH guidelines into department policy. The regulation was largely a prod-
uct of the “H” part of HEW, drafted without participation by those HEW agencies
which customarily commission social science research products and maintain daily
conduct with the social science research world (Tropp, 1982, pp. 391–392).
But adoption of criteria that worked in the NIH context did not work for many in
the Department who were involved in social programs. They were accustomed to
working without the kind of external oversight that is implied by IRB review. Many
HEW staffers ignored the regulation (Tropp, 1982). In response, the first of a series
of lawsuits was brought on behalf of subjects in HEW-sponsored social experiments.
Crane v. Matthews (417 F. Supp 532), a 1976 class action by Georgia Medicaid bene-
ficiaries, sought to halt a demonstration project assessing the impact of beneficiary
cost sharing on the “overutilization” of “marginally needed” medical care. The plain-
tiffs argued that cost-sharing would put them at risk of foregoing needed care, and
thus poorer health. In its ruling, the court halted the demonstration on the grounds
that the Department had not sought IRB review, as was required in the new regula-
tions. By failing to appeal this decision, the federal government essentially converted
the requirement for IRB review into law. Confusion and conflict over the IRB issue
escalated within HEW in the ensuing years (Tropp, 1982; for a sense of the substance
and tone of the discussion, see Rivlin & Timpane, 1975).
By the time that the Belmont Commission issued its final report in 1979, the topic
was sufficiently controversial that the Commissioners were unable to reach a con-
sensus. A final footnote to the Belmont Report left the problem to posterity:
Because the problems related to social experimentation may differ substantially from
those of biomedical and behavioral research, the Commission specifically declines to
make any policy determination regarding such research at this time. Rather, the Com-
mission believes that the problem ought to be addressed by one of its successor bodies
(emphasis added).
The issue has never been addressed by any of the successor bodies, from the Pres-
ident’s Commission for the Study of Biomedical and Behavioral Research in the
early 1980s, to the current President’s Council on Bioethics.2
Public Social Program Evaluations Exempted
A few years after the Belmont Commission finished its business, the agency—now
the Department of Health and Human Services—attempted to settle the ethical ques-
tion through regulatory means. In a 1982 Notice of Proposed Rulemaking, it identi-
fied categories of research that would be exempt from what would later become the
Common Rule. One exempt category was social program evaluations, namely:
. . . [r]esearch and demonstration projects which are conducted by or subject to the approval of the Department of Health and Human Services, and which are designed to study, evaluate, or otherwise examine: (i) public benefit or service programs; (ii) procedures for obtaining benefits under those programs; (iii) possible changes in or alternatives to those programs or procedures; or (iv) possible changes in methods or levels of payment for benefits or services under those programs (42 FR 12276; March 22, 1982).
2 This is not to say that national commissions and other august bodies have ignored research in the social and behavioral sciences. In fact, debate on the applicability of the “biomedical” ethical model to social and behavioral research more generally is still very much alive at the national level (for a summary, see Singer & Levine, 2003). But neither national commissions nor other august bodies have addressed the issue posed here, namely, the ethical constraints on evaluations of public social programs and the appropriateness of the exemption under the Common Rule.
In its filing in the Federal Register, the Department did not argue that the exemp-
tion was in the best interest of research subjects. Rather, it emphasized pragmatic
and legal considerations: External review of social experiments was “protracted,
cumbersome and duplicative” (particularly in cases where programs were imple-
mented in multiple states). External review undermined DHHS’s statutory author-
ity to modify program characteristics, and to learn from those modifications. The
medical model was not appropriate in the evaluation context. Finally, in reference
to ethical matters, the Department suggested that it had special expertise that
allowed it to make more informed judgments:
We do not agree with the . . . belief that the “ethical” aspects of research in benefits pro-
grams will go unreviewed unless nongovernmental individuals with expertise in the
ethics of research participate in consideration of proposed studies. The questions raised
by research involving government benefits are significantly different from those raised
by biomedical and behavioral research. IRBs are typically constituted to deal with the
special ethical and other problems involved in biomedical and behavioral research. In
contrast, ethical and other problems raised by research in benefit programs will be
addressed by the officials who are familiar with the programs and responsible for their
successful operation under state and federal laws (48 FR 9266; March 4, 1983).
In the end, decisions about the ethics of DHHS-sponsored social program evalu-
ations were removed from the public realm. In the coming years, similar positions
were adopted by other federal agencies. By 1991, the exemption of social program
evaluations became part of the “Common Rule,” adopted by 16 federal agencies.
For evaluations conducted by those agencies, there would be no mandatory “reality
checks” of ethical issues by outsiders. The protection of human subjects would be
an internal concern, or one shared with the evaluation firms that carried out much
of the government’s social research agenda.
A THOUGHT EXPERIMENT: APPLYING THE BELMONT PRINCIPLES TO A SOCIAL PROGRAM EVALUATION
As we have seen, the Belmont Commission left open the question of whether assess-
ments of social programs are sufficiently different from other biomedical and behav-
ioral research to warrant different protections for human subjects. The regulations
described above suggested that they are. But those regulations were primarily justi-
fied on pragmatic and procedural grounds, not ethical ones. One way to shed light
on the ethical merits of treating public program evaluations differently is to treat
them the same, and see what happens. For example, we can do a thought experi-
ment: What happens when the Belmont principles are applied to the evaluation of a
public program? Is that evaluation conducted in a fashion that is consistent with jus-
tice, respect for persons, and beneficence, as articulated in Belmont? Were the inves-
tigators uncertain as to whether the treatment or control group would fare better?
Were subjects harmed? Did the investigators take steps to minimize harm and max-
imize benefit? Was informed consent obtained? If not, how did the test case fall
short? If it fell short, are we content with this state of affairs?
Choosing a case for this thought experiment is somewhat arbitrary, since there
are many different contexts and strategies for social program evaluation. As JPAM
readers well know, some evaluations are experiments and others are not. Some eval-
uate programs by offering more; others evaluate programs that offer less. In some
cases the programs that are being evaluated were formerly legal entitlements; in
other cases they were not.
For this thought experiment, my case is the National Job Corps Study. I selected
it because it was the first (and one of the only) studies that I found when I initially
Google™-searched “ethics of social program evaluation.” While the study is in
many ways atypical (it was an experimental evaluation of an existing program that
involved denial of services), it has features that make it a good case. It is similar to
a medical clinical trial in that it involved random assignment. The evaluation also
had a clear mandate, and it had ethical repercussions.
The National Job Corps Study
Let me begin by providing some of the facts of this case, as I have come to under-
stand them through publicly available documents. I will review the program that
was evaluated, the context and planning for the study, and some of the key issues
that arose in implementation.
The Job Corps Program. In 1964, Job Corps began to offer educational, vocational,
and support services to young people to help them to “become more responsible,
employable and productive citizens.”3 The program was not a categorical entitle-
ment; a fixed number of slots were funded each year. Recruitment was carried out
by independent agencies under contract with Job Corps. Participants were between
the ages of 16 and 24, with approximately 70% from communities of color. Because
it was a primarily residential program, Job Corps was expensive ($14,000 per par-
ticipant at the time of the evaluation; Burghardt et al., 2001).
Context for the Study. When the study was being planned in 1993, there had been
only one prior evaluation of the program. That non-randomized study had shown
increases in earnings and decreases in criminal activity among Job Corps partici-
pants. However, the study had been conducted in the 1970s. By the mid-eighties,
experimental studies of some job training programs had shown no significant
impact (and even negative impacts) on some subgroups of participants. Doubts
about the internal validity of non-randomized studies of job training programs had
led several expert panels to specifically endorse the random assignment approach
based upon the enhanced internal validity of the impact estimates (Betsey, Hollis-
ter, & Papageorgiou, 1985; also see Stromsdorfer et al., 1985). The National JPTA
Act amendments of 1992 mandated a new evaluation of Job Corps, with a random
assignment methodology “if feasible” (29 USC §1732(d)(2)(A)).
Planning the Study Design. In 1993, a panel met to consider the design of the eval-
uation. At the meetings were the investigators, Department of Labor and OMB staff,
Job Corps representatives, and several highly regarded researchers who were not
affiliated with the evaluation (Frazer & McConnell, 1993). In weighing the design
options, the group recognized ethical obstacles to the random assignment strategy,
and acknowledged the need to “minimize the potential for damaging [recruiting]
agencies referral relationships and eroding community support for the Job Corps
program,” “deal with politically sensitive and particularly needy subgroups of eligi-
ble Job Corps applicants,” and “minimize the impacts of the study for those placed
in the control group.” However, the group felt that the “problems posed by the random assignment design were outweighed by its methodological rigor and the greater credibility of its results” (Frazer & …
3 Job Corps still exists despite modifications over the years. It is no longer a free-standing entity, but has been subsumed and to some extent reorganized under the Workforce Investment Act.
Applying Ethical Standards to Research and Evaluations Involving Lesbian, Gay, Bisexual, and Transgender Populations
James I. Martin
William Meezan
SUMMARY. This manuscript examines the application of ethical stan-
dards to research on LGBT populations and the evaluation of programs
and practices that impact them. It uses social work’s Code of Ethics (Na-
tional Association of Social Workers, 1996) and psychology’s Ethical
Principles of Psychologists and Code of Conduct (American Psychologi-
cal Association, 1992) to examine specific ethical issues as they pertain
to research involving LGBT populations. It notes that when conducting
studies with these populations, researchers may need to take additional
measures to protect participants from harm and to ensure the relevance
and usefulness of their findings. In addition, heterosexist and genderist
biases are examined as ethical issues, as is the tension between scientific
objectivity and values in research involving LGBT populations.
KEYWORDS. Ethics, research, professional codes of ethics, research
bias, gay, lesbian, bisexual, transgender
Cournoyer and Klein (2000) defined professional ethics as principles of
conduct, based on a specific set of values, that guide appropriate professional
behavior. Because personal and professional values inform nearly every
choice made when engaging in both research and practice, the main profes-
sional organizations for social workers and psychologists have codified their
core values into specific standards of ethical conduct. They are the National
Association of Social Workers’ (NASW) Code of Ethics (1996) and the Amer-
ican Psychological Association’s (APA) Ethical Principles of Psychologists
and Code of Conduct (1992). Although the standards of these two organiza-
tions are not exactly the same, they have many similarities.
Members of NASW and APA are expected to abide by their respective
Codes when engaging in research and practice. Adherence to these Codes, and
the standards that derive from them, are expected to protect the public from po-
tential harm when receiving services and participating in research and evalua-
tion studies.
Hardly any of the numerous elaborations, explanations, and applications of
ethical standards in social work and psychological research (e.g., Kendler,
1993; McHugh, Koeske, & Frieze, 1986; Padgett, 1998; Reamer, 1998; Royse,
Thyer, Padgett, & Logan, 2001) identify the unique ethical dilemmas that may
arise in the conduct of research with lesbian, gay, or bisexual populations or
explain the application of ethical standards in these situations (see Herek,
Kimmel, Amaro, & Melton, 1991; Martin & Knox, 2000; Woodman, Tully, &
Barranti, 1995). None examine the application of ethical standards to research
involving transgender populations.
Lesbian, gay, bisexual, and transgender (LGBT) populations are marginalized in
American society, and their members are at risk for experiencing violence, dis-
crimination, and exploitation in a variety of contexts (Herek, Gillis, Cogan, &
Glunt, 1997; Hunter, Shannon, Knox, & Martin, 1998) and the subsequent nega-
tive effects of these experiences (Clements-Nolle, Marx, Guzman, & Katz,
2001; Diaz, Ayala, Bein, Henne, & Marin, 2001; Herek, Gillis, & Cogan,
1999; Hershberger & D’Augelli, 1995; Meyer, 1995; Savin-Williams, 1994).
Because research involving LGBT populations always occurs within this con-
text, there may be greater potential for exploitation and harm to participants
and the communities they represent in these studies than in studies of less vul-
nerable and marginalized populations. These dangers are likely to be magni-
fied in studies of “deviant behaviors” or social problems (e.g., alcoholism and
drug abuse, intimate partner violence, HIV behavioral risk patterns) in LGBT
populations. Therefore, the lack of attention in the literature to the ethical di-
lemmas encountered in research involving LGBT populations, or the elabora-
tion of guidelines for protecting members of these populations in the course of
research, is extremely troubling.
There is an ample history of medical and social science research involv-
ing LGBT populations that has violated contemporary ethical standards.
Murphy’s (1992) review of the strategies used to attempt to change the sex-
ual orientation of men and women includes accounts of numerous studies
that caused physical harm to their participants. For example, Nazi physicians
studied the effectiveness of castration and subsequent hormone injections in
extinguishing homoeroticism among male prisoners (Plant, 1986). Bremer
(1959) reported on castration among 244 men, and concluded that although it
succeeded in reducing sex drive it was not effective in changing homoerotic
orientation. Owensby (1941) studied the use of pharmacologic shock in “cor-
recting” the homosexuality of 15 men and women. Several studies (e.g.,
Callahan & Leitenberg, 1973; McConaghy, 1976; Tanner, 1974) have exam-
ined the effectiveness of behavior therapy in changing sexual orientation, in-
cluding covert sensitization paired with contingent electric shock and
apomorphine therapy, which may cause vomiting or erection depending on
dosage and method of use.
There is no evidence that physical harm was caused to participants in the
Bieber et al. (1962) study of the effectiveness of several years of psychoana-
lytic treatment on changing men’s sexual orientation. However, the treatment
being evaluated might have harmed participants psychologically by encourag-
ing them to maintain futile efforts toward changing their sexual orientation,
and by reinforcing guilt and shame regarding their sexual feelings. For exam-
ple, after many years of psychoanalytic treatment, Duberman (1991) discussed
with his psychiatrist the difficulty he had in accepting the perspective of gay
liberation. He stated “I suspect it’s the extent of my brainwashing–too many
years hearing about my ‘pathology,’ and believing it” (p. 193).
In addition, by claiming success in changing men’s sexual orientation based
on questionable methodology, Bieber et al. (1962) undoubtedly contributed to
discriminatory societal attitudes and public policies. In particular, this study,
which claimed that 27 of 106 participants changed their sexual orientation to
exclusively heterosexual, used a questionnaire that the participant’s psychoan-
alyst completed; there was no self-report of either sexual behavior or fantasy
included in the measurement package.
Perhaps the best known example of research that risked harming members
of an LGBT population is Humphreys’ (1970) Tearoom Trade. This study’s
design involved an elaborate deception in which men’s same-gender sexual
behaviors in a public restroom were observed and recorded. Subsequently, the
auto license plates of these men were used to obtain their home addresses.
They were then asked to participate in an interview in that setting, using false
pretenses to help ensure their cooperation.
Because Humphreys did not obtain informed consent from participants, and
especially because of the extensive deception used in all phases of the re-
search, the men’s participation in this research must be considered to have
been involuntary. In addition, the study participants’ privacy was obviously
invaded. However, the researcher did not breach the participants’ confidential-
ity, and there is no evidence that any of them experienced actual harm. In fact,
the study was noteworthy for bringing same-gender sexual behavior among
men out of the closet in a scientifically neutral, non-condemnatory manner.
In other studies of LGBT populations, the extensive measures taken to
protect confidentiality reflected the magnitude of the danger participants
were thought to face. For example, Hooker (1957) interviewed gay partici-
pants in her home, rather than in her university office, fearing that no one
would take part in her study without such protection–study participants were
well aware that their lives could be seriously damaged if their confidentiality
was breached. Hooker (1993) recalled that one of them “called me long dis-
tance at frequent intervals to ask whether his tapes had been erased” (p. 451).
Some studies have caused harm to LGBT communities, rather than to their
individual participants, because of the way in which their results were used.
For example, Herek (1998) noted that although researchers have generally ig-
nored the studies conducted by Cameron and his colleagues (e.g., Cameron &
Cameron, 1996; Cameron, Proctor, Coburn, & Forde, 1985), these studies
“have had a more substantial impact in the public arena, where they have been
used to promote stigma and to foster unfounded stereotypes of lesbians and
gay men as predatory, dangerous, and diseased” (p. 247). According to Herek,
Cameron’s studies were used to promote and defend Colorado’s Amendment
2, an initiative that would have prevented any level of that state’s government
from prohibiting discrimination against lesbian, gay, and bisexual residents. In
striking down this law, the U.S. Supreme Court ruled that it violated the Equal
Protection Clause of the United States Constitution (FindLaw Resources,
n.d.).
This article will use the standards for evaluation and research (sec. 5.02) of
the NASW Code of Ethics as a framework for examining ethical conduct with
LGBT populations. Only those standards that are particularly applicable to
LGBT populations will be examined. The article will also make reference to
selected standards in the APA Code of Conduct, and it will include ethical is-
sues that are not directly addressed by either ethical code.
ETHICAL STANDARDS FOR RESEARCH
Standard 5.02(a). Social workers should monitor and evaluate policies,
the implementation of programs, and practice interventions.
Standard 5.02(b). Social workers should promote and facilitate evalua-
tion and research to contribute to the development of knowledge.
Since the 1970s, monitoring and evaluating practice interventions and pro-
grams have become increasingly important aspects of social work practice. With
the exception of some AIDS service organizations, LGBT community organiza-
tions tend to be grassroots enterprises that rarely receive funding from govern-
ment agencies. When social workers are involved with such organizations,
either as board members or employees, they should spearhead or encourage pro-
gram monitoring and evaluation efforts. In addition, the lack of outcome infor-
mation for social work interventions, as noted by Reamer (1998), is particularly
serious for interventions with LGBT clients. Evaluations of programs and inter-
ventions, especially if they are published, can help to ensure that LGBT com-
munity organizations are accountable to their members and clients, and that
providers’ services are appropriate, effective, and efficient.
Standard 5.02(c). Social workers should critically examine and keep
current with emerging knowledge relevant to social work and fully use
evaluation and research evidence in their professional practice.
According to several authors, many social workers lack information or have
biases about LGBT populations (Berkman & Zinberg, 1997; Martin & Knox,
2000; Morrow, 1996). Accreditation standards of the Council on Social Work
Education (CSWE) that require curriculum on lesbian and gay persons and
their unique issues are an attempt to rectify this problem (CSWE, 1994). In ad-
dition, there has been a major increase in the amount of published research on
lesbian and gay populations, due in large part to social work faculty who are
openly lesbian or gay and to the nondiscrimination policies of the educational
institutions that employ them. But social work research on bisexual and
transgender populations continues to lag far behind (Martin & Hunter, 2001).
Unfortunately, professional social workers may not keep up with advances
in professional knowledge (Barker, 1990). This situation is particularly dis-
turbing for LGBT persons in need, given the lack of information about LGBT
populations that many social work students report receiving in their graduate
programs (Berkman & Zinberg, 1997). In other words, social workers may not
learn much about these populations while getting their social work degrees,
and they might not learn much more after they graduate.
Standard 5.02(d). Social workers engaged in evaluation or research
should carefully consider possible consequences and should follow
guidelines developed for the protection of evaluation and research par-
ticipants. Appropriate institutional review boards should be consulted.
Research can result in both positive and negative consequences to its partic-
ipants and the populations they represent. Similarly, evaluations can result in
both benefits and costs to any of the program’s stakeholders. Researchers must
think very seriously about these potential consequences, especially when par-
ticipants or stakeholders are members of vulnerable LGBT populations. In
other words, social work research must be socially responsible (Padgett,
1988).
One important consideration in this regard is the way in which others might
use research or evaluation findings. For example, the U.S. Secretary of Health
and Human Services (HHS) recently ordered an audit of HIV prevention
grants and the programs to which they have been awarded. This review was ap-
parently prompted by an HHS report criticizing the San Francisco Stop AIDS
Project, which serves a large gay population, for the use of “obscenity” and for
encouraging sexual activities (Erickson, 2001). One possible use of this audit
would be to withhold or withdraw funds from HIV prevention programs for
gay and bisexual men. Evaluations of such programs are not inherently
heterosexist, but when they are conducted within a hostile political context
there is ample reason for concern. Padgett (1998) suggested that researchers
take care to “frame the presentation of the [study’s] results and discuss their
implications” (p. 43) in ways that minimize the possibility that others could
misuse them to harm the participants, the groups they represent, or those who
serve them.
When psychologist Evelyn Hooker planned her study testing the assump-
tion of psychopathology among gay men (Hooker 1957), she could not have
known that it would eventually lead to the declassification of homosexuality
by the American Psychiatric Association. Surely she considered the possibility
of some benefit to the larger population of gay men if her sample was found to
have no more psychopathology than nongay men (as it did). But if the sample
were found to have more psychopathology, it might have closed off the possi-
bility of other research on “normal male homosexuals” (Hooker, 1993, p. 451),
especially considering the extremely hostile political and social environment
in which the study was conducted.
Since 1974, Institutional Review Boards (IRBs), charged with ensuring the
protection of research subjects from harm, have been mandated for all organi-
zations that receive federal funds and conduct behavioral or biomedical re-
search. The federal regulations that mandate and govern IRBs are detailed in
45 CFR 46 (U.S. Department of Health and Human Services, 1997). Needs as-
sessments, program monitoring and evaluations, and single system studies of
client outcomes are not included in the definition of “research,” according to
the federal regulations.
Thus, agencies engaging in these activities, but not research leading to
generalizable knowledge, are not required to have IRBs (Cournoyer & Klein,
2000). Few LGBT community agencies conduct “research” as defined by fed-
eral regulations, and thus they rarely have their own IRBs. In the event that
such agencies receive federal funds for “research,” they must either create
their own IRB or find an existing one that is willing to review their proposal
and monitor the project. Thus, one benefit of agencies partnering with faculty
members for research projects is the use of a university IRB.
However, best ethical practices suggest that agencies should submit propos-
als for evaluation studies to either an internal or external review committee.
Taking such measures will help to ensure that these studies follow appropriate
treatment protocols and abide by all appropriate ethical standards.
Fundamentally, IRBs base their decisions regarding proposed research ac-
cording to a risk/benefit ratio in which the potential harms or costs to partici-
pants are weighed against the potential benefits to them or to society in general
(Royse, 1999). In cases in which the risks appear to outweigh the potential
benefits, IRBs will typically require a redesign of the study so that the potential
risks are reduced or avoided (Sieber & Stanley, 1988). For example, they
might require a debriefing to alleviate emotional distress engendered by partic-
ipation in the study (Monette, Sullivan, & DeJong, 1998). Or they might re-
quire that the consent form used in the study include a fuller description of the
potential risks of participation.
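As an illustration only, the risk/benefit weighing described above can be sketched as a simple decision rule. The rating scale and the suggested responses below are assumptions made for this sketch; they do not represent an actual IRB instrument or policy.

# Illustrative sketch only: a toy decision rule mirroring the risk/benefit
# weighing described above. The categories and responses are assumptions
# made for illustration, not an actual IRB policy.
def irb_response(risk_level: int, benefit_level: int) -> str:
    """Compare assessed risk and benefit (e.g., on a 1-5 scale) and return
    the kind of response the text describes IRBs typically making."""
    if risk_level <= benefit_level:
        return "Approve as proposed (benefits judged to outweigh risks)."
    if risk_level - benefit_level == 1:
        return ("Request modifications, e.g., add a debriefing procedure or a "
                "fuller description of risks in the consent form.")
    return "Require a redesign of the study so that risks are reduced or avoided."

print(irb_response(risk_level=4, benefit_level=2))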
There are no specific federal regulations for research involving LGBT pop-
ulations. In fact, IRBs might be more likely to reject research proposals deal-
ing with socially sensitive topics (Sieber & Stanley, 1988) such as sexual
orientation or gender expression. According to Ceci, Peters, and Plotkin
(1985), such rejections reflect the sociopolitical ideologies of IRB committee
members. Researchers proposing studies involving LGBT populations must
unfortunately be wary of IRB reviews conducted in institutions that do not for-
bid discrimination on the basis of sexual orientation and gender expression.
Standard 5.02(e). Social workers engaged in evaluation or research
should obtain voluntary and written informed consent from participants,
when appropriate, without any implied or actual deprivation or penalty
for refusal to participate; without undue inducement to participate; and
with due regard for participants’ well-being, privacy, and dignity. In-
formed consent should include information about the nature, extent, and
duration of the participation requested and disclosure of the risks and
benefits of participation in the research.
Standard 5.02(h). Social workers should inform participants of their
right to withdraw from evaluation and research at any time without pen-
alty.
One of the main concerns of IRBs is whether studies have appropriate and
sufficient procedures to ensure participants’ voluntary informed consent. The
Humphreys (1970) study of men’s sexual behavior in public restrooms is often
identified as an example of research that violated the principle of voluntary in-
formed consent. As described above, while the confidentiality of the study’s
participants was never violated, their dignity and self-determination certainly
were, and permission to intrude on their privacy was never obtained through
informed consent procedures.
While, in general, research should be conducted only with people who have
consented to participate, this principle can be ethically circumscribed in ways
that may afford the research participant even greater protection. Martin and
Knox (2000) recommended that gay and lesbian (bisexual and transgender
should be added) research participants should remain anonymous, and noted
that written consent does not afford them this added protection. They sug-
gested that, when possible, anonymous surveys using self-administered ques-
tionnaires or phone interviews, in which consent is implied rather than written,
should be used. In this way, people who wish to be excluded from the study
would not fill out and return the questionnaire or complete the interview. How-
ever, use of this procedure needs to be accompanied by assurances that partici-
pants are not coerced or highly vulnerable to exploitation (Royse et al., 2001).
Thus, given the vulnerability of LGBT populations, the use of implied consent
should be carefully scrutinized to make sure participants are not exploited or
subject to coercion.
In addition to these points, the APA Code also states that informed con-
sent procedures should “use language that is reasonably understandable to
research participants” (Sec. 6.11(a)). Researchers should not assume that
GLBT participants are all well-educated, middle-class, English-speaking in-
dividuals. Forms should be in participants’ native language, and they should
be written at a reading level that is not too high. According to the National In-
stitutes of Health Office of Human Subjects Research (2000), consent forms
should generally be written below a high school graduate’s level of compre-
hension. As such, they should not contain words with more than three sylla-
bles, scientific terms, or lengthy sentences. In addition, consent procedures
should avoid language that is likely to offend or be misinterpreted by prospec-
tive participants. For example, using the term “homosexual” instead of “gay
and lesbian,” or referring to both gay men and lesbians as “gay,” could be of-
fensive to some participants.
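As a rough, purely illustrative aid, the readability guidance above could be operationalized with a simple screening script that flags long words and long sentences in a draft consent form. The syllable heuristic and the thresholds below are assumptions for the sketch, not NIH or APA requirements.

# Illustrative sketch: a rough screen for consent-form readability.
# The syllable heuristic and the word/sentence thresholds are assumptions
# for illustration, not official requirements.
import re

VOWEL_GROUPS = re.compile(r"[aeiouy]+", re.IGNORECASE)

def estimate_syllables(word: str) -> int:
    """Rough syllable estimate: count vowel groups, discounting a final silent 'e'."""
    word = word.lower().strip(".,;:!?\"'()")
    if not word:
        return 0
    count = len(VOWEL_GROUPS.findall(word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def screen_consent_text(text, max_syllables=3, max_words_per_sentence=20):
    """Return words over the syllable limit and sentences over the word limit."""
    words = {w.strip(".,;:!?\"'()") for w in text.split()}
    long_words = sorted(w for w in words if estimate_syllables(w) > max_syllables)
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    long_sentences = [s for s in sentences if len(s.split()) > max_words_per_sentence]
    return long_words, long_sentences

sample = ("You may stop at any time. Your participation is voluntary, and "
          "refusing to take part will not change the services you receive.")
flagged_words, flagged_sentences = screen_consent_text(sample)
print("Words to simplify:", flagged_words)
print("Sentences to shorten:", flagged_sentences)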
The APA Code also warns psychologists not to use inducements for partici-
pation in studies that are excessive or inappropriate. Such inducements could
have a coercive effect on prospective participants (Sec. 6.14(b)), which would
undermine informed consent procedures. Researchers must be particularly
careful about excessive monetary inducements when prospective LGBT par-
ticipants, especially youth, are financially unstable or impoverished.
As indicated by 5.02(h), the consent form must inform participants of their
right to withdraw from the study at any time without penalty. Related to this
point, the form should explain that refusal to participate in a study, or a deci-
sion to withdraw from it, will not result in differential treatment by any agency
associated with the study.
Under these procedures, participants could decide that their discomfort
with, or other negative reactions to, the study outweighs any benefits they had
expected to receive when they first agreed to participate. For example, LGBT
participants could become uncomfortable with what they perceive to be the bi-
ased or heterosexist language, or assumptions used on a questionnaire or in an
interview. Or they might find that a questionnaire takes much longer or is more
difficult to complete than they had expected. An explicit statement regarding
participants’ right to withdraw minimizes the possibility that they will feel co-
erced to remain in the study against their wishes.
Standard 5.02(f). When evaluation or research participants are incapa-
ble of giving informed consent, social workers should provide an appro-
priate explanation to the participants, obtain the participants’ assent to
the extent they are able, and obtain written consent from an appropriate
proxy.
Because people younger than 18 cannot legally give consent in the U.S.,
studies of LGBT youth must involve additional measures to assure that their
participation is voluntary. Participants under the age of consent must give writ-
ten “assent.” But this does not substitute for the written consent of their parents
or guardians, which researchers must also obtain whenever possible. How-
ever, obtaining consent from parents or guardians can be dangerous when the
underaged participants are LGBT. As noted by Elze (this volume), 45 CFR
46.408(c) allows minors to participate in research without parental or guardian
consent if obtaining it would compromise their safety or welfare. This is most
likely to be the case among youths living in abusive households or those who
have not disclosed their sexual orientation to parents or guardians. In such
cases, researchers can use independent advocates to assure participants’ rights
(e.g., D’Augelli & Hershberger, 1993). Alternatively, IRBs can declare spon-
soring agencies to be acting in loco parentis, thereby allowing youths to participate
in a study (e.g., Rosario, Hunter, & Gwadz, 1997).
Standard 5.02(g). Social workers should never design or conduct evalu-
ation or research that does not use consent procedures, such as certain
forms of naturalistic observation and archival research, unless rigorous
and responsible review of the research has found it to be justified be-
cause of its prospective scientific, educational, or applied value and un-
less equally effective alternative procedures that do not involve waiver
of consent are not feasible.
Social work ethics do not prohibit the use of naturalistic or participant ob-
servation. But because these methods involve nondisclosure and perhaps de-
ception, they conflict with the ethical principles of self-determination and
privacy. Researchers planning to use these methods must justify them on the
grounds of the expected value of the study and its findings, the lack of alterna-
tive methods (Cournoyer & Klein, 2000), and the assurance that no harm will
be caused to the unsuspecting participants. As suggested by Padgett (1998), re-
searchers should obtain permission from the appropriate gatekeepers before
engaging in research in agency settings, particularly when using these meth-
ods. Doing so is especially important when the research sites are LGBT agen-
cies, since many of their stakeholders are likely to be suspicious of being
exploited by researchers or harmed by research findings. Failing to do so
would probably doom the project and make it more difficult for other research
to occur in that setting. The APA Code is more explicit about this point, stating
that prior to conducting a …
Journal of Evidence-Based Social Work, 6:348–360, 2009
Copyright © Taylor & Francis Group, LLC
ISSN: 1543-3714 print/1543-3722 online
DOI: 10.1080/15433710903126778
Ethical Guidelines for Designing and
Conducting Evaluations of Social
Work Practice
MICHAEL J. HOLOSKO
University of Georgia School of Social Work, Athens, Georgia, USA
BRUCE A. THYER
Florida State University School of Social Work, Tallahassee, Florida, USA
J. ELAINE HOWSE DANNER
University of Georgia School of Social Work, Athens, Georgia, USA
We review selected aspects of current ethical guidelines pertaining
to the design and conduct of social work evaluation and research
studies. We contend that there are significant differences between
social science research and evaluation studies, and that the uncrit-
ical application of ethical guidelines suitable for regulating social
science research may hinder social workers undertaking clinical
and program evaluations. What is needed are ethical guidelines
that distinguish between retrospective and prospectively designed
studies, which enumerate when voluntary and informed consent
may not be necessary in order to use data obtained from clients,
and clearer standards pertaining to exempting evaluation studies
from oversight by Institutional Review Boards.
KEYWORDS Ethical guidelines, social work practice, evaluation
Portions of this paper were previously presented at the annual conference of the Society
for Social Work and Research in Charleston, South Carolina, USA, on January 29–31, 2000,
and at the International Conference on Evaluation for Practice held at the University of
Huddersfield in the United Kingdom on July 12–14, 2000. Address correspondence to
Michael J. Holosko, Ph.D., Pauline M. Berger Professor of Family and Child Welfare,
University of Georgia School of Social Work, Athens, GA 30602. E-mail: [email protected]
In the past 25 years or so in North America, increased demands for health
and social services coupled with scarcer financial resources have resulted
in a burgeoning of evaluation activities. Indeed, as stated by Cronbach and
colleagues (1980), ‘‘evaluation has become the liveliest frontier of American
social science’’ (pp. 12–13). Such activities may validate and provide legiti-
macy to these human services programs, but they also may provide scientific
knowledge about their participants and the interventions
used with them (Holosko, 1996).
Ever since Harriet Bartlett (1958) included research as a component of
social work in her highly influential working definition of practice, the Code
of Ethics (COE) of the United States National Association of Social Workers
(NASW) has promoted a position that professional social workers engaged
in scientific inquiry should adhere to a set of ethical principles. Section 5 of
the COE (NASW, 1999), Social Worker’s Ethical Responsibilities to the Social
Work Profession, describes these principles. Subsection 5.02, Evaluation and
Research, describes 16 ethical guidelines, including these pertinent ones:
(d) Social workers engaged in evaluation or research should carefully
consider possible consequences and should follow guidelines developed
for the protection of evaluation and research participants. Appropriate
institutional review boards should be consulted.
(e) Social workers engaged in evaluation or research should obtain vol-
untary and written informed consent from participants, when appropri-
ate, without any implied or actual deprivation or penalty for refusal
to participate; without undue inducement to participate; and with due
regard for participants’ well-being, privacy, and dignity. Informed consent
should include information about the nature, extent, and duration of
the participation requested and disclosure of the risks and benefits of
participation in the research.
(h) Social workers should inform participants of their right to withdraw
from evaluation and research at any time without penalty.
(o) Social workers engaged in evaluation or research should be alert to
and avoid conflicts of interest and dual relationships with participants,
should inform participants when a real or potential conflict of interest
arises, and should take steps to resolve the issue in a manner that makes
participants’ interest primary. (NASW, 1999)
It has only been since 1990 that the amended COE has included both
evaluation and research in its guidelines. Prior to that time, the term Scholar-
ship and Research was used as the sub-heading for a less detailed subsection
on research ethics, which included only six guidelines (NASW, 1980).
Outside of the fact that such guidelines are just that—‘‘guidelines’’—they
have been roundly criticized for having no ‘‘teeth’’ and for being ambiguous
(Reamer, 1993). The larger problem is that the COE guidelines on Evaluation
and Research fail to distinguish evaluation from research. As a result, social
workers involved in the three main forms of scientific inquiry: (a) basic
social science research (research about social and behavioral phenomena),
(b) practice research (research about social work practice), or (c) evaluation
research, with its two sub-categories of practice evaluation (assessments of
the efficacy of practice) and program evaluation (assessments of programs),
tend to clump these together as ‘‘similar scientific endeavors,’’ all falling
under the rubric of the COE for research involving human subjects.
Although it may appear to be appropriate to use the COE in this manner
due to the relative similarities of these forms of inquiry, it has been demon-
strated that research and evaluation are decidedly different in that they stem
from differing assumptions (Gingerich, 1990) and have different purposes
(Ashmore, 2005; Rubin & Babbie, 2005; Barlow, Hayes, & Nelson, 1984).
Applying ethical guidelines (which come very close to being interpreted as
mandates in some instances) appropriate for research can have an unduly
constraining influence on the design and conduct of evaluation, constraints
that interfere with ongoing efforts within the profession to encourage more
social workers to conduct more and better empirical evaluations of their
own practice using the methods of scientific inquiry. We believe this to be a
serious problem. The last thing we need at this point in the maturation of the
profession is the erection of unnecessary barriers to conducting evaluation
studies. We will discuss some of these issues after reviewing some of the
features that we believe distinguish social research efforts from practice
evaluation studies.
SOCIAL RESEARCH VERSUS PRACTICE EVALUATION
Social research is defined as the systematic investigation of phenomena.
All social research follows a process of inquiry, which minimally includes
defining a research problem or question, reviewing literature and theory
pertaining to the problem/question, collecting data, analyzing data, and
concluding about the phenomena. The ultimate goal of social research con-
ducted by social workers is to seek new generalizable knowledge about
issues emanating from the practice worlds in which they work, namely: in-
dividuals, small groups, families, human service organizations, communities,
and the social environment. Some of the more commonly used quantita-
tive methods of social research used by social workers include exploratory
studies, quantitative-descriptive studies, quasi-experimental studies, descrip-
tive studies, and meta-analyses. Frequently used qualitative methods in-
clude field studies, phenomenological studies, case analyses, and ethnog-
raphies.
Practice evaluations assess interventions to determine their potential
benefits to clients or client systems (i.e., the systems or context in which
clients find themselves, such as a family unit, school, treatment setting,
or community). Practice evaluations also follow a process of inquiry that
minimally includes defining the client system problem, defining the inter-
vention(s) used, collecting data about the intervention’s impact, analyzing
data, and concluding about the intervention and its benefit to the client
system. The goal of practice evaluation is to assess the potential bene-
fit to client systems, and, while the development of generalizable scien-
tific findings is desirable, it is of secondary importance (Bloom & Orme,
1993), if it occurs at all (and, in many instances, it does not). The primary
scientific methods used by social workers to evaluate their practice out-
comes involve single-system designs (see Bloom, Fischer, & Orme, 1999)
and simple group research designs (see Royse, Thyer, Padgett, & Logan,
2006).
As one may surmise, the majority of social research and practice
evaluations both conducted and published by social workers follows a
hypothetico-deductive approach framed in a three-step research process:
purpose, method, and findings. As a result, their differences are often
obfuscated by the fact that they follow this same mode of empirical inquiry.
Table 1 presents a set of criteria differentiating social research from practice
evaluation. No published account of distinguishing features is presented
in the literature, and, as such, this differentiation extends the groundswell
argument offered by other authors for a distinct set of ethical guidelines
for practice evaluation (c.f., Bloom & Orme, 1993; Grigsby & Roof, 1993;
Millstein, Dare-Winters, & Sullivan, 1994).
TABLE 1 Distinguishing Features between Social Research (SR) and Practice Evaluation (PE)
I. Purpose
   Social research: to develop and test hypotheses; to seek new knowledge
   Practice evaluation: to assess how practice benefits clients/client systems
II. Doctrine
   Social research: value free; ideally altruistic
   Practice evaluation: value laden; deterministic
III. Theoretical Basis
   Social research: theoretical
   Practice evaluation: atheoretical
IV. Intended Audiences
   Social research: other researchers; academics; funding bodies
   Practice evaluation: clients; agency staff; boards of directors; funding bodies
V. Methodological Issues
   i. Sampling
      Social research: multiple; n > 10
      Practice evaluation: singular; n = 1
   ii. Instrumentation
      Social research: standardized
      Practice evaluation: crafted and standardized
   iii. Statistics
      Social research: descriptive and inferential
      Practice evaluation: graphs, percents, descriptive
   iv. Generalizability
      Social research: high
      Practice evaluation: low
   v. Design
      Social research: much variability in rigor
      Practice evaluation: rigorous
   vi. Dissemination plan
      Social research: optional; academic conferences
      Practice evaluation: clients; agency staff
ETHICAL CONSIDERATIONS
The topic of ethics involving research on human subjects has been extensively
studied. Less well known is how the issues of ethics apply to practice
evaluation. Table 1 argues that social research (SR) and practice evaluation (PE)
have a number of distinguishing features, and some of the main principles of
research ethics may apply to PE. These are informing participants about the
purpose of the study, informed consent, minimal risk, potential harm from
participation or non-participation, and confidentiality. Since 1980,
the COE has also included one separate guideline for the evaluation of
practice: subsection 5.02 (k), which states ‘‘Social workers engaged in the
evaluation of services should discuss collected information only for profes-
sional purposes and only with people professionally concerned with this
information’’ (NASW, 1999).
Bloom and Orme (1993) offer 10 ethical principles for PEs that highlight
the differences between research and practice evaluation ethics. These are
presented in Table 2.
According to Gillespie (1987), the three major ethical risks involved
in any research using human subjects are potential harm to subjects or
participants, potential harm to the community, and potential harm to society.
He suggested that the first of these, harm to subjects, receives much more
discussion than the others. In
an effort to offset this concern and address ethical issues from a participant
perspective, Table 3 presents a list of prerequisite process questions that
social work practitioners could preliminarily address prior to their conducting
practice evaluations.
Table 3 is not presented as an all-inclusive set of questions but could be
used as a guide for further discussion and consideration. We note that these
questions could be used to develop a one-page letter of Intent to Evaluate
TABLE 2 Ethical Principles for the Evaluation of Practice
Ten ethical principles
1. Provide demonstrated help.
2. Demonstrate that no harm is done.
3. Involve the client in the evaluation, coming to an agreement on the overall practice
relationship.
4. Involve the client in the identification of the specific problems and/or objectives and the
data collection process, as far as possible.
5. Demonstrate how the evaluation will not intrude on the intervention process.
6. Stop the evaluation whenever painful or harmful to the client, physically,
psychologically, or socially, without prejudice to the services being offered.
7. Maintain confidentiality with regard to the data resulting from the evaluation.
8. Balance the benefits of evaluating practice against the costs of not evaluating practice.
9. Use the Clients Bill of Rights as an intrinsic part of evaluating practice.
10. It should be recognized that any evaluation process reflects the values of the researcher.
Note. From Bloom and Orme, 1993, pp. 164–174.
TABLE 3 Preliminary Process Questions which may be Considered Prior to Conducting a
Practice Evaluation
Process questions for practice evaluation
1. Do I have the necessary skills, education, and training to effectively evaluate my
practice?
2. Why do I want to evaluate my practice at this time?
3. What are three benefits that can come from conducting this PE?
4. What are three consequences that can result from conducting this PE?
5. To what extent are my colleagues, supervisors, and/or administrators committed to this
PE?
6. Who will I seek as a mentor to assist with this PE?
7. How do I propose to evaluate my practice at this time?
8. How will I seek the client’s permission in this?
9. What will happen with this information after I collect it?
10. What steps will I take to ensure that ethical principles are dealt with in this PE?
11. Have I developed my Intent to Evaluate Practice Form and had my supervisor sign it?
12. Has the client signed the Consent to Treatment Form?
Practice that should be written and signed by the practitioner, signed by a
supervisor, and eventually shared with the client during the next phase of
this process. This letter then should be placed on file in the client’s case
record. This suggestion is offered in the context of viewing evaluation as a
form of social work research and, as such, falls under the purview of ethical
guidelines pertaining to research.
There are, however, some alternative perspectives we would like to
raise. If many practice evaluation activities are aimed at producing evidence-
based knowledge about the outcomes of practice with particular clients or
obtained from specific human service programs, not necessarily general-
izable knowledge primarily intended to contribute to the foundations of
science, then it could be contended that evaluation is not research, and that
ethical guidelines intended to govern the design and conduct of research
projects may not be relevant. We certainly need ethical guidelines to assist
with research, but we already have extensive guidelines for practice, and
if evaluation efforts are construed as a part and parcel of everyday social
work practice (and not research), then the existing ethical standards related
to practice also cover our evaluation efforts, and no additional governing
authority (e.g., ethical guidelines dealing with research) is necessary. For
example, the NASW’s ethical guidelines related to evaluation and research
virtually mandate obtaining informed consent:
Social workers engaged in evaluation or research should obtain voluntary
and written informed consent from participants, when appropriate. . . .
Social workers should never design or conduct evaluation or research
that does not use consent procedures, such as certain forms of naturalistic
observation and archival research, unless rigorous and responsible review
of the research has found it to be justified. (NASW, 1999)
The suggestions in this article are consistent with these standards. There
are several considerations that make it difficult to view program and clinical
evaluation as forms of research requiring ethical oversight as described in
the NASW COE. A few of these difficulties are reviewed below.
The Complicated Issue of Informed Consent. Informed consent is sim-
ply not always possible, as in the case of retrospective program evalua-
tions (Feely, 2007) based upon agency records—studies that may not have
been planned until very recently, by practitioners who wish to make use of
archived agency and client records dating back some years. Since the data
were gathered for administrative and clinical purposes, there is no informed
consent form completed by clients in the charts. Since many clients may
have been discharged, moved, or died, as a practical and logistical matter,
contacting these former clients to obtain their consent is simply not possible.
If you limit your study to information obtained only from those clients
who can subsequently be contacted and who give permission to use their data, then
the representativeness of your data is open to question. Does this mean that
archival data cannot be used for evaluation purposes? Such a requirement
could bring many types of evaluation to a halt. A literalist interpretation
would also hinder using data obtained from public records or acquired from
commercially available large-scale databases. We believe that an absolutist
standard mandating voluntary informed consent for all forms of evaluation is
a serious barrier to social workers designing and conducting outcome studies
of their own practice.
Some Client Groups Will Refuse Consent. Many agencies and social
workers serve involuntary clients such as sex offenders, substance abusers,
perpetrators of domestic violence or child abuse and neglect, and court-
mandated criminals. Large proportions of such individuals can be expected
to decline the opportunity to participate in evaluation studies, to refuse
voluntary informed consent, even if provided assurances regarding data con-
fidentiality or anonymity. Evaluation research with such client groups could
also grind to a halt. Would feminists support less research on evaluating the
outcomes of spousal battering? Would ‘‘Mothers Against Drunk Driving’’ be
happy if alcohol abuse prevention programs could not be credibly evaluated?
Would parents want to discontinue research on orienting pedophiles to nor-
mal, adult, consensual sexual relationships? Mandating voluntary informed
consent in order to conduct evaluation research with such client groups is
another unrealistic barrier to social workers evaluating their own practice.
Who Owns the Data? Who owns information obtained from clients—
the clients, the agency administration, or the social worker who gathers it?
Take the social worker in clinical practice who wishes to use school atten-
dance records to evaluate the outcomes of a truancy prevention program. Is
the information about the child’s attendance the property of the child, the
parents, or the school—or is it ‘‘public’’ information? Would it be ethical to
not use these data to evaluate the program, irrespective of the consent of
the children or parents concerned?
It could be contended that the data gathered by social workers in
the routine course of their assessment, delivery of care, and monitoring of
outcomes belong to the professional social worker, who has the latitude to
employ the data for evaluation purposes, for teaching, in supervision, and
other scholarly purposes, up to and including the publication of articles in pro-
fessional journals, as long as individually identifiable data were not disclosed.
Client permission should not be mandated as an absolute prerequisite for
using clinical data in evaluation studies.
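The condition noted above, that individually identifiable data not be disclosed, can be put into practice before any archived records are analyzed. The sketch below is only an illustration of that step; the field names (client_id, client_name, and so on) are hypothetical assumptions, and the sketch is not a substitute for agency policy or formal de-identification standards.

# Illustrative sketch: strip direct identifiers from archived client records
# before they are used for a retrospective evaluation. Field names are
# hypothetical assumptions; real agency records will differ.
import hashlib

DIRECT_IDENTIFIERS = {"client_name", "date_of_birth", "address", "phone"}

def deidentify(record: dict, salt: str = "agency-secret-salt") -> dict:
    """Drop direct identifiers and replace the client ID with a one-way hash
    so records can still be linked across services without naming anyone."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    raw_id = str(record.get("client_id", ""))
    cleaned["client_id"] = hashlib.sha256((salt + raw_id).encode()).hexdigest()[:12]
    return cleaned

archived = {"client_id": 1042, "client_name": "J. Doe", "date_of_birth": "1981-05-02",
            "services_received": 7, "outcome_score": 4}
print(deidentify(archived))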
Do You Need IRB Review and Approval for Evaluation Studies? An-
swering this question depends upon the all-important definition of research.
American universities usually follow the definition set forth in the federal
regulation on human subjects which reads: ‘‘Research means a systematic
investigation, including research development, testing and evaluation, de-
signed to develop or contribute to generalizable knowledge’’ (Protection of
Human Subjects, 2005, p. 3).
This definition lends itself to some gray areas. For example, single-
subject designs used to evaluate outcomes with one or a very small number
of individuals are not aimed at producing generalizable knowledge, nor
are smaller-scale group designs, such as posttest-only (X-O) or pretest-
posttest (O-X-O) designs using non-probability samples. If the intent is to
produce non-generalizable knowledge pertinent only to specific persons or
agency programs, such evaluation studies may not fall under this federal
definition of research, and thus may be exempt from a university’s institu-
tional review board’s (IRB) oversight. Could such activities be called ‘‘quality
assurance studies’’ or ‘‘program/practice evaluation studies,’’ and thereby
avoid the entanglements of IRBs?
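For readers unfamiliar with the notation, a minimal pretest-posttest (O-X-O) evaluation can be summarized with nothing more than descriptive statistics, consistent with the descriptive emphasis shown for practice evaluation in Table 1. The sketch below is illustrative only; the scores are invented, and no inference beyond the specific clients served is implied.

# Illustrative sketch of an O-X-O (pretest-posttest) practice evaluation:
# observe (O), intervene (X), observe again (O), then describe the change.
# The scores are hypothetical, and only descriptive statistics are reported.
from statistics import mean

pretest  = [22, 19, 25, 30, 27, 21]   # observation before the intervention (O)
posttest = [17, 15, 20, 24, 23, 18]   # observation after the intervention (O)

changes = [post - pre for pre, post in zip(pretest, posttest)]
improved = sum(1 for c in changes if c < 0)   # lower score assumed to mean improvement

print(f"Mean pretest score:  {mean(pretest):.1f}")
print(f"Mean posttest score: {mean(posttest):.1f}")
print(f"Mean change:         {mean(changes):+.1f}")
print(f"Clients improved:    {improved} of {len(changes)}")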
In fact, federal regulations already recognize this type of need and
provide exemption for educators to evaluate their practice on an ongoing
basis. Specifically exempted from human subjects policies are: ‘‘Research on
regular and special education instructional strategies or the effectiveness of
or the comparison among instructional techniques, curricula, or classroom
management methods’’ (Protection of Human Subjects, 2005, p. 2). If federal
regulations recognize the need for these evaluation activities to be conducted
without the oversight of IRBs, then shouldn’t the same hold true for similar
activities undertaken by social work practitioners?
It is worth noting that the IRB review and approval process is not
without its own costs. The paperwork activities are merely the beginning.
Sometimes personal appearances are required to explain content in your
research proposal to IRB committee members. (This can chew up half a day,
easily!) IRBs often operate according to university timetables and slow down
or halt operation during holidays or inter-term breaks. Gaining IRB approval
may take weeks, a month, or longer—precious time during which windows
of opportunities for data collection may close.
There is also a measure of hypocrisy in the IRB process. While it is
piously touted as a method of protecting the rights of human subjects,
the reality is that it primarily functions as a federally mandated mechanism
necessary to keep federal research dollars flowing to the entire
university—dollars containing perhaps a 40% or more institutional overhead
figure.
Recently, in the United States, a spate of universities have had their
entire research programs shut down by the federal government because
of violations of human subjects protection regulations by one small study
(Oakes, 2002; Brainard, 2000). This heavy-handed approach to research
oversight is not consistent with free scientific inquiry (Feely, 2007; Dingwall,
2007). Why should you be prohibited from conducting your study, which
is appropriately designed and possesses suitable client protections because
some individual in another department violated an IRB policy?
We would like to note that the authors always follow their respective
university IRB policies and procedures and require their students to do
so as well. Given the present punitive contingencies for failing to follow
IRB policies (Oakes, 2002), we do not believe it is justifiable to place our
students or department colleagues at risk for our not complying with them.
But, we question the present conflation of social science research with
evaluation, and the blanket application of research ethical guidelines to
evaluation studies.
What about Practitioners without IRB Oversight? In this discussion, it is
important to note that social workers work at all practice levels. While those
working in macro or mezzo settings, like the government, academia, or large
agencies with federal funding, have clear-cut accountability to federal regu-
lations and IRB requirements, many working on the micro or mezzo level,
such as in private practice or in smaller agencies without federal funding,
do not have clear oversight. Although exempt from federal requirements,
these practitioners are bound by ethical standards to protect their subjects.
The most important way this is done is through informed consent. Ethical
practitioners in most settings already routinely obtain informed consent at the
beginning of work with clients. However, this type of general consent is not
considered by the authors to be sufficient to meet ethical obligations when
conducting research and/or evaluation studies. It is necessary to delineate
treatment from study largely because clients should not be obligated to be
human subjects (Oakes, 2002). Therefore, separate informed consent specific
to the study must be obtained, which should minimally include explanation
of (a) the purpose, duration, and procedures of the study; (b) foreseeable
risks and benefits; (c) alternate procedures or options; (d) confidentiality of
records; (e) who to contact for questions; and (f ) voluntary participation
with no penalty for non-participation (Protection of Human Subjects, 2005;
NASW, 1999). Reamer (2001) offers a sample informed consent form that
meets these criteria and could easily be modified as needed.
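The six elements (a) through (f) listed above lend themselves to a simple pre-study checklist. The sketch below is only an illustration of that idea; it does not reproduce Reamer's (2001) sample form, and the field names are assumptions made for the example.

# Illustrative sketch: check that a draft study-specific consent form covers
# the elements (a)-(f) listed above before data collection begins.
# Field names are hypothetical, not taken from Reamer's (2001) sample form.
REQUIRED_ELEMENTS = [
    "purpose_duration_procedures",   # (a)
    "risks_and_benefits",            # (b)
    "alternative_procedures",        # (c)
    "confidentiality_of_records",    # (d)
    "contact_for_questions",         # (e)
    "voluntary_no_penalty",          # (f)
]

def missing_elements(consent_form: dict) -> list:
    """Return the required elements that are absent or left blank."""
    return [e for e in REQUIRED_ELEMENTS if not consent_form.get(e)]

draft = {
    "purpose_duration_procedures": "Weekly 30-minute interviews for 8 weeks ...",
    "risks_and_benefits": "Possible discomfort discussing services received ...",
    "confidentiality_of_records": "Records stored without names ...",
    "contact_for_questions": "Agency evaluation supervisor ...",
}
print("Still missing:", missing_elements(draft))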
The Problem of Overlapping Governing Authorities. In North America
a social worker may be licensed to practice social work in the state or
province in which s/he resides. S/he may also belong to a major professional
association such as the Canadian Association of Social Workers, the Na-
tional Association of Social Workers, or the Clinical Social Work Federation.
S/he may also belong to a specialty membership organization, such as the
American Association for Marital and Family Therapy, the Association for
Behavior Analysis, or the Association for Advancement of Behavior Therapy.
S/he may also be a member of an evaluation organization, such as the
American Evaluation Association. We note that each and every one of the
above groups provides its own unique and separate codes of ethics! A social
worker belonging to all of these is subject to the ethical guidelines governing
the practice of all members of each group, as well as being subject to the
disciplinary authority constituted to deal with violations of ethical guide-
lines. And there also remain civil legal remedies (e.g., lawsuits) to provide
corrective actions against social workers who harm clients. Another problem
is that the codes of ethics sometimes conflict in their recommendations.
A social worker who conducts an evaluation study consistent with one
(less restrictive) code may find him/herself the focus of an ethics inquiry
undertaken by another professional association to which s/he belongs, one with a more
restrictive code.
Recently, there has been discussion among many of our professional
organizations about unifying the profession and its organizations (Fortune, 2007),
which has some potential to simplify some of these issues of conflicting
ethical guidelines and complicated layers of accountability.
Do We Really Need New or Additional Ethical Guidelines Regulating
Evaluation of Practice? Perhaps not. The existing codes of ethics promul-
gated by the professional associations, those associated with licensure, and
the legal system provide overlapping and redundant protection of clients’
rights. These layers of oversight and accountability are sufficient to ensure
that social workers ethically evaluate their practice and can be reasonably
contended to be overly restrictive in some matters.
IMPLICATIONS FOR PRACTICE
We believe that additional work needs to be undertaken so that not only
ethical codes but also federal regulations and IRB policies more clearly
discriminate between social science research activities and evaluations of
social work practice. More generous recognition is needed that informed and
voluntary consent is not always practical (as in retrospectively designed studies)
and that, for many types of evaluation (e.g., with involuntary clients), it is
simply not possible to obtain. In other studies, such as those involving
publicly observable behavior in which no personally identifiable information
is gathered, or which use public records, census data, or large-scale
databases, consent (and IRB review) may simply not be necessary at all.
Social workers need to more vigorously assert their ownership of client
data collected in the context of their own practice and of their rights to
use this information for evaluation …