HW1 - Computer Science
1. It is perfectly fine to slightly modify the code so that it runs on Python 3.x instead of 2.x. Please be sure to include a short note documenting the change you made to get it to run (it should be very minor). Do not use Jupyter notebooks, R, or any other language. You must fill in the .py file to receive a grade.
3. For EVERY unique mapreduce job, you have to clear out the global data structures! Otherwise, you will persist information from one MR job to the next, and you will get wrong results. The following may help to clarify: (a) a mapreduce JOB is run over a set of documents (the 'corpus'), since you are doing word count over a corpus; (b) when computing the execution times of map and reduce, make sure to take this into account. Specifically, you will have to ADD (not average) the execution times for a mapper over the corpus (e.g., if the mapper takes 3 seconds to process doc_1, 2 seconds for doc_2, and 4 seconds for doc_3, the total time taken by the mapper is 9 seconds). You are, of course, generating corpora ten times for each document length (say, a length of 10). This will give you ten mapper and ten reducer times for that length. These ten numbers will each be averaged into one (respectively), and these averaged numbers will be your 'data point' for that length, along with error bars. You can use any tool you want for visualization (Excel works well enough, but you could also use matplotlib or other external tools); the plots must be attached to your submission.
4. In your submission, try to be as organized as possible, and include all supporting data that you can, including plots and the raw timing data you used for the plots. As in real life, the HW doesn't have a single right or wrong answer. We are more interested in seeing the effort you made and good reasoning for the decisions or assumptions you found yourself making in the context of the assignment.
Sample output (see sample_output.csv, referenced in the .py file):
sheep,1
cow,1
jumped,2
over,2
moon,1
sun,1
the,4
import time # declare all your imports in this section
import numpy
"""
In this homework, we will be studying two applied topics:
(1) how to write practical python code!
(2) how to run meaningful timed experiments by 'simulating' architectures like mapreduce
This homework is best done after re-starting your machine, and making sure minimal 'stuff' is open. You may
need to consult a browser occasionally, as well as the command line, but otherwise please try to minimize
the programs running in the background. MAX SCORE POSSIBLE: 100, not including 20 extra credit points.
"""
# Let us begin by giving you a crash course in a simple way to measure timing in python. Timing is actually
# a complicated issue in programming, and will vary depending on your own system. So to make it uniform
# for all students (in procedure, if not in answer), we are telling you how you should be measuring timing.
# Before getting started, it's good to install PyCharm, which is an integrated development environment (IDE)
# for writing Python programs and setting up Python projects.
# Uncomment the code fragment below, and run it (hint: on mac, you can select the lines and use command+/).
# Otherwise, look under the 'Code' menu.
# start = time.time()
# test_list = list()
# for i in range(0,10):
#     test_list.append(i)
# end = time.time()
# print(end - start)
# print(len(test_list))
# test_list = None # free up memory
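# The small helper sketched below is our own optional illustration (not part of the assignment): it wraps the same
# start/end pattern shown above into a reusable function, in case you find that convenient when timing the sections
# that follow. The name time_call is hypothetical.
# def time_call(func, *args):
#     start = time.time()
#     result = func(*args)
#     return result, time.time() - start # elapsed time, in the same units that time.time() uses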
"""
PART I: 15 POINTS
[+5 extra credit points]
Q1. Did the code fragment run without errors? What's the answer? [2+2 points]
ANS:
Q2: What's the answer when you change range(0,x) from x=10 to x=1000? What about x= 100000? [3+3 points]
(I don't care about the 'units' of the timing, as long as you're consistent. However, you should look up
the Python doc and try to be precise about what units the code is outputting, i.e., is it milliseconds, seconds, etc.?)
ANS:
Q3: Given that we 'technically' increased the input by 100x (from x=10 to x=1000) followed by another 100x increase, do
you see a similar 'slowdown' in timing? What are the slowdown factors for both of
the 100x input increases above COMPARED to the original input (when x=10)? (For example, if I went from 2 seconds to
30 seconds when changing from x=10 to x=1000, then the slowdown factor is 30/2=15 compared to the original input) [5 points]
ANS:
Note 1: If you have a Mac or Unix-based system, a command like 'top' should work on the command line.
This is a useful command because it gives you a snapshot of threads/processes running on your machine
and will tell you how much memory is free or being used. You will need to run this command or its equivalent (if 'top' is not applicable to your specific OS) if you want the extra credit in some of the sections (not necessary if you don't)
Extra Credit Q1 [5 points]: If you run top right now, how much memory is free on your machine? What is the value of x
in range(0,x) for which roughly half of your current free memory gets depleted?
ANS:
Note 2: Please re-comment the code fragment above after you're done with Q1-3. You should be able
to use/write similar timing code to measure timing for the sections below.
Now we are ready to start engaging with numpy. Make sure you have installed it using either pip or within
Pycharm. We will begin with a simple experiment based on sampling. First, let's review
a list of distributions you can sample from here: https://docs.scipy.org/doc/numpy-1.14.0/reference/routines.random.html
Uncomment the 'sample = ...' line and the 'print' line in the code fragment below.
"""
# let's try sampling values from normal distribution:
# https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.random.normal.html#numpy.random.normal
# sample = numpy.random.normal(size=1) # The first two parameters of the normal function, namely loc and scale,
# have default values specified. Make sure you understand this, since it will help you read documentation and
# learn how to use python libraries.
# print(sample)
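# As an optional illustration of the point about default arguments (our own example, not part of the assignment):
# because loc and scale default to 0.0 and 1.0 in numpy's documentation, the commented line below draws from the
# same standard normal distribution as the call above.
# explicit_sample = numpy.random.normal(loc=0.0, scale=1.0, size=1) # equivalent to numpy.random.normal(size=1)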
"""
PART II: 25 POINTS
[+5 extra credit]
Q1. What is the value printed when you run the code above? [3 points]
ANS:
Q2. Run the code five times. Do you get the same value each time? [3 points]
ANS:
Extra Credit Q2 [5 points]: How do I guarantee that I get the same value each time I run the code above (briefly state
your answer below)? Can you add
a command before sample = numpy.random.normal(size=1) such that every time we execute this code we get the same
'sample' value printed? Write such a command above the 'sample = ' command and comment it out.
ANS:
Q3: What happens if I run the code with no arguments (i.e. delete size=1)? Does the program crash? Briefly
explain the difference compared to the size=1 case. [9 points]
ANS:
Q4: Add a code fragment below this quoted segment to sample and print 10 values from a laplace distribution
where the peak of the distribution is at position 2.0 and the exponential decay is 0.8. Run the code and paste the
output here. Then comment the code. (Hint: I provided a link to available numpy distributions earlier in this exercise.
Also do not forget to carefully READ the documentation where applicable, it does not take much time) [10 points]
ANS:
"""
#paste your response to Q4 below this line.
"""
PART III: 60 POINTS
[+10 extra credit]
This is where we start getting into the weeds with 'simulated' mapreduce. As a first step, please make sure to download
words_alpha.txt. It is a list of words. We will do wordcount, but with a twist: we will 'control' for the distribution
of the words to see how mapreduce performs.
"""
# we will first use the numpy.choice function that allows us to randomly choose items from a list of items:
# https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.random.choice.html#numpy.random.choice
"""
Q1: Read in the words in our vocabulary or dictionary file 'words_alpha.txt' into a list (Hint: do not underestimate this 'simple' task, even
though it is only a few commands; being up close and personal with I/O in any language the first time
can take a couple of trials) Fill in your code in read_in_dictionary_to_list below. Next, right after the function,
uncomment the print statement and replace the path to words_alpha.txt with wherever it is on your machine. How many words are in the list? [10 points].
ANS:
Q2: Now we will write a function to 'generate' a document (in generate_document). It takes a list_of_words (e.g., the output
of read_in_dictionary...) as input and outputs another list of length document_length, which you can basically think about
as a 'document' with document_length (possibly repeated) words.
We can use any probability distribution (over the words in the original input list) to generate such a document. Let's try the
choice function as an obvious choice (no pun intended). Go to the link I provided above to read about it. Now, in
generate_document, write the code to generate a document of LENGTH 20 assuming UNIFORM distribution WITH REPLACEMENT over the original
input list (that you read in from file). (Hint: the tricky part in such generators is setting the value for the 'p' argument, but do you
really need to for this question? See the documentation of choice carefully) [10 points]
Extra Credit Q3 [5 points]: Write an alternate generator function (called generate_document_1) that is the same as above
but the distribution is no longer uniform. Instead, assume that the probability of sampling a word with k characters is
inversely proportional to k (Hint: this will mean creating your own p distribution; make sure that it is a valid probability distribution!)
Q3: Now we will run a simple map-reduce function. For context, see the 'master' function. I have already written the
map and reduce functions to make life easier for you. Your primary task will be to run timed experiments. As a first step,
go to the 'master' function and fill in the rest of the code as required. At this point you should be able to run
an experiment (by calling master) and output a file with word counts. [15 points]
Q4: Use the code in the first part of this exercise on 'timing' to plot various curves (you can use Excel or another tool to
plot the curves and only use the code here for collecting 'raw' data) where the length of the document is on the x axis
and varies from 10-100000 in increments of 10,000, and the y-axis is time units. Note that this means you will be computing data for approximately 100000/10000 = 10 x,y points. In practice, we would be considering more points to get a smoother curve, but it is unnecessary for this assignment.
<Note: Write additional functions as you see fit to collect/write out the raw data.
Do not modify master for this question; we will use it for grading Q3. For each length (i.e. each x value on your plot) in
your experiment, you MUST generate 10 documents (to account for randomness of sampling), and then average the data over the ten. In the plots below, make sure to show error bars.>
Specifically:
(a) Plot a curve showing how much time the mapper takes as the length of a document increases.
(b) Plot a curve showing how much time the reducer takes as the length of a document increases
(hint: review the start-end timing methodology at the beginning of this file, and record 'time' variables where
appropriate to help you get the data for the plot)
[25 points]
EXTRA CREDIT [5 points]: Use 'top' to collect data on memory consumption and give me a single plot where memory
consumption (in standard units of your choice) is on the y-axis instead of time. Is memory consumption linear
in the length of the documents? If the program terminates quickly for small-length documents, you can plot the curve
starting from a 'higher' length like 20,000
"""
def read_in_dictionary_to_list(input_file):
    # this function must return a list. Normally, good software engineering requires you to deal with 'edge cases' such
    # as an empty file, or a file that does not exist, but you can safely assume the input_file will be the path
    # to words_alpha.txt.
    pass # pass is a placeholder that does nothing at all. It is a syntactic device; remove and replace it with your own code when ready
#print(len(read_in_dictionary_to_list(input_file="path to words_alpha.txt"))) # what this hopefully also conveys is that you actually have to
# call a function with inputs to get it to 'do something'. Otherwise, it's just defined and sitting there...waiting.
def generate_document(list_of_words, document_length):
    pass
# this function is only for instructional purposes, to help you understand dicts. Don't forget to call it to get it to do something!
def playing_with_dicts():
    test_dict = dict()
    # keys and values can be any data type. In this case, the key is a string and the value is an integer. Values can potentially be
    # complicated data structures, even Python dictionaries themselves!
    sample_words = ['the', 'cow', 'jumped', 'over', 'the', 'moon']
    for s in sample_words:
        if s not in test_dict: # is s a key already in test_dict?
            test_dict[s] = 0 # initialize count of s to 0
        test_dict[s] += 1 # at this point, s HAS to be a key in test_dict, so we just increment the count
    print(test_dict) # what happens? make sure to play and experiment so that you get the hang of working with dicts!
#playing_with_dicts() # uncomment for the function above to do something.
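# Optional illustration (not required by the assignment) of the note above that values can themselves be dictionaries:
# nested = dict()
# nested['doc_1'] = {'the': 2, 'cow': 1} # the value for 'doc_1' is itself a dict of word counts
# nested['doc_2'] = {'the': 2, 'sheep': 1}
# print(nested['doc_1']['the']) # prints 2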
"""
Background on global_data (if you don't want to read this, make sure to check out the code fragment at the very end
of this file; it provides some intuition on what I've explained below and gives you a way to try things out!):
Just like in class, the mapper takes a 'document' i.e. a list that you generated
as input. The tricky part is the output, since in an 'implemented' mapreduce system, there is a mechanism to ensure that the 'key-value' pairs that are
emitted by the mapper are routed correctly to the reducer (as we discussed in class, all that this means is that the key-value
pairs that have the same key are guaranteed to be routed to the same reducer. Recall that reducers and mappers do not share information
otherwise, thus being embarrassingly parallel). In this implementation, because everything is being done within a single
program we will use a global data structure called 'global_data' to 'simulate' this mechanism. As the code shows, global_data
is a 'dict()' which is a python data structure (DS) that supports keys and values. See my code fragment playing_with_dicts, play with
them! They're the most 'pythonic' DS of all.
So where does global_data come in? We use it to store all key-value pairs emitted by the mapper. To do so, the 'value'
in global_data is actually a list. The list contains all the values emitted by individual mappers. For example, imagine
that the word 'the' occurs thrice in one document and twice in another document. global_data now contains a key 'the'
with value [3,2]. The reducer will independently receive key-value pairs from global_data.
reduced_data works in a similar way; it records the outputs of the reducer. You will have to write out the contents
of reduced_data to a file.
"""
global_data = dict()
reduced_data = dict()
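# For intuition (based on the example in the docstring above): if 'the' occurs three times in one document and twice
# in another, then after map has been called on both documents global_data will contain the key 'the' with the value
# [3, 2], and after reduce is called on that key-value pair reduced_data will contain 'the' with the value 5.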
# already written for you
# as a unit test, try to send in a list (especially with repeated words) and then invoke print_dict using global_data as input
def map(input_list):
    local_counts = dict()
    for i in input_list:
        if i not in local_counts:
            local_counts[i] = 0
        local_counts[i] += 1
    for k, v in local_counts.items():
        if k not in global_data:
            global_data[k] = list()
        global_data[k].append(v)
def print_dict(dictionary): # helper function that will print a dictionary in a nice way
    for k, v in dictionary.items():
        print(k, ' ', v)
#already written for you
def reduce(map_key, map_value): # remember, in python we don't define or fix data types, so the 'types' of these arguments
    # can be anything you want!
    total = 0
    if map_value:
        for m in map_value:
            total += m
    reduced_data[map_key] = total
def master(input_file, output_file): # see Q3
    word_list = read_in_dictionary_to_list(input_file)
    # write the code below (replace pass) to generate 10 documents, each with the properties in Q2. You can use a
    # list-of-lists i.e. each document is a list, and you could place all 10 documents in an 'outer' list. [6 points]
    pass
    # Call map over each of your documents (hence, you will have to call map ten times, once over each of your documents).
    # Needless to say, it's good to use a 'for' loop or some other iterator to do this in just a couple of lines of code. [6 points]
    pass
    for k, v in global_data.items(): # at this point global_data has been populated by map. We will iterate over keys and values
        # and call reduce over each key-value pair. Reduce will populate reduced_data
        reduce(k, v)
    # write out reduced_data to output_file. Make sure it's a comma-delimited csv; I've provided a sample output in sample_output.csv [3 points]
    pass
# master(input_file="path to words_alpha.txt", output_file="output to a file of your choice") # uncomment this to invoke master
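# The function below is an optional sketch (one possible approach, not the required or only way) of how you might
# collect the raw timing data for Part III Q4 without modifying master: under one reading of the instructions, each
# MR job runs over a corpus of 10 generated documents, the per-document mapper times are ADDED within a job, the
# global data structures are cleared between jobs, and the whole experiment is repeated 10 times per document length.
# The name run_timing_experiment and the return format are our own illustrative choices; it assumes you have already
# filled in generate_document.
def run_timing_experiment(word_list, document_length, num_corpora=10, docs_per_corpus=10):
    mapper_times, reducer_times = list(), list() # one (summed) mapper time and one reducer time per MR job
    for _ in range(num_corpora):
        global_data.clear() # clear the global data structures between MR jobs (see instruction 3 at the top)
        reduced_data.clear()
        documents = [generate_document(word_list, document_length) for _ in range(docs_per_corpus)]
        job_map_time = 0.0
        for doc in documents:
            doc_start = time.time()
            map(doc)
            job_map_time += time.time() - doc_start # ADD (not average) per-document mapper times within the job
        mapper_times.append(job_map_time)
        reduce_start = time.time()
        for k, v in global_data.items():
            reduce(k, v)
        reducer_times.append(time.time() - reduce_start)
    return mapper_times, reducer_times # average each list (and compute error bars) to get one data point per length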
# simple test for map/reduce (uncomment and try for yourself!)
# map(['the', 'cow', 'jumped', 'over', 'the', 'moon'])
# map(['the', 'sheep', 'jumped', 'over', 'the', 'sun'])
# print(global_data)
# for k, v in global_data.items():
#     reduce(k, v)
# print(reduced_data)
This entire assignment is due before 11:59 pm on Monday, Sept. 13. Submit your work as a single zip file. While you may discuss the homework among yourselves, the work must be individual. There must be no sharing of code.
PART I
We will explore a basic issue that arises often in analytics, namely whether to use a one-sided vs. two-sided Student's t-test, and also whether the test should be paired. You may want to review Student's t-test either from your old statistics text or from Wikipedia (it is accurate in this case). You will be using the data below. Assume that 'statistical significance' means a confidence level of 95%. To do the analysis in the questions, you may use any tool you wish (I will not ask for evidence), including Excel, Python, R, etc. I do recommend that you use a tool, because it would not do to lose points on manual calculation errors.
My primary goal in giving you this exercise is to reinforce the importance of basic statistics and significance testing. Feel free to look up any and every resource on the Web to brush up on what you need to know. We’re aiming for conceptual clarity and application, not memorization.
Test subject ('student')   GPA (Fall 2018)   GPA (Spring 2019)
1                          3.44481587332     3.63235695644
2                          3.40753716919     3.2735320054
3                          3.67967040671     3.47597264719
4                          3.49235971237     3.45727477966
5                          3.35806029563     3.20981902735
6                          3.59876412408     3.56866681634
7                          3.20857956506     3.15388226146
8                          3.49077194424     3.56383533036
9                          3.4864916754      3.62542408166
10                         3.39281679695     3.00151409103
1. [10 points] The Dean wants to know if the average GPA of the students is at least 3.5, the default in previous semesters being that it was below 3.5. Using g to represent the sample GPA mean for fall and G to represent the population GPA mean, write down the Dean's null (H0) and alternate (Ha) hypotheses.
2. [10 points] Assuming the population variances of the two semesters above are equal (but unknown), individually conduct the Student's t-test for each semester. Can you reject the null hypothesis for either semester individually?
3. [15 points] You are asked to determine if the average GPA has changed over the two semesters. You decide to conduct a paired t-test, assuming (as we will see, unwisely) that the same test subjects were sampled for their GPAs in both fall and spring. What do you find? Is the difference between the average GPAs statistically significant?
4. [15 points] In a conversation with the person who sampled the data, you now discover that the ten subjects in the fall semester are not the same as the subjects in the spring semester (however, you are told that you can still make the equal variance assumption about both semesters). Should you still go with the results above, or conduct a different t-test? If you do, what do you now find?
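As the instructions above note, you may use any tool for these calculations. Purely as an optional illustration (a sketch, not the required method), the scipy.stats calls below cover the three kinds of t-test involved. The variable names are our own, the data is copied from the table above, and the alternative argument (for one-sided tests) requires SciPy 1.6 or later; which alternative you choose, and which of these tests is actually appropriate for each question, is the point of the exercise.

from scipy import stats

# GPA data from the table above
fall   = [3.44481587332, 3.40753716919, 3.67967040671, 3.49235971237, 3.35806029563,
          3.59876412408, 3.20857956506, 3.49077194424, 3.4864916754, 3.39281679695]
spring = [3.63235695644, 3.2735320054, 3.47597264719, 3.45727477966, 3.20981902735,
          3.56866681634, 3.15388226146, 3.56383533036, 3.62542408166, 3.00151409103]

# one-sample t-test of one semester's GPAs against the hypothesized mean of 3.5 (questions 1-2);
# alternative can be 'two-sided', 'less', or 'greater' depending on how you set up H0 and Ha
print(stats.ttest_1samp(fall, popmean=3.5, alternative='greater'))

# paired t-test across semesters (question 3): only meaningful if the same subjects were measured twice
print(stats.ttest_rel(fall, spring))

# independent two-sample t-test under the equal-variance assumption (question 4)
print(stats.ttest_ind(fall, spring, equal_var=True))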
PART II
See hw-1.py
…