Effectiveness of Messages - Management
Communication Plan
I. Communication Goal
Our goal is to communicate the actions the company will take to improve the safety of its products to
quickly restore trust with our target audiences.
II. Target Audiences
Customers
o Customers that were especially affected by the safety incident reside in the Pacific Northwest of
the United States.
o When asked to identify their concerns, the sample customer base expressed concern about the safety of our toys and said they have considered buying similar products from other companies.
o Our long-time customers and potential new customers primarily consist of adults aged 30–45
with one or more children, and they consider themselves family-oriented.
o Our customers stated they are busy and will not likely have time to check on the progress of the
company’s efforts to implement a solution to the safety issue.
o Customers stated that they are unsure of the details of how the injury occurred and would like
more information on what happened, why it happened, how the issue will be resolved, and the
actions the company is taking to avoid similar situations in the future.
Employees
o The sample employee base consisted of members from the manufacturing team.
o The majority of our manufacturing employees have been working at the company for two or
more years.
o Our employees showed concern about the security of their jobs, and specifically about how the negative publicity will affect their reputation as employees of a company that manufactures and sells unsafe products.
o Employees stated that new employees do not receive sufficient training on safety practices.
Some employees stated that “employees have had to learn about safety practices over time.”
III. Campaign Messages
Magazine Interview: The company is planning to have a magazine article written about the incident and
the changes that the company intends to make to ensure the safety of its products. An interview will be
conducted for an article that will appear in Time Magazine next month. The following are responses to
questions in the interview:
“We want to apologize to our customers who have been negatively impacted by the incident that occurred.”
“As CEO of Toys For All, I am taking action to ensure a similar incident does not occur in the future.”
“I have worked in the manufacturing department of Toys For All for five years and never thought our production practices
or products presented any type of safety hazard for our customers or employees. I was honestly surprised to hear about
the incident. My team and I consider safety before all else.”
Social Media Post: The company is planning to post a visual message on the website and other social
media platforms, including Twitter, Facebook, Instagram, and LinkedIn. The following message will be
posted:
A message from the CEO: “Without discussing the details of the incident, I would like to first and foremost apologize to
our customers who have been negatively impacted by the incident. As the CEO of Toys For All, I am dedicated to taking
action to ensure a similar incident does not occur in the future. In our company promise, we are pledging to make
appropriate changes to our manufacturing process.”
Competency
In this project, you will demonstrate your mastery of the following competency:
· Evaluate the effectiveness of messages on targeted audiences in relation to communication project goals
Scenario
You work in the global marketing and communications department of a children’s toy store, Toys For All. An incident recently occurred where a child was injured by a malfunctioning product that your company designs, manufactures, and sells. This issue resulted in a lawsuit and serious distrust from the public, including even your most loyal customers. Your company heavily relies on support from customers, employees, and stakeholders. To maintain regular business activity with minimal disruption, your company will need to quickly and effectively articulate a plan to improve the safety of its products. The CEO has been working to create a plan to communicate to target audiences and has asked you to review two messages that the company intends to share with the public. You receive the following email from the CEO:
From: Chantel E. Ollinger
Date: xx/xx/xxxx 11:52 p.m.
Subject: High-Priority Review
Hello,
I am busy responding to the numerous media requests we’ve received regarding the incident, so thank you for stepping up to help review the campaign messages. Sharon from manufacturing and I sat down this evening to create a communication plan and draft messages that we are planning to release to the public ASAP. Our goal is to communicate the actions the company will take to improve the safety of our products and quickly restore trust with our target audiences. For part of our efforts, we organized a last-minute focus group to gather metrics based on our target audiences. Please review our communication plan and provide feedback, in an executive summary, on the effectiveness of each message related to each target audience. I would especially like to know whether the message content and delivery method we are proposing will be effective. Please evaluate the messages using other appropriate measures of effectiveness based on our target audiences and communication goals as you see fit.
Please let me know if you have any questions.
Thanks,
Chantel
[email protected]
CEO | Toys For All
Directions
Executive Summary: The CEO is busy working to hold off the media while you help ensure the messages will restore trust with your target audiences. The CEO wants you to review the drafted messages and evaluate each message for its effectiveness in communicating the actions the company will take to improve product safety and quickly restore trust with customers and employees. The messages are planned to be sent out as soon as possible, so the CEO is asking you to write an executive summary based on your evaluation. In your summary, address the following:
· Identify Audiences: Describe each target audience using the appropriate metrics. Consider which information is relevant in communicating with the target audience and achieving the project goal.
· Assess Message Content: Assess the effectiveness of the content of each message for each target audience. Address the following for each message in your evaluation:
· Does the content of the messages inform each target audience of the company’s proposed solution?
· Does the content of the messages align with each target audience’s values, interests, and lifestyle?
· Assess Delivery Method: Assess the effectiveness of the delivery method for each message for each target audience. Address the following in your evaluation:
· Does the message reach the target audiences within the intended time frame?
· Why are the delivery methods appropriate or inappropriate for each target audience?
· Determine an Additional Measure: Consider your target audiences and the project goal to determine an appropriate measure of effectiveness in addition to message content and delivery method. Support your measure of effectiveness with two considerations that will guide your evaluation of each message.
· Assess Additional Measure: Assess each message for each target audience using the additional measure of effectiveness you identified.
· Support Your Reasoning: Explain your reasoning for your analysis of each message using the three measures of effectiveness. Specifically, include which aspects of each message are effective or ineffective for each target audience, and discuss the target-audience metrics that led you to the conclusions in your evaluation.
What to Submit
Every project has a deliverable or deliverables, which are the files that must be submitted before your project can be assessed. For this project, you must submit the following:
Executive Summary to the CEO
Use the executive summary template to evaluate the effectiveness of campaign messages and deliver an executive summary to the CEO. Your summary should not exceed 2 pages. Cite all sources.
Supporting Materials
The following resource may help support your work on the project:
Citation Help
Need help citing your sources? Use the CfA Citation Guide and Citation Maker.
Reading:
Communication Plan
Please see this email attachment to evaluate the effectiveness of the CEO’s plan for communicating the company’s actions to improve product safety and quickly restore the trust of customers and employees.
Executive Summary Template Tips for Success
If you need help completing the executive summary template, please read these tips for success.
Executive Summary Template Visual Aid
For more assistance with the executive summary template, please see this visual aid.
The following rubric will be used to assess your project. The rubric is a detailed list of the specific expectations your project submission must meet to demonstrate mastery of the competency. You may resubmit the project until you have demonstrated mastery of each rubric row.
This competency has a Learning Resources area. This area includes units with resources such as readings and videos, which have been provided to help support your work on this project. You may be wondering how what you’re working on in the project fits together with these units. The following table shows how the different rubric rows align to the units.
If you need more help with a particular rubric row, you can use the resources in the matching unit to help support your work.
Project Rubric: Executive Summary to the CEO
Executive Summary
Which Resources Can Help?
Identify Audiences: Describes each target audience using the appropriate metrics
☐ Mastered
☐ Not Yet
· Unit Resources: Communication Project Goals
· Unit Resources: Target Audiences
Assess Message Content: Assesses the effectiveness of the content of each message for each target audience
☐ Mastered
☐ Not Yet
· Unit Resources: Evaluating Messages
Assess Delivery Method: Assesses the effectiveness of the delivery method for each message for each target audience
☐ Mastered
☐ Not Yet
· Unit Resources: Evaluating Messages
Determine Additional Measure: Considers each target audience and the project goal to determine an appropriate measure of effectiveness in addition to message content and delivery method
☐ Mastered
☐ Not Yet
· Unit Resources: Communication Project Goals
· Unit Resources: Measures of Effectiveness
Assess Additional Measure: Assesses each message for each target audience using the measure of effectiveness identified
☐ Mastered
☐ Not Yet
· Unit Resources: Evaluating Messages
Support Your Reasoning: Explains reasoning for analysis of each message using three measures of effectiveness and discusses target-audience metrics that led to determining which aspects of each message are effective or ineffective
☐ Mastered
☐ Not Yet
· Unit Resources: Target Audiences
· Unit Resources: Evaluating Messages
General
Which Resources Can Help?
Clearly conveys meaning with correct grammar, sentence structure, and spelling; shows understanding of audience and purpose
☐ Mastered
☐ Not Yet
· Academic Support
Lists sources where needed using citation methods with no major errors
☐ Mastered
☐ Not Yet
· Citation Help
2.3 Messages
Learning Objectives
1. Describe three different types of messages and their functions.
2. Describe five different parts of a message and their functions.
Before we explore the principles of language, it will be helpful to stop for a moment and examine some characteristics of the messages we send when we communicate. When you say something, you not only share the meaning(s) associated with the words you choose, but you also say something about yourself and your relationship to the intended recipient. In addition, you say something about what the relationship means to you as well as your assumed familiarity as you choose formal or informal ways of expressing yourself. Your message may also carry unintended meanings that you cannot completely anticipate. Some words are loaded with meaning for some people, so that by using such words you can “push their buttons” without even realizing what you’ve done. Messages carry far more than the literal meaning of each word, and in this section we explore that complexity.
Figure 2.4
Messages are often more complex than they first appear!
Source: Roberto Zingales, Message error 404, retrieved at: https://www.flickr.com/photos/filicudi/2891898817 CC-BY
Primary Message Is Not the Whole Message
When considering how to communicate, keep in mind there are three distinct types of messages you will be communicating: primary, secondary, and auxiliary.
Primary messages refer to the intentional content in a message, both verbal and nonverbal. These are the words or ways you choose to express yourself and communicate your message. For example, if you are sitting at your desk and a coworker stops by to ask you a question, you may say, “Here, have a seat.” These words are your primary message.
Even such a short, seemingly simple and direct message could be misunderstood. It may seem obvious that you are not literally offering to “give” a “seat” to your visitor, but to someone who knows only formal English and is unfamiliar with colloquial expressions, it may be puzzling. “Have a seat” may be much more difficult to understand than “please sit down.”
Secondary messages refer to the unintentional content in a message, both verbal and nonverbal. Your audience will form impressions of your intentional messages, both negative and positive, over which you have no control. Perceptions of physical attractiveness, age, gender, or ethnicity or even simple mannerisms and patterns of speech may unintentionally influence the message.
Perhaps, out of courtesy, you stand up while offering your visitor a seat; or perhaps your visitor has an expectation that you ought to do so. Perhaps a photograph of your family on your desk makes an impression on your visitor. Perhaps a cartoon on your bulletin board sends a message.
Auxiliary messages refer to the intentional and unintentional ways a primary message is communicated. This may include vocal inflection, gestures and posture, or rate of speech that influence the interpretation or perception of your message.
When you say, “Here, have a seat,” do you smile and wave your hand to indicate the empty chair on the other side of your desk? Or do you look flustered and quickly lift a pile of file folders out of the way? Are your eyes on your computer as you finish sending an e-mail before turning your attention to your visitor? Your auxiliary message might be, “I’m glad you came by, I always enjoy exchanging ideas with you” or “I always learn something new when someone asks me a question.” On the other hand, it might be, “I’ll answer your question, but I’m too busy for a long discussion,” or maybe even, “I wish you’d do your work and not bother me with your dumb questions!”
Parts of a Message
When you create a message, it is often helpful to think of it as having five parts:
1. Attention statement
2. Introduction
3. Body
4. Conclusion
5. Residual message
Each of these parts has its own function. They may not always be clear, and sometimes we skip parts of them or they may be missing, but through an understanding of each part and its function we can begin to focus on improving our message and its delivery, reception, and effectiveness.
The attention statement, as you may guess, is used to capture the attention of your audience. While it may be used anywhere in your message, it is especially useful at the outset. There are many ways to attract attention from readers or listeners, but one of the most effective is the “what’s in it for me” strategy: telling them how your message can benefit them. An attention statement like, “I’m going to explain how you can save up to $500 a year on car insurance” is quite likely to hold an audience’s attention.
Once you have your audience’s attention, it is time to move on to the introduction. In your introduction you will make a clear statement about your topic; this is also the time to establish a relationship with your audience. One way to do this is to create common ground with the audience, drawing on familiar or shared experiences, or by referring to the person who introduced you. You may also explain why you chose to convey this message at this time, why the topic is important to you, what kind of expertise you have, or how your personal experience has led you to share this message.
After the introduction comes the body of your message. Here you will present your message in detail, using any of a variety of organizational structures. Regardless of the type of organization you choose for your document or speech, it is important to make your main points clear, provide support for each point, and use transitions to guide your readers or listeners from one point to the next.
At the end of the message, your conclusion should provide the audience with a sense of closure by summarizing your main points and relating them to the overall topic. In one sense, it is important to focus on your organizational structure again and incorporate the main elements into your summary, reminding the audience of what you have covered. In another sense, it is important not to merely state your list of main points again, but to convey a sense that you have accomplished what you stated you would do in your introduction, allowing the audience to have psychological closure.
The residual message, a message or thought that stays with your audience well after the communication is finished, is an important part of your message. Ask yourself the following:
Figure 2.5
The channel you use impacts the message!
Source: Kārlis Dambrāns, Nokia Lumia 930, retrieved from: https://www.flickr.com/photos/janitors/14711724451 CC-BY
· What do I want my listeners or readers to remember?
· What information do I want to have the audience retain or act upon?
· What do I want the audience to do?
Messages are complex, but that complexity provides a wide range of options. From marketing to public relations to sales, the ability to tailor a message to the customer, group, audience, or community can enhance its effectiveness. Messages can involve verbal and nonverbal aspects, from words to images and video, and can be delivered across a range of channels, from e-mail to tweets, television and Internet, print, radio, and even in person. Each channel has its advantages and disadvantages to consider.
Which is the best channel? That depends on several factors, but let’s consider richness, or the “bandwidth” of a given channel, the message complexity, as well as feedback and urgency. For example, if we have a complex story with many graphs, charts, maps, and YouTube interviews to compile into a message, a tweet, limited to 140 characters, would not be our best choice. That tweet might be a message with a URL link to your blog that contains all of those information elements, and that might be a wise use of the social media tool, but the tweet itself has a limited ability to transmit information. “Bandwidth” refers to the ability of a channel to transmit information, or literally the width of the band, so in saying a tweet has “limited bandwidth” we underscore its limits in terms of conveying complex information. While 140 characters might sound limited, its speed is a key advantage. Rather than looking to a tweet for message complexity, if we value speed or urgency above feedback or complexity, then a tweet may rank higher than a blog, for example, depending on context and use.
Let’s also consider other variables. Perhaps the information you want to convey is visual or emotional, or communicates feelings that aren’t easily captured in words, graphs, or pie charts. Perhaps the information you want to convey doesn’t need words at all. A hug, a smile, or an embrace can communicate significant information without words. Finally, let’s consider the possibility that you need to deliver information that communicates both facts and emotions, requires words and images, and is quite important. In each case we can see how some channels, like an e-mail or a tweet, just won’t work as well as an in-person message delivered in words, with gestures, in real time. Perhaps you need to communicate your condolences to a friend whose parent has just died. While a card might serve at a distance, if you are nearby and have a relationship you value, you might want to communicate your sympathy in person.
Finally, let’s examine several types of channels used to convey or transmit messages. In-person conversations provide verbal and nonverbal cues as well as immediate feedback. E-mails are more limited in their ability to share information and receive feedback. Radio can be a great way to transmit information across a distance, sharing one message with a large audience, but doesn’t allow for complex messages, visual reinforcement, or much feedback. The table below lists many common channels used to deliver messages, ranked on a Likert scale from 1, the lowest ranking, to 4, the highest. Take a look at the common business channels and pretend you are a marketing or public relations professional with a sales message, announcing an important event. Each channel has its cost, advantages, and disadvantages. If you know your audience is quite large, an in-person meeting with each person simply isn’t feasible. Given the scenario, which method(s) would you select?
Table 2.1
Channel(s)                                                      Richness   Complexity   Feedback   Urgency/Immediacy
In-person interactions                                          4          4            4          4
In-person presentations                                         4          4            4          4
Videoconferencing, Skype, Google Hangout                        3          3            3          4
Live voice or telephone                                         2          2            3          4
Text, tweet, WhatsApp, e-mail                                   1          1            2          2
Print and image (blog, online news article) without comments    3          3            1          1
Print, image, and/or video with comments                        3          3            2          1
Radio                                                           2          2            1          1
Print, letters, and memos                                       3          3            1          1
Print, magazines and newspapers                                 3          3            1          1
Print, business reports, sales analysis, annual reports         3          3            1          1
Handwritten letter                                              4          3            2          1
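The comparison the table invites can be made concrete in code. The sketch below is illustrative only and is not part of the reading: the channel names are abbreviated to a subset of the table, the best_channel helper and its weights are invented for the example, and the feasible filter stands in for practical constraints (cost, audience size) that the table does not capture.

```python
# Channel rankings adapted from Table 2.1, as (richness, complexity,
# feedback, urgency/immediacy), each rated 1 (lowest) to 4 (highest).
# This is an abbreviated subset of the table, for illustration.
CHANNELS = {
    "in-person interaction": (4, 4, 4, 4),
    "videoconference": (3, 3, 3, 4),
    "telephone": (2, 2, 3, 4),
    "text/tweet/e-mail": (1, 1, 2, 2),
    "blog post without comments": (3, 3, 1, 1),
    "radio": (2, 2, 1, 1),
    "handwritten letter": (4, 3, 2, 1),
}

def best_channel(weights, feasible=None):
    """Pick the channel with the highest weighted score.

    weights  -- importance of (richness, complexity, feedback, urgency)
    feasible -- optional set of channel names to consider; stands in for
                constraints the table omits, such as cost or audience size
                (an in-person meeting with a huge audience is not feasible).
    """
    candidates = feasible if feasible is not None else CHANNELS.keys()
    return max(
        candidates,
        key=lambda name: sum(w * r for w, r in zip(weights, CHANNELS[name])),
    )

# Weighting all four criteria equally, in-person interaction dominates.
print(best_channel((1, 1, 1, 1)))  # in-person interaction

# For an urgent, simple announcement to a large audience, only mass or
# low-touch channels are feasible, and the tweet's urgency advantage wins.
print(best_channel(
    (0.1, 0.1, 0.2, 0.6),
    feasible={"text/tweet/e-mail", "blog post without comments", "radio"},
))  # text/tweet/e-mail
```

This mirrors the chapter’s point: no channel is “best” in the abstract; the choice falls out of how much weight the situation puts on each criterion, and which channels are feasible at all.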
As we can see, messages are an important part of business communication. We can craft a message with words and images to evoke emotions, reinforce brand image, or gain attention that leads to interest and sales. We may view messages from a marketing viewpoint, or through the lens of public relations, or as someone who serves customers one-on-one. Our messages represent our ideas, and are open to interpretation. Through feedback and understanding we can adjust our message again and again as we target audiences and focus on key points. We can consider what a person has told us they want in a cellphone, for example, and tailor our response to their needs. Messages are an important part of business communication, are impacted and influenced by the channels we use to transmit or share them, and require our consideration, planning, and preparation in order to maximize our return on investment.
Key Takeaway
Messages are primary, secondary, and auxiliary. A message can be divided into a five-part structure composed of an attention statement, introduction, body, conclusion, and residual message. Messages are impacted by the channels we choose to deliver them through and to receive feedback from.
Exercises
1. Choose three examples of communication and identify the primary message. Share and compare with classmates.
2. Choose three examples of communication and identify the auxiliary message(s). Share and compare with classmates.
3. Think of a time when someone said something like “please take a seat” and you correctly or incorrectly interpreted the message as indicating that you were in trouble and about to be reprimanded. Share and compare with classmates.
4. How does language affect self-concept? Explore and research your answer, finding examples that can serve as case studies.
5. Choose an article or opinion piece from a major newspaper or news Website. Analyze the piece according to the five-part structure described here. Does the headline serve as a good attention statement? Does the piece conclude with a sense of closure? How are the main points presented and supported? Share your analysis with your classmates. For a further challenge, watch a television commercial and do the same analysis.
6. Role playing exercise: You’ve been tasked with either 1) delivering negative news to an employee or 2) addressing a complaint from a significant and long-term customer. Choose three channels to deliver your message, rank them in order of effectiveness, and explain why you selected and ranked them. Share and compare with classmates.
7. ShamWow exercise: Think of an advertisement that got your attention. Consider how it gained your attention and whether it was effective. Share and compare with classmates.
8. When is it best to choose a channel that limits the scope of interaction? When is an e-mail better than a face-to-face conversation for delivering a message? Explain your viewpoint.
9. Choose a job description and identify the key messages. For example, what is the core problem that the position addresses and why? Share and compare with classmates.
10. ePortfolio Exercise: What are the core message(s) in your ePortfolio? How can you best communicate them using which channels and why?
11. Branding Exercise: Choose a brand and identify its core message(s). Explain how you identified the messages and what channels are used by the brand to communicate them. Share and compare with classmates.
Chapter 6: Evaluating Communication Campaigns
· By: Thomas W. Valente & Patchareeya P. Kwan
· In: Public Communication Campaigns
· Chapter DOI: http://dx.doi.org.ezproxy.snhu.edu/10.4135/9781544308449.n6
· Subject: Health Communication, Mass Communication, Public Relations
· Keywords: campaigns; contraception; contraceptive methods; control groups; evaluation; program implementation; summative evaluation
Evaluating Communication Campaigns
Thomas W. Valente and Patchareeya P. Kwan
This chapter provides an introduction to communication campaign evaluation. The first section briefly defines the nature of campaign evaluation, including debates, functions, and barriers. The chapter then presents an evaluation framework that delineates the steps in the evaluation process and presents study designs used to conduct evaluation research. Sections following the evaluation framework first consider aspects of the formative evaluation phase: theories, study designs, statistical power and sample size, levels of assignment, and analysis. These sections distinguish evaluation procedures based on sample types (cross-sectional vs. panel) and levels of assignment (group, individual, self-selected). The process evaluation phase sections discuss campaign exposure and interpersonal communication. The evaluation phase sections review impact measurement and dissemination. The importance of specifying a theoretical basis for an evaluation is stressed throughout. Examples from both past and recent studies are given throughout the chapter, and it closes with a discussion of impact assessment and dissemination procedures.
The Nature of Evaluation
Evaluation is the systematic application of research procedures to understand the conceptualization, design, implementation, and utility of interventions (here, communication campaigns). Evaluation research determines whether a program was effective, how it did or did not achieve its goals, and the efficiency with which it achieved them (Boruch, 1996; Mohr, 1992; Rossi, Freeman, & Lipsey, 1998; Shadish, Cook, & Leviton, 1991; see also chapters in this volume by Atkin & Freimuth; Rice & Foote; Salmon & Murray-Johnson; and Snyder & LaCroix). Evaluation contributes to the knowledge base of how programs reach and influence their intended audiences so that researchers can learn lessons from these experiences and implement more effective programs in the future. Rather than assessing staff performances, evaluation is meant to identify sources or potential sources of implementation problems (i.e., process evaluation) in addition to finding out whether and how the program worked (i.e., summative evaluation) so that corrections can be made for current and future programming purposes. Empirical work on evaluation as a specialty field has blossomed in the past few decades, and today, evaluation is seen as a distinct research enterprise (Shadish, Cook, & Leviton, 1991).
Three major debates often frame an evaluation enterprise. One debate concerns the use of quantitative versus qualitative methodologies. The balance of emphasis between the two methods should be driven by their ability to answer the research questions being posed and the availability of data. In order to evaluate an intervention effectively, evaluators may want to employ both quantitative and qualitative methods, as the findings from one will help to supplement the results of the other (Guba & Lincoln, 1981). A second debate concerns whether nonexperimental designs can adequately control for selectivity and other biases inherent in the public communication evaluation process (Black, 1996; Victora, Habicht, & Bryce, 2004). With proper theoretical and methodological specification, researchers can use quasi-experimental designs to evaluate campaigns. A third debate concerns the advantages and disadvantages of internal versus external evaluators. External evaluators often have more credibility but are usually less informed than internal evaluators about aspects of the campaign’s implementation that may influence its effectiveness. See Valente (2002) for more details on these issues.
Evaluation research serves three functions in any communication campaign. It improves the probability of achieving program success by forcing campaign programmers to specify explicitly in advance the goals and objectives of the campaign and the theoretical or causal relations leading to those expectations. Once the campaign objectives are specified, it becomes possible to create programs to meet these objectives and develop instruments to measure them. The first function of an evaluation, then, is to determine the expected impacts and outcomes of the program. For example, if a campaign was designed to raise the awareness of the dangers of substance abuse, the evaluation proposal should state the percentage increase expected in this awareness. The second function of a campaign evaluation is to help planners and scholars understand how or why a particular campaign worked or did not work. Knowing how or why a program succeeded or failed—that is, the theoretical and causal as well as implementation reasons—increases the likelihood that successes can be repeated and failures avoided in future behavioral promotion programs. Finally, a third function of evaluation is to provide information relevant for planning future activities. Evaluation results can indicate what behaviors or which audiences should be addressed in the next round of activities. In sum, we conduct evaluation (particularly process and summative evaluation) to know whether the program worked, how and why it worked, and how to make future programs better.
There are many barriers to rigorous evaluation. One major barrier is the perceived cost of evaluation. Many programmers argue that money spent on evaluation research should not be diverted from program activities. This argument neglects the fact that evaluation should be an integral part of any program. Research costs should normally be limited to approximately 10% to 15% of the total project budget (Piotrow, Kincaid, Rimon, & Rinehart, 1997). These costs, however, provide a high rate of return to the program by improving its implementation and the basis for explaining and understanding the results. A second barrier to rigorous evaluation is the perception that research takes too much time. To minimize this concern, evaluation results should be available before, during, immediately after, and at some later time after the program is completed. Finally, many people object to evaluation on the grounds that it detracts from program implementation. Evaluations should not interfere with programs but rather should be considered an integral part of, and complement to, the program. Indeed, planning and implementing a rigorous evaluation clarifies the timing and objectives of various program components such as when to conduct the program launch, when to start broadcasts, and how to space supplementary activities to maximize reach and effectiveness (Rice & Foote, Chapter 5).
Evaluation Frameworks
Figure 6.1 provides a conceptual framework of the campaign evaluation process. Evaluation research is often conducted in three distinct phases: formative, process, and summative (or outcome). The first step in the evaluation process is to identify and assess the needs that drive the desire for a communication campaign. These may be identified by the relevant communities themselves (see Bracht and Rice, Chapter 20). Once needs are identified, formative research is conducted to more clearly understand the subject of the program.
Formative research consists of those activities that define the scope of the problem, gather data on possible intervention strategies, learn about the intended audience, and investigate possible factors that might limit program implementation. Formative research is also used to test message strategy, test the effectiveness of possible communication channels, and learn about audience beliefs, motivations, perceptions, and so on (Atkin & Freimuth, Chapter 4). Programmers need to understand the current problem and intended audience prior to developing interventions to address them. They need to know things such as what the intended audience thinks, how they feel, what they do, why they choose not to perform a certain behavior, and so on (see also Dervin & Foreman-Wernet, Chapter 10). Typically, formative research is conducted using qualitative research methods such as focus group discussions (FGDs), in-depth interviews, and ethnographic observations, though initial surveys are useful as well.
A recent study used FGDs to assess the barriers and facilitators to adopting obesity prevention recommendations among parents of overweight children (Sonneville, La Pelle, Taveras, Gillman, & Prosser, 2009). The study showed that there were many barriers to adopting healthier habits for their children, including perceived cost, lack of information, lack of transportation and safe facilities, and difficulty with changing habits. However, in the FGDs, parents indicated that the availability of physical activity programs and healthier food choices were facilitators to adopting new behaviors. The results of the FGDs allowed program developers to tap into information that they would otherwise not have known and made it possible to develop an intervention strategy that specifically addressed the concerns of their audience.
Process research (also known as monitoring) consists of those activities conducted to measure the degree of program implementation to determine whether the program was delivered as it was intended. Process research is usually conducted by collecting data on when, where, and for how long the campaign is implemented. The evaluator might want to hire individuals to watch TV or listen to the radio at prespecified times to record the program, conduct content analyses of local newspapers, or collect usage data from a campaign website. In the Bolivia National Reproductive Health Program (NRHP) project (Valente & Saba, 1997), we requested logbooks from the advertising agency contracted to produce and broadcast information about reproductive health in Bolivia. We discovered that the campaign was not implemented as intended in the smaller cities, and this warranted a rebroadcast of the campaign (Valente & Saba, 1998).
Figure 6.1 Evaluation Framework.
Summative (or outcome) research consists of those activities conducted to measure the program's impact and the lessons learned from the study and to disseminate research findings (see also Salmon & Murray-Johnson, Chapter 7). Summative research is usually conducted by analyzing quantitative data collected before, during, and after the campaign, depending on the research design (see below). Summative research is an interactive and iterative process in which preliminary findings are shared with program planners and other stakeholders prior to widespread dissemination. The stakeholders provide insight into the interpretation of data and can help set the agenda for specific data analyses. Once the summative research is completed, the findings can be disseminated. The evaluation framework in Figure 6.1 delineates the steps in the evaluation process, but it is theory concerning human behavior that informs it.
Summative Evaluation Research
Theoretical Considerations
Program goals and objectives should not emerge spontaneously but rather stem from theoretical explanations for behavior (although, indeed, the campaign may also be testing proposed theoretical explanations and causal relations). The choice of a theoretical orientation to develop a model for and explain the behavior under study can drive both the design of the program and the design of the evaluation, particularly the design of the study measures and surveys. If, for example, self-efficacy is theorized to be an important influence on whether individuals use condoms during sex, then the program may try to increase self-efficacy among the intended audience. The evaluation therefore needs to measure self-efficacy before, during, and after the campaign to determine whether the program did indeed deliver self-efficacy messages or training (process evaluation), and change self-efficacy levels in the appropriate communities (summative evaluation). As an example, the Sister-to-Sister: The Black Women's Health Project was a skill-building HIV/STD risk reduction intervention that aimed to increase self-efficacy for impulse control, carrying condoms, and consistent condom use with a partner (O'Leary, Jemmott, & Jemmott, 2008). In order to test program impact, self-efficacy was measured at baseline, posttest, and 3-, 6-, and 12-month follow-ups.
Weiss (1972) made a useful distinction between successful theoretical specification (determined by summative evaluation) and successful program implementation (determined by process evaluation) via three different scenarios in which the program or the theory might succeed or fail. In the first scenario, a successful program sets in motion a causal process specified by a theory that results in the predicted and desired outcome. In the second scenario, there is a failure of theory in which there is a successfully implemented program (as measured by process evaluation) that sets in motion a causal process that did not result in the desired outcome. In the third scenario, because of a program failure, the intervention did not start or complete an expected causal sequence, so the theory could not be tested.
Thus, the congruence between theory and program implementation can be very important, and a tight linkage between the two increases the likelihood the program will be judged a success by avoiding cases in which there is a theory failure (Chen & Rossi, 1983). If the program fails, you may not be able to test the theory, but if the theory fails, you risk incorrectly concluding that the program itself failed (or succeeded). Having set goals and objectives and determined the theoretical underpinnings for the program and the evaluation, the researcher must then specify a study design.
Study Designs
Study design is the specification and assignment of intervention and control conditions; the sampling methodology and sample selection procedures; and the statistical tests to be used to make decisions about the effectiveness of a campaign. An appropriate research design is one that minimizes threats to internal and external validity given the constraints of program implementation and resources. Threats to validity are factors that conspire to provide alternative explanations for intervention effects. Examples of threats to validity include history (occurrence of controllable events during the study), maturation (e.g., aging of subjects), testing (effect of surveys on responses), selectivity (bias due to selection process), and sensitization (interaction between pretest and intervention). The following sections address levels of assignment and sampling procedures and then present six study designs.
Levels of Assignment and Sampling
Specification of intervention and control conditions can occur at the individual, group or community, or self-selection level. Individual assignment occurs when the researcher can specify which individuals will be exposed to the intervention and which ones will not. Ideally, researchers can identify a population, select a random sample, and then randomly assign individuals to groups that receive the intervention (or different interventions) and groups that do not (control groups). Group assignment occurs when the researcher specifies that certain groups such as schools, organizations, or communities will be the focus of the intervention and that the intervention can be restricted to these groups. In such cases, the intervention is applied to one or some groups, while other groups that do not receive the intervention act as comparison groups. Self-selection assignment occurs when the study subjects themselves determine who is exposed to the intervention. Self-selection assignment occurs most commonly in mass media campaign studies as a majority of the population has the ability to selectively hear or see the program and many people may selectively recall it. Once the level of assignment is specified, researchers then determine whether it is feasible to collect panel samples (i.e., interviews with the same respondents at multiple points in time) or cross-sectional samples (i.e., interviews with different respondents at one or multiple points in time).
Study Designs
A standard way to depict study designs is to use an X to refer to an intervention condition and an O to refer to a data collection observation (e.g., administration of a survey). Subscripts distinguish different Xs and Os. For example, X1 and X2 may refer to two different interventions such as a media-only campaign (X1) and a media campaign plus supplementary interpersonal counseling (X2). For observations, subscripts are often used to refer to observations in different conditions (e.g., intervention and control groups) and at different time periods (e.g., before and after a campaign) (Campbell & Stanley, 1963; Cook & Campbell, 1979).
Table 6.1 Study Designs
Table 6.1 arrays the six study designs in descending order in terms of the number of observation groups for each. Design 1, postcampaign only, collects data on the degree of exposure to the campaign, any self-reports of whether respondents feel that the campaign influenced them, and possibly behavioral data. Design 1 is the weakest evaluation design because it only provides data on posttreatment conditions and does not control for any threats to validity.
Design 2, pre- and postcomparison, reduces some threats to validity as it provides a baseline against which to compare the postcampaign scores. Design 2 is often used when identifying or creating a control group is not feasible. A cross-sectional pre- and postcomparison consists of selecting two comparable independent samples before and after the campaign and making population-level comparisons. For example, Kerrigan, Telles, Torres, Overs, and Castle (2008) collected cross-sectional data from about 500 female sex workers in Brazil before and after an HIV/STI prevention intervention that used a community development component. They found that social cohesion and mutual aid were associated with consistent condom use but that only a small number of women participated in community-related activities over the course of the intervention, and thus, there were limited changes in the community and no significant increase in preventive behaviors among the women. The shortcoming of this methodology is that there may be fluctuations in sample characteristics that account for differences in outcomes between the two surveys. Perhaps the first group of women in the Kerrigan and others’ study was somehow different than the second group, and these differences were not controllable (e.g., everyone who was eager to participate in community-level activities signed up for the study immediately and completed the pretest, while those who were not so interested did not initially sign up, and thus, the posttest sample consisted of less eager women). Moreover, secular trends, historical events, or other factors may have created the behavior change.
A panel pre- and postdesign can help eliminate some of these rival explanations as the same persons are interviewed and changes can be measured on individuals (i.e., not solely at the population level). For example, in an evaluation of a street theater's effectiveness at reducing family planning misinformation, Valente, Poppe, Alva, de Briceno, and Cases (1994) interviewed passersby about their family planning knowledge before and after the drama. The study found that the drama reduced misinformation by 9.4%. The shortcoming of the panel pre/post design, however, is that, because there is no control group, it is hard to say what would have happened in the absence of the campaign and hard to determine the influence that taking the survey had on respondents' misinformation scores (validity threats of history, testing, and sensitization, among others).
Designs 3 and 4 include a post-only control group and a pre- and postcontrol group, respectively. Both of these designs are strong and provide relatively good measures of program impact as the intervention group's scores can be compared to the control group's. For example, Valente and Bharath (1999) interviewed 100 randomly selected people who attended a three-hour drama designed to improve knowledge about HIV/AIDS transmission. The same 100 persons were interviewed immediately after the drama, and an additional 100 persons were interviewed after the drama only. Results indicated that the drama increased knowledge by 26% among the first group (i.e., we compared predrama with postdrama knowledge). Comparison of the postdrama scores between those interviewed before and those not interviewed before provided an estimate of the effect of the intervention attributable to taking the pretest (3% in this case). The main limitations of Designs 1 through 3 are that they cannot measure the degree of change due to history or maturation, which are threats to validity that happen over time.
Design 4 enables the comparison of difference scores for both the treatment and control groups and a computation of the difference of differences. Because Design 4 has a group measured before and after the intervention that did not receive the intervention, the scores for the control group can be used to estimate the amount of change attributable to history and maturation. Design 4 enables researchers to subtract pretest scores from the posttest ones for both experimental and control groups and then subtract these differences from one another (the difference of differences). This value provides a measure of program effectiveness, but it does not, however, control for testing and sensitization effects.
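To make the Design 4 arithmetic concrete, the difference-of-differences computation can be sketched as follows (all pre/post mean scores below are invented for illustration):

```python
# Design 4 "difference of differences": the change in the treatment group
# minus the change in the control group. All scores are hypothetical.
def difference_of_differences(treat_pre, treat_post, ctrl_pre, ctrl_post):
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Treatment knowledge rose from 50 to 70; the control drifted from 50 to 55
# (e.g., due to history or maturation). The estimated program effect is
# (70 - 50) - (55 - 50) = 15 points.
effect = difference_of_differences(50, 70, 50, 55)
```

The control group's drift (5 points here) is exactly the history/maturation component that Designs 1 through 3 cannot remove.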
Design 5 adds a second intervention group, which does not get pretested, in order to control for the possible effects of the pretest sensitization. Design 5 is quite rigorous as it controls for most threats to validity. The only limitation to Design 5 is that it is restrictive in its ability to determine how much of the effects due to validity threats can be attributed to specific validity threats. For example, the post-only intervention groups may have scores comparable to the control group due to historical factors that the researcher wants to separate from those effects due to the pretest sensitization. As a hypothetical example, suppose a campaign was launched to improve knowledge on the harmful effects of substance abuse. Further suppose that a historical event such as the death of a famous person due to a drug overdose occurs during the study period. Comparisons of group scores will not be able to differentiate whether posttest scores changed due to the interaction between the program and the historical event or interactions between the pretest and the historical event.
Design 6, the Solomon (1949) Four Group Design, adds a final control group that receives no intervention and no pretest survey. The Solomon Four Group Design is the most rigorous as it controls for (or allows measurement of) all threats to validity. These six study designs are usually implemented with panel samples because Designs 4, 5, and 6 cannot be implemented with cross-sectional samples unless groups are used at the level of assignment, requiring a large number of groups.
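Since Table 6.1 is not reproduced here, the following sketch reconstructs the six designs in the X/O notation from the descriptions above; the exact row layout is an assumption rather than the book's table. Each inner list is one group of respondents, read left to right over time.

```python
# Six evaluation designs in Campbell & Stanley's X/O notation, reconstructed
# from the chapter's descriptions. O = observation (survey), X = intervention.
STUDY_DESIGNS = {
    1: {"name": "Postcampaign only",          "groups": [["X", "O"]]},
    2: {"name": "Pre- and postcomparison",    "groups": [["O", "X", "O"]]},
    3: {"name": "Post-only control group",    "groups": [["X", "O"], ["O"]]},
    4: {"name": "Pre- and postcontrol group", "groups": [["O", "X", "O"], ["O", "O"]]},
    5: {"name": "Added unpretested intervention group",
        "groups": [["O", "X", "O"], ["O", "O"], ["X", "O"]]},
    6: {"name": "Solomon Four Group",
        "groups": [["O", "X", "O"], ["O", "O"], ["X", "O"], ["O"]]},
}

def pretested(design):
    """Count the groups in a design that receive a pretest (start with an O
    before any X)."""
    return sum(1 for g in STUDY_DESIGNS[design]["groups"] if g[0] == "O" and len(g) > 1)
```

Comparing Design 5's unpretested intervention group with Design 4's pretested one is what isolates pretest sensitization, and Design 6's final post-only control completes the set.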
Statistical Power and Sample Size
Once the study design is specified, researchers must determine the needed sample size to conduct the study. Available resources often determine the sample size used for a study because many studies have fixed budgets and researchers often decide that they will spend a portion of that for data collection. However, regardless of the available resources, it is prudent to conduct power analysis to determine 1) the appropriate sample size needed before the study is conducted, and 2) the degree of reliability in the study results. Power is the ability of a statistical test to detect a significant association when such an association actually exists, that is, avoiding Type II errors (Borenstein, Rothstein, & Cohen, 1997; Cohen, 1977). For example, suppose a study finds that participants improved their knowledge by 10% but reports that this increase was not statistically significant. Power analysis may show that the power is only 40%, meaning that, out of 100 such tests conducted with this sample size and effect size, only 40 would detect a significant association. This indicates that a larger sample drawn from the population (or greater control of other variables) would be needed for an effect of this size to reach statistical significance.
Power analysis consists of the interrelationship between four components of study design: 1) effect size (magnitude of difference, correlation coefficient, etc., also known as Δ), 2) significance level (Type I error or alpha level—usually .05 or .01), 3) sample size, and 4) power (calculated as 1 − beta, where beta is the Type II error rate—usually .80 or .90). These four components are related to one another by power formulas (equations) so that, if any three components are known, the fourth can be computed (Hill & Lewicki, 2007). Thus, a researcher can determine the minimum sample size needed for a study by specifying the expected or presumed effect size, the desired significance level, and desired power.
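As a sketch of that interrelationship, the standard normal-approximation formula for comparing two independent proportions can be coded directly; the 40%-to-50% example below is invented:

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided comparison of two
    independent proportions. p1 and p2 define the effect size; alpha is the
    Type I error rate; power = 1 - beta, where beta is the Type II error rate."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = .05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for power = .80
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Detecting a rise in a behavior from 40% to 50% at alpha = .05 with 80%
# power requires roughly 385 respondents per group; a larger expected
# effect (40% to 60%) needs far fewer.
needed = n_per_group(0.40, 0.50)
```

Fixing any three of the four components and solving for the fourth is exactly the computation that dedicated power software (e.g., the Borenstein et al. program cited above) automates.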
Levels of Assignment and Analysis
Many interventions are developed for specific communities or are tested in multiple communities before being scaled up to regional or national audiences. In many studies, certain communities (schools, organizations, towns, etc.) act as intervention sites, while other communities act as controls (Berkowitz, Huhman, & Nolin, 2008; Fox, Stein, Gonzalea, Farrenkopf, & Dellinger, 2010; Rogers et al., 1999). Study designs in which the group is the unit of randomization are referred to as group-randomized trials (Murray, 1998). In group-randomized trials, data are collected on individual and sometimes appropriate group-level behavior, but because individuals are clustered into groups, researchers need to conduct their analysis in a manner that accounts for differences in variation between groups versus variation within groups. Generally, individuals in the same group will be more like one another (have less variation) than individuals from different groups (have more variation). Given the group differences in variances, statistical techniques that correct for clustering, such as adjustments based on the intraclass correlation, should be used (Murray, 1998).
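The practical consequence of that between-group similarity can be sketched with the usual design-effect formula, DEFF = 1 + (m − 1) × ICC, where m is the average group size and ICC is the intraclass correlation; the school example below uses invented numbers:

```python
def design_effect(cluster_size, icc):
    """DEFF = 1 + (m - 1) * ICC: how much clustering inflates the variance of
    estimates relative to a simple random sample of the same total size."""
    return 1 + (cluster_size - 1) * icc

def effective_n(total_n, cluster_size, icc):
    """Number of independent observations a clustered sample is worth."""
    return total_n / design_effect(cluster_size, icc)

# Hypothetical trial: 20 schools of 50 students each with a modest ICC of
# .05 gives DEFF = 1 + 49 * .05 = 3.45, so the 1,000 students carry the
# information of roughly 290 independent respondents.
deff = design_effect(50, 0.05)
```

Even a small ICC therefore erodes power substantially, which is why group-randomized trials are usually powered on the number of groups rather than the number of individuals.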
When impact studies are conducted using groups as the unit of assignment, such that some groups or communities are exposed to the campaign and others serve as comparisons, the researcher can compute group-level values for the variables and conduct statistical analysis using the group as the unit of analysis (Kirkwood, Cousens, Victora, & de Zoysa, 1997). This analytic technique provides a test of program impact at the community level.
The more common analytic approach for community-based data is to create dummy variables indicating whether the respondent was in the intervention (or which intervention) or control group. For example, when the campaign is broadcast in multiple communities, a variable is created that indicates which respondents were in intervention groups and which were in comparison groups. A statistical test such as ANOVA is conducted to determine whether outcomes were significantly different between conditions. If there are multiple conditions—for example, mass media supplemented with interpersonal persuasion—then a new variable is created with a value for each condition so that differences across each condition can be tested. Murray (1998) has presented techniques that control for the within-group variations inherent in this type of analysis.
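A minimal version of that condition-level test, computed from first principles rather than a statistics package (and, unlike Murray's methods, ignoring within-group clustering), might look like:

```python
def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA across condition groups, e.g.,
    media-only vs. media-plus-interpersonal vs. control. Note: this
    simple version does not correct for within-community clustering."""
    all_obs = [x for g in groups for x in g]
    grand_mean = sum(all_obs) / len(all_obs)
    k = len(groups)
    # Between-group sum of squares: how far each condition mean sits
    # from the grand mean, weighted by group size.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of observations around their
    # own condition mean.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = k - 1
    df_within = len(all_obs) - k
    return (ss_between / df_between) / (ss_within / df_within)
```

The resulting F is compared against the F distribution with (k − 1, N − k) degrees of freedom; clustered designs would instead use the mixed-model corrections Murray describes.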
Many mass media campaigns cannot be restricted to intervention and control conditions because they are broadcast to an entire region or even to an entire country. In such cases, comparison between conditions is not possible, and the researcher has to rely on before–after surveys with independent or panel surveys. Although such designs, referred to as quasi-experimental, are usually considered less rigorous, evidence suggests (Berkowitz et al., 2008; Castellanos-Ortega et al., 2010; Snyder, Hamilton, Mitchell, Kiwanuka-Tondo, Fleming-Milici, & Proctor, 2004; Snyder & LaCroix, Chapter 8) that they still provide valid evaluations of campaign effects. The chief threat to validity for quasi-experimental campaign evaluations is that researchers have difficulty determining whether associations between program exposure and outcomes are genuine or spurious. In other words, respondents may report an increase in campaign exposure and state that they started a new behavior, but the researcher cannot attribute this change solely to exposure.
To address the problem of spuriousness and other validity threats with cross-sectional data, researchers can 1) include a time variable to control for any secular trends and sample variations, 2) control for sociodemographic and other characteristics with multivariate statistical techniques, 3) create valid and reliable measures of campaign exposure (verified by process evaluation) that can be linked in a dose–response relationship (Jato, Simbakalia, Tarasevich, Awasum, Kohings, & Ngirwamungu, 1999), and 4) attempt to eliminate rival or alternative explanations. In panel studies, researchers can also do 1 to 4 above but have the further advantage of controlling for and measuring the influence of past behavior.
No single methodological technique completely solves the spuriousness problem. The best approach is to start with a plausible theoretical model and use multiple sources of data at multiple points in time to construct a coherent explanation for a campaign's impact (or lack thereof).
Campaign Exposure
A critical variable used to evaluate public communication campaigns is campaign exposure. Campaign exposure is the degree to which audience members have access to, recall, or recognize the intervention. High levels of campaign exposure can indicate that the intended message reached the audience (or perhaps some segments). High levels of campaign exposure can also lead to campaign success (Berkowitz et al., 2008). Campaign exposure often provides the first step in a behavior change sequence and provides a necessary indicator to distinguish between program and theory failure. Note that process evaluation can help identify how much of the planned exposure or treatment is actually engaged by particular audiences (see Rice & Foote, Chapter 5).
A further recommendation when looking at campaign exposure in both panel and cross-sectional studies is to collect data from respondents on their recollections of exactly when or for how long the respondents had been engaging in the behavior. Such data provide a link between self-reports and the timing of programmatic interventions (i.e., exposure). For example, in the Bolivia NRHP campaign, we knew when the specific advertisements promoting contraceptive methods were broadcast, and we asked respondents how long they had been using their current contraceptive method. We linked the two pieces of information and found that the modal month that new adopters of contraceptives started using their method was the same month that the TV and radio spots were first broadcast.
Interpersonal Communication
Public communication campaigns can be very effective at stimulating interpersonal communication (e.g., Valente, Poppe et al., 1994). Campaigns often purposively stimulate interpersonal communication, but the impact of this media-generated communication will not be apparent until the information percolates through interpersonal networks and individuals have time to share their attitudes, experiences, and opinions with one another (Boulay, Storey, & Sood, 2002; Valente, 1995). Valente extended the two-step flow hypothesis (whereby media influence opinion leaders who in turn influence others—Katz, 1957) by developing a threshold model in which the media first influence those with low thresholds who in turn influence others in their personal networks with higher thresholds (Valente, 1995, 1996; Valente & Saba, 1998).
Measuring Impact
A primary goal of summative evaluation research is to measure the effects of an intervention (see Rice & Foote, Chapter 5; Salmon & Murray-Johnson, Chapter 7). The researcher first determines whether outcomes have changed between baseline and follow-up (for example, with a t-test). Change scores can then be computed and analysis of variance tests conducted to determine whether change was different for different demographic and socioeconomic groups, such as between males and females or between different ethnic groups. Typically, the researcher will then compute a regression equation with the follow-up score as the dependent variable and the baseline and intermediate time period scores as independent variables (this is known as lagged analysis) and include sociodemographic and system constraint variables as controls. Importantly, campaign researchers should also include variables that measure access to (or literacy of) the media channels over which the campaign is disseminated (perhaps obtained through the process evaluation) and include variables for campaign exposure (see Valente, 2002).
In the Bolivia NRHP campaign evaluation example, we analyzed data from a cross-section and panel sample of respondents who were interviewed before and after the nine-month campaign. Two of the outcomes were an index of awareness of contraceptive methods and self-reported use of a modern method of contraceptives (Valente & Saba, 2001). We hierarchically regressed method awareness using ordinary least squares (OLS) on 1) sociodemographic variables of education, income, gender, marital status, age, and number of children, 2) city method prevalence rate, 3) time (baseline or follow-up), 4) network exposure (the number of people in the respondent's network whom he or she thinks use a contraceptive method), and 5) campaign exposure.
We regressed contraceptive use using logistic regression because it is a dichotomous variable (i.e., use of a modern method of contraceptives or no use) on the same variables. Results indicate that education, income, being female, network, and campaign exposure were all significantly positively associated with method awareness in the cross-sectional data. Time had a slight negative association, which indicates that the follow-up sample had lower method awareness. For contraceptive use, all variables except number of children, time, and campaign exposure were associated with use of contraceptives. The panel data analysis showed similar results for method awareness with the exception that age and number of children were also significantly associated with method awareness. Method awareness at baseline was significantly associated with method awareness at follow-up. The adjusted odds ratio indicated that persons who used contraceptives at baseline were 4.9 times more likely to use contraceptives at follow-up than those who did not use contraceptives at baseline. Campaign exposure was significantly associated with awareness but not with use, indicating that the campaign influenced knowledge but did not directly influence use. This analysis lends support to an interpretation of the campaign's effect as indirectly influencing behavior through method awareness (but see Valente & Saba, 1998).
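The odds ratio itself is simple to compute from a 2×2 table of baseline versus follow-up use. The sketch below uses invented counts; note that the 4.9 reported above is an adjusted odds ratio from the logistic model, which this crude version does not reproduce:

```python
def odds_ratio(a, b, c, d):
    """Crude odds ratio for a 2x2 table of behavior at two waves:
        a: used at baseline and at follow-up   b: used at baseline only
        c: used at follow-up only              d: used at neither wave
    OR = (a/b) / (c/d) = (a*d) / (b*c)."""
    return (a * d) / (b * c)

# Hypothetical counts: among baseline users, the odds of follow-up use are
# 6 times those of baseline non-users.
crude_or = odds_ratio(40, 10, 20, 30)
```

Adjusted odds ratios from a logistic regression shrink or grow this crude value depending on how the control variables are correlated with both baseline and follow-up use.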
Dissemination of Findings
Once the study has been completed, the researcher should disseminate the results. Dissemination (sometimes referred to as feedback or feed-forwarding) is often neglected in evaluation plans because researchers are preoccupied with planning and conducting the study and often feel that the results will speak for themselves. Moreover, it is difficult to anticipate the appropriate audiences and channels for dissemination before the findings are known.
Evaluation findings can be disseminated in at least five ways: 1) via scheduled meetings with the programmers, 2) at scientific conferences relevant to the topic, 3) through key findings or technical reports, 4) through academic papers published in refereed journals, and 5) on websites, an approach that is fast becoming standard because findings can be disseminated quickly to a wide audience. Campaigns not documented in evaluation reports are often forgotten within a year or two.
Conclusion
Public communication campaign evaluation represents an exciting intellectual opportunity to bridge behavioral theory with important practice. The value of these behavior change programs, however, is fundamentally dependent on determining who was reached, how the program influenced them, and how programs can be improved in the future. Without evaluation, the utility of implementing public communication campaigns is subject to debate, criticism, and even ridicule. Disseminating information to foster a more informed and hence more empowered populace represents a significant means to improve the quality of life for all. The challenge for communication campaign evaluators is to implement rigorous evaluations that advance the science of communication and simultaneously provide relevant information to the practice of communication.
References
Berkowitz J. M., Huhman M., & Nolin M. J. (2008). Did augmenting the VERB Campaign advertising in select communities have an effect on awareness, attitudes and physical activity? American Journal of Preventive Medicine, 34(6), S257–S266.
Black N. (1996). Why we need observational studies to evaluate the effectiveness of health care. British Medical Journal, 312(7040), 1215–1218.
Borenstein M., Rothstein H., & Cohen J. (1997). Power and precision: A computer program for statistical power analysis and confidence intervals. Teaneck, NJ: Biostat.
Boruch R. (1996). Randomized experiments for planning and change. Thousand Oaks, CA: Sage.
Boulay M., Storey J. D., & Sood S. (2002). Indirect exposure to a family planning mass media campaign in Nepal. Journal of Health Communication, 7(5), 379–399.
Campbell D. T., & Stanley J. C. (1963). Experimental and quasi-experimental designs for research. Boston: Houghton Mifflin.
Castellanos-Ortega A., Suberviola B., Garcia-Astudillo L. A., Holanda M. S., Ortiz F., Llorca J., & Delgado-Rodriguez M. (2010). Impact of the Surviving Sepsis Campaign protocols on hospital length of stay and mortality in septic shock patients: Results of a three-year follow-up quasi-experimental study. Critical Care Medicine, 38(4), 1036–1043.
Chen H. T., & Rossi P. H. (1983). Evaluating with sense: The theory-driven approach. Evaluation Review, 7(3), 283–302.
Cohen J. (1977). Statistical power analysis for the behavioral sciences (Rev. ed.). New York: Academic Press.
Cook T. D., & Campbell D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Boston: Houghton Mifflin.
Fox S. A., Stein J. A., Gonzalea R. E., Farrenkopf M., & Dellinger A. (2010). A trial to increase mammography utilization among Los Angeles Hispanic women. Journal of Health Care for the Poor and Underserved, 9(3), 309–321.
Guba E. G., & Lincoln Y. S. (1981). Effective evaluation: Improving the usefulness of evaluation results through responsive and naturalistic approaches. San Francisco, CA: Jossey-Bass.
Hill T., & Lewicki P. (2007). STATISTICS methods and applications. Tulsa, OK: StatSoft.
Jato M., Simbakalia C., Tarasevich J., Awasum D., Kohings C., & Ngirwamungu E. (1999). The impact of multimedia family planning promotion on the contraceptive behavior of women in Tanzania. International Family Planning Perspectives, 25(2), 60–67.
Katz E. (1957). The two-step flow of communication: An up-to-date report on a hypothesis. Public Opinion Quarterly, 21, 61–78.
Kerrigan D., Telles P., Torres H., Overs C., & Castle C. (2008). Community development and HIV/STI-related vulnerability among female sex workers in Rio de Janeiro, Brazil. Health Education Research, 23(1), 137–145.
Kirkwood B. R., Cousens S. N., Victora C. G., & de Zoysa I. (1997). Issues in the design and interpretation of studies to evaluate the impact of community-based interventions. Tropical Medicine and International Health, I, 1022–1029.
Mohr L. B. (1992). Impact analysis for program evaluation. Newbury Park, CA: Sage.
Murray D. M. (1998). Design and analysis of group-randomized trials. New York: Oxford University Press.
OLeary A., Jemmott L., & Jemmott J. (2008). Mediation analysis of an effective sexual risk-reduction intervention for women: The importance of self-efficacy. Health Psychology, 27(2s), 180–184.
Piotrow P. T., Kincaid D. L., Rimon J., & Rinehart W. (1997). Health communication: Lessons for public health. New York: Praeger.
Rogers E. M., Vaughan P. W., Swalehe R. A., Rao N., Svenkerud P., Sood S., & Alfred K. (1999). Effects of an entertainment-education radio soap opera on family planning behavior in Tanzania. Studies in Family Planning, 30, 193–211.
Rossi P. H., Freeman H. E., & Lipsey M. (1999). Evaluation: A systematic approach. Thousand Oaks, CA: Sage.
Shadish W. R., Cook T. D., & Leviton L. C. (1991). Foundations of program evaluation: Theories of practice. Newbury Park, CA: Sage.
Snyder L. B., Hamilton M. A., Mitchell E. W., Kiwanuka-Tondo J., Fleming-Milici F., & Proctor D. (2004). A meta-analysis of the effect of mediated health communication campaigns on behavior change in the United States. Journal of Health Communication, 9(1), 71–96.
Solomon R. L. (1949). An extension of control group design. Psychological Bulletin, 46, 137–150.
Sonneville K., La Pelle N., Taveras E., Gillman M., & Prosser L. (2009). Economic and other barriers to adopting recommendations to prevent childhood obesity: Results of a focus group study with parents. BMC Pediatrics, 9(81), 1–7.
Valente T. W. (1995). Network models of the diffusion of innovations. Cresskill, NJ: Hampton Press.
Valente T. W. (1996). Social network thresholds in the diffusion of innovations. Social Networks, 18, 69–79.
Valente T. W. (2002). Evaluating health promotion programs. New York: Oxford University Press.
Valente T. W., & Bharath U. (1999). An evaluation of the use of drama to communicate HIV/AIDS information. AIDS Education and Prevention, 11, 203–211.
Valente T. W., Poppe P. R., Alva M. E., de Briceno V, & Cases D. (1994). Street theater as a tool to reduce family planning misinformation. International Quarterly of Community Health and Education, 15(3), 279–289.
Valente T. W., & Saba W. (1997). Reproductive health is in your hands: The national media campaign in Bolivia. SIECUS Rep, 25(2), 10–13.
Valente T. W., & Saba W. P. (1998). Mass media and interpersonal influence in the Bolivia National Reproductive Health Campaign. Communication Research, 25, 96–124.
Valente T. W., & Saba W. (2001). Campaign recognition and interpersonal communication as factors in contraceptive use in Bolivia. Journal of Health Communication, 6(4), 1–20.
Victora C. G., Habicht J., & Bryce J. (2004). Evidence-based public health: Moving beyond randomized trials. American Journal of Public Health, 94(3), 400–405.
Weiss C. (1972). Evaluation research: Methods for assessing program effectiveness. Englewood Cliffs, NJ: Prentice Hall.
4 Measuring Communication Effectiveness Across Diverse Backgrounds and Missions
As with any program, measuring the effectiveness of communication is vital to understanding its value and learning how to shape it for greater effectiveness.
Brig. Gen. Kathleen Cook expressed to the committee a desire for definable, measurable, results-based outcomes, using the approach that “we are trying to change the knowledge, attitudes, or beliefs of a specific audience to achieve a specific action.” This clarity, she said, would not only inform the message, but also provide a construct under which to align the U.S. Air Force’s (USAF’s) weight of effort and indicate what the service should measure itself against.
DEFINING “EFFECTIVE COMMUNICATION” IN THE AIR FORCE CONTEXT
What is “effective” communication? Akin to weapons deployment, it can be described as communication that reaches its target, impacts its audience, and achieves the intended objective for that particular audience. Several participants stated that whether in the context of an individual message or an Air Force-wide communication strategy, this requires setting objectives, planning, and acting in a deliberate and organized fashion.
The discussion among several of the workshop participants highlighted that in the process of planning the elements of a communication strategy, leadership should first define what they hope to achieve. These participants acknowledged that communication requires measurement, and the selected measures must be specific, measurable, attainable, relevant, and timely. To that end, they noted, leadership needs a clear and consistent vision of what success looks like; this vision of success should be tied to goals.
Ms. Katie Paine, chief executive officer of Paine Publishing, established the context for discussing this topic at the workshop by summarizing the Barcelona Principles, a set of industry standards for measuring communications effectiveness and outcomes, developed by an assembly of communication-measurement professionals in Spain.[1]
A few workshop participants discussed that an Air Force communication “baseline” could be defined as instituting within the communication culture a sense that the Air Force must measure its communications in order to understand their outcomes clearly.
_______________
[1] For more information on the standards, see Paine Publishing, “Standards Central,” http://www.painepublishing.com/standards-central, accessed October 5, 2015.
Suggested Citation: National Academies of Sciences, Engineering, and Medicine. 2016. “4 Measuring Communication Effectiveness Across Diverse Backgrounds and Missions.” Strategies to Enhance Air Force Communication with Internal and External Audiences: A Workshop Report. Washington, DC: The National Academies Press. doi: 10.17226/21876.
MEASURES OF EFFECTIVENESS AND SUCCESS
Ms. Paine described six steps to achieving a quality of measurement that complies with the Barcelona Principles:
1. Define your goals—Identify the role of communication in reaching the goal, define the activity metrics (e.g., percentage increase in visits to AF.mil), and determine the outcome metrics (e.g., percentage growth in program funding or recruits) in terms of the mission.
2. Understand stakeholders—“Connect the dots” between stakeholders, prioritize them, and understand their needs and interests.
3. Define your benchmarks—Compare past performance over time or compare results with those of other services. The most important entity to measure against, Ms. Paine said, is “whatever keeps the generals up at night.”
4. Define your metrics—Pick the right metric. The ideal index is actionable, is there when you need it, continuously improves your processes, and gets the results you need.
5. Select data collection tools—Select the right tool(s) depending on what you are measuring.
6. Use the data to make better decisions—Ask “So what?” three times, in order to understand what the data actually mean in real-world terms.
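Steps 1 and 3 together reduce to comparing a metric against its benchmark. The sketch below shows that comparison as a percentage-change calculation; the metric names and figures are invented for illustration, not drawn from the workshop.

```python
# Hedged sketch: comparing activity and outcome metrics against a
# baseline, as suggested by Ms. Paine's steps 1 and 3. All numbers
# here are hypothetical placeholders.

def pct_change(baseline, current):
    """Percentage change of a metric relative to its baseline."""
    if baseline == 0:
        raise ValueError("baseline must be nonzero")
    return (current - baseline) / baseline * 100

# Hypothetical activity metric: annual visits to a public website.
visits_last_year = 120_000
visits_this_year = 138_000
print(f"Visits: {pct_change(visits_last_year, visits_this_year):+.1f}%")
# → Visits: +15.0%

# Hypothetical outcome metric: recruits signed per quarter.
recruits_last_q = 450
recruits_this_q = 441
print(f"Recruits: {pct_change(recruits_last_q, recruits_this_q):+.1f}%")
# → Recruits: -2.0%
```

The same arithmetic supports either benchmark Ms. Paine named: past performance over time, or a comparison with another service.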
Ms. Wendi Strong asked how one might measure messages in advance (i.e., pre-launch testing). Ms. Paine recommended focus-group studies around the country for this purpose.
Col. Sean Monogue asked how one measures trust. Ms. Paine replied that it is mainly done with surveys. Turning the survey into a game is a useful technique, she said, because it fits with the gaming proclivities of today’s employees. It tests institutional knowledge in a way that makes the survey fun.
AVAILABLE ANALYTICAL TOOLS AND METHODS
A variety of specific tools and techniques used to measure communication effectiveness were presented and discussed. The appropriate tool depends on the objective:
· Content analysis can be used to evaluate messaging, positioning, themes, or sentiment.
· Survey research can be used to measure awareness, perception, relationships, or preference.
· Web analytics can be used to measure engagement, action, or purchases.
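To make the content-analysis bullet concrete, a minimal lexicon-based sentiment score can be sketched as follows. This is a toy illustration only: the word lists and sample sentence are invented, and production content-analysis tools use far richer models than keyword matching.

```python
# Hedged sketch of keyword-based content analysis for sentiment.
# The lexicons below are hypothetical and deliberately tiny.

POSITIVE = {"trust", "proud", "safe", "effective", "improved"}
NEGATIVE = {"unsafe", "failure", "distrust", "concern", "risk"}

def sentiment_score(text):
    """Net sentiment: (positive hits - negative hits) / total words."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(len(words), 1)

sample = "Employees feel proud and safe, but customers voice concern."
print(round(sentiment_score(sample), 3))
# → 0.111
```

The same counting approach extends to messaging themes or positioning: swap the lexicons for theme keywords and tally mentions per document.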
As one example, Mr. Ed Brill said that IBM conducts a biennial “climate survey” of its employees. It finds considerable ambiguity in views of what IBM is. He said that, in contrast to the Air Force, lower-level employees tend to report that they have more information about IBM than managers do.
Tools also exist for mapping social networks and analyzing activities in social media. Ms. Paine described a method for measuring engagement with social media. Additionally, Foghound’s Ms. Lois Kelly had pointed out the need for a “predictive analytics capability” for measuring social media (see discussion in Chapter 3). Lastly, Mr. Robert Harles described technically sophisticated techniques for monitoring social media use and analyzing engagement, which together can form a “social media radar” for predictive capability.
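One elementary step toward such a “social media radar” is ranking accounts by how often others mention them (in-degree), a crude proxy for influence. The sketch below assumes a hand-collected list of (source, mentioned) pairs; the accounts and edges are invented for illustration, and real monitoring tools of the kind Mr. Harles described are far more sophisticated.

```python
# Hedged sketch: ranking accounts in a mention network by in-degree.
# The mention data is hypothetical.

from collections import Counter

# (source account, mentioned account) pairs harvested from posts
mentions = [
    ("a", "c"), ("b", "c"), ("d", "c"),
    ("a", "b"), ("c", "e"), ("d", "b"),
]

in_degree = Counter(target for _, target in mentions)
ranked = in_degree.most_common()
print(ranked[0])  # the most-mentioned account and its mention count
# → ('c', 3)
```

Degree counting like this identifies candidate key influencers; richer centrality measures (betweenness, eigenvector) refine the ranking on larger networks.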
KEY CONSIDERATIONS
In Mr. Harles’ view, setting measurable goals up front is critical—that is, how do you drive strategic impact? For example, he suggested focusing less on building large numbers of followers and more on building meaningful relationships with those who contribute valuable insight or influence others toward achieving stated objectives.
Ms. Paine said that it is essential not only to agree on metrics of success, as defined by senior leadership, but also to then tie rewards to them. In business, bonuses and salaries are tied to measurable outcomes.
A key role of employee feedback surveys, she continued, is to ascertain whether employees are able to connect their roles directly back to the corporate vision; the surveys should reveal whether the communication strategy is resulting in a culture of communication. Ms. Paine stated that value must be measured in real terms. In other words, data on reach, engagement, and impressions are less important than driving actionable insights, changing user behaviors, or influencing policy and strategic outcomes.
Several participants noted that the point of measuring communication is to have a basis on which to take action. For example, such data might allow the Air Force to determine the best way to deal with crises by overcoming negative or extraneous external communications. They can help determine what internal or external information is being used effectively and what is not. With these tools, the Air Force can use the data to inform important decisions. However, as Mr. Harles noted, “When you have all the analytics, you still have to synthesize that information”—or, as Ms. Paine put it, “Research without insight is just trivia.”
Theme 9—Measurement. Several measurement experts presented established tools and techniques for measuring the effectiveness of communication in an organization. As discussed, the process begins with senior leadership setting goals and determining desired outcomes—not simply outputs. Communication excellence can then be measured against quantifiable objectives and resulting performance, and the data can be used to improve decision making. Measuring social media use and analyzing how employees are communicating provides important awareness of the communication networks and key influencers within the organization. There are experts in industry who guide organizations in confidently and effectively planning and implementing a well-integrated communication strategy using the best available tools.
Executive Summary
Goal
To communicate the actions the company will take to improve the safety of products and quickly restore trust with target audiences
Target Audiences
· Customers: <<audience metrics>>
· Employees: <<audience metrics>>
Measures of Effectiveness
1. Message Content
a. Does the content of the messages inform each target audience of the company’s proposed solution?
b. Does the content of the messages align with each target audience’s values, interests, and lifestyle?
2. Delivery Method
a. Does the message reach the target audiences within the intended time frame?
b. Why are the delivery methods appropriate or inappropriate for each target audience?
3. <<Measure of Effectiveness>>
a. <<considerations>>
b. <<considerations>>
Evaluation

Measures of                    Magazine Interview        Social Media Post
Effectiveness                  Customers   Employees     Customers   Employees
-----------------------------  ----------  ----------    ----------  ----------
Message Content
Delivery Method
<<measure of effectiveness>>
From: Chantel E. Ollinger
Date: xx/xx/xxxx 11:52 p.m.
Subject: High-Priority Review
Hello,
I am busy responding to the numerous media requests we’ve received regarding the incident, so thank you for stepping up to help review the campaign messages. Sharon from manufacturing and I sat down this evening to create a communication plan and draft messages that we are planning to release to the public ASAP. Our goal is to communicate the actions the company will take to improve the safety of our products and quickly restore trust with our target audiences. For part of our efforts, we organized a last-minute focus group to gather metrics based on our target audiences. Please review our communication plan and provide feedback, in an executive summary, on the effectiveness of each message related to each target audience. I would especially like to know whether the message content and delivery method we are proposing will be effective. Please evaluate the messages using other appropriate measures of effectiveness based on our target audiences and communication goals as you see fit.
Please let me know if you have any questions.
Thanks,
Chantel
[email protected]
CEO | Toys For All