PUA 725: Midterm Assignment

Student’s Name

Institutional Affiliation

Question 2

A logic model is a detailed visual representation of a program and its components. The inputs (resources), activities, outputs, and outcomes (short-, medium-, and long-term) identified in logic models help program evaluators and stakeholders understand a few things: how an organization functions, which variables matter, and, most importantly, the mission. A good logic model provides perspective on the day-to-day operations of an organization and helps focus attention on possible areas of improvement.

A program logic model presents how you intend to carry out an initiative and explains why the initiative is expected to work. Presenting the how and the why in a single model sheds light on the strengths and weaknesses of the plan being implemented. This allows further planning to be carried out and adjustments to be made to meet the goals that have already been laid out. In this way, a program logic model reduces the time wasted on rectifying plans and making late adjustments. Defining goals allows all efforts to be directed toward their achievement, reassuring everyone involved about positive outcomes and minimizing resource waste and misuse.

The competence of the program evaluator and planner can be demonstrated through the program logic model they present. For any organization to succeed in its objectives, those involved in plan formulation, management, and execution must be sufficiently competent. By assessing the program logic model, stakeholders can determine whether they would like to continue their association with the persons they have tasked to plan their initiative (Snow & Snow, 2017). They can do this because a program logic model includes clearly defined inputs, outputs, and outcomes. The way these elements are presented is crucial to the success of the initiative because it gives stakeholders insight into the goals of the planner, which must be aligned with the core values and vision of the initiative.

Dialogue and collaboration are critical components of every plan. The stakeholders need to communicate with each other to ensure the plan's effectiveness. A logic model can be a conversation starter and driver, enabling everyone to provide input throughout the project (Snow & Snow, 2017). Program evaluators can draw on the ensuing conversations to ensure that all the goals set are included in the plan and followed throughout its execution. Concerns can be adequately addressed, and issues of ethics and inclusivity can be tackled more efficiently using the program logic model.

Program evaluators can also use the logic model to formulate their reports and media relevant to the program. They can likewise use it to note achievements and to gauge progress throughout the execution of the program. Often, the logic model is not an accurate representation of what happens in reality. Evaluators can thus use it to distinguish between expectations and reality and to convey these findings to the stakeholders. Furthermore, the program logic model enables evaluators to describe to stakeholders how a project will be executed and to assign roles to everyone involved. In these ways, a program logic model is essential in the evaluation phase of the program.

Although a logic model can serve as a strong foundation for understanding how a program works, it has limitations. First, a logic model is only one step in the evaluation process; it represents a program's intent, not a standalone evaluation method or the program's reality. Logic models also change and adapt along with an organization, and those changes can depend on the audience, the data provided, and the purpose of the evaluation.

The first step in creating a logic model is to ask why the evaluation is being done, who the intended audience will be, and what type of information (data) they are seeking. An evaluator must also ask whether the logic model will be centered around a theory, an activity, or a particular result. Afterward, the evaluator creates an outline depicting the key components of the program (a minimal code sketch of such an outline follows the list below):

  1. Inputs – these are the resources an organization uses to carry out its activities. When I first heard of “inputs” during the lecture, I was only thinking of funding. However, inputs can also be staff, training, and even research done by volunteers of the program.
  2. Activities – these are the tools, actions, and, most importantly, the processes that are used to achieve the organization’s mission. Activities can include preparation, workshops, and referrals.
  3. Outputs – these are the byproducts stemming from a program’s activities and are (usually) quantitative targets of services that a program delivers.
  4. Outcomes – these are the expected short-, medium-, and long-term changes a program hopes to achieve through its activities.
    1. Short-term outcomes are changes in attitude, skills, and knowledge.
    2. Medium-term outcomes are changes in actions and behaviors.
    3. Long-term outcomes are changes in condition or status.
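To make the outline above concrete, the four components can be captured in a simple data structure. The sketch below is a minimal illustration in Python; the inputs, activities, outputs, and outcomes shown are hypothetical placeholders, not taken from any program discussed in this assignment.

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Minimal container for the four core logic-model components."""
    inputs: list = field(default_factory=list)      # resources: funding, staff, training, volunteer research
    activities: list = field(default_factory=list)  # processes: preparation, workshops, referrals
    outputs: list = field(default_factory=list)     # (usually) quantitative service targets
    outcomes: dict = field(default_factory=dict)    # keyed by "short", "medium", "long"

# Hypothetical example for an employee training program
model = LogicModel(
    inputs=["grant funding", "two trainers", "volunteer research"],
    activities=["curriculum preparation", "weekly workshops", "mentor referrals"],
    outputs=["12 workshops delivered", "150 participants trained"],
    outcomes={
        "short": ["improved knowledge and attitudes"],
        "medium": ["changed on-the-job behavior"],
        "long": ["improved employment status"],
    },
)
print(model.outcomes["short"])
```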

During the lecture, I was creating (and reading) the logic models provided from left to right. However, they can also be created in reverse, from right to left. Instead of using but/how scenarios, I used an if/then approach; this allowed me to start with the end result, the big picture of what the program wanted to achieve, and work through the steps to getting there in reverse. I also noticed that the recommendations my group and I discussed correlated with the data provided in step 3, the outputs evaluated.

When designing a program logic model, it is important to consider a number of things as a possible dos and don’ts list:

  1. Negative outcomes must be minimized as much as possible.
  2. Inputs must be correctly identified and suitably applied.
  3. Outcomes must be achievable and set realistically.
  4. Do not confuse outputs with outcomes.
  5. The model also has to be logical in that it is backed by research and makes sense.
  6. The research must also confirm that the short-, medium-, and long-term outcomes are achievable if the model is followed.

Question 3

Formal education is education that is institutionalized, intentional, planned, and provided by approved institutions. To measure a participant's level of formal education in an evaluation of an employee training program, I would collect data on their past education activities using a questionnaire. The questionnaire would require participants to indicate their level of formal education from one of the following categories (a minimal coding sketch follows the list):

  1. Doctoral or professional degree
  2. Master’s degree
  3. Bachelor’s degree
  4. Associate’s degree
  5. Postsecondary non-degree award
  6. Some college, no degree
  7. High school diploma or equivalent
  8. No formal educational credentials
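For analysis, the eight response categories above can be coded as an ordered variable. The mapping below is a minimal sketch; the numeric codes are an assumption for illustration, not a standard scale.

```python
# Ordinal coding for the questionnaire's education categories (higher = more formal education).
# The numeric values are illustrative assumptions, not an official classification.
EDUCATION_LEVELS = {
    "No formal educational credentials": 0,
    "High school diploma or equivalent": 1,
    "Some college, no degree": 2,
    "Postsecondary non-degree award": 3,
    "Associate's degree": 4,
    "Bachelor's degree": 5,
    "Master's degree": 6,
    "Doctoral or professional degree": 7,
}

def code_education(response: str) -> int:
    """Convert a questionnaire response to its ordinal code."""
    return EDUCATION_LEVELS[response]

# Hypothetical responses from three participants
responses = ["Bachelor's degree", "High school diploma or equivalent", "Master's degree"]
print([code_education(r) for r in responses])  # [5, 1, 6]
```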

Because it would be easy for participants to lie on such a questionnaire, I would need additional information concerning their education levels. To this end, I would require the participants to provide the names of the institutions they attended and, where possible, to attach verifiable certifications and valid references. This would reduce the rate of dishonesty in the responses. For example, if a participant claimed that their level of formal education is a bachelor's degree, I would require them to produce the certificate awarded by the university they claim to have attended. Documents like these can be forged, so on top of verifying the document, I would require a reference who can be contacted to verify their story. Reassuring participants that providing correct information will be beneficial in their skills assessment might also reduce dishonesty.

To ensure that the participants have the prerequisite skills necessary for the employment positions they wish to occupy, I would design a test to confirm this. The test would be designed so that only a person qualified to fill the position could answer it successfully. It would include technical terms and skills assessments. Hypothetical scenarios likely to be encountered in the position should feature heavily in this test to check the participant's knowledge of the subject matter and to gauge whether they have acquired any previous experience. For example, in an employment training program for teachers, I would base hypothetical questions on possible interactions with students in a classroom. The participants would also be tested on their personality traits and interpersonal skills to check whether these are aligned with the goals of the institution.

During the employment training program, I would use a test-retest approach as the measuring strategy to figure out how effective the training was. I would first create a survey that defines the concepts used within the employment training program to ensure participants understand what is being asked. Then, I would administer the survey in class (to avoid cheating) at the beginning of the program to test what knowledge participants came in with, and administer the same survey at the end to test the concepts discussed in training. Finally, I would compute a correlation between the initial test and the retest.

A high correlation would indicate reliability and show whether participants grasped the concepts taught in the employment training program. The survey itself can also be checked for face validity, which is established simply by reviewing the survey items rather than through statistical analysis.
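A minimal sketch of the correlation step described above, assuming the pretest and posttest survey scores are stored as paired arrays (the scores shown are made up for illustration):

```python
import numpy as np

# Hypothetical paired scores: the same survey administered before and after training
pretest = np.array([52, 61, 47, 70, 58, 64, 49, 73])
posttest = np.array([78, 85, 70, 92, 80, 88, 72, 95])

# Pearson correlation between the two administrations (test-retest)
r = np.corrcoef(pretest, posttest)[0, 1]
print(f"Test-retest correlation: r = {r:.2f}")
```

A value of r close to 1 would suggest that participants' relative standing on the surveyed concepts was consistent across the two administrations.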

Question 4

In randomized controlled trials, the persons involved in a program are randomly selected, and the implementers of the program are responsible for ensuring that participants are not in control of their own access to the program. Some evaluators argue that this method of experimentation is the best way to define the cause-effect relationship between input and outcome in a program (Bondemark & Ruf, 2015). There are many other study designs that evaluators can use to show the relationship between inputs and outcomes, but none of them share the ability to eliminate the influence of a third factor. In this way, randomized controlled trials continue to be popular among evaluators because of the certainty they offer.

Conducting a true experiment (a randomized controlled trial) is not the only way to assess program effectiveness; quasi-experimental and non-experimental designs are also options.

  • Quasi-experiments are used when it would be immoral to withhold treatment; they involve nonequivalent control group designs, before-and-after designs with a pretest, and design structures that are hybrid, individual, or team-based.
  • Non-experimental designs look at data already on hand and give evaluators more freedom to use an ex post facto control group design, in which experimental and comparison groups are purposefully assigned; a cross-sectional design, which requires data to be collected at a single point in time; or a longitudinal design that uses time-series analysis or panel studies.

There are many ethical objections to the use of true experimentation despite its potential benefits to people. In some instances, subjects may have to undergo inferior interventions or be denied useful interventions that might benefit them (Bondemark & Ruf, 2015). For instance, in clinical trials of drugs, patients may receive placebos as opposed to experimental drugs. When these drugs turn out to be effective, patients who participated in the trial and received placebos will have missed an opportunity to receive much-needed treatment. This raises questions about the morality of denying some people beneficial interventions in an attempt to determine their efficacy. Some people might even argue that it degrades the dignity of subjects and places the acquisition of knowledge above their welfare. On the other hand, failure to perform such randomized trials may result in the widespread use of harmful interventions, hence doing more harm than good.

Another concern when carrying out a randomized controlled trial is its necessity. Before true experimentation begins, it is vital that evaluators determine whether it is going to be cost-effective and whether it is absolutely required (Bondemark & Ruf, 2015). True experimentation can be expensive because it requires a large sample size, and sometimes compensation of the parties involved is inevitable. With this in mind, an evaluator should consider the importance of carrying out a randomized trial before embarking on one. They should also explore alternative experimentation methods that may be just as effective before settling on a randomized trial. If a program is found to be effective, it should then be implemented for the whole population; this makes the sacrifice of those randomly selected not to receive the intervention worthwhile.

When interventions in the program carry the risk of harming participants, it becomes difficult to justify randomized controlled trials. For example, in an investigation to determine whether a drug has harmful side effects, some patients would eventually have to experience negative effects in order to show the link between the input, which in this case is the drug, and the output, which is the side effect. Similarly, a trial to investigate the role of a substance in the development of a fetus may have devastating effects: finding out how the absence of the component in question affects fetal development would mean causing harm to the fetus, which is equally unethical. An evaluator would have a hard time justifying the need for such a trial to be carried out. Some trials even pose a risk to the lives of participants, further complicating the matter.

Trials with subtler effects may, however, be easier to justify. An example is a trial to investigate the effects of using computers in the classroom on children's learning. The chances of harming the children in such a process are minimal, although the issue of withholding the intervention from another group may still arise. What this means is that randomized controlled trials must greatly minimize the potential risk while maximizing the potential benefits in order to be ethically justifiable.

To address concerns such as the cost-effectiveness of randomized controlled trials, evaluators should strive to select samples carefully to reduce the waste of resources and to make the most of the available ones. In the event that harm to the subjects is likely, it would be wiser to choose alternative experimentation methods that minimize this risk (Bondemark & Ruf, 2015). Informed consent is a key element when humans are the subjects of an experiment. It is vital that evaluators remember that humans are autonomous beings who should have control over their own destinies. An evaluator cannot take away this right and must keep the subjects informed. In the event that subjects are no longer comfortable participating in the experiment, it should remain within their power to exit.

Although true experiments are frowned upon because of volunteer bias, expenses (time and money), and, most importantly, the potential ethical issues that can arise, I find them to be the most effective method of producing unbiased results. Unless the experiment involves tests (placebo vs. non-placebo) surrounding life-threatening matters such as disease or pregnancy and birth, I think the results generated from randomized controlled trials can be very educational and helpful for the greater good.

Question 5

In their book, Newcomer, Hatry, and Wholey (2015) explain that program logic models will have causal linkages, but how does an evaluator test all of them? By isolating each causal linkage and observing the relationship it has with the program's outputs and outcomes. Defining causal linkages in a program is difficult, especially because of self-selection bias. In most programs that an evaluator will analyze, some people will tend to participate while others do not. Those who choose to participate may have qualities that affect the outcome in ways that differ from those who are hesitant; an example of such a quality is motivation level. An evaluation should take such factors into account in order to eliminate the bias and identify causal linkages more accurately.

During evaluations, it is important to use random selection of participants as an additional measure to eliminate bias. Randomly selecting participants allows the program to proceed without interference from additional factors such as participants' motivation, or lack thereof. This goes back to randomized controlled trials, which, as indicated above, are one of the most important experimentation techniques. The relationship between inputs and outcomes becomes clearer with random samples.
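As a minimal sketch of the random assignment just described, assuming a simple list of participant identifiers (the IDs and group sizes are hypothetical):

```python
import random

participants = [f"P{i:03d}" for i in range(1, 41)]  # 40 hypothetical participant IDs

random.seed(42)               # fixed seed so the split can be reproduced
random.shuffle(participants)  # randomize the order before splitting

midpoint = len(participants) // 2
treatment_group = participants[:midpoint]  # receives the program
control_group = participants[midpoint:]    # does not receive the program

print(len(treatment_group), len(control_group))  # 20 20
```

Because membership in either group is determined by chance rather than by participants' own choices, differences in motivation and other characteristics are expected to balance out across the groups.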

Not all programs can be analyzed using random selection. Therefore, it is important to employ alternative assessment measures in such scenarios (Newcomer, Hatry & Wholey, 2015). One such strategy is the impact assessment of the entire program. This strategy requires the evaluator to define the outcomes of the program and relate them to the program with consideration of the input. In such programs, it is difficult to show causal linkages clearly because external influences cannot be confidently ruled out, and direct relationships cannot be proven. When projects are significantly large, however, it is possible to use statistical analysis to show causal linkages. The data can be used to show that interference of external factors is unlikely, further strengthening associations between input and outcomes.
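One common form such a statistical analysis can take is a regression that relates the outcome to program participation while adjusting for an external factor. The sketch below is a generic illustration using ordinary least squares on simulated data; the variable names, effect sizes, and sample are assumptions for illustration, not results from any cited study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated data: program participation (0/1), an external factor (baseline motivation), and an outcome
participation = rng.integers(0, 2, size=n)
motivation = rng.normal(size=n)
outcome = 2.0 * participation + 1.5 * motivation + rng.normal(size=n)

# Ordinary least squares: outcome ~ intercept + participation + motivation
X = np.column_stack([np.ones(n), participation, motivation])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"Estimated effect of participation, adjusting for motivation: {coef[1]:.2f}")
```

Adjusting for the external factor in this way is what allows the evaluator to argue that the estimated association between input and outcome is not driven by that factor alone.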

Newcomer et al. continue to explain that if an evaluator focuses only on efficiency and effectiveness, they may miss the most important question at hand: is the program working? Understanding the organization's intricacies displayed in the logic model is the best way to determine whether the way a program functions is successful or not. Many factors come into play in the running of a program, and its success might not always be directly linked to its efficiency. How well resources are being used should not be the sole focus of a good program evaluator. Instead, the evaluator should be open-minded and look at the other facets of the program, which are also important.

By focusing too much on the outcomes, undue pressure may be applied to participants, and resources may be misdirected to the acquisition of more inputs. In so doing, an evaluator may neglect other important aspects of the program, such as sustainability, impact, and coherence. All these contribute to the overall success of the program in their own ways. Looking into the sustainability of the project tells us whether the benefits will last. Impact assessment indicates the effect of the project on participants and stakeholders. Neglecting these aspects may be detrimental to the program by preventing the program managers from focusing their energies on the right targets.

In addition to paying attention to causal linkages, evaluators should set their sights on stakeholders, as they play a vital role in determining the effectiveness of a program (Wright, 2018). Stakeholders help keep the focus on bigger-picture items, such as a program's outcomes and the resolution of the research question, by:

  • Offering their perspective. Since each evaluator has their own experience, background, and understanding of what is depicted in the logic model, a stakeholder's perspective may eliminate mistakes and provide clarity or solutions.
  • Representing the full range of their interests. Since logic models depict programs that assist the community in one way or another, adhering to "the full range" of stakeholders' interests is the ethical thing to do, especially since stakeholders have a vested (often monetary) interest in the program.

The value of logic models here is that they clarify a program's mission, allowing stakeholders to be engaged and to build ownership. When picking the criteria to use in the evaluation of a program, it is essential to consider the purpose. The evaluation should suit the needs of the stakeholders and should be contextualized to fit the current situation (Wright, 2018). The resources available to the evaluator and the data presented by the program are some of the most important determinants of how the evaluation will be carried out. The evaluation questions should direct the methods that the evaluator employs to suit the stakeholders involved and the interventions being sought.

References

Bondemark, L., & Ruf, S. (2015). Randomized controlled trial: The gold standard or an unobtainable fallacy? European Journal of Orthodontics, 37(5), 457-461.

Newcomer, K. E., Hatry, H. P., & Wholey, J. S. (2015). Handbook of Practical Program Evaluation (4th ed.). Hoboken, NJ, United States: Wiley.

Snow, M. E., & Snow, N. (2017). Interactive logic models: Using design and technology to explore the effects of dynamic situations on program logic. Evaluation Journal of Australasia, 17(2), 20-28.

Wright, B. (2018, October 17). Six steps to effective program evaluation: Collaborate with stakeholders (Part 1). Retrieved from https://charitychannel.com/program-evaluation-collaborate-with-stakeholders/
