Environmental Education Standardized Survey Methods
Introduction
Today, society requires high-quality environmental education curricula that are effective in reshaping values and ethics toward ecological conservation and sustainability (Palmer 88). Efficient, applicable evaluation provides an effective means of improving education programs, allowing them to attain more of their goals.
Program designers strive for more efficient methods of analyzing the effectiveness of environmental education. Unfortunately, evaluation methods are often poorly regarded by the practitioners who deliver environmental education campaigns. A study of academic professionals revealed a scarcity of ways to evaluate the more challenging outcomes, for example, environmental benefits, behavior change, and shifts in values (Olli, Grendstad and Wollebaek 181).
In today’s world, environmental professionals are increasingly challenged by the student body and their funders to demonstrate results. Moreover, performance measurement metrics and accountability techniques are progressively being emphasized. A well-designed evaluation program can fill this need, among others. Careful analysis and scrutiny of the results of an effective evaluation can offer environmental education teachers applicable strategies for enhancing the management and overall performance of their programs (Astin 189). Hence, a proper evaluation program can elevate the education that students receive.
Evaluating Environmental Education Surveys that Rank Student Learning Outcomes
Environmental education is a mechanism that develops awareness and comprehension of the interconnection between human beings and their various environments, whether technological, natural, cultural, or human-made. Environmental education encompasses attitudes, knowledge, and values, and its aim is to create responsible environmental behavior (Rickinson 207). Exceptional environmental education programs have the following characteristics:
They are reputable and credible, grounded in solid facts, science, or traditional philosophy. Assumptions, values, and biases are made explicit.
They develop comprehension and learning on political, ecological, economic and social theories as well as demonstrate the interconnection between human wellness, a favorable environment, and a stable economy.
They apply a cycle of continual improvement that incorporates design, delivery, evaluation, and redesign.
They involve a real-world context appropriate to curriculum, place, and age, and they encourage a relationship with the environment through practical outdoor experiences and the practice of ethical care.
They offer innovative, student-centered, hands-on learning experiences in which students can be each other’s tutors and educators act as coordinators and mentors (Olli et al. 184). The experiences promote higher-order reasoning and offer a collegial setting for study and evaluation.
They develop an enticing and enjoyable learning atmosphere in which educators address all learning styles to promote long-term learning and honor nature’s beauty.
They analyze environmental issues comprehensively, incorporating moral, ethical, and social aspects, encouraging values clarification, and respecting the diversity of values present in society.
They motivate and empower students by presenting appropriate action strategies, enabling students to practice responsible citizenship by employing their skills and knowledge while working collaboratively to resolve an environmental issue (Jakeman, Letcher and Norton 602).
They engage the student in a long-term mentoring relationship, transforming them as they evaluate their individual behaviors, values, feelings, and attitudes.
They encourage comprehension of the past, understanding of the present, and a definite prospect of the future, and they foster a commitment in the student to help create a robust environment and a sustainable home, society, and planet (Palmer 91).
The framework for environmental education surveys is based on the following factors: deciding the goal of the evaluation, designing an evaluation scheme to suit the program, choosing a suitable measurement technique, selecting the participants of the assessment, determining when the review will be conducted, and obtaining, analyzing, and interpreting the findings (Rickinson 305). The survey methods ought to be anonymous to maintain the confidentiality, privacy, and security of information received from the research participants.
Outcome-Based Evaluation Model
The evaluation is effective for assessing environmental education programs in non-profit organizations. The method is increasingly popular in the funding community and among non-profit parties. Outcome-based evaluation analyzes the effects, benefits, or changes to the students that result from educator efforts during or after implementation of the program. The technique also helps professionals evaluate whether they are carrying out the correct program activities to attain pre-identified results (Astin 192). Furthermore, outcome-based evaluation applies a program logic model, quantifying program success by measuring the various categories of the logic model.
Program Logic Model
The model is a technique for planning, implementing, and managing projects that helps researchers determine the aspects of the project and the changes to be implemented (Jakeman et al. 603). The model’s categories are logically interrelated: activities depend on inputs, and activities must occur before results are attained. Though indicators are not part of the logic model itself, they are vital to the survey process.
Inputs → Activities → Outputs → Outcomes/Objectives → Impact/Goals
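The chain of logic-model categories can be sketched as a simple data structure; the stage names follow the model above, while the example entries are hypothetical illustrations, not data from any real program.

```python
# A minimal sketch of a program logic model; the example entries are
# hypothetical illustrations, not data from a real program.
LOGIC_MODEL = {
    "inputs": ["staff time", "funding", "curriculum materials"],
    "activities": ["training", "promotion", "advocacy", "networking"],
    "outputs": ["questionnaires completed", "lessons delivered"],
    "outcomes": ["improved knowledge", "behavior change"],
    "impact": ["a sustainable home, society, and planet"],
}

# Stages are logically ordered: each depends on the stages before it.
STAGE_ORDER = ["inputs", "activities", "outputs", "outcomes", "impact"]

def upstream_of(stage):
    """Return the stages that must occur before the given stage."""
    return STAGE_ORDER[:STAGE_ORDER.index(stage)]
```

For instance, `upstream_of("outputs")` returns `["inputs", "activities"]`, reflecting that inputs and activities must be in place before outputs are attained.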
Activities: They describe the work of the project under headings such as training, promotion, advocacy, and networking.
Outputs: They describe the most immediate products of the project, which relate directly to the project’s activities. Outputs develop the potential for the desired outcomes and create the conditions for them to be attained.
Outcomes: They outline the actual changes that occur in the students, organizations, and communities due to the impact of the program. The changes are expressed in the form of behavior, skills, knowledge, and values. Student learning outcomes are expressed in terms of enhanced knowledge, for example improved learning, enhanced techniques, or a positive change in awareness or attitude (Rickinson 215).
Impact: It describes the researcher’s vision of the desired future and the importance of the project. Impact also outlines the long-term change the project is designed to help develop.
Evaluating Outcomes
The success of a survey is evaluated through indicators that quantify any or all of the three categories of output, outcome, and impact. Outcome indicators are quantified through instruments such as surveys, interviews, or questionnaires, which may be qualitative or quantitative (Olli et al. 194). The outcome targets state the number of results the project hopes to attain.
Phase 1: It involves identifying the program for evaluation. An appropriate program is one that specifies the group of students and offers reasonable methods of providing services to them. A grant or permission from the institution to support the development of the evaluation schedule can be considered, but it is not necessary; the survey is not required to go through the institutional review board. However, it may be beneficial to engage evaluation expertise to audit the evaluation plans. Many funders assign approximately 15% of the evaluation’s overall expenditure to an external assessment.
Phase 2: Choosing outcomes: A priority list of the outcomes is created; if resources and time are limited, the researchers may identify the two to four most critical ones. It is essential to specify the timeframe: for example, knowledge and techniques can be evaluated within 0-6 months, behaviors within 3-9 months, and values and perspectives within 6-12 months. The short-term results (0-3 months) should be linked to the long-term effects (6-12 months) (Palmer 93).
Phase 3: Identify indicators: The indicators of every outcome should be enumerated. For instance, “40 of the 350 students who participate in the Environmental Forever (EF) Program will demonstrate one new environmental conservation activity within 3 months of the program.”
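As a minimal sketch, an indicator of this kind can be checked mechanically once the observed count is in hand. The participant count (350) and target (40) come from the example above; the function name and the observed count of 52 are illustrative assumptions.

```python
# Hypothetical check of an EF-style indicator: at least 40 of 350
# participants should demonstrate a new conservation activity.
def indicator_attained(n_demonstrating, n_participants=350, target=40):
    """Return the attainment rate and whether the target was met."""
    rate = n_demonstrating / n_participants
    return rate, n_demonstrating >= target
```

For example, `indicator_attained(52)` reports a rate of about 0.149 and that the target was met, while an observed count below 40 would report a miss.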
Phase 4: Acquiring data: For every indicator, the researchers must specify how the data to assess it will be collected. The applicability of the data collection methods needs to be considered, as well as the most appropriate time for collecting data and the available tools. The tools include observations, questionnaires, surveys, and focus groups, among others.
Phase 5: Pilot test: The first year of the evaluation is a pilot process and is the most crucial, whereby problems are identified and solutions for improvement are documented (Astin 196). The improvement strategies are implemented in the second year of the evaluation program for a new and enhanced process.
Phase 6: Interpreting and reporting: Qualitative data, such as comments, are analyzed by reading through them and organizing them into common categories such as strengths, suggestions, and concerns. The categories are then labeled, and similar associations, causal associations, and patterns in the themes are identified (Rickinson 222). The evaluation results are then reported.
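The comment-coding step can be sketched with simple keyword matching; the category names come from the phase description above, while the keyword lists are illustrative assumptions, not a validated coding scheme.

```python
# A minimal sketch of coding qualitative comments into categories.
# The category keywords are hypothetical, not a validated codebook.
CATEGORIES = {
    "strengths": ["enjoyed", "great", "learned"],
    "suggestions": ["should", "could", "wish"],
    "concerns": ["confusing", "too long", "unclear"],
}

def code_comments(comments):
    """Tally how many comments fall into each category by keyword match."""
    counts = {name: 0 for name in CATEGORIES}
    for comment in comments:
        text = comment.lower()
        for name, keywords in CATEGORIES.items():
            if any(k in text for k in keywords):
                counts[name] += 1
    return counts
```

Once the tallies are in hand, associations and patterns across the categories can be examined before the results are reported.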
If resources are available, outside expertise can be sought to help with the evaluation process and avoid costly errors. An external consultant will assist by challenging the evaluation’s assumptions, thus bringing a valuable perspective to what the evaluation process is trying to attain (Jakeman et al. 608).
An Example of an Evaluation Table for Environment Forever (EF), an Environmental Conservation Program
Columns: output and outcome; dimension indicators; method of data collection; information source; the party that collected data; when data was collected.

Outputs
1. Indicator: Compare completed questionnaires to original materials data. Method: comparison of questionnaires. Source: questionnaires. Collected by: the EF staff. When: June 2017.
2. Indicator: Check emails and mail records. Method: simple statistical summary. Source: surveys. Collected by: the EF staff. When: July 2017.

Outcomes
4, 6, 7, 8. Indicator: Lecturers dispense pre- and post-program student questionnaires that ask behavior- and value-based questions using a Likert scale. Method: the EF staff mail and assess pre- and post-surveys; students answer and return the questionnaires. Source: surveys. Collected by: the EF staff. When: by June 2017, with analysis by August 2017.
9. Indicator: Lecturers formulate and implement lesson schedules and activities incorporating the EF program. Method: the EF staff send and evaluate surveys; lecturers fill out and return questionnaires; selected lecturers participate in interviews. Source: interviews and surveys. Collected by: the EF staff. When: by June 2018, with analysis by August 2018.
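The pre- and post-program Likert comparison described in the table can be sketched as below; the 1-5 scale and the score values are hypothetical, not actual EF data.

```python
# A minimal sketch of a pre/post Likert-scale comparison; scores are
# hypothetical 1-5 responses from the same group of students.
def mean(scores):
    return sum(scores) / len(scores)

def pre_post_shift(pre, post):
    """Return the change in mean Likert score from pre- to post-program."""
    return mean(post) - mean(pre)

pre_scores = [2, 3, 3, 2, 4]   # before the program (hypothetical)
post_scores = [4, 4, 3, 5, 4]  # after the program (hypothetical)
shift = pre_post_shift(pre_scores, post_scores)  # positive shift suggests improvement
```

A positive shift by itself is only suggestive; as the phases above note, indicators and timeframes must be fixed in advance for the comparison to be meaningful.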
Why Environmental Education Programs Are Difficult to Evaluate
The survey process in environmental education often measures only simple aspects rather than the actual situations of interest. Simple measures, pre-treatment surveys, and short-term assessments are not adequate to capture the real dimensions of change (Olli et al. 190).
The actual impact of environmental education is more subtle, and it is therefore difficult to measure the degree to which a program changes a person’s life. To measure the efficiency of environmental education, one can ask questions about meaning, influence, and repercussions to evaluate effects that may not be visible in the short term but become pronounced over the long term (Palmer 100). The alternative approach helps evaluate the adequacy of an environmental education program by creating a checklist of program factors: the more factors of positive environmental education a program contains, the greater the likelihood of habit change and positive action (Rickinson 230).
Most environmental education professionals are talented individuals with a passion for nature and for interpreting environmental knowledge for students. However, most professionals have not received evaluation training, nor are they drawn to evaluation work (Jakeman et al. 612). They would rather be in the lecture room or in the field.
Works Cited
Astin, Alexander W. Assessment for excellence: The philosophy and practice of assessment and
evaluation in higher education. Rowman & Littlefield Publishers, 2012.
Jakeman, Anthony J., Rebecca A. Letcher, and John P. Norton. “Ten iterative steps in
development and evaluation of environmental models.” Environmental Modelling & Software 21.5 (2006): 602-614.
Olli, Eero, Gunnar Grendstad, and Dag Wollebaek. “Correlates of environmental behaviors:
Bringing back social context.” Environment and behavior 33.2 (2001): 181-208.
Palmer, Joy A. Environmental education in the 21st century: Theory, practice, progress and
promise. Routledge, 2002.
Rickinson, Mark. “Learners and learning in environmental education: A critical review of the
evidence.” Environmental education research 7.3 (2001): 207-320.