Our measure of how program completers contribute to expected levels of student-learning growth comes from data collected in partnership with an urban Pennsylvania cooperating public school district. These data were collected by that district (as directed by the Pennsylvania Department of Education) and shared with us. Student learning and growth were defined by the Pennsylvania Department of Education using PVAAS, a validated and reliable value-added measure that gauges the extent to which students gained or lost ground compared to their peers while holding constant students' prior assessment results.
Teachers were also rated by the district's supervisors (principals) using a Danielson-based observation rubric. SRU completers scored higher than "All Hires" in managing student behavior and engaging students in learning, and "About the Same" as all hires in all other areas. SRU completers were rated "Proficient" or higher, with average scores of 200 or better, in every area.
As for "Overall Performance," teachers received an annual rating based on combining the above measures collected by the School District. The combined effectiveness measure (CEM) is a score between 0-300, with each score translating to an overall level of performance in the following way: Distinguished (210-300), Proficient (150-209), Needs Improvement (140-149), and Failing (0-139). SRU's CEM scores for 2018 was 212. Therefore, we see, from this Public School's perspective, that SRU hires are "Distinguished" in their performance as measured by PVASS and principal classroom evaluations.
Another measure of our graduates' impact on their students' learning is a sampling of our graduates' performance on their Pennsylvania Act 82 teacher evaluations. All teachers employed in Pennsylvania undergo annual classroom observations, related to student achievement, that comprise fifty percent (50%) of their overall rating in each of the following Danielson areas: Planning and Preparation, Classroom Environment, Instruction, and Professional Responsibilities. Student performance, which comprises the other fifty percent (50%) of the overall rating, is based on multiple measures of student achievement, otherwise known as the Student Learning Outcome (SLO) assessment. We contacted recent graduates of our programs to obtain their SLOs.
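As a minimal sketch of the 50/50 weighting just described (assuming both components are expressed on a common numeric scale; the function name and sample values are hypothetical, not the state's actual formula):

    def act82_overall_rating(observation_score, slo_score):
        # Equal weighting of the observation component and the SLO-based
        # student performance component, per the 50/50 split described above.
        return 0.5 * observation_score + 0.5 * slo_score

    print(act82_overall_rating(3.0, 2.0))  # -> 2.5 on the same scale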
Those responding first to our query were four (4) recent graduates: a secondary (ninth-grade) English teacher, a kindergarten teacher, a first-grade reading teacher, and a P-8 special education teacher. Their SLO reports showed that they improved their students' performance on the vast majority of pre-test/post-test criteria (showing improvement after instruction in Pennsylvania Department of Education standards-based content). Results of two (2) examples are shown in the AIMS Artifacts area. One SLO shows that all students demonstrated growth during an after-school reading intervention program as measured by assessments such as the Flynt-Cooter fluency and comprehension tool. The other shows that 21 of 24 ninth-grade English students demonstrated growth in comprehension of "cold readings" of literary nonfiction as measured within the SLO project. Again, these results appear to support the conclusion that our graduates have a positive impact on their students' learning when entering the teaching profession.
Our case study approach also produced data showing our completers’ positive impact on P-12 learning and development (please see Measure #2 for more details).
Because Pennsylvania does not provide EPPs with data such as value-added measures or student-growth percentiles, Slippery Rock University chose to conduct case studies of its graduates to establish one measure of our completers' impact on their students' learning. To do so, we continued to foster a relationship with our program completers during their induction into the profession. We designed a coaching-style case study to both observe and promote best practices for recent completers.
The following summarizes our case study design, data and analysis, and conclusions.
Faculty Observers and Teacher Participants
The case study design was agreed upon by the CAEP Committee, which comprises representatives from each initial licensure program. Committee representatives shared the design with the entire College of Education faculty at an all-college meeting. Faculty were invited to volunteer as observers. Recent completers (i.e., within five years of completion) were invited individually by the faculty observers in each program. A summary of the participants and the aggregated data for Data Cycles 1 and 2 is shown in this table.
Rubric Development & Data Sources
SRU's teaching programs, like the Commonwealth of Pennsylvania, use the Danielson Framework to evaluate all Stage III field students and student teachers. Therefore, this rubric (validated and shown to be reliable) was also used for the Completer Case Study. The accreditation team decided to include 13 of the 22 rubric indicators, eliminating from data collection indicators that were less relevant to our central focus on completers' impact on student learning. Streamlining the rubric made the process more focused for observers and allowed them to offer a goal-focused coaching cycle to our novice completers. That rubric is also linked below.
For ease and consistency in data collection, observers used a pre-teaching discussion guide, an observation guide, and a post-teaching discussion guide. These guides prompted observers and teachers to include previous evaluation data and evidence of student learning in addition to the data collected via observation.
Data Collection
Each faculty observer conducted three processes with the completer participants: a pre-teaching conference, a synchronous observation, and a post-teaching conference.
During the pre-teaching conference, the observer and participant reviewed existing data that demonstrated the teacher's effectiveness and discussed the teacher's strengths and areas for growth. In keeping with the coaching cycle design, the observer and participant also worked together to establish a focus for the observation so the faculty member could specifically support each participant in an area of practice targeted for growth.
The observation of teaching was aligned with the participant's goals. For example, if a teacher wished to critically examine her questioning strategy, the observer would collect data about the type and frequency of questions observed during the lesson. Additionally, the observer rated the teacher according to the adapted rubric of 13 indicators from the Danielson Framework.
Following the observation, the observer and teacher discussed the lesson and outcomes. A discussion guide prompted each dyad to discuss evidence of student learning, reflections on challenges and how these might inform future plans, and ideas and resources for moving forward. Additional questions asked participants to share their thoughts about how SRU prepared them to be a teacher. The guide also included prompts for the observer to share ideas and resources for moving forward. Observers were encouraged to focus the post-observation discussion on the participants' own goals for their instruction, to recognize the teachers' professionalism and support their autonomy in directing their professional growth.
Scoring and Analysis
Each observer scored the teacher participant on the 13 rubric indicators, on a scale of 1-3. The observation guide also prompted observers to note the evidence supporting their rating. Observation notes regarding each teacher's individual goal were also captured on the observation guide, though these practices were not scored. Additionally, observers took notes according to the discussion guides for the pre- and post-teaching observations.
The documents from each case were compiled by the accreditation committee, which summarized the quantitative findings (i.e., rubric scores) and qualitative findings (i.e., evidence notes and question responses) across the cases. The faculty observers were then invited to participate in the cross-case analysis by looking for themes and patterns in the data as well as convergence and divergence of the qualitative and quantitative findings. Analyses were then presented to the Unit accreditation committee for discussion of connections with other data sources (e.g., student teaching evaluations, Student Learning Outcomes projects, and completer and employer satisfaction surveys), all in the service of identifying implications for program improvement.
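For illustration only, the following is a minimal sketch in Python of the cross-case quantitative summary described above, assuming each case's rubric ratings are stored as a mapping from indicator to a 1-3 score; the indicator labels and values shown are hypothetical placeholders, not actual case study data.

    from statistics import mean

    # Each dictionary holds one participant's 1-3 ratings on selected
    # rubric indicators; the values below are placeholders, not real data.
    cases = [
        {"1a": 3, "1e": 2, "3b": 2},
        {"1a": 2, "1e": 3, "3b": 3},
    ]

    # Cross-case quantitative summary: mean score per rubric indicator.
    indicator_means = {
        indicator: mean(case[indicator] for case in cases)
        for indicator in cases[0]
    }
    print(indicator_means)  # e.g., {'1a': 2.5, '1e': 2.5, '3b': 2.5}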
Findings
This table shows data disaggregated by program and summarizes the quantitative findings (i.e., the mean scores of all teacher participants on each rubric indicator). Findings indicate that teachers scored high on most rubric indicators. Scores in Domain 1 (demonstrating knowledge of content and pedagogy, demonstrating knowledge of students, and setting instructional outcomes) were slightly lower (M = 2.66) than scores in Domains 2, 3, and 4. One notable exception was indicator 3b, using questioning and discussion techniques, with a mean score of 2.70.