Does your school have a “dashboard” by which it evaluates whether it is successful in meeting its goals? If so, what indicators are used to compile data for the dashboard, and how is it presented to the broader school community? In my experience, these are some of the indicators that schools might use to compile data for their dashboards:
- Average performance on statewide summative assessments (mostly in public or charter schools)
- Ranking of performance on statewide summative assessments as compared to peer schools (mostly in public or charter schools)
- A college and career performance index
- Percentage of students who scored above a 3 on Advanced Placement tests
- Mean SAT or ACT scores at a grade level
- Performance on other standardized assessments, like Measures of Academic Progress (MAP)
- Student attendance data
- Number of finalists in the statewide Science Fair contest (or similar competition)
- Percentage of the senior class accepted into college
- Percentage of those accepted into college who go to Ivy League schools
- Number of National Merit Semifinalists or Finalists
There may be others, but this list probably covers the majority of indicators that most schools use to compile a dashboard that communicates their success. Of course, independent, public, and charter schools will use different indicators given their unique accountability policies.
Is this list comprehensive if we want to measure school success through the lens of the student experience? Most of the indicators listed above are summative in nature; that is, they occur after learning is over. From a student’s perspective, school is a daily occurrence in which their successes and failures are measured hour-to-hour, day-to-day, or week-to-week. Rarely do students measure their school success or failure based on how well they do at the end of the learning journey. In fact, they often measure their success and failure in school based on situations, assessed and non-assessed, that happen each day.
So what about using indicators such as level of engagement, motivation, or enjoyment? What about indicators, such as classroom formative assessments, that could gather information in real time, while learning is happening? What about end-of-week formative assessments using a rating scale that ask students how much they enjoyed their week in school? What if teachers were expected to regularly collect data that reveals how students feel about their learning experiences? Finally, what if teachers were expected to make adjustments in their teaching based on feedback from these “softer” indicators of student success or failure?
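To make the idea concrete, here is a minimal sketch of how end-of-week rating-scale responses could be rolled up into a single dashboard indicator per group. The grade levels, ratings, and function name are all invented for illustration; a real school would choose its own scale and groupings.

```python
# Hypothetical sketch: aggregating end-of-week enjoyment ratings
# (1 = poor week, 5 = great week) into a dashboard indicator.
# All data and names below are invented for illustration.

from statistics import mean

# Each entry: one student's end-of-week rating for that group
weekly_ratings = {
    "Grade 7": [4, 3, 5, 2, 4, 3],
    "Grade 8": [2, 3, 2, 4, 3, 2],
}

def enjoyment_indicator(ratings_by_group):
    """Return the average rating per group, rounded for dashboard display."""
    return {group: round(mean(scores), 2)
            for group, scores in ratings_by_group.items()}

print(enjoyment_indicator(weekly_ratings))
# e.g. {'Grade 7': 3.5, 'Grade 8': 2.67}
```

The point is not the arithmetic, which is trivial, but that the same reporting pipeline schools already use for test scores could just as easily carry these “softer” signals.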
While some teachers and administrators might conclude that these “softer” indicators are not valid measures of overall school success and failure, I would advocate that this information is extremely relevant to whether students are experiencing school as an enjoyable, engaging, and safe place to be and learn. We know that a student’s degree of comfort in school impacts their readiness to learn the knowledge and skills we expect of them. So why wouldn’t we collect this data for our dashboard, report on it to the broader school community, and make adjustments in the school experience based on what we learn?
John Hattie, in Visible Learning for Teachers: Maximizing Impact on Learning, writes this about data dashboards:
As always, the key component is providing quality evidence to create the right debates, the systems do not resolve the debates. Professional judgement is key and it is important to focus the accountability more on the overall teacher judgements that are made about progress. The two key questions are: what is the quality of evidence that informs the teacher judgement, and what is the quality of the consequences for the teaching and learning from this evidence? (page 153)
For example, if we make a judgement that students are overly concerned with their grades on quizzes and tests, and that we are making little progress in helping them come to appreciate and understand the more enduring nature of learning, then we should ask: what is the quality of the evidence we are using to inform our judgement? Is it just their performance on quizzes and tests, or their inability to take our feedback and make lasting changes in how they approach their learning? It might be that students do not see an inherent value in the learning, because our teaching is not geared towards illuminating the importance of transfer of knowledge. It might be that they do not actually enjoy school enough to get beyond just worrying about grades on quizzes and tests. Finally, it might be that we put little intentional energy into helping them develop the social-emotional competencies to appreciate what deeper learning can do for them.
If the above example is anywhere close to what Hattie is trying to help us understand, then it might be helpful to collect data for our dashboard that points to students’ social, emotional, and affective experiences with school. While these indicators may be “softer” because they are not as easily quantified, they can open up important windows into students’ experiences with school. With this data in hand, maybe we can improve our schools, not by offering new curriculum, new technologies, or new buildings, but by offering students a more relational, caring, nurturing, and loving school culture. Our dashboards would clearly look a bit different from those we might see in most schools.
References on interesting data dashboard experiments:
- Mount Vernon Presbyterian School’s Innovation Diploma, see Mount Vernon Institute for Innovation (a diploma that clearly contains a unique dashboard of learning intentions and success criteria)
- Mastery Transcript, a consortium of schools interested in adopting a new transcript (dashboard) that will more effectively illuminate a student’s learning experience in school. Not yet off the ground, but in process.