Getting MTSS Right: How We Refined Our Tiering Process

Literacy growth depends on how well instruction aligns with students’ strengths and needs. Within an MTSS (Multi-Tiered System of Supports) framework, we use data to place students into instructional tiers. That means student progress is only as strong as the data guiding those decisions.
In Texas, the TEA (Texas Education Agency) mandates which students must receive interventions. But over time, we realized many more students needed support than the guidelines required. This post shares the evolution of a tool we developed to better identify students’ instructional tiers. We hope it helps you better support your students, too.
The first sign of trouble
In our first year implementing MTSS at the middle school level, we only provided reading interventions to students who failed the STAAR, the state’s annual assessment. By year’s end, those students had improved and passed the test. Success, right?
Not quite.
Several students who had not received intervention failed. We asked ourselves: Did we miss something? Could we have identified these students' need for support earlier?
Looking back at those students’ data, we saw a pattern: most had scored in the “Approaches” band—just above failing—at least one year prior. We realized that once students dipped below expectations, they often needed 1.5 to 2 years of growth to catch up. From that point on, we expanded our intervention group to include students who passed with only a narrow margin.
Casting a wider net
The following year, we adjusted our approach. We continued providing required interventions to students who failed STAAR, but also included those who barely passed.
Most of these students were placed in a reading “selective”—a full-year elective course focused on literacy. We also:
- Added professional development for ELAR teachers to strengthen Tier 1 instruction.
- Created a student-friendly slide deck explaining how state test data informs decisions, in hopes that students would take the assessments more seriously.
These changes helped. But we still relied heavily on a single score—STAAR. As a result, some students were “misplaced” in intervention. We had a successful year, but wondered: Were we over-supporting some students who didn’t really need it?
A more focused approach
That same year, a neighboring district shared a rubric they used to guide tier placement. It wasn’t a perfect fit for us, but it gave us a great starting point.
We developed a rubric that:
- Pulled together multiple data points
- Helped us more accurately identify students who truly needed support
This tool worked well for a few years, largely because our MTSS team had deep experience with reading data and assessments. But we knew the process needed to be more robust, especially as staffing changed.
Moving to weighted data
We started to evaluate each data point based on how valid and reliable it was for middle school readers—especially considering how much motivation affects assessment performance.
Eventually, we developed a weighted rubric. For example, on the 7th grade rubric, TMSFA (Texas Middle School Fluency Assessment) scores carry more weight than STAAR or IXL.
Why? Because:
- TMSFA is a one-on-one measure, so students interact directly with their teacher rather than a faceless computer
- Oral reading fluency + comprehension is a proven, strong indicator of literacy growth
- It offers consistent reliability across student populations
This version of our rubric is the one we use now.
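To make the idea of a weighted rubric concrete, here is a minimal sketch of how a tier could be computed from multiple data points. The weights, band scores, and cutoffs below are purely illustrative assumptions, not our rubric's actual values; they simply show how TMSFA can be made to count more than STAAR or IXL.

```python
# Hypothetical sketch of a weighted tiering rubric.
# All weights, band scores, and cutoffs are illustrative assumptions.

# Each assessment contributes a band score (0 = on track, 1 = approaching
# expectations, 2 = below expectations) multiplied by a weight that
# reflects how valid and reliable we judge that measure to be.
WEIGHTS = {"TMSFA": 3, "STAAR": 2, "IXL": 1}  # TMSFA weighted highest

def weighted_score(bands: dict) -> int:
    """Sum each assessment's band score times its weight."""
    return sum(WEIGHTS[name] * band for name, band in bands.items())

def assign_tier(score: int) -> int:
    """Map a total weighted score to an instructional tier (cutoffs illustrative)."""
    if score >= 8:
        return 3  # intensive intervention
    if score >= 4:
        return 2  # targeted support
    return 1      # core (Tier 1) instruction only

# Example: a low TMSFA band moves the total far more than a low IXL band.
student = {"TMSFA": 2, "STAAR": 1, "IXL": 0}
print(assign_tier(weighted_score(student)))  # 3*2 + 2*1 + 1*0 = 8 -> Tier 3
```

The design point is simply that a single weak score on a low-weight measure won't flag a student by itself, while a weak score on the most reliable measure carries real influence over placement.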

A few caveats
Our rubric helps us make informed decisions in the best interest of students, but it’s clear that there’s no perfect tool.
We refine it nearly every year as new assessments are added or state requirements change. For example, with upcoming changes to STAAR, we’ll revise it again soon.
Important considerations:
- These rubrics can be time-consuming and are best kept organized and confidential. (We store them in binders by grade level, in a locked office.)
- The assessments we use are not diagnostic. They are universal screeners given to all students.
- Once students are placed in intervention, diagnostic tools are used to tailor instruction. (Want more information about those assessments? This post explains some of them.)
Our hope for you
As you plan your MTSS program, feel free to adapt the templates we’ve shared.
Here are some questions to ask as you get started with your modifications:
- Are you having a difficult time accurately identifying students who need support?
- What assessments are you required to administer, and what data can you glean from those?
- To what degree do those data points align to student progress? (Experience is a strong benefit here.)
- Which assessments are more valid and reliable?
- What additional measures influence your student population’s success rate? How can you address those measures in a fair and supportive way?
Let us know if our rubric or process helps your team refine its MTSS approach. We’re all learning together, and every step toward better alignment helps more students grow into confident readers!
To access resources that support your literacy intervention, subscribe to our biweekly newsletter! Not only will you get immediate access to our Freebies, but you'll receive timely, informative emails every other week!