By Philip Arcuria & Maryrose Chaaban

Rubrics: A Definition

Rubrics have become a highly touted and ubiquitous tool in the proverbial assessment toolbox of higher education instructors. They offer a wide range of benefits, from giving students consistent feedback to decreasing overall grading time.

So, what is a rubric? Formally defined, a rubric is a “…coherent set of criteria for students’ work that includes descriptions of levels of performance quality on the criteria” (Brookhart, 2013, p. 4). In short, rubrics distinguish between levels of student performance on a given activity.

More broadly, a rubric is an evaluation tool that has three distinguishing features: evaluative criteria, quality definitions, and a scoring strategy (Popham, 2000).

  • Evaluative criteria represent the dimensions on which a student activity or artifact (e.g., an assignment) is evaluated.
  • Quality definitions comprise qualitative descriptions that distinguish student performance across a continuum for a given criterion.
  • The scoring strategy articulates how the qualitative evaluations of student performance on each criterion are converted into an overall judgment of the quality of the artifact.
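The three features above can be made concrete with a small sketch. A common scoring strategy for an analytic rubric is a weighted sum of criterion-level ratings; the criteria, point values, and weights below are illustrative assumptions, not drawn from any particular published rubric:

```python
# Hypothetical example: converting qualitative per-criterion ratings into an
# overall score via a weighted-sum scoring strategy. All names and values
# here are illustrative assumptions.

# Quality definitions (performance levels) mapped to points.
LEVEL_POINTS = {"Excellent": 4, "Good": 3, "Acceptable": 2, "Poor": 1}

# Evaluative criteria, each with a weight; weights sum to 1.0.
WEIGHTS = {"Thesis Statement": 0.4, "Organization": 0.35, "Mechanics": 0.25}

def overall_score(ratings):
    """Convert per-criterion qualitative ratings into one overall score."""
    return sum(
        WEIGHTS[criterion] * LEVEL_POINTS[level]
        for criterion, level in ratings.items()
    )

ratings = {"Thesis Statement": "Good",
           "Organization": "Excellent",
           "Mechanics": "Acceptable"}
print(round(overall_score(ratings), 2))  # prints 3.1 on the 1-4 scale
```

Other scoring strategies (e.g., an unweighted average, or a holistic judgment) are equally valid; the point is simply that the strategy makes the conversion from criterion-level evaluations to an overall score explicit.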

Benefits of a Rubric

Rubrics can be used to provide objective, meaningful, and substantive feedback on a variety of assignments including papers, presentations, discussions, and projects. A carefully designed rubric can provide benefits to instructors and students alike.

Rubrics can help instructors:

  • reduce the amount of time spent grading
  • ensure consistency and objectivity in grading
  • reduce uncertainty and complaints about grades
  • adjust instruction or provide additional resources based on the overall performance of an entire class

Rubrics can also help students:

  • understand an instructor’s expectations on an assignment
  • understand how the assignment aligns to the course objectives
  • improve their performance by integrating instructor feedback
  • evaluate their own work

Getting Started with Designing a Rubric

There are two main types of rubrics instructors can design: holistic rubrics and analytic rubrics. Holistic rubrics provide one overall score and do not provide students with feedback on how they performed on each individual assignment criterion. Conversely, analytic rubrics provide students with a score on each criterion. This article focuses on analytic rubrics, which tend to be preferable for formative assessments given they provide students with specific guidance and feedback related to each relevant criterion (Brookhart, 2013).

Analytic rubrics can be broken down into three parts:

  1. Performance criteria are the factors being measured (e.g., Organization of Essay, Thesis Statement, etc.) and are commonly represented as the rows of a rubric.
  2. Performance levels represent gradations of performance and typically take the form of the column headings of a rubric. The performance labels can be numeric (e.g., 1, 2, 3, 4) or textual (e.g., Poor, Acceptable, Good, Excellent).
  3. Performance level descriptors articulate observable characteristics of performance at the intersection of a given criterion and performance level and comprise the cells of a rubric.

Best Practices When Designing a Rubric

One of the first steps in designing a quality rubric is to identify the skills and knowledge students should demonstrate in the assignment based on the overall course or module learning objectives.

Building on the recommendations of van Leusen (2013), you can use the following questions to get started:

  • What knowledge and skills is the assignment designed to assess? (Learning Objective)
  • What observable criteria represent those knowledge and skills? (Performance Criteria)
  • How can you best divide those criteria to represent distinct and meaningful levels of student performance? (Performance Levels)
  • What observable characteristics of students’ work differentiate among the performance levels for each criterion? (Performance Level Descriptors)

Addressing these questions can go a long way in helping you pinpoint the criteria you should include in a high-quality rubric. Once you have identified your criteria, you can start designing your rubric. Generally speaking, a high-quality analytic rubric should:

  1. Consist of 3-5 performance levels (Popham, 2000; Suskie, 2009).
  2. Include two or more performance criteria, and the labels for the criteria should be distinct, clear, and meaningful (Brookhart, 2013; Nitko & Brookhart, 2007; Popham, 2000; Suskie, 2009).
  3. Include performance level descriptors that: distinguish between qualitative differences in performance that are observable and measurable; are consistent within each criterion; and clearly articulate the expectations for each performance level (Banta & Palomba, 2015; Brookhart, 2013; Nitko & Brookhart, 2007; Popham, 2000; Suskie, 2009).

Evaluating Your Rubric

Once you have created a rubric, you can use the following checklist to evaluate its level of quality. This checklist is based on research by the ASU EdPlus Action Lab, where it is used for automated scoring of rubrics to assess basic structure.

For each item, answer Yes or No:

Performance Levels (columns)
  • There are 3-5 performance levels
  • The labels/descriptions of the performance levels are distinct, clear, and meaningful

Performance Criteria (rows)
  • There are 2 or more performance criteria
  • The labels/descriptions of the performance criteria are distinct, clear, and meaningful

Performance Level Descriptors (cells)
  • The descriptors describe differences in performance that are observable and measurable
  • The descriptors clearly articulate the expectations for each performance level for a given criterion
  • For a given row, the descriptors evaluate the same criterion across all performance levels
  • The descriptors represent meaningful differences in performance across the performance levels for a given criterion
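The basic structural items in the checklist above lend themselves to automation. As a hedged sketch, the data model (criterion mapped to level labels and descriptors) and the function below are illustrative assumptions, not the Action Lab's actual implementation:

```python
# Hedged sketch: automating the basic structural checks from the checklist.
# A rubric is modeled as {criterion: {level label: descriptor}}; this data
# model and the check names are assumptions made for illustration.

def check_structure(rubric):
    """Return a dict mapping each structural check to True (pass) or False."""
    # Collect the set of level labels used by each criterion (row).
    level_sets = {tuple(sorted(levels)) for levels in rubric.values()}
    n_levels = len(next(iter(level_sets))) if len(level_sets) == 1 else None
    return {
        "3-5 performance levels": n_levels is not None and 3 <= n_levels <= 5,
        "2 or more performance criteria": len(rubric) >= 2,
        "same levels across all criteria": len(level_sets) == 1,
        "every cell has a descriptor": all(
            desc.strip() for levels in rubric.values() for desc in levels.values()
        ),
    }

essay_rubric = {
    "Thesis Statement": {
        "Excellent": "Clear, arguable thesis that frames the entire essay.",
        "Acceptable": "Identifiable thesis, but vague or only partly argued.",
        "Poor": "No identifiable thesis statement.",
    },
    "Organization": {
        "Excellent": "Logical flow with effective transitions throughout.",
        "Acceptable": "Mostly logical flow; some abrupt transitions.",
        "Poor": "No discernible organizational structure.",
    },
}
print(check_structure(essay_rubric))  # every check passes for this 2x3 rubric
```

Note that the qualitative items on the checklist (e.g., whether labels are meaningful, or whether descriptors reflect observable differences) still require human judgment; only the structural items are mechanically checkable.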

Click the following link to download the checklist: Analytic Rubric Checklist

Examples of Exemplary Rubrics

The Association of American Colleges and Universities created a series of high-quality rubrics entitled VALUE rubrics that span intellectual and practical skills (e.g., critical thinking, written communication, and teamwork), personal and social responsibility (e.g., civic engagement, global learning, and ethical reasoning), and integrative learning.

These rubrics are open source and available to download directly from their website. To view these rubrics, please visit the Association of American Colleges and Universities – VALUE Rubrics.

Additional Resources

Here is a list of additional resources regarding rubrics:


Arcuria, P., Morgan, W., & Fikes, T. G. (2019). Validating the use of LMS-derived rubric structural features to facilitate automated measurement of rubric quality. International Conference on Learning Analytics and Knowledge.

Banta, T. W., & Palomba, C. A. (2015). Assessment essentials: planning, implementing, and improving assessment in higher education. San Francisco, CA: Jossey-Bass.

Brookhart, S. M. (2013). How to create and use rubrics for formative assessment and grading. Alexandria, VA: Association for Supervision & Curriculum Development.

Nitko, A. J., & Brookhart, S. M. (2007). Educational Assessment of Students (5th ed.). Upper Saddle River, NJ: Pearson Education.

Popham, W. J. (2000). Modern educational measurement: Practical guidelines for educational leaders (3rd ed.). Boston: Allyn and Bacon.

Suskie, L. (2009). Assessing student learning: A common sense guide (2nd ed.). San Francisco, CA: Jossey-Bass.

van Leusen, P. (2013). Assessments with rubrics. ASU TeachOnline.

About the Action Lab

The Action Lab, a dedicated digital teaching and learning laboratory within EdPlus, engages in deep learning analytics, leveraging expertise in learning, cognitive, social, and data sciences to provide continuous program improvement that drives student success. Our mission is to make technology-enabled education research useful for systemic, scalable, and radical advancement in digital teaching and learning.

Special Thanks

We would like to thank Tom Fikes, Director of Research, EdPlus Action Lab, and Julie Allen, Sr. Instructional Designer, EdPlus Instructional Design & New Media for their editing prowess.