This article is part of the What Does it Mean to Design for Scale? series.

“How do you know they know?” is a critical course-design question: answering it ensures learners provide evidence of their understanding and competence. And for all intents and purposes here, the question becomes, “How do you know they know… at scale?”

Assessment at scale isn’t an all-or-nothing proposition; it doesn’t mean sacrificing learner-instructor connection, communication, and community. But it does mean a mindset adjustment for faculty, learners, and instructional designers (IDs) alike: the need to think more broadly about the assessment and feedback models that large-enrollment courses require, and the courage to try things out and evaluate them objectively.

What follows isn’t a prescription but an introduction: some food for thought to help you grapple with the challenges inherent in scaling assessment. Key among them are rigor, academic integrity, instructor presence and learner expectations (e.g., learners accustomed to smaller classes and individualized support and attention), accessibility, and, of course, mastery.

The Usual Suspects

By and large, scalable assessment types aren’t wildly different from the norm, and you and your learners may already be well acquainted with many of the usual suspects. Autograded high-, low-, and no-stakes assessments are still very much in the mix.

For example:

  • No-stakes knowledge checks, quizzes, and exams scale easily, and are familiar standbys. Make sure to reinforce concepts with explanations for both correct and incorrect responses, so that learners can resolve misunderstandings on their own or by reaching out on the discussion board. It will also be critical to maintain large question pools for the sake of academic integrity, and, for high-stakes exams, to add proctoring as well.
  • Adaptive learning technologies are another option. Tech vendors are leveraging artificial intelligence (AI) left, right, and center. This not only improves scalability but also helps learners achieve mastery through automated feedback, tutoring/coaching, and supplemental instruction across various fields of study. CogBooks and InSpark are among several adaptive tools integrated into ASU courses. You may already be using others, such as Perusall for collaborative annotations, Gradarius and McGraw Hill’s ALEKS for math, Wiley’s zyLabs for computer programming, or Cerego Content Mastery for concept review and retention. (Again, integrate feedback into questions and problems so learners can solidify their understanding.)
  • Custom assessments: There’s always the possibility of working with a development team to custom-design scalable assessments and practice exercises (e.g., HTML5, JavaScript problems, interactive tools, etc.) to fit the academic bill.
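To make the “explanations for correct and incorrect responses” idea concrete, here is a minimal JavaScript sketch of how a custom autograded item might pair each answer choice with targeted feedback. The item, its field names, and the grading function are purely illustrative assumptions, not drawn from any particular ASU tool or learning management system.

```javascript
// Hypothetical autograded item (illustrative only, not from any specific platform):
// each choice carries its own feedback so learners can self-correct immediately.
const item = {
  prompt: "Which element of a rhetorical situation refers to the intended reader?",
  choices: ["Exigence", "Audience", "Constraints", "Genre"],
  correctIndex: 1,
  feedback: [
    "Not quite: exigence is the issue or event that prompts the writing.",
    "Correct: the audience is the intended reader or listener.",
    "Not quite: constraints are the limits on the writer, such as format or deadline.",
    "Not quite: genre is the conventional form the writing takes.",
  ],
};

// Grade a learner's response, returning both a score and the matching
// explanation, so misunderstandings are addressed without instructor time.
function gradeResponse(item, chosenIndex) {
  return {
    correct: chosenIndex === item.correctIndex,
    feedback: item.feedback[chosenIndex],
  };
}
```

Because the feedback travels with the item, the same pattern scales from a ten-person seminar to thousands of enrollments with no additional grading load.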

(Graded) Participation, Personal Connection, and Community

Designing for scale and fostering a learning community aren’t mutually exclusive; scalability can, and should, leave plenty of room for student-to-content, student-to-student, and student-to-instructor interaction. Again, you may very well be using some of these technologies and practices already.

  • Formative feedback and community-building experiences via discussion tools: As assessment scalability increases, you may find that the less time you spend grading, the more time you have to give hundreds (or thousands!) of learners the targeted, just-in-time academic support they need.

The discussion board can become a one-stop shop for information-sharing, formative feedback, and deeper dives into the content, making for a rich learning experience. Apart from the learning management system’s built-in board, you might also consider tools such as Yellowdig, whose points system incentivizes participation, or InScribe, which uses AI to aggregate topics, frequently asked questions, and resources.

  • Office hours via Zoom are also useful to clarify key concepts, reduce or eliminate pain points, and otherwise assist learners and build community. 

The Hard-to-Solve Problem of Project-based Learning at Scale

As mentioned at the outset, how, what, and for whom to scale is fluid and exploratory, and involves a certain amount of trying things on for size, guided by evidence-based, purposeful design, subject matter expertise, learning objectives, and the like.

Implicit in this problem (better put, this challenge) is an invitation: a chance to ask, “What does high-quality, project-based learning at scale look like?” Though you may not realize it, you probably have a few answers at the ready right now, chief among them that

  • it should provide learners the opportunity to deepen their knowledge via topically relevant real-world scenarios;
  • it should have rigorous, well-defined assessment criteria (i.e., a rubric);
  • it should allow for objective peer and self assessment and feedback;
  • it should provide opportunities to reflect on peer feedback and one’s own work; and
  • it should still allow for faculty to intervene, providing support and expertise [2].

Armed with those and any other thoughts, the next (and harder-to-answer) question is, “How do you scale high-quality, project-based learning?” Here’s one instructor’s response for consideration.

Full Scale Ahead! 

In 2016, Dr. Adam Pacton, writing program administrator and lecturer at ASU’s College of Integrative Sciences and Arts, worked with an instructional design (ID) team to design, develop, and deliver ENG 101: English Composition for ASU’s Global Freshman Academy (GFA) on edX. ENG 102: English Composition: Research and Writing followed shortly thereafter. Assignments were instructor graded: a team of faculty associates, managed by fellow writing program administrator and course manager Jamie Merriman-Pacton, graded each one. (Both courses currently also run under ASU’s Universal Learner Courses (ULC) umbrella.)

As the GFA and ULC programs grew, so did enrollments, making the existing delivery model too restrictive in the long term. So from 2019 to 2020, Adam researched, reimagined, and redesigned both ENG 101 and 102, again in partnership with an ID team, to launch the first two fully scaled, credit-bearing English Composition courses. Take a closer look at his fundamental approach to assessment and feedback.

From 360° Review to 180° Remodel 

Both of Dr. Pacton’s original courses already employed scaffolded Writing Projects (including drafting and peer review), reflective Writers Journals, and a learner-created ePortfolio as a medium for learners to not only showcase their work but also demonstrate their digital literacy and accessibility knowledge, skills, and abilities. The redesign maintained these components, added others, and altogether refocused existing notions of English composition and competency. It concentrated on the tools, techniques, and technologies of writing, along with the conceptual, rhetorical, and transferable dimensions of college composition [3]. It shifted the scale from a handful of high-stakes writing assignments to these main pathways [4]:

  • Objective knowledge assessment: This consists of autograded Cerego Content Mastery exercises and weekly quizzes on rhetorical concepts, critical thinking, knowledge of conventions, interactions between media and text, discourse analysis, and other knowledge domains. 
  • Technology of writing assessment: Learners develop and improve their digital literacy by creating an ePortfolio to warehouse their Writers Journals and Writing Projects. In doing so, they learn about (and must incorporate) accessibility / Universal Design for Learning principles. As with the Journals and Projects, ePortfolio design is a scaffolded, iterative process. 
  • Writing assignment assessment: All assignments are self-assessed; there is no instructor grading. Self-assessment, along with peer review, allows learners to become expert evaluators of both their own work and others’, a critical skill in any discipline. This model also encourages experimentation and growth.
  • Provision of feedback: Redesigning for scale also means rethinking and reconsidering what feedback is, why it is valuable, and how it is incorporated. Feedback mechanisms in these English courses are, therefore, manifold (including, but not limited to, the peer/self review and automated formats noted above).

Like the assignments themselves, ENG 101 and 102’s feedback mechanisms are designed to encourage learners to revisit and rediscover their work and the course content. And they don’t exclude subject matter expertise. In fact, Adam and Jamie are now better able to balance the competing priorities of grading assignments versus providing focused, just-in-time assistance on the discussion board. This high-touch approach allows them to spend more time mentoring and coaching all facets of online learning, from the writing process to troubleshooting tech issues, and to offer authentic, formative feedback on demand [5].

The GFA and ULC English Composition courses have ushered in approximately 200,000 learners and counting. The current model accounts for academic integrity review, and the faculty are engaged in an ongoing, data-driven efficacy evaluation to validate and optimize their design model. Repeated outcomes assessment has internally validated the writing assessment model, with students performing at or above the level of their counterparts in smaller-section offerings.

In Conclusion…

Paradigm shifts happen incrementally, over time; they don’t happen all at once, overnight. Whatever your take on the ideas expressed here, just keep asking yourself (and your colleagues, and your IDs), “How do you know they know at scale?” And take full advantage of your unique position to offer access to top-notch research and thinking to anyone who wants it, anywhere in the world. You’ll be glad you did.

Jill Roter is a Senior Instructional Designer at Arizona State University.

[1] Leo, L. and Roter, J. (2020). Unique challenges and successes in developing open scale courses [PowerPoint slides]. 

[2] HQPBL.org. (2018). A framework for high quality project based learning. https://hqpbl.org/wp-content/uploads/2018/03/FrameworkforHQPBL.pdf

[3] Pacton, A. (personal communication, August 28, 2020). 

[4] Pacton, A. (2019). English composition redesign [scaling proposal], College of Integrative Sciences and Arts, Arizona State University.

[5] Pacton, A. (2019). English composition redesign [scaling proposal], College of Integrative Sciences and Arts, Arizona State University.