
One of the most frustrating situations I have faced with my students is spending time providing feedback on their test answers, only to watch them glance at the overall grade and then ignore everything else. To me, it seems they regard their learning as finished and my comments as unneeded. Despite the time I took to review their work and provide what I hoped was thoughtful feedback, those comments went largely unread.

However, I often needed my students to continue engaging with the material even when they wanted to move on to the next concept or activity, and I also wanted to include the students in the review process to increase their buy-in. In short, I needed a reasonably automated process that could augment my direct comments and give students in-the-moment feedback about their test performance.

When students can examine feedback on assessments aligned to course objectives, they can quickly see where they have achieved proficiency in the subject and where they fall short. Beyond a simple right-or-wrong evaluation, structured feedback conveys their level of understanding of the objectives, facilitates remediation, and assists in preparation for future assessments. However, generating feedback of this depth requires time that teachers often do not have. In addition, the time students have to critically examine their feedback is limited and, in my experience as a teacher, often not perceived by them as valuable.

Creating a method of automated feedback

To tackle the time constraints and other obstacles, I developed an automated system, built with Google Forms and Google Sheets, for my high school introductory chemistry course. The system gives students leveled feedback aligned to their performance and lets them see their overall score on different types of questions. I give four tests throughout the course, and each contains a combination of multiple-choice, short-answer, problem-solving, and free-response questions.

As the course progresses and increases in difficulty, the number of multiple-choice questions on each test decreases, while the number of problem-solving and free-response questions increases. Because the test questions were not originally written to align with a classification system, and the number of each question type was not expressly monitored, I aligned the questions with Bloom’s Taxonomy when I created my automated feedback system. Bloom’s structure places recall as the least complex type of question, apply and analyze in the middle level of difficulty, and synthesis and evaluation as the most difficult question types. Examples of the different levels of questions across multiple units of instruction are included in Table 1.

Recall
  • “The atomic number represents…”
  • “Given the following balanced equation, what is the mole ratio of A to B?”

Apply
  • “Draw the Lewis dot structure for element A.”
  • “Convert X number of moles to atoms of an element.”

Analyze
  • “Given a set of lab data, determine which measurements should be taken again and explain why.”
  • “Circle all isotopes of element A, where A is given in atomic notation.”

Synthesis
  • “Predict the products for the following reactions, then balance the equations.”
  • “Discuss how intermolecular forces and Kinetic Molecular Theory explain the arrangement of particles in a solid compared to a gas.”

Evaluation
  • “What is the percent yield of D for the reaction A + B → C + D, given X grams of A and Y grams of B?”
  • “Explain why the Lewis dot diagram for A is incorrect.”

Table 1. Sample questions, by taxon level, aligned to Bloom's taxonomy.

Using Bloom’s Taxonomy, I create general and tiered feedback to designate the desired level of proficiency. In the analysis, I score multiple-choice questions based on the number of correct answers versus the total number of questions, because those questions cross many levels of Bloom and have low point values. In addition to the leveled feedback, I also include performance percentages on the other sections of the assessment, including application, problem-solving, and synthesis.
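The arithmetic behind those percentages is straightforward. The sketch below illustrates it in Python rather than the spreadsheet formulas my system actually uses; the section names, point values, and half-credit weighting for “partial” answers are placeholders for illustration.

```python
# A minimal sketch (Python, not the actual spreadsheet formulas) of how
# self-reported results become section percentages. Section names, point
# values, and the half-credit weighting are placeholders.

# Multiple choice: scored as the number correct out of the total questions.
mc_reported = ["correct", "incorrect", "correct", "correct", "incorrect"]
mc_percent = 100 * mc_reported.count("correct") / len(mc_reported)

# Other sections: scored by points, with "partial" assumed to be half credit.
CREDIT = {"correct": 1.0, "partial": 0.5, "incorrect": 0.0}

def section_percent(responses):
    """responses: list of (self_reported_result, point_value) pairs."""
    earned = sum(CREDIT[result] * points for result, points in responses)
    possible = sum(points for _, points in responses)
    return 100 * earned / possible

synthesis_percent = section_percent([("partial", 4), ("correct", 6)])

print(f"Multiple choice: {mc_percent:.0f}%")   # 60%
print(f"Synthesis: {synthesis_percent:.0f}%")  # 80%
```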

Key takeaways from this process of assessment analysis include observing the transition from lower-taxon questions to more application and synthesis questions as the course progresses from properties of matter to stoichiometry. Additionally, determining the boundaries for proficiency scores required a higher degree of thoughtfulness, as key concepts introduced early in the course are later considered fundamental.

Using Google for automated feedback

When my students have an assessment returned to them, they follow a link to a Google Form. Students then self-report their answers to multiple-choice questions as either correct or incorrect, while they report their answers to free-response questions as correct, partial credit, or incorrect (see Figure 1). While it would be easy for students to be dishonest in this process, over the past two semesters I have not encountered instances of such behavior. Students also have an option to enter their own reflection or feedback for the multiple-choice questions as a whole and for each free-response question.

The Google Form collects student emails to match form responses to individual students for export to a Google Sheet, and a Google Sheet specific to each student is generated for their review. Students can then see whether they are proficient in the content each question addresses, and I can see whether there are any issues with their answers to multiple-choice and problem-solving questions, as well as review any feedback students left for themselves about how to improve in these areas.

Figure 1. Gathering student responses and reflection. The author collects students’ self-reported analysis via a simple Google Form. Multiple-choice questions require the student to record whether they answered the questions correctly or incorrectly, while free-response questions include an option for partial credit. Students can also include a self-reflection to write notes to their “future selves” to better prepare for the next assessment.

To populate the Google Sheet, I established proficiency scores with corresponding feedback for each section of the assessment. I also created four levels of feedback with descriptors, as shown in Figure 2. Percentages for the various levels of feedback were determined for each assessment section on the course’s four tests, which ensured that feedback was leveled appropriately as the tests became more difficult. For example, an early Synthesis-level question such as “Discuss how intermolecular forces and Kinetic Molecular Theory explain the arrangement of particles in a solid compared to a gas” had a low point value and warranted a low cutoff percentage for Proficient. A later Synthesis-level question such as “Predict the products for the following reactions, then balance the equations” had a higher point value and an expectation of recall of previously learned material, making the proficiency percentage cutoff much higher.

Figure 2. Examples of leveled feedback construction. Based on each student’s self-reported scores, a Google Sheet utilizes a lookup to include scripted feedback based on the score and type of question.
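In spirit, the lookup behaves like a banded match: each section has its own cutoff percentages, and a score is matched to the highest band it clears. The Python sketch below shows that logic; the cutoff values, the “Needs Review” label for the fourth feedback level, and the scripted feedback wording are placeholders rather than the values my spreadsheet actually uses.

```python
# A minimal sketch (Python) of the leveled-feedback lookup the Google Sheet
# performs. The cutoffs, the "Needs Review" label, and the feedback wording
# are placeholders; the real values vary by section and by test.

# Per-section cutoffs: (minimum percent, feedback level), highest band first.
CUTOFFS = {
    "multiple_choice": [(90, "Great Job"), (75, "Adequate"),
                        (60, "Needs Review"), (0, "Danger Zone")],
    "synthesis":       [(80, "Great Job"), (65, "Adequate"),
                        (50, "Needs Review"), (0, "Danger Zone")],
}

FEEDBACK_TEXT = {
    "Great Job":    "See the extension resources for this unit.",
    "Adequate":     "Work through the progression videos, simulations, and readings.",
    "Needs Review": "Revisit the remediation resources for the objectives you missed.",
    "Danger Zone":  "Schedule a meeting with your teacher before the next assessment.",
}

def leveled_feedback(section, percent):
    """Return (level, scripted feedback) for a self-reported section score."""
    for minimum, level in CUTOFFS[section]:
        if percent >= minimum:
            return level, FEEDBACK_TEXT[level]

print(leveled_feedback("synthesis", 72))
# ('Adequate', 'Work through the progression videos, simulations, and readings.')
```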

The feedback redirects students to three tiered sections for optional use: extension, progression, and remediation. Feedback of “Great Job” directs students to the extension portion, which contains resources representing more difficult concepts (such as those found in AP Chemistry), interesting historical perspectives, or current topics in the content area that are beyond the scope of the introductory course. Similarly, the “Adequate” feedback references the progression section, which offers videos, simulations, and readings that support the objectives of the content. I’m also doing further research to examine whether students use the LMS in conjunction with the assessment feedback, as I try to develop a way to track student usage of specific links in the LMS.

The “Danger Zone” feedback represents a rallying call to the student. Too often, students pass an assessment with an above-average score yet remain less than proficient in key areas. By directing the student to meet with the teacher, I can hopefully help them see beyond the passing grade and recognize the gaps in their understanding. Over the two semesters I’ve used this approach, two students with “Danger Zone” feedback on a problem-solving question scheduled a meeting. I need to gather additional data to determine whether students receiving the “Danger Zone” feedback follow the directive, augment their learning in some other way, or need the meeting to be made a formal requirement.

After automated population of the Google Sheet (see “Automated Processing of Data: Some Nuts and Bolts,” below), I provided students with a student-specific link to the spreadsheet. The spreadsheet continued to populate throughout the course’s four units (Figure 3 shows all of the assessments and feedback for the four units). As mentioned previously, I included an area for self-reflection in each section to facilitate student engagement and ownership of performance. One surprising result was that most students added “note-to-self” messages, which ranged from simple congratulatory messages to reminders of what not to do next time.

Figure 3. Complete artifact by the end of the introductory chemistry course.

The author gives students a unique link to a posted webpage that includes the learning objectives of each assessment, the percent correct they scored on the multiple-choice section, and stock prescriptive feedback based on the accuracy of their answers. It also includes space for the student’s self-reflection (highlighted in yellow), and similar feedback for the free-response questions, broken down by the taxonomy of the questions (recall, analysis, or synthesis).


While watching students review their tests, I observed them taking care in their responses and proceeding page by page through their tests. I could see comments like, “I forgot Avogadro’s number,” and “study ionization energy,” as well as “STUDY THESE!!!” and “Check work again, but should be good.” I was excited to see that even students with high scores completed the process, with comments such as, “Did great, don’t need to change anything,” and “Did good.” Although I provided students with only about 10 minutes of class time to complete the process, I could see them engaging with my comments, their scores, the automated feedback, and their own comments.

In generating a digital artifact, students have access to their performance and reflection notes at any point in the course. This means that students cannot accidentally misplace their reflections and can see an overall picture of their progression through the course. By providing students with quick access to feedback and a reflective artifact, I can help them determine what their grade means in terms of their learning, and they can focus on weaker content areas and practice specific skills. Ultimately, I hope that students can see a path forward in their introductory chemistry course by taking ownership of their learning.

General feedback gathered at the end of both semesters indicated that students thought the system was “helpful for recap,” provided a “general overview of what has been going well and what isn’t as stable,” and “shows where you went wrong. Sometimes when I am just looking over my test after it is graded, I don’t always see the repetition.” Other student comments included that the process helped them know what to study instead of “spending time on random things,” and to see which areas were the greatest struggle. One student expressed satisfaction at being able to “see how I’m doing and what would be helpful to go back to,” especially in reference to the course’s final exam.

While the comments were generally positive for the 36 students across both semesters, it should be noted that one student expressed that “maybe this is useful for some people, probably not for me,” while another noted, “feedback could be more specific.” I’ll continue to gather feedback at the end of the course, as well as during student follow-ups to Danger Zone warnings, to determine the utility of the system and to guide its further development, including creating more specific feedback tools and determining why some students may feel the system is not useful for them.

Automated Processing of Data: Some Nuts and Bolts
  • The author created a Google Form for each assessment, aligned to the number and types of questions on that assessment. The Form included a mechanism for students to self-report which questions they answered correctly or incorrectly, along with an opportunity to add reflections on how and why they found success or were challenged.
  • Each Google Form was linked to a Google Sheet containing several “backend” worksheets that automate the filtering of individual student responses, provide pre-determined comments that correspond with the student’s performance on the assessment, and collate the feedback for each student (see the sketch after this list). Each student had their own worksheet, which was then published to the web as a read-only document that served as a reference as they prepared for the next assessment.
  • There is an initial set-up at the start of each new course, as well as some updating for each new assessment. The overall time commitment for the author was about four hours, which included keying tests to Bloom’s taxonomy, creating the Google Form and backend Google Sheet, and populating students’ Google Sheets. Updating for a new semester requires less than 15 minutes.
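The sketch below compresses that backend flow into a few lines of Python (the actual system uses Google Sheets formulas rather than code); the field names, cutoff bands, and equal weighting of questions are placeholders rather than the author’s actual configuration.

```python
# A compressed, illustrative sketch (Python, not the actual Google Sheets
# formulas) of the backend flow: key responses by student email, score each
# section, attach leveled feedback, and collate one record per student.
from collections import defaultdict

CREDIT = {"correct": 1.0, "partial": 0.5, "incorrect": 0.0}
CUTOFFS = [(80, "Great Job"), (65, "Adequate"), (0, "Danger Zone")]  # placeholder bands

def score(reported):
    """Percent earned from a list of self-reported results (equal weighting)."""
    return 100 * sum(CREDIT[r] for r in reported) / len(reported)

def feedback(percent):
    """Match a percentage to the highest feedback band it clears."""
    return next(level for minimum, level in CUTOFFS if percent >= minimum)

# One row per Form submission, as it would be exported into the Sheet.
responses = [
    {"email": "student_a@example.org", "test": "Test 1",
     "multiple_choice": ["correct", "correct", "incorrect"],
     "synthesis": ["partial", "correct"],
     "reflection": "Study ionization energy."},
    {"email": "student_a@example.org", "test": "Test 2",
     "multiple_choice": ["correct", "incorrect", "incorrect"],
     "synthesis": ["correct", "correct"],
     "reflection": "Check work again, but should be good."},
]

# Collate a per-student artifact: the analog of each student's worksheet.
artifacts = defaultdict(list)
for row in responses:
    for section in ("multiple_choice", "synthesis"):
        pct = score(row[section])
        artifacts[row["email"]].append((row["test"], section, f"{pct:.0f}%", feedback(pct)))
    artifacts[row["email"]].append((row["test"], "reflection", row["reflection"], ""))

for email, entries in artifacts.items():
    print(email)
    for entry in entries:
        print("  ", entry)
```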
Author’s Acknowledgement

This work could not have been completed without the generous support of Josh Berberian, Director of Institutional Research and Coordinator of the Center for Teaching and Learning.

References

Dickson, B.; Housiaux, A. “Feedback in Practice: Research for Teachers.” Presentation materials prepared for the Tang Institute at Andover. Available at https://stage-tang.andover.edu/files/Feedback-in-Practice-1.pdf (accessed Oct 30, 2024).

Bloom, B. S.; Engelhart, M. D.; Furst, E. J.; Hill, W. H.; Krathwohl, D. R. Taxonomy of Educational Objectives: The Classification of Educational Goals, Handbook I: Cognitive Domain; Longman: New York, 1956.