Digital Feedback Methods
Jennifer Schluer
Gunter Narr Verlag, 2022
ISBN 978-3-8233-8532-5 (Print) · ISBN 978-3-8233-9532-4 (ePDF)
DOI: 10.24053/9783823395324
The crucial role of feedback in the learning process is undisputed. But how can feedback be exchanged in the digital age? This book equips teachers and learners with a research-based overview of digital feedback methods, including, for instance, feedback in text editors, cloud documents, chats, forums, wikis, surveys and e-mails, as well as multimodal feedback in video conferences and recorded audio, video and screencast feedback. The book discusses the advantages and limitations of each digital feedback method and offers suggestions for their practical application in the classroom. These methods can be utilized in online teaching as well as to enrich on-site teaching. The book also provides ideas for combining different feedback methods synergistically and closes with recommendations for developing dynamic digital feedback literacies among teachers and students.
Dr. Jennifer Schluer is an Assistant Professor for TESOL (Teaching English to Speakers of Other Languages) at Chemnitz University of Technology, Germany. As an applied linguist and teacher educator, she specializes in digital teaching and digital feedback processes as well as language awareness and culture learning.

Supplementary material: Supplementary materials for this volume are available online free of charge. Each supplement is marked in the book by a note in the margin and a supplementary-material ID. Create a personal account in our eLibrary and use your voucher code to obtain free access to the eBook and the supplementary materials: visit gutschein.narr.digital, follow the steps to activate your voucher code, and download the supplementary materials free of charge.

DOI: https://doi.org/10.24053/9783823395324
© 2022 · Narr Francke Attempto Verlag GmbH + Co. KG, Dischingerweg 5 · D-72070 Tübingen
This work, including all of its parts, is protected by copyright. Any use outside the narrow limits of copyright law without the publisher's consent is prohibited and punishable by law. This applies in particular to reproduction, translation, microfilming, and storage and processing in electronic systems. All information in this book has been compiled with great care; errors can nevertheless not be ruled out entirely. Neither the publisher nor the authors or editors therefore guarantee the correctness of the content or accept liability for erroneous information and its consequences. This publication may contain links to external third-party content over which neither the publisher nor the authors or editors have any influence; responsibility for the content of linked pages always lies with the respective providers or operators of those pages.
Internet: www.narr.de · eMail: info@narr.de
CPI books GmbH, Leck
ISSN 0941-8105
ISBN 978-3-8233-8532-5 (Print)
ISBN 978-3-8233-9532-4 (ePDF)
ISBN 978-3-8233-0380-0 (ePub)
Bibliographic information of the German National Library: The German National Library lists this publication in the German National Bibliography; detailed bibliographic data are available online at http://dnb.dnb.de.

Contents

Foreword
1 Introduction
2 Theoretical frameworks and foundations
2.1 Feedback
2.1.1 Defining and contextualizing the role of feedback in the learning process
2.1.2 Feedback contents
2.1.3 Feedback language and commenting style
2.1.4 Feedback structures
2.1.5 Characteristics of effective feedback
2.1.6 Feedback modes, methods and media
2.1.7 Feedback directions and dialogues
2.2 Feedback literacies
2.2.1 Preparation
2.2.2 Seeking feedback
2.2.3 Exchanging feedback
2.2.4 Utilizing feedback
2.3 Digital literacies
2.3.1 Defining digital literacies
2.3.2 Developing digital literacies
2.3.3 Questions/Tasks (for teachers and students)
2.4 Digital feedback terminology and overview
2.4.1 Technology-generated and technology-mediated digital feedback
2.4.2 Synchronous and asynchronous digital feedback
2.4.3 Overview of digital feedback methods discussed in this book
3 Digital feedback methods
3.1 Written electronic feedback in (offline) text editors (Text editor feedback)
3.1.1 Definition and alternative terms/variants
3.1.2 Contexts of use, purposes and examples
3.1.3 Advantages of the feedback method
3.1.4 Limitations/disadvantages
3.1.5 Required equipment
3.1.6 Implementation (how to)
3.1.7 Advice for students
3.1.8 Combinations with other feedback methods
3.1.9 Questions/tasks (for teachers and students)
3.2 Automated writing evaluation (AWE)
3.2.1 Definition and alternative terms/variants
3.2.2 Contexts of use, purposes and examples
3.2.3 Advantages of the feedback method
3.2.4 Limitations/disadvantages
3.2.5 Required equipment
3.2.6 Implementation (how to)
3.2.7 Advice for students
3.2.8 Combinations with other feedback methods
3.2.9 Questions/tasks (for teachers and students)
3.3 Feedback in cloud editors (cloud-based editing applications)
3.3.1 Definition and alternative terms/variants
3.3.2 Contexts of use, purposes and examples
3.3.3 Advantages of the feedback method
3.3.4 Limitations/disadvantages
3.3.5 Required equipment
3.3.6 Implementation (how to)
3.3.7 Advice for students
3.3.8 Combinations with other feedback methods
3.3.9 Questions/tasks (for teachers and students)
3.4 Feedback in online discussion forums (Forum feedback)
3.4.1 Definition and alternative terms/variants
3.4.2 Contexts of use, purposes and examples
3.4.3 Advantages of the feedback method
3.4.4 Limitations/disadvantages
3.4.5 Required equipment
3.4.6 Implementation (how to)
3.4.7 Advice for students
3.4.8 Combinations with other feedback methods
3.4.9 Questions/tasks (for teachers and students)
3.5 Feedback in wikis
3.5.1 Definition and alternative terms/variants
3.5.2 Contexts of use, purposes and examples
3.5.3 Advantages of the feedback method
3.5.4 Limitations/disadvantages
3.5.5 Required equipment
3.5.6 Implementation (how to)
3.5.7 Advice for students
3.5.8 Combinations with other feedback methods
3.5.9 Questions/tasks (for teachers and students)
3.6 Feedback in blogs
3.6.1 Definition and alternative terms/variants
3.6.2 Contexts of use, purposes and examples
3.6.3 Advantages of the feedback method
3.6.4 Limitations/disadvantages
3.6.5 Required equipment
3.6.6 Implementation (how to)
3.6.7 Advice for students
3.6.8 Combinations with other feedback methods
3.6.9 Questions/tasks (for teachers and students)
3.7 Chat/Messenger feedback
3.7.1 Definition and alternative terms/variants
3.7.2 Contexts of use, purposes and examples
3.7.3 Advantages of the feedback method
3.7.4 Limitations/disadvantages
3.7.5 Required equipment
3.7.6 Implementation (how to)
3.7.7 Advice for students
3.7.8 Combinations with other feedback methods
3.7.9 Questions/tasks (for teachers and students)
3.8 E-mail feedback
3.8.1 Definition and alternative terms/variants
3.8.2 Contexts of use, purposes and examples
3.8.3 Advantages of the feedback method
3.8.4 Limitations/disadvantages
3.8.5 Required equipment
3.8.6 Implementation (how to)
3.8.7 Advice for students
3.8.8 Combinations with other feedback methods
3.8.9 Questions/tasks (for teachers and students)
3.9 Survey feedback
3.9.1 Definition and alternative terms/variants
3.9.2 Contexts of use, purposes and examples
3.9.3 Advantages of the feedback method
3.9.4 Limitations/disadvantages
3.9.5 Required equipment
3.9.6 Implementation (how to)
3.9.7 Advice for students
3.9.8 Combinations with other feedback methods
3.9.9 Questions/tasks (for teachers and students)
3.10 Feedback via live polls (Poll feedback, ARS)
3.10.1 Definition and alternative terms/variants
3.10.2 Contexts of use, purposes and examples
3.10.3 Advantages of the feedback method
3.10.4 Limitations/disadvantages
3.10.5 Required equipment
3.10.6 Implementation (how to)
3.10.7 Advice for students
3.10.8 Combinations with other feedback methods
3.10.9 Questions/tasks (for teachers and students)
3.11 Audio feedback
3.11.1 Definition and alternative terms/variants
3.11.2 Contexts of use, purposes and examples
3.11.3 Advantages of the feedback method
3.11.4 Limitations/disadvantages
3.11.5 Required equipment
3.11.6 Implementation (how to)
3.11.7 Advice for students
3.11.8 Combinations with other feedback methods
3.11.9 Questions/tasks (for teachers and students)
3.12 Video feedback (talking head, no screen)
3.12.1 Definition and alternative terms/variants
3.12.2 Contexts of use, purposes and examples
3.12.3 Advantages of the feedback method
3.12.4 Limitations/disadvantages
3.12.5 Required equipment
3.12.6 Implementation (how to)
3.12.7 Advice for students
3.12.8 Combinations with other feedback methods
3.12.9 Questions/tasks (for teachers and students)
3.13 Screencast feedback (screen only, talking head optional)
3.13.1 Definition and alternative terms/variants
3.13.2 Contexts of use, purposes and examples
3.13.3 Advantages of the feedback method
3.13.4 Limitations/disadvantages
3.13.5 Required equipment
3.13.6 Implementation (how to)
3.13.7 Advice for students
3.13.8 Combinations with other feedback methods
3.13.9 Questions/tasks (for teachers and students)
3.14 Feedback in web conferences (Videoconference feedback)
3.14.1 Definition and alternative terms/variants
3.14.2 Contexts of use, purposes and examples
3.14.3 Advantages of the feedback method
3.14.4 Limitations/disadvantages
3.14.5 Required equipment
3.14.6 Implementation (how to)
3.14.7 Advice for students
3.14.8 Combinations with other feedback methods
3.14.9 Questions/tasks (for teachers and students)
3.15 Feedback in e-portfolios
3.15.1 Definition and alternative terms/variants
3.15.2 Contexts of use, purposes and examples
3.15.3 Advantages of the feedback method
3.15.4 Limitations/disadvantages
3.15.5 Required equipment
3.15.6 Implementation (how to)
3.15.7 Advice for students
3.15.8 Combinations with other feedback methods
3.15.9 Questions/tasks (for teachers and students)
3.16 Questions/tasks (for teachers and students) after having read this chapter
4 Discussion and outlook
4.1 Combinations of feedback methods
4.1.1 Combinations with written feedback to address higher- and lower-level aspects
4.1.2 Combinations with synchronous feedback to foster feedback dialogues
4.1.3 Combinations of digital and analog feedback methods for follow-up exchanges
4.2 Towards dynamic digital feedback literacies
4.2.1 Knowledge of feedback methods and of their use
4.2.2 Attitudes and flexible adaptation skills (digital agility)
4.2.3 Stepwise familiarization with digital feedback methods and continuous training
4.2.4 Learner-orientation and development of digital learner feedback literacy
4.2.5 Questions/Tasks (for teachers and students) after having read this chapter
4.3 Questions/Tasks (for teachers and students) after having read the book
5 References

Foreword

When I started to incorporate digital feedback and assessment methods in my teacher education courses several years ago, my students (pre-service English language teachers) were highly interested and engaged in trying out different digital methods. However, from their perspective as prospective teachers, some of them were less enthused. A student's comment during the course evaluation in late January 2020 might be representative of such a conservative stance: "Well, this is definitely very interesting, but as teachers, all we need is pen and paper." While not everybody agreed, this statement nevertheless reflects a frequent reservation that was echoed by many teachers and scholars in the pre-Covid-19 era. Three months later, when the first digital semester had begun due to the lockdown of educational institutions during Covid-19, the echo had changed. My pre-service teachers said, "It's great that you have responded to the current requirements so quickly and offer such a highly relevant seminar about digital feedback methods." In fact, I had been doing this for years already; it is the context that has changed and that has heightened the relevance of digital methods.

After two years of teaching under Covid-19 conditions, I felt the need to share important research findings and best practices about digital feedback methods in the book that is currently displayed on your screen, opened on your desk or held in your hands. The suggested digital feedback methods can be used in online and hybrid teaching settings, but also in technology-enriched face-to-face classrooms as well as in other blended learning contexts.
This book is certainly not meant to be exhaustive, since digital developments are dynamic and ongoing. Rather, it should serve as a source of inspiration and practical guidance for students and teaching staff at schools and universities. Teacher educators and other higher education staff can use it to widen their repertoire of digital feedback and assessment methods. Likewise, the book or its individual chapters can be utilized as a resource for seminars or training sessions about digital feedback. Students may consult it to practice new feedback methods themselves in order to improve their own learning (e.g. digital peer feedback) and teaching, e.g. during placements at schools, other educational institutions or even companies. Students and educators can test and expand their knowledge and skills by solving the tasks in the individual chapters and by consulting the supplemental (online) materials. As feedback is ideally understood as an ongoing dialogue about learning, readers are invited to share their experiences with me via e-mail or by contributing to the following electronic document: https://padlet.com/JSchluer/DigiFeedBookSchluer22

At this point, I would like to thank my students for their contributions and constructive comments in my seminars. I hope that many further students will benefit from the ideas presented in this book. Special thanks go to the University of Kassel for some initial funding at the very start of my explorations in the field of screencast feedback, as well as to the Saxon State Ministry for Higher Education, Research and the Arts (Sächsisches Staatsministerium für Wissenschaft, Kultur und Tourismus) for granting me a digital fellowship in 2021/22. Last but not least, I would like to thank my student assistants: Shanqing Gao for assisting me in updated literature searches, Jo-Luise Fröhlich for helping me create the handouts for the online supplement, and Lucian Thom for contributing the first drafts of the sketches that illustrate the different digital feedback methods.

Questions/Tasks before reading the book

Knowledge and reflection questions:
(1) For pre-service teachers: To what extent could this book help you prepare for your future job as a teacher? What are your wishes, needs and expectations?
(2) For students in other degree programs: What role do feedback and digitalization play in your studies? What do you expect to learn from this book?
(3) For in-service teachers and higher-education staff: As a teacher/educator, you have probably given and received feedback in multiple ways, and you might also have some knowledge of digital methods due to your professional or free-time activities. What do you expect to learn from this book?

1 Introduction

Feedback can help optimize learning and teaching processes if implemented effectively (Hattie, 2009; 2012; Wisniewski, Zierer, & Hattie, 2020). However, most of the research has concentrated on rather traditional feedback methods, such as oral feedback in the classroom or (hand-)written feedback for written assignments. Despite the ubiquity of digital technologies in our lives and the shift to digital teaching due to the Covid-19 pandemic, many teachers and lecturers exploit only a small portion of their affordances, especially when it comes to digital feedback methods (for Germany see e.g. Forsa, 2020a; 2020b; Wildemann & Hosenfeld, 2020).
The reasons for this vary, ranging from a lack of equipment or support to unwillingness or unawareness of the possibilities that exist. Therefore, the aim of this book is to introduce a variety of methods that teachers (and students!) can use to exchange feedback in digital ways. The focus is on what teachers and learners can do and create themselves (technology-mediated feedback), not on what pre-configured software such as a learning app or game may offer (technology-generated feedback). Thus, the term "method" has been chosen deliberately, as the emphasis will be placed on the ways in which teachers and learners can actively engage in the digital feedback process. The focus will not be on specific tools, software, apps or instruments, but rather on didactic design recommendations based on a thorough review of the empirical findings and best practices. Nevertheless, different software programs will be cited to give readers orientation or inspiration for selecting the apps that they find most suitable and convenient for their purposes. In fact, some learning management systems (LMS) already incorporate tools for creating digital feedback in various ways (e.g. Canvas, Moodle, Blackboard) (cf. Winstone & Carless, 2020, p. 75). Since, however, not every school or university uses the same platform, several alternative tools will be suggested. In that respect, it needs to be borne in mind that software changes dynamically with technological innovations and market demands. Hence, on the one hand, some cited software may no longer be available at the time of reading this book (e.g. as free software); on the other hand, software updates might now include additional functions that compensate for the shortcomings described in the book. The idea of continuous professional development is therefore not only integral to this book, but even more so to the teaching profession (Redecker & Punie, 2017). Ideas for experimenting with the different digital methods and discussing them in light of dynamically shifting learning environments will therefore be provided at numerous points in order to foster digital feedback literacies. To reach this aim in a stepwise manner, the book is structured as follows: After an introduction to the relevance of feedback in general and digital feedback in particular, an overview of different digital feedback methods will be provided. Each method will be described and defined before its advantages and challenges are discussed. Based on existing studies and best practices, recommendations for their implementation will be derived. Different feedback directions and combinations will be suggested to create optimal learning conditions. Their exact use, however, will depend on the learning goal and the specific educational environments teachers and learners find themselves in. The book will consider both learners' and teachers' perspectives, since feedback will only be successful if it is understood and acted upon (Gibbs & Simpson, 2005, pp. 23-25; Hattie & Clarke, 2019, p. 121; Winstone & Carless, 2020, p. 28). Special emphasis will therefore be given to practical strategies for teachers and students during feedback provision and reception, respectively (cf. Hattie & Clarke, 2019, pp. 79, 169). Beyond that, readers are invited to experiment with digital feedback in their own classrooms in pedagogically motivated and meaningful ways.
Lastly, the book concludes with suggestions for developing digital feedback literacies in a dynamically changing world, thereby expanding the theoretical frame presented in the next chapter.

2 Theoretical frameworks and foundations

The present chapter lays important theoretical and conceptual foundations for the discussion of the different digital feedback methods and their possible combinations in the subsequent chapters. First, it clarifies the understanding of feedback that builds the basis for the following argumentation. Section 2.1 addresses the role of feedback in the learning process and gives suggestions regarding the contents, language and style as well as structures of feedback messages. Moreover, a summary of essential feedback characteristics is offered. All of them are relevant for engaging in learning-oriented feedback dialogues and require feedback literacies from teachers and learners alike. This key term will be explained in section 2.2 before the emphasis shifts to digital aspects. In that regard, section 2.3 discusses the notion of digital literacies and presents some popular frameworks for teachers. Finally, section 2.4 previews the contents of the ensuing chapter by categorizing different types of digital feedback. Readers who are already familiar with these theoretical constructs and frameworks may skip the next sections and move on to the digital implementation in chapter 3.

2.1 Feedback

Questions/Tasks before reading the chapter

Knowledge and reflection questions:
(1) How would you define feedback?
(2) Learners' perspective:
a. How important is feedback for you as a learner (e.g. as a language learner or as a student at university)? Please explain.
b. In what ways has feedback been provided to you so far? Please distinguish between the following contexts: as a pupil at school; as a student at university; in life outside of school or university (e.g. feedback by parents or friends).
(3) Teachers' perspective:
a. How important is feedback for you as a (prospective) teacher? Please explain.
b. As a (prospective) teacher, in what ways have you provided feedback so far? Please give examples and explain.
c. What other ways of giving feedback would you like to try out?

2.1.1 Defining and contextualizing the role of feedback in the learning process

Feedback plays "multiple and multifaceted roles in learning" (Butler & Winne, 1995, p. 246). Hattie and Timperley (2007) even underline that "[f]eedback is one of the most powerful influences on learning and achievement" (p. 81, emphasis omitted; see also Hattie, 2009; 2012; Wisniewski et al., 2020). In educational settings, feedback has been investigated for more than a century, which has led to a vast and even contradictory body of research (cf. Kluger & DeNisi, 1996, p. 255; Narciss, 2008, p. 126). Table 1 briefly shows how the conceptualizations of feedback have changed.

Feedback as reinforcement
■ This view is rooted in "behaviourist stimulus-response models of learning" (Sadler, 2010, p. 535).
■ It was believed that positive reinforcement for correct behavior and negative reinforcement (or punishment) for incorrect behavior would result in improved performance (Thorndike, 1913, cited by Kluger & DeNisi, 1996, p. 258; see also Kulhavy & Stock, 1989, p. 280).
■ However, the reinforcement did not lead to a systematic learning gain (see Kulhavy & Wager, 1993, p. 4) because input (e.g. by teachers) does not equal intake by learners.
Feedback as information (transmission)
■ From a cognitivist lens, feedback was understood as information about the gap between the learners' current level (performance) and the reference level (learning goal), which is needed to close the gap (Ramaprasad, 1983, p. 4; Sadler, 1989, pp. 120-121).
■ Rather than a negotiation of meaning or dialogue, this information transmission was seen as unidirectional (cf. the reviews by Ajjawi & Boud, 2015, p. 253, and Wood, 2019, p. 22).
■ The focus mostly was on "providing corrective information" to the learners (Mory, 2004, p. 771), while more recent conceptualizations also foregrounded learners' self-directed noticing of the gap (Swain, 2000, p. 100, cited by Fulcher & Owen, 2016, p. 110).

Feedback as (co-constructive) dialogue
■ Current perspectives put a stronger emphasis on learners as active agents as well as on the co-constructive and socioculturally embedded nature of learning (Carless & Boud, 2018, p. 1316; Mory, 2004, p. 770; Nicol & Macfarlane-Dick, 2006, pp. 200-202; Winstone & Carless, 2020, pp. 6-9).
■ Feedback is seen as an interdependent and ongoing dialogic process "flow[ing] bi-directionally between learners and teachers" (Nicol, 2010, p. 503). In that regard, the active role and actual use of the feedback by the students is stressed (Boud & Molloy, 2013, pp. 700-701; Carless, 2020).
■ For this, the development of self-regulated learning (Nicol & Macfarlane-Dick, 2006, p. 202) and feedback literacies is important (see also an earlier paper by Butler & Winne, 1995).

Table 1: Changing views of feedback

In the first two views, learners' active role in the feedback process was not sufficiently recognized. From today's viewpoint, however, the notions of learner agency and multidirectional dialogues are central to the effectiveness of feedback. Only if students actively participate in the entire feedback process, i.e. seek feedback, give feedback and engage with the feedback they receive, can learning be improved (e.g. Carless, 2020; Winstone & Carless, 2020; see section 2.2). For now, we may define feedback as an interactive process of exchanging information that persons can use in order to regulate their further learning and thus to improve their performance (cf. the review by Carless, 2019, p. 708). The exact definition of feedback is, however, contested and probably highly dependent on contextual conditions and developments (for a recent review see Lipnevich & Panadero, 2021). Altogether, we may note that the changing conceptualizations of feedback correlate with the more general "shift from teaching to learning" in education (Welbers & Gaus, 2005, cited by Bauer, 2017, p. 162) and the altered teacher role "from the sage on the stage to the guide on the side" (King, 1993, p. 30, quoted by Rau, 2017, p. 143). Clearly, feedback should be regarded as an integral part of daily teaching and learning practices (cf. Huba & Freed, 2000, p. 8). Without feedback, learners do not know whether their learning was successful and what they could do in order to improve (cf. Hattie & Timperley, 2007). Likewise, without feedback, teachers do not know whether their teaching had the desired effect and what they could do in order to support further learning (e.g. Hattie, 2009; 2012; Hattie & Clarke, 2019).
Poehner and Infante (2016) therefore argued for seeing teaching and assessment "as dialectically related features of a single educational activity intended to promote learner development" (p. 275). At this point, a remark on the relation between feedback and assessment might be helpful. Generally speaking, assessment is the evaluation of or judgment about someone's work, performance or progress (Sadler, 1989, p. 120; cf. Fulcher & Owen, 2016, p. 109). To accomplish this, the assessment process involves "a variety of procedures for describing, collecting, documenting, scoring, and interpreting information about teaching and learning" (Khan, 2018, p. 2) that stem "from multiple and diverse sources in order to develop a deep understanding of what students know, understand, and can do with their knowledge as a result of their educational experiences" (Huba & Freed, 2000, p. 8, emphasis omitted). Feedback is one of these procedures, and it has typically been associated with formative rather than summative assessment. Some important characteristics are outlined in Table 2.

Formative assessment
■ Usually done during the learning process (Boraie, 2018, p. 1)
■ "Formative" means "modify[ing]" (Shute, 2008, p. 154), "forming or moulding something, usually to achieve a desired end" (Sadler, 1989, p. 120), e.g. to help learners improve in a particular competence area.
■ Commonly associated with feedback as one way of providing in-process support to learners. However, summative assessment, too, can be utilized formatively for improving learning (cf. Jenkins, 2010, pp. 566-567).
■ "assessment for learning" (Bokhove, 2010, p. 122)

Summative assessment
■ Typically given at the end of a learning unit (Boraie, 2018, p. 1)
■ Summarizes the current status of achievement (Sadler, 1989, p. 120) and thus shows whether or to what extent the learners have reached the learning goal. Often in the form of a grade for a course or class test (Franke & Handke, 2012, p. 151; Wallace, 2015a).
■ Closely connected to testing, which is "[t]he process of assessing and measuring a learner's attainment in a task, a lesson, a subject, or a programme of study" (Wallace, 2015b, n.p.).
■ "assessment of learning" (Bokhove, 2010, p. 122)

Table 2: Formative and summative assessment

Through formative assessment, learners thus obtain information about the progress they are making, which may help them to re-focus their attention and re-direct their utilization of learning strategies, for instance (Franke & Handke, 2012, p. 151). It presupposes an accurate diagnosis of the students' learning progress and ideally incorporates helpful guidance or scaffolded support (Evans, 2013, p. 102; Weaver, 2006, p. 388). Hence, even though "feedback" is a common term, it actually comprises much more than the word itself would imply. More precisely, feedback should address the following three questions that were proposed by Hattie and Timperley (2007, p. 86):

(1) Where am I going? (What are the goals?) = feed up
(2) How am I going? (What progress is being made toward the goal?) = feed back
(3) Where to next? (What activities need to be undertaken to make better progress?) = feed forward

In other words, any feedback message should be clear about (1) the learning goals that are to be reached, (2) the learners' current position in their learning journey, and (3) the steps the learners could take in order to move forward towards the desired target.
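To illustrate, the three questions might be verbalized as follows for an argumentative essay task (a constructed example; the wording is invented for illustration and not taken from the cited studies):
■ Feed up: "The goal of this assignment is to develop a coherent line of argumentation across five paragraphs."
■ Feed back: "Your topic sentences are clear, but the link between your third and fourth paragraphs is not yet explicit."
■ Feed forward: "As a next step, try opening the fourth paragraph with a transitional sentence that refers back to your thesis."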
Importantly, feedback is not only information or advice about what still needs to be done, but also an acknowledgment of what learners have already achieved and the efforts they have invested. For instance, assessors may outline the progress learners have made as compared to a previous point in time (progress feedback according to Voerman, Meijer, Korthagen, & Simons, 2012, p. 1108). This and other kinds of positive feedback can have important motivational functions (cf. Mory, 2004, p. 766). It may enhance learners' self-efficacy, i.e. the level of confidence they have in themselves to reach their goals or fulfill a particular task (Hattie & Clarke, 2019, p. 82; see also Hoska, 1993, p. 117; Mory, 2004, p. 766). Accordingly, their persistence in continuing learning and investing further effort can be strengthened (cf. Narciss, 2008, p. 134). Cultivating such a growth mindset in the classroom means that students learn to believe in growth, i.e. that they can improve their learning by investing effort (Dweck, 1999, cited in Nicol & Macfarlane-Dick, 2006, p. 212). This involves challenging themselves, working hard, persisting and not being afraid of making mistakes (Hattie & Clarke, 2019, p. 14; Hoska, 1993, p. 107). In that regard, their view of errors might need to be modified. Errors should be considered as opportunities for learning (Hattie, 2012, pp. 115, 124; Hattie & Clarke, 2019, pp. 29, 47; Hattie & Timperley, 2007, p. 104) and as a natural part of the learning process (Hoska, 1993, p. 112). They provide insight into learners' progress and understanding of a subject, notion or language and are thus helpful for teachers to scaffold the further learning process accordingly (cf. Corder, 1967, p. 167; Diaz Maggioli, 2018, p. 2; Hattie, 2012, p. 16).

In short, positive and negative feedback are comments about strengths and weaknesses (Vogt & Froehlich, 2018, p. 139). Positive feedback is information about learners' correct task fulfillment, the progress they have made or useful strategies they have applied. By contrast, negative feedback points out aspects of performance that are erroneous or worthy of improvement and provides constructive advice for further development. Hence, not only positive feedback, but also negative feedback can fulfill motivational functions. However, care must be taken that the comments refer to a specific task or task-related processes instead of learners' personality, lack of ability or performance in relation to others. In that respect, Hattie and Timperley (2007, pp. 86-87, 90) distinguished between four different levels at which feedback could be directed:

(1) Feedback about the task (FT), i.e. learners' understanding of tasks and their task performance,
(2) Feedback about the processing of the task (FP), i.e. learners' processes and strategies of understanding and performing in a task,
(3) Feedback about self-regulation (FR), i.e. learners' metacognitive processing and self-regulatory actions,
(4) Feedback about the self as a person (FS), i.e. a personal (often affective) evaluation of the learner that is unrelated to the task.

Examples of the last point are praising someone as "a great student" who has given "an intelligent response" (Hattie & Timperley, 2007, p. 90) or blaming him/her for being the opposite (Hoska, 1993, p. 113). Hattie and Timperley (2007, p. 90) argue that feedback at this self-level (FS) is least effective and should be avoided for various reasons.
The remaining three areas address different levels of cognitive complexity (Wisniewski et al., 2020, p. 2): from surface-information and knowledge about the task (FT) to task-related processes and strategy use (FP) and finally to greater ownership of one's learning, including self-regulated feedback-seeking and feedback-provision (FR) (Hattie & Clarke, 2019, pp. 76-78). To encourage continuous development, feedback intends to "always pus[h] the boundaries" to increase learning (Hattie & Clarke, 2019, p. 17). As Hattie (2009) puts it, "[t]he art is to provide the right form of feedback at, or just above, the level where the student is working" (p. 177). This "plus one" feedback (Hattie & Clarke, 2019, p. 76) right above the current level resonates with Vygotsky's (1978) idea of the Zone of Proximal Development (ZPD). The ZPD is the difference between a learner's potential level of performance (that they could attain through sufficient assistance, e.g. by a teacher or more proficient peer) and their present performance (Vygotsky, 1978). Successful feedback thus needs to strike a balance between the learner's prior knowledge and the target performance by moving them forward in their ZPD (cf. Grotjahn & Kleppin, 2017a, p. 282). Therein, feedback serves as a scaffolding tool that enables learners to improve and to engage in more advanced tasks (Shute, 2008, p. 162). It might ultimately encourage life-long and self-regulated learning (SRL). SRL is a multidimensional construct that describes the degree to which learners can regulate different cognitive, affective, metacognitive and (inter-)actional aspects (Nicol & Macfarlane-Dick, 2006). In that regard, Nicol and Macfarlane-Dick (2006) suggested "seven principles of good feedback practice that support self-regulation" (p. 199, emphasis omitted). According to these principles, good feedback should:

(1) help clarify what good performance is (goals, criteria, expected standards),
(2) facilitate the development of self-assessment (reflection) in learning,
(3) deliver high quality information to students about their learning,
(4) encourage teacher and peer dialogue around learning,
(5) encourage positive motivational beliefs and self-esteem,
(6) provide opportunities to close the gap between current and desired performance,
(7) provide information to teachers that can be used to help shape the teaching (p. 203).

This list already brings to the fore many important criteria of effective feedback that will be synthesized in the next sections. However, the list does not fully reflect the importance of continuous dialogue for all seven elements, not just for the fourth one. Section 2.1.7 will therefore be devoted more deeply to the dialogic nature of feedback, including its multiple directions and the multifaceted literacies that are required from everyone who is involved.

2.1.2 Feedback contents

As with teaching in general, the content and manner of feedback messages should align with the learning goal and the student needs (Hattie & Clarke, 2019, p. 53; Mory, 2004, p. 759). There is thus no universally valid, "best" type of feedback, but assessors need to make purposeful decisions regarding the contents, scope and modes of feedback provision (e.g. Narciss, 2013, p. 14). The present and the following sections will therefore showcase several general options that assessors have regarding feedback provision. Quite often, a distinction is made between the evaluative and elaborated components of feedback (Kulhavy & Stock, 1989, p. 285; Narciss, 2008).
The evaluative component (verification feedback according to Kulhavy & Stock, 1989, p. 285) indicates whether or not the desired performance has been achieved. For instance, assessors state whether a learner's response was correct or incorrect, or they give the total percentage of correct solutions (for a more detailed classification see Narciss, 2008, p. 132; 2013, p. 14).

feedback = evaluation + elaboration

In contrast to the simple evaluative feedback, elaborated feedback means "anything more than 'yes-no' or 'right-wrong'" (Kulhavy & Stock, 1989, p. 285). This could comprise reasons for the (in-)correctness of a response (Dempsey, Driscoll, & Swindell, 1993, p. 25), explanations about the task, suggestions, hints or strategic advice (Narciss, 2008, pp. 135-136; 2013, pp. 14-15). Certainly, the degree of elaboration can vary (cf. Kulhavy & Stock, 1989, pp. 286-287; Narciss, 2008, p. 132), and feedback complexity thus arises from the amount and type of information that is given in a feedback message (Mory, 2004, pp. 752-753; cf. Dempsey et al., 1993, pp. 24-25, 47). Intuitively, one might think that the more elaboration, the better, but there is no conclusive evidence that suggests so (Kulhavy, 1977, p. 212; Kulhavy & Stock, 1989, p. 285). Rather, the critical point is "how well and how proactively the student engages with this information" (Nash & Winstone, 2017, p. 3). Indeed, if feedback messages contain more information than necessary, they might distract or overwhelm learners, thus hampering their learning process (Mory, 2004, p. 753, with reference to Phye, 1979). What is more, the many criteria according to which a piece of work can be assessed may result in excessively long feedback. However, not only from a motivational perspective (learners' encouragement), but also from a practical perspective (teachers' time), it is usually impossible to judge every piece of work according to all the different criteria in full detail (Sadler, 1989, p. 131). It has therefore been recommended to limit the feedback to a few outstanding error types (Ellis, 2009a, p. 6) and "fine-tun[e] [it] to the needs of the learners" (as reviewed by Ene & Upton, 2018, p. 10). For example, some recommend a maximum of three different aspects or skills, while each of them may contain positive remarks and constructive criticism (e.g. Whitehurst, 2014, and van der Zijden, Scheerens, & Wijsman, 2021, p. 51, regarding screencast feedback). Such focused feedback stands in contrast to unfocused feedback, which addresses (nearly) all errors a learner has committed (Ellis, 2009a, p. 6). Assessors are thus advised to prioritize particular areas when giving feedback (Jug, Jiang, & Bean, 2019, p. 246; cf. also the review by Lee, 2014, p. 2). Quite often, they tend to focus on the "remark"-able (Sadler, 1989, p. 133), i.e. they only comment on those aspects that deviate from the norm. Of course, this does not need to be something purely negative, but it could likewise be something "outstanding" in the positive sense. At the same time, focused feedback will help students notice the areas that need further attention and set priorities for their further learning (cf. Diaz Maggioli, 2018, p. 6).

Overall, learning processes and products can be assessed according to numerous criteria, which can be grouped in various ways (for overviews see e.g. Biber, Nekrasova, & Horn, 2011, p. 13; Campbell & Schumm Fauster, 2013, p. 62; Grotjahn & Kleppin, 2017a, pp. 275-276; 2017b, p. 125). For instance, one may distinguish between the surface-structure (e.g. typos, word choice, punctuation, formatting), the micro-meaning (organization within a paragraph) and the macro-meaning (organization across multiple paragraphs) (Cho & Cho, 2011, p. 635). Most often, at least a general distinction is made between so-called "global" and "local" (e.g. Nelson & Schunn, 2009, p. 380) or "higher-order" and "lower-order" issues. Higher-order criteria comprise coherence, argumentation, organization and idea development etc., whereas lower-order issues include mechanical aspects of spelling and punctuation as well as grammar and word choice (e.g. Chang, 2016, p. 82; Min, 2005, p. 298). The exact manifestation of the criteria will depend on the learning goal, the type of assignment and learners' performance. For written assignments, usually a tripartite classification of content, form and language is adopted, which contains several sub-aspects, such as the following ones:

CONTENT
□ Breadth/scope
□ Depth (e.g. depth of analysis)
□ Line of reasoning/argumentation/flow/coherence, incl. structure of paragraphs (topic sentence and supporting evidence etc.)
□ Understanding of topic and literature/correctness of contents
□ Use of examples
□ Relevance

FORM
□ Formatting
□ Integration of resources and correct citation

LANGUAGE
□ Grammar
□ Word choice
□ Style/voice (e.g. conciseness, register)
□ Cohesion (e.g. use of transitional devices)

These aspects are particularly relevant for written assignments, which means that they will differ for other tasks. Prior to feedback provision, it is therefore important to identify the assessment criteria that are crucial for a particular assignment. In that regard, rubrics can fulfill meaningful functions for teachers and learners. They are "systematic scoring guidelines to evaluate students' performance […] through the use of a detailed description of performance standards" (Zimmaro, 2007, p. 1, cited in Thouësny, 2012, p. 287). On the one hand, they help assessors "structure their feedback appropriately and concisely" (West & Turner, 2016, p. 406) and "to maintain a relatively constant grading style" (Thouësny, 2012, p. 286). On the other hand, they allow learners to better interpret their performance and pinpoint areas for improvement (McCarthy, 2015, p. 163; 2020, p. 187; see e.g. Moore & Filling, 2012, pp. 13-14, for examples). Accordingly, they can utilize them as a checklist for self- and peer-assessments (see section 2.2.3) and for producing similar assignments in a more self-regulated manner (cf. Winstone & Carless, 2020, p. 81). The rubrics may accompany feedback that is given in other ways, e.g. as audio or video messages (see sections 3.11 to 3.13). In those audiovisual messages, assessors could then concentrate on the most important issues that are deemed relevant for a particular learner because all other points will be synthesized in the rubric (cf. Jug et al., 2019, p. 246). Moreover, depending on the learners' prior knowledge as well as the kind of error, conscious decisions about the commenting style need to be made. This will be discussed in the following section.

2.1.3 Feedback language and commenting style

For all the different assessment areas, error indication, correction and commentary can be done in more or less direct ways (e.g. Elola & Oskoz, 2016, p. 60; Thompson & Lee, 2012).
Moreover, depending on the learners’ prior knowledge as well as the kind of error, conscious decisions about the commenting style need to be made. This will be discussed in the following section.

2.1.3 Feedback language and commenting style

For all the different assessment areas, error indication, correction and commentary can be done in more or less direct ways (e.g. Elola & Oskoz, 2016, p. 60; Thompson & Lee, 2012). On the one hand, an error could be pointed out explicitly by giving a direct correction (Bitchener, Young, & Cameron, 2005, p. 193; Ellis, 2009b, p. 99; Ferris & Roberts, 2001, p. 163; Porsch, 2010, p. 13; Sheen & Ellis, 2011, p. 593) and maybe even an additional metalinguistic explanation of the underlying rule (Sheen, 2007, p. 275; cf. the review by Ene & Upton, 2018, p. 2). On the other hand, it might also be pointed out implicitly, e.g. by drawing on various input enhancement techniques (Ranta & Lyster, 2018, p. 43). For example, assessors may localize an error through gesturing, vocal (volume, modulation and tone of voice) or visual emphases (highlighting, underlining, coloring etc.), error codes or color codes or some other hint instead of supplying the correct solution directly (Ellis, 2009a, pp. 7, 9; 2009b, p. 100; cf. e.g. Grotjahn & Kleppin, 2017a, p. 269; Hyland & Hyland, 2006, p. 85). They might repeat the learner utterance and stress the faulty word or word part (Sheen & Ellis, 2011, p. 594). What is more, metalanguage could be used (Ellis, 2009b, p. 100; Sheen & Ellis, 2011, p. 594) to make the learners reflect on their performance, e.g. by saying “Watch out for tense use”. This is cognitively more demanding for the learners than the direct provision of the correct form (Ferris, 2004, p. 60, cited by Tanveer, Malghani, Khosa, & Khosa, 2018, p. 170), but it gives the learners the chance to discover the right solution themselves (Corder, 1967, p. 168; Porsch, 2010, pp. 13-14). Proceeding on the idea of feedback as dialogue, i.e. as a social process and communicative act (Ajjawi & Boud, 2018, p. 1108), we note that a feedback message can be formulated in many different ways. For instance, Nurmukhamedov and Kim (2010, p. 272) distinguished between statements (of students’ problems), imperatives (requiring learners to change, delete or add something), questions (raising doubt, showing uncertainty, asking for further details) and hedged suggestions (implying or suggesting to avoid direct comments) (cf. McGarrell & Alvira, 2013, p. 53). The latter two options are often used to reduce a potentially negative uptake and save the learners’ face when problematic aspects are addressed (cf. Hyland & Hyland, 2001, pp. 200-201; Nelson & Schunn, 2009, p. 380). To mitigate, i.e. tone down, the pragmatic force of the feedback message, the use of question forms (e.g. “What about adding further examples?”), hedges (“It might be a good idea to include more examples.”) and personal attribution (“I think that further examples might strengthen your argument.”) can be beneficial (cf. Hyland & Hyland, 2001, pp. 185, 198; Stannard & Mann, 2018, pp. 98-99; see also Watling & Lingard, 2019, for several useful language suggestions; cf. Kerr & McLaughlin, 2008, p. 12, for a sample script). They may encourage reflection without being too directive or imposing (cf. Nurmukhamedov & Kim, 2010, p. 273; Silva, 2012, p. 7). This way, the learners’ authority (e.g. as writers) is respected (cf. Dagen et al., 2008, cited in Vincelette & Bostic, 2013, p. 271; Hyland & Hyland, 2001, p. 194; see also Cunningham, 2017a, p. 478; 2017b, pp. 100-101, 147) and at the same time they are seen as active agents in the learning and revision process (Brookhart, 2008, cited in Campbell & Feldmann, 2017, p. 5; cf. Cunningham, 2019b, p. 97).
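To illustrate, such comment types and mitigation devices could be collected in a small phrase inventory, e.g. as reusable snippets in an annotation tool. The following Python sketch simply parametrizes the example sentences from above; the frame names are mine and do not represent an established taxonomy:

# One and the same revision request, rendered in frames that range
# roughly from most direct to most mitigated.
COMMENT_FRAMES = {
    "imperative": "Add {x}.",
    "statement": "This paragraph lacks {x}.",
    "question": "What about adding {x}?",
    "hedge": "It might be a good idea to include {x}.",
    "personal": "I think that {x} might strengthen your argument.",
}

def make_comment(frame, x):
    # Note: learners should be made aware that the mitigated frames
    # are still requests for revision (see the discussion above).
    return COMMENT_FRAMES[frame].format(x=x)

print(make_comment("hedge", "more examples"))
# -> It might be a good idea to include more examples.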
In that respect, the use of strong action-oriented verbs in the feedback message can foster learners’ engagement with the contents (Watling & Lingard, 2019, p. 26), for instance by writing “What about adding/changing/omitting XY?” (cf. Rottermond & Gabrion, 2021, p. 42). However, previous research has shown that students may find mitigated feedback confusing if the underlying pragmatic purpose remains unclear (Hyland & Hyland, 2001, pp. 206-208; 2006, p. 87; Nurmukhamedov & Kim, 2010, pp. 273, 280). This can have cultural and linguistic reasons, especially in second- or foreign-language learning settings (Ferris, Pezone, Tade, & Tinti, 1997, pp. 175-176; Hyland & Hyland, 2019). Therefore, it is important to make learners aware of the underlying pragmatic functions of the different comment types, i.e. that hedged statements or questions likewise constitute requests for revisions and thus have a similar function as imperatives (Ferris, 1997, pp. 331-332; Nurmukhamedov & Kim, 2010, p. 281). For teachers, this presupposes critical awareness of their own commenting style and comment types as well as their potential impact (cf. Hyland & Hyland, 2001, pp. 207-208). They need to strike a balance between managing social relations (Hyland & Hyland, 2001, pp. 194, 201; 2019) and communicating feedback effectively. In that regard, the structural make-up of feedback messages should also be considered, as will be done in the subsequent section.

2.1.4 Feedback structures

The structure of feedback exchanges is partly conditioned by the modes and tools that are utilized. However, we may also identify some commonalities that might be relevant for any feedback message. Crucially, feedback is not only a transmission of contents, but also relational work (e.g. Ajjawi & Boud, 2018, p. 1106; Winstone & Carless, 2020, pp. 149-165). In that regard, several scholars have highlighted the importance of a personal address or greeting at the start of a feedback message, e.g. by saying the first name of the student (Bakla, 2017, pp. 326-327; Henderson & Phillips, 2014, p. 7; McLeod, Kim, & Resua, 2019, pp. 196-197; White, 2021). Beyond that, it might be a good idea to repeat the name a few times at different points of the feedback, especially if the message is relatively long (see e.g. Huett, 2004, p. 41, for email feedback). After the greeting, assessors may continue with the relational work by outlining the progress a student has already made (if the student is familiar to the instructor; e.g. Alvira, 2016, p. 84; see progress feedback in section 2.1.1) or by emphasizing the effort the learner has put into the work (Henderson & Phillips, 2014, p. 7; cf. growth mindset in section 2.1.1). Moreover, the feedback provider may thank the student for submitting the assignment (Cavaleri, Kawaguchi, Di Biase, & Power, 2019, p. 14; Cranny, 2016, p. 29118; Henderson & Phillips, 2014, p. 7; Whitehurst, 2014). At that point, assessors could include a reminder of the learning goals that were targeted by the task (cf. Walker, 2017, p. 361). Related to that, they might explain the purpose and focus of the feedback message and provide an overall evaluative summary of the learners’ task performance (Henderson & Phillips, 2014, p. 7; Nelson & Schunn, 2009, pp. 397-399; Phillips, Ryan, & Henderson, 2017, p. 365; Schluer, 2021d, p. 166). This will help to lay a common foundation for the more specific positive feedback and constructive criticism that will follow subsequently.
Assessors might already preview the ensuing contents by mentioning their structural sequencing, e.g. by pinpointing the assessment criteria that will be focused on in the main body of the feedback message (e.g. Edwards, Dujardin, & Williams, 2012, pp. 107, 109; cf. the “pre-training principle” by Mayer, 2002, p. 28; see section 2.1.2 on focused feedback). This is particularly useful for complex assessments. In the main part, it is generally advisable to talk about positive aspects first before proceeding to the negative ones or areas for improvement (e.g. Bakla, 2017, p. 326; Glei, 2016; Henderson & Phillips, 2014, p. 7). In that regard, a frequently applied technique to structure a feedback message is the feedback sandwich (or feedback burger) shown in Figure 1.

Figure 1: Feedback sandwich

In the feedback sandwich, negative feedback is sandwiched between positive comments (LeBaron & Jernick, 2000, p. 14). Hence, the common structure is “make positive comments; provide critique; end with positive comments” (Parkes, Abercrombie, & McCarty, 2013, p. 397) or conclude with “a direction for growth” (LeBaron & Jernick, 2000, p. 14). Especially for negative feedback, it is important that assessors “give reasons or evidence” to support their arguments (Clayton, 2018a, n.p.). This way, learners become aware of the consequences of the problem that needs to be overcome (Clayton, 2018a). In addition, assessors should “suggest a possible solution or recommendation” (Clayton, 2018a, n.p.) that is specific and actionable for the learners (Glei, 2016). In a more detailed manner, Nelson and Schunn (2009, pp. 397-399) recommended the following steps when talking about learners’ fulfillment of a particular assessment criterion or learning goal (see also the sketch after the list):
(1) Summarize what you are going to talk about first (e.g. that you will be focusing on grammatical aspects).
(2) Specify a concrete aspect (e.g. use of apostrophes in the reviewed paper).
(3) Illustrate this aspect with concrete examples from the reviewed paper and localize them (e.g. by indicating them visually in the learner’s assignment).
(4) Explain the point you are making (giving reasons, e.g. stating the rule for the use of apostrophes in English).
(5) Suggest a solution (e.g. by giving hints to the recipient or offering concrete recommendations).
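In digital settings, these five steps could even serve as a light template when pre-structuring comments in a document margin or scripting a screencast. The following Python sketch is only one possible operationalization, not part of Nelson and Schunn’s model itself; all field names are mine:

from dataclasses import dataclass

@dataclass
class FeedbackSequence:
    """One feedback sequence following the five steps listed above."""
    topic: str        # (1) what the comment will be about
    aspect: str       # (2) the concrete aspect
    examples: list    # (3) localized examples from the learner's work
    explanation: str  # (4) reasons, e.g. the underlying rule
    suggestion: str   # (5) a hint or concrete recommendation

comment = FeedbackSequence(
    topic="Next, a few words on grammar.",
    aspect="I noticed some difficulties with apostrophes.",
    examples=["See the highlighted 'it's' in paragraph 2."],
    explanation="With 'it', the apostrophe marks the contraction of "
                "'it is', not the possessive.",
    suggestion="Try reading each one aloud as 'it is' to test whether "
               "the apostrophe fits.",
)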
Both for the overall structure of a feedback message at the macro-level as well as for individual feedback sequences at the micro-level, some common patterns thus are “praise-criticism”, “criticism-suggestion” and “praise-criticism-suggestion” (Hyland & Hyland, 2001, p. 196). However, as Parkes et al. (2013) explain, students who are familiar with this technique could perceive it as “clichéd” and “insincere” (p. 399, with reference to Schwenk & Whitman, 1987). Even worse, the learners might be anxiously awaiting the criticism that would follow after the first part of the feedback sandwich (Jug et al., 2019, p. 247). All too often, the negative part is introduced by the conjunction “but” (Jug et al., 2019, p. 247). The use of this word can be perceived as a negation of the positive feedback that has preceded it, calling attention to all the bad aspects that are in dire need of improvement (Jug et al., 2019, p. 247). Jug et al. (2019) therefore recommend using the conjunction “and” instead (p. 247). Still, care must be taken that the criticism is perceived as such, i.e. that the two positive slices do not dilute the negative part of the message (Jug et al., 2019, p. 247; Parkes et al., 2013, p. 398). Otherwise, learners might not be sure how they should interpret the feedback message (Parkes et al., 2013, p. 398), both regarding their current performance and the improvements they could make. Therefore, learners’ cognitive involvement and assessors’ “constructive support” (Kulgemeyer, 2018, p. 130) are crucial for the success of any feedback message. This might be achieved through conversational language and comments that encourage reflection (e.g. by activating prior knowledge and asking questions for reflection) and engagement (e.g. by giving prompts for further action, such as consulting additional resources for further information). Furthermore, transfer tasks (or follow-up learning tasks) give learners the chance to apply the newly gained information (cf. Martínez-Arboleda, 2018). Quite often, assessors ask learners for a revision (Marshall, Love, & Scott, 2020, p. 6), but the learning gain could also be shown in a (similar) subsequent assignment. To ease the implementation for the learners, it might be useful to summarize the most important points towards the end of the feedback message (Henderson & Phillips, 2014, p. 7) and to highlight further sources of support (White, 2021). This may help the learners to create an action plan that is conducive to reaching the learning goals (McLeod et al., 2019, pp. 196-197). To verify learners’ understanding and uptake, learners might also be requested to write a reflective booklet in which they keep track of their changes or development (Soltanpour & Valizadeh, 2018, pp. 129-130; cf. section 3.15). However, feedback replies can be given in many other ways, as shown in section 2.2.4. The closing comments may consequently contain a reminder to revise and resubmit the draft (Ali, 2016, p. 112). In more general terms, the assessors can state that they look forward to the progress the learner will be making in the future or in subsequent assignments (cf. Henderson & Phillips, 2014, p. 7, as cited by Phillips et al., 2017, p. 365) and that reconsulting the feedback might be helpful in that respect (Kerr & McLaughlin, 2008, p. 4). In line with this future-orientation (see section 2.1.1), they may also invite learners to engage in follow-up discussions and feedback interactions with the teacher or peers (Harding, 2018, p. 14; Henderson & Phillips, 2014, p. 7; Phillips et al., 2017, p. 365). This could be done as part of the next class meeting (McLeod et al., 2019, pp. 196-197), either face-to-face or in a video conference, or else via phone or mail (Yu & Yu, 2002, p. 120), or in any other way that seems suitable for a particular learning context (see the suggestions in chapter 3). Finally, the assessors should once again thank the learners for sharing their work and for the effort they have put into it (Bakla, 2017, p. 326; McLeod et al., 2019, p. 197; Whitehurst, 2014), so as to end the feedback on a positive note. The final point is thus again mainly relational work, including an invitation to engage in further dialogue and to implement the feedback (Henderson & Phillips, 2014, p. 7; 2015, p. 56). The structural suggestions are summarized in Figure 2 (adapted from Clayton, 2018b; Henderson & Phillips, 2014, p. 7; McLeod et al., 2019, p. 197; Olsen, 2022; Phillips et al., 2017, p. 365; Schluer, 2021d, p. 166).

Dear Simon,

I hope you are doing well! Thank you for submitting the written assignment.
Its purpose was to practice what we had learned in the last three lectures about argumentative essays. While reading your submission, I noticed that you have completed this task in an excellent manner. Congratulations! Your writing and expository skills are at a very high standard already, and with this feedback, I would like to give you some additional recommendations for your future learning.

In your essay, you present a coherent line of thought and develop a detailed discussion around the single arguments. You incorporate supportive evidence and consider counter-arguments, which is very commendable. You use very fitting examples to support each of your arguments, and you cite relevant academic literature, which gives the essay a high degree of professionalism and credibility. Content-wise, you have thus done an excellent job of fulfilling the requirements of this writing assignment.

Regarding your language use, I noticed that some expressions could have been more concise. For example, a few sentences were quite long, so that I had to read them twice to understand what you meant. There are times when the wrong use of some words can easily create ambiguities and lead to misunderstandings. You can find more specific annotations and suggestions in the attachment of the email. I hope that they will help you to advance your foreign language skills further, Simon. I would suggest that you consult with the College’s Writing Center, where you can get more detailed help and tutoring for grammar and word choice in academic writing.

I hope you can check and revise your paper again in conjunction with the email and the attached annotations. This will be very helpful to further improve your argumentative writing. If there is anything you don’t understand, you are welcome to communicate with me by email or by booking an office hour.

Thank you again for uploading this assignment so promptly and for completing it without overlooking any of the points we had mentioned in class. I look forward to seeing you in our next session!

Best regards,
Julie Smith

[Margin labels in the original figure: Greeting; Thanks for submission; Reminder of learning goals; Evaluative summary as preview; Positive comments: task-related with explanations (here: content aspects); Negative comments: constructive criticism with explanations and suggestions (here: linguistic aspects); Reminder of revision; Invitation to follow-up dialogue; Giving thanks again; Further relational work]

Figure 2: Example of feedback structure

However, these structures may vary with the error type and feedback mode as well as many further factors. Overall, it becomes clear that “feedback […] is a common yet complicated practice” (Cunningham, 2017b, p. 31). Its effectiveness results from and depends on a complex interplay between individual variables pertaining to the feedback provider and to the feedback recipient(s), their interpersonal relationship as well as several contextual factors (cf. Chong, 2021; Jug et al., 2019, p. 244). Some important characteristics will nevertheless be summarized in the next section.

2.1.5 Characteristics of effective feedback

The criteria for effective feedback are contested due to the many factors that shape learning environments (cf. Hattie, 2012, p. 134; Hattie & Timperley, 2007, p. 81).
Notably, learners need to be actively involved in the entire feedback process and have the skills, will and opportunities to act upon the feedback (Carless & Boud, 2018; Nash & Winstone, 2017; Winstone & Carless, 2020; see section 2.2.4). Otherwise, even the most carefully crafted feedback will be futile (Nash & Winstone, 2017, p. 3). The following checklist should therefore be read with this caveat in mind. It is based on a review of relevant prior literature (including Gibbs & Simpson, 2005; Green, 2018; Hattie, 2012; Hattie & Clarke, 2019; Hattie & Timperley, 2007; Nicol, 2010; Nicol & Macfarlane-Dick, 2006; Nicol, Thomson, & Breslin, 2014; Ryan, Henderson, & Phillips, 2016; Sheen & Ellis, 2011; Shute, 2008; Vogt & Froehlich, 2018).

Effective feedback should …
□ be clear and comprehensible to the learners,
□ be based on assessment criteria that are transparent to the learners,
□ be specific and contain concrete examples,
□ be structured and easy to follow,
□ be balanced between positive comments and areas that need improvement,
□ be communicated in a constructive, encouraging and emotionally sensitive manner,
□ concentrate on task-related observable behavior or actions that can be changed instead of the learners’ personal characteristics,
□ give reasons and explain why something is correct or incorrect and suggest how it could be improved,
□ be future-oriented and help the learner to move forward (feed forward), for instance by encouraging reflection as well as by giving hints and suggestions as scaffolds to support self-regulated learning,
□ involve students actively in the processing and application of the feedback, for instance by incorporating concrete tasks for knowledge transfer or by encouraging the creation of an action plan,
□ be personal and individualized by making concrete references to the present work, but also by “referring to what is already known about the student and her or his previous work” (Nicol, 2010, p. 513),
□ be provided in a timely manner and on a regular basis,
□ vary in its degree of explicitness or implicitness, depending on the learners’ prior knowledge (e.g. recurring errors might indicate a need for more thorough explanations and direct instructions than small lapses do),
□ set a focus, as “‘more’ feedback does not always equal ‘more’ learning” (Price, Handley, Millar, & O’Donovan, 2010, p. 278),
□ create a relationship with the learner. In that respect, credibility of the feedback provider and mutual trust as well as respect are decisive,
□ encourage further dialogue (e.g. with the teacher or peers). This is in line with the central idea of feedback as dialogue,
□ be communicated through different modes and media that suit the particular learner and learning goal.

The last aspect is of primary importance for the present book and will therefore be explained next.

2.1.6 Feedback modes, methods and media

In the previous sections, the contents and structures of feedback messages were surveyed, while the impact of the presentation mode (Narciss, 2013, p. 15; 2018, pp. 18-19) has so far been discussed only in general terms. This final and - for the present book - most important facet, i.e. its digital realization, will be addressed in the remainder of this book. Modes are different semiotic resources, such as writing, speech, images, colors, sounds or gestures (Kress, 2004, pp. 22, 35-36), that can be utilized to communicate meaning. Different media make use of these modes to varying extents.
For example, a medium such as a book typically contains writing and pictures, whereas a video is usually based on animated pictures, sounds or music and spoken language (Kress, 2004, p. 22). If several modes are drawn on, a medium can be called multimodal (Kress, 2004, p. 46). Due to technological advancements, there are various multimodal possibilities, insofar as texts, too, can be enriched by means of sounds, speech, animations and so on (Kress, 2004, p. 46). Likewise, feedback can be presented in one mode only (unimodal feedback) or in several modes (multimodal feedback) (Narciss, 2008, p. 139). Different modes have specific advantages but also limitations since they realize meaning in different ways (Kress, 2004, p. 107). Awareness of these affordances is the prerequisite for making informed “design decisions” to express meanings (Kress, 2004, p. 49), but also for interpreting and understanding the messages (cf. Kress, 2004, p. 169). Hence, awareness and training are important not only for teachers, but also for learners as active participants in the feedback process (Wood, 2021, p. 13; see sections 2.2 and 4.2). However, research about feedback “as a dynamic multimodal activity” (Silva, 2017, pp. 327, 342) in digital environments hardly exists (see also Chang, Cunningham, Satar, & Strobl, 2018, p. 405), even though it has gained in relevance in the past years. For this reason, the present book will bring together research findings as well as practical advice for making informed decisions about the use of digital feedback. The term method will foreground the didactic perspective. It is generally defined as a “way of doing something” (Dictionary.com, 2021), usually to realize a plan or to fulfill a certain goal, in our case to support learning by means of feedback. “Method” is often associated with an organized or systematic procedure that may lead to the completion of the goal (Dictionary.com, 2021; Merriam-Webster Online, 2021a). To reach this goal, several strategies and techniques can be applied and different tools and instruments might be used. Hence, we note that method is an overarching term for these systematic (sets of) procedures that are deemed beneficial for particular purposes. Figure 3 illustrates these interrelationships (adapted from Dotse, 2017).

Figure 3: Techniques, strategies, methods and approaches

In the innermost circle, we find techniques. As the word implies, this term often refers to technical procedures, especially in the current context of digital feedback. For instance, opening a website while recording the screen could be called a “technique” that is applied during the production of screencast feedback. Techniques are often subservient to strategies. Continuing with the aforementioned example, the chosen technique would showcase the strategy of utilizing external resources to a student. Overall, strategies may comprise several techniques in the “art of combining” certain actions in a goal-oriented manner (Merriam-Webster Online, 2021b). Similarly, assessors may use different modes strategically in order to cater for learners’ needs in the best possible way (see Fonseca et al., 2015, p. 62, based on Brookhart, 2008, regarding mode as part of a feedback strategy). In that regard, the tools (e.g. software programs or apps) also need to be selected in a meaningful manner. Some of them might have advantages over others, so that awareness of the functions and limitations of different tools is crucial.
The section “required equipment” in chapter 3 will be devoted to this issue. Commonly, several strategies and techniques are combined systematically in order to reach a specific objective, e.g. to provide feedback on an aspect of performance. Such a purposeful combination is called a (feedback) method, i.e. a particular “way of doing something” (Dictionary.com, 2021). The methods usually instantiate different approaches or paradigms, e.g. an objectivist or (socio-)constructivist view of the world or perspective on a certain phenomenon (cf. Dotse, 2017). From today’s viewpoint, feedback processes should be guided by the socio-constructivist conceptualization; however, much of the literature about feedback, in particular digital feedback, has been driven by a cognitivist paradigm of unidirectional information transmission from teachers to students (Winstone & Carless, 2020). Especially since most digital methods are still relatively new and because digital developments are highly dynamic (e.g. new soft- and hardware), the situated, contextually embedded nature of feedback processes needs to be acknowledged sufficiently (cf. Chong, 2021). Consequently, the strategies and tools should be purposefully chosen according to the learning objective, the learning environment and learners’ needs (e.g. Harris & Hofer, 2011, p. 214; Kultusministerkonferenz [KMK], 2016, p. 51). This may even give rise to new methods or combinations of methods (see section 4.1) that are deemed valuable for a particular purpose. Many of the digital feedback methods suggested in this book should therefore be regarded as fluid, as they can be drawn on and combined flexibly to seize their affordances or overcome the limitations of others. This conscious choice is a crucial component of learners’ and teachers’ feedback literacies and digital literacies (sections 2.2 and 2.3). Indeed, learners and teachers can participate in feedback exchanges in many directions, as the next subsection will demonstrate.

2.1.7 Feedback directions and dialogues

Feedback can take place in multiple directions. Often, feedback is provided by a teacher to a student (teacher feedback) or from a peer to a peer (peer feedback among learners or teachers), but it can also come from another person, such as a parent, friend or tutor, or even from a digital tool (such as a software program or learning app) or another medium (e.g. a textbook with an answer key) or from oneself (self-feedback) (cf. Biber et al., 2011, pp. 9, 13; Carless & Boud, 2018, p. 1316; Hartung, 2017, p. 205; Hattie, 2009, p. 174; Hattie & Timperley, 2007, p. 81; Johnson & Johnson, 1993, pp. 135, 154; Nicol & Macfarlane-Dick, 2006, pp. 200, 207-208; Voerman et al., 2012, p. 1108). In more general terms, Narciss (2008, p. 127) differentiated between external and internal feedback, i.e. feedback deriving from an external (peer, parent, teacher etc.) or internal source of information (self-feedback). Several directions thus suggest themselves, as shown in Figure 4.
As feedback is a form of social interaction, every feedback exchange is shaped by the individuals partaking in them and by further contextual conditions (Chong, 2021; Evans, 2013, p. 100; cf. Canale, 2013, p. 4). This comprises e.g. situational variables and task factors but also personal learning motivations (Bitchener et al., 2005, p. 202). Quite often, however, the complexity and manifold relations that have an impact on the success of such feedback dialogues are not fully considered or a shared commitment of all parties is lacking (Nash & Winstone, 2017). Therefore, Nicol (2010) argued that students’ dissatisfaction with the feedback they receive and lecturers’ complaints about the low responsiveness to the feedback they provide “are all symptoms of impoverished dialogue” (p. 501) between learners and teachers (see also p. 503). Critics even fear that the outdated transmission view (see section 2.1.1) still transpires in the use of the words “provides” and “receives” that can be read frequently in the feedback literature (Pitt & Winstone, 2020, p. 84; cf. “transmits”, “gives”, “sends”, “delivers” and “conveys” as lexical alternatives). The crucial point, though, is that the feedback process should not stop after the “provision” and “reception” of feedback, but that they occur in a looplike or spiral manner of (co-)constructive dialogue in which meaning is negotiated 33 2.1 Feedback <?page no="35"?> (Carless, 2019). As Ali (2016) put it, “[e]ffective feedback is a two-way process and a continuous dialogue between teacher and students” (p. 106), i.e. not only teachers give feedback to learners, but also learners to teachers (Bowen & Ellis, 2015). All parties are thus actively involved in their mutual endeavor to optimize the conditions for learning (cf. Bowen & Ellis, 2015; cf. Picardo, 2017, on “feedback as collaboration”). To achieve this, feedback literacies need to be developed among the teachers and learners in a synergistic manner (Nash & Winstone, 2017). The next section will be more detailed about this. 2.2 Feedback literacies Quite often in the literature, assessment literacy has been equated with teacher as‐ sessment literacy. For instance, when Stiggins (1991) introduced the term assessment literacy, he defined it as “the knowledge assessors need to possess” in order to perform assessment-related actions (as cited by Inbar-Lourie, 2013, p. 301). The emphasis thus was on “the central role that teachers play in the assessment process” (Inbar-Lourie, 2017, p. 259). Successful interactions, however, always depend on both sides, the teachers’ and the learners’, or the assessors’ and the assessees’ literacy (cf. Price et al., 2010, p. 288). In that regard, Winstone and colleagues (Carless & Winstone, 2020; Nash & Winstone, 2017) emphasize the importance of responsibility-sharing in the feedback process. Teachers, for instance, are responsible for designing learning environments which are conducive to feedback exchanges, while students proactively contribute to this process by requesting, creating and utilizing feedback (Carless & Winstone, 2020, p. 2). What is more, also other stakeholders’ understanding of assessment plays an essential role, e.g. by policymakers, researchers, test designers, educational institutions at the mesoand macro-levels, but also by the general public and the learners’ parents (cf. Taylor, 2013, p. 409). Hence, assessment literacy needs to be seen within a larger social and political framework (see Fulcher, 2012; cf. Chong, 2021). 
A broader definition of assessment literacy was therefore suggested by Popham, who conceptualized it as “an individual’s understanding of the fundamental assessment concepts and procedures deemed likely to influence educational decisions” (2011, p. 267, quoted in Gebril, Boraie, & Arrigoni, 2018, p. 1). In that regard, Fulcher (2012) derived a tripartite model of contexts, principles and practices:
(1) Contexts: historical, social, political and philosophical frameworks,
(2) Principles: processes, principles and concepts as guidance for practice,
(3) Practices: knowledge, skills and abilities applied in assessment practices.

Thus, assessment literacy is not only the “what” or knowledge about assessment, but also the “why” (principles) and the “how to” (skills) to apply the knowledge (cf. Davies, 2008, p. 335, and Inbar-Lourie, 2008, cited by Harding & Kremmel, 2016, p. 418). Crucially, any understanding of assessment literacy has to start with the learners’ perspective. Indeed, it is a paramount task for teachers to develop their students’ feedback literacy in order to enable multidirectional feedback dialogues and foster their self-regulated learning (cf. Carless & Boud, 2018; Carless, Salter, Yang, & Lam, 2011, p. 397). For this reason, Carless (2020, p. 3, drawing on Carless & Winstone, 2020) expanded the definition of teacher feedback literacy by stressing teachers’ responsibility in developing learners’ feedback literacy:

Teacher feedback literacy comprises design, relational and pragmatic aspects in that educators need to design feedback processes for student uptake, be sensitive to the interpersonal aspects of feedback exchanges and manage pragmatic compromises, such as workload implications.

Learners, and hence learner feedback literacy, are therefore at the heart of modern conceptualizations of feedback (Carless, 2020). They are seen as active agents in the entire feedback process instead of recipients of feedback information only (Boud & Molloy, 2013, p. 702; Carless, 2020; Carless & Boud, 2018; Molloy, Boud, & Henderson, 2020, p. 528; Winstone & Carless, 2020). The notion of learner feedback literacy foregrounds the knowledge, dispositions (attitudes and willingness) as well as the capacities learners need to solicit, produce, understand and use feedback (Carless & Boud, 2018, p. 1316). In that respect, Carless and Boud (2018) proposed four dimensions:
(1) Appreciating feedback: learners recognize the value and importance of feedback and understand their own active role in the feedback process,
(2) Making judgments: learners are able to evaluate the quality of their own and others’ work,
(3) Managing affect: learners are open towards feedback, proactively engage in trustful feedback exchanges and can deal with negative emotions that may arise from critical feedback,
(4) Taking actions: learners act upon the feedback, clarify understandings and develop strategies to regulate their further learning.

In short, student feedback literacy requires learners to actively seek, generate, understand and utilize feedback information in order to improve their learning (cf. Carless, 2020; Molloy et al., 2020; Winstone & Carless, 2020). Some strategies for seeking, exchanging and utilizing feedback will be suggested in the next paragraphs. For this, adequate preparation, guidance and sufficient opportunities for practice are crucial.
2.2.1 Preparation

As a preparatory condition, the purpose of feedback should become clear to the learners (Price et al., 2010, p. 278). For instance, students may have previously seen feedback as error correction or a “justification of the grade” they received (Price et al., 2010, p. 283) instead of part of formative assessment that should help them improve in the future. In that respect, teachers should strive to create an atmosphere in which it is okay to make errors and mistakes (Bowen & Ellis, 2015; Hatzipanagos & Rochon, 2010, p. 491). These are then handled in a constructive manner so that the learners will not only know what to improve and how to do so, but also be motivated to do so (Bowen & Ellis, 2015).

Second, learners need to recognize the active role that they will play in the feedback process (Evans, 2013, p. 79; Nash & Winstone, 2017). This already starts with the identification and communication of their own learning needs as well as their understandings of assessment standards and criteria (Molloy et al., 2020, p. 530; cf. Nicol, 2010, p. 504). In that regard, it is argued that co-constructing the criteria together with the learners is beneficial (cf. e.g. Hattie & Clarke, 2019, pp. 13, 49). It can help them gain a better understanding of the underlying concepts and might result in the creation of a learner-friendly rubric (Brookhart, 2017, p. 91). It can then be utilized as a checklist for later assignments (cf. Cartney, 2010, p. 556) or as part of self-assessment or peer feedback. Assessors may also choose to check students’ understanding of the learning goals and assessment criteria, e.g. by conducting a quiz (see Lee & Ho, 2021, pp. 130-131, who utilized Google Forms for this purpose; see also section 3.10 on live polls).

Third, a technological familiarization might be necessary. Learners should be familiarized with the particularities of different (digital) feedback methods and the tool features of the specific applications that are used (cf. e.g. Zaky, 2021, p. 82). For this, chapter 3 lays important foundations. For instance, there are differences between asynchronous and synchronous feedback exchanges (see section 2.4.2), but also between particular types of written, oral or multimodal feedback. Teachers should coach the students in feedback provision by showing examples and modelling the procedure, for example when annotating electronic documents (section 3.1) or creating screencast feedback (section 3.13). They might also provide manuals or guidelines that would ease the implementation for the students, for instance by uploading them onto the LMS (cf. Gedera, 2012, pp. 22-23). Afterwards, learners should be given the chance to try them out. As with any tool that is utilized for the first time for a particular purpose, it would be important to reflect on it together with peers or in the whole class (Zaky, 2021, p. 83). These initial steps have been termed “training” (Sun & Doman, 2018, p. 4, for peer review) or “preparatory guidance” (Beaumont, O’Doherty, & Shannon, 2011, p. 674, for feedback in general) by some scholars. This phase may also include analyses of good (and poor) work samples (e.g. from previous years) and discussions about model answers (Beaumont et al., 2011, pp. 674-675; Carless & Boud, 2018, pp. 1320-1321; cf. Hattie & Clarke, 2019, pp. 31, 49, 53, 67; see also Evans, 2013, p. 79).
Hattie and Clarke (2019) add that the modeling extends to teachers “being active recipients and listeners to feedback about their impact” (p. 171). At the same time, this means that students transform into feedback providers as well as active seekers of feedback information instead of passive recipients (Molloy et al., 2020, pp. 531, 533; Winstone & Carless, 2020).

2.2.2 Seeking feedback

To seek feedback, learners could formulate a “feedback request” (Winstone & Carless, 2020, p. 110; Wood, 2021, p. 11, citing Jönsson & Panadero, 2017), e.g. by writing a cover letter that they attach to their assignment. Therein, they delineate the areas for which they would like to receive feedback (Killingback, Ahmed, & Williams, 2019, p. 37; Stannard, 2019, p. 68). To scaffold the formulation of the feedback request for the learners, teachers might provide language prompts and guiding questions. For example, students could be asked to answer the following questions on the cover page of their assignment (adapted from Winstone & Carless, 2020, pp. 108, 110):
(1) What are the areas I feel confident about in my assignment? (What do you think are the strongest aspects of your assignment?)
(2) What are the areas I am unsure of in my assignment? (What areas of your assignment do you think need to be improved?)
(3) In particular, I would like feedback on (list up to three specific areas): …

With the last question, they indicate “a preference for the kinds of feedback they would like” to obtain (Elbow & Sorcinelli, 2006, cited by Nicol, 2010, pp. 507-508). Typically, this refers to the feedback focus (specific assessment criteria, e.g. if they felt insecure about grammar; see section 2.1.2), but they may likewise specify their preferred feedback modality (audio, video, written etc.) for feedback provision (cf. Carless & Winstone, 2020, p. 9). Moreover, the feedback request can be created in a modality of their choice, e.g. in written or video format (McDowell, 2020b, p. 21). At any rate, these questions help the learners to perform a self-evaluation of their task solution and thus to engage with the assessment criteria and learning objectives more deeply (Ajjawi & Boud, 2015, p. 256; 2018, pp. 1108, 1116; Barton, Schofield, McAleer, & Ajjawi, 2016, p. 7). Furthermore, they learn to initiate a feedback dialogue (Winstone & Carless, 2020, pp. 97, 113) to which the assessors would respond (Ajjawi & Boud, 2015, pp. 256-257; Barton et al., 2016, p. 10). Alternatively, a modified version of the so-called K-W-L charts (“What I know”/“What I want to know”/“What I’ve learned”) could be adapted for feedback purposes (Khan, 2018, p. 5). The first section, “What I know”, helps to determine the current knowledge. As Hattie and Clarke (2019) pointed out, “[p]rior knowledge is the starting point for feedback” (p. 171), and thus, an accurate diagnosis of learners’ current knowledge, skills and understanding is crucial. It helps assessors to tailor the feedback message in a way that assists learners in “closing the gap between their current and desired learning” (Hattie & Clarke, 2019, p. 7). Likewise, it can function as a self-assessment for the learners (cf. Gigante, Dell, & Sharkey, 2011, p. 206). The second part, “What I want to know”, would be the feedback request proper, whereas the third section, “What I’ve learned”, would be learners’ response to the feedback they had received.
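In digital environments, such a feedback request can travel with the submission as structured data, e.g. collected via a short online form in the LMS. The following Python sketch mirrors the three cover-sheet questions above; the field names and the modality option are illustrative:

from dataclasses import dataclass, field

@dataclass
class FeedbackRequest:
    """A learner's cover note attached to a submission."""
    confident_about: list = field(default_factory=list)     # question (1)
    unsure_about: list = field(default_factory=list)        # question (2)
    feedback_wanted_on: list = field(default_factory=list)  # question (3), up to three areas
    preferred_modality: str = "written"  # e.g. "audio", "video", "screencast"

request = FeedbackRequest(
    confident_about=["line of argumentation"],
    unsure_about=["tense use", "cohesion"],
    feedback_wanted_on=["grammar", "cohesion"],
    preferred_modality="screencast",
)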
In addition, language prompts can fulfill important functions, not only for seeking feedback, but also for providing it (cf. section 2.1.3; see Min, 2005, pp. 306-307, and Sun & Doman, 2018, p. 5, for peer review). To practice this procedure, peer feedback approaches have been found valuable.

2.2.3 Exchanging feedback

Whenever possible or appropriate, a “peer-then-teacher approach” for exchanging feedback is recommended (Neumann & Kopcha, 2019). This means that learners are given the chance to engage in peer feedback practices before obtaining feedback from their teacher. One major reason is that “teacher’s comments can be seen as ‘the final solution’ which can dampen the willingness of the students to comment further” (Hedin, 2012, p. 2). This does not mean that teachers should refrain from contributing to peer exchanges; rather, their role is one of a mediator or facilitator for triggering, accelerating or deepening the peer conversation, as will be explained further below.

Defining and implementing peer feedback: Peer feedback means that learners “take the reciprocal roles of being a provider and a receiver of comments” (Zhu & Carless, 2018, p. 883; cf. Nicol, 2010, p. 511; Nicol et al., 2014, p. 102; Silva, 2017, p. 329). They actively serve as critics of each other’s work with the aim of mutually supporting each other in the learning process. Its impact on performance has varied in prior research, though (cf. Evans, 2013, p. 92). Based on a review of the previous literature, Figure 5 summarizes several benefits and challenges of peer feedback (Brereton & Dunne, 2016; Cartney, 2010; Chang, 2016; Cho & MacArthur, 2010; Demirbilek, 2015; Ekahitanond, 2013; Hyland & Hyland, 2006; Nicol et al., 2014; Kemp et al., 2019; Sadler, 1989; 2010; Sun & Doman, 2018; Zhu & Carless, 2018).

Cognitive
﹢ Benefits: developing an understanding of high-quality work by learning from others’ perspectives and solutions (good examples and flaws), from others’ feedback on others’ errors and from the comments they receive; improving feedback skills by taking others’ feedback as a model and by developing communication and explanation skills
﹣ Challenges: the learning gain depends on peers’ skills in error detection and in feedback provision, e.g. when the comments are vague, overly critical, overly positive, superficial (local errors) or selective (dependent on peers’ confidence/areas of expertise)

Metacognitive
﹢ Benefits: developing reflection skills, critical thinking and self-regulated learning; gaining a deepened understanding of assessment criteria to monitor progress

Affective
﹢ Benefits: feeling solidarity with peers; accepting critiques more constructively; positive feedback can boost learners’ motivation and confidence in their skills
﹣ Challenges: feeling embarrassed or unwilling to share work; being afraid of reluctance, resentment or even humiliation; feeling uncomfortable with (providing and receiving) negative feedback; being afraid of making direct edits to others’ work; feeling not competent enough to evaluate others’ work

Social
﹢ Benefits: learning to take responsibility for one’s own and others’ learning (mutual benefit); scaffolding each other’s process from their own experience as learners (teacher comments are too abstract); constructing and negotiating meaning collaboratively, thus developing collaboration and communication skills; creating a sense of a learning community, also for future work
﹣ Challenges: with anonymous peer feedback, discarding comments as irrelevant (no expert), feeling personally attacked (friends) or finding the situation awkward (no possibilities for interaction and clarification); with non-anonymous peer feedback, being worried about damaging or losing face and therefore afraid of making critical comments

Figure 5: Benefits and challenges of peer feedback

To pre-empt possible challenges, sufficient preparatory work, adequate training and guidance as well as regular practice are indispensable (Sadler, 1989, p. 143; see also Hattie & Clarke, 2019, pp. 97, 106, 121; Min, 2005; Sun & Doman, 2018; Winstone & Carless, 2020, pp. 136-146). In that respect, Sun and Doman (2018, p. 4) suggested a tripartite structure of (1) training, (2) modelling/scaffolding, and (3) practicing. The main steps are thus highly similar to the general procedure, but also show some particularities.

First, in (1) the training phase, teachers make learners aware of the specific purpose and relevance of peer assessment (Sun & Doman, 2018, p. 4; see also Svinicki, 2001, p. 22). Moreover, it is important to create a trustful atmosphere (Hattie, 2009, p. 247; Hattie & Clarke, 2019, pp. 4, 12, 29-30) so that the peer assessment process is not impaired by feelings of fear, reluctance, resentment or even humiliation (Cartney, 2010, p. 556; Sadler, 1989, p. 140). To achieve this, learners should be allowed to voice their concerns, e.g. that they feel embarrassed to share their work or that they are afraid of the comments their peers might make (Hattie, 2012, p. 131; cf. Berg, 1999, p. 239; Brereton & Dunne, 2016, p. 29412; Cartney, 2010, p. 555). Likewise, teachers could report on their own professional peer review experience (Berg, 1999, pp. 238-239) and the challenges that they encountered (Winstone & Carless, 2020, p. 137).

Second, in (2) the scaffolding/modeling phase, teachers illustrate the process of peer review to the learners (Sun & Doman, 2018, p. 5; cf. e.g. Chang, 2012, p. 67; Rochera, Engel, & Coll, 2021, p. 4). In that regard, they “provide a model for students to follow in terms of both ‘what to comment on’ (content) and ‘how to comment’ (language)” (Xu & Carless, 2016, p. 1089; cf. Dippold, 2009, p. 33).
Particularly in foreign language learning settings, a handout with language suggestions for providing feedback and conducting feedback dialogues would be valuable (cf. Berg, 1999, pp. 223-224; Sun & Doman, 2018, p. 4; see Min, 2005, pp. 306-307, as well as the adapted example by Sun & Doman, 2018, p. 5; see Watling & Lingard, 2019, for further examples and Kerr & McLaughlin, 2008, p. 12, for a sample script). Finally, in (3) the practicing phase, learners have the opportunity to apply the procedure themselves (Sun & Doman, 2018, p. 5). It may start as a whole-class or small-group activity to alleviate some initial anxieties or concerns before individual work begins (Berg, 1999, p. 239; Svinicki, 2001, p. 22). This process is assisted by the teacher (Berg, 1999, p. 239) and the guidelines that had been agreed on beforehand (Sun & Doman, 2018, pp. 5-6).

Teachers as moderators of the peer feedback activity: Indeed, teachers play multifaceted roles as moderators of the peer feedback interactions (Dippold, 2009, p. 32; Xie, Ke, & Sharma, 2008, pp. 23-24). In that regard, Lee and Ho (2021, pp. 130-131) differentiated between pre-, while- and post-phases of teacher support during peer review. As part of the preparatory phase, teachers may choose to assign one (Hernandez, Amarles, & Raymundo, 2017, p. 110; Huang, 2016, p. 40) or several partners (Sayed, 2010, p. 60) to the individual students. Many LMS allow for the random allocation to groups for peer review activities, but teachers can likewise do this manually (Demirbilek, 2015, p. 214) or let the students choose their partners (Gedera, 2012, p. 22). For anonymous feedback, teachers and learners need to check whether anonymity is preserved throughout the review process. If peers are anxious about editing others’ work directly, they could resort to more indirect ways, e.g. inserting comments, depending on the application that is used (Ma, 2020, p. 202). Generally, teachers need to clarify the organization of the review activity (Rochera et al., 2021, p. 4). For instance, they might want to specify a minimum number of comments that everybody should contribute (Lee & Ho, 2021, p. 126) in order to involve all students actively. Moreover, it seems advisable to set a deadline, not only for the peer comments, but also for the replies and revisions. For this, the calendar function of some LMS may prove useful (see e.g. Blackboard Help, 2020).
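Where an LMS does not offer random allocation, reciprocal review partners can also be generated with a few lines of code. The following platform-independent Python sketch is merely illustrative; it arranges the students in a shuffled circle so that nobody reviews their own work and everyone gives and receives the same number of reviews:

import random

def assign_reviewers(students, n_reviewers=1):
    """Map each student to n_reviewers peers; requires
    n_reviewers < len(students) so that nobody reviews themselves."""
    order = list(students)       # copy so the input list stays untouched
    random.shuffle(order)
    k = len(order)
    return {order[i]: [order[(i + j) % k] for j in range(1, n_reviewers + 1)]
            for i in range(k)}

pairs = assign_reviewers(["Ana", "Ben", "Chen", "Dara"], n_reviewers=2)
# e.g. {'Ben': ['Dara', 'Ana'], 'Dara': ['Ana', 'Chen'], ...}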
For collaborative tasks (such as wiki writing; see section 3.5), it might make sense to start with intra-group feedback before moving on to inter-group feedback (e.g. Kemp et al., 2019; Kitchakarn, 2013, p. 158; Lin & Yang, 2011; Ma, 2020, p. 198). Intra-group feedback takes place between members of one specific group who create contents together, whereas inter-group feedback occurs between members of different groups who are nevertheless part of the same learning community (Ma, 2020, p. 198). To ease access to the digital peer contributions, teachers may create a list or site in the LMS in which all hyperlinks to the students’ documents are collected (Kemp et al., 2019, p. 151). They could also open a forum or chat in which the learners can ask questions during the peer review (cf. Kemp et al., 2019, p. 151). After these preparatory phases, the peer review would start and the teachers would monitor students’ progress and challenges (Kemp et al., 2019, p. 152) and prompt or provide further feedback as needed. At an early stage (“acknowledge and encourage”), the teachers should briefly respond to the first posts (Association of College and University Educators [ACUE], 2020). This way, they can easily demonstrate their teaching presence and stimulate further contributions (ACUE, 2020). Moreover, teachers are usually able to quickly identify those who contribute actively and those who do not (Kemp et al., 2019, pp. 151-152). Consequently, they may encourage the latter to participate (e.g. Çiftçi, 2009, p. 22; Lin & Yang, 2011, p. 94), e.g. by addressing them directly and by providing specific advice. For instance, some peers might be inclined to leave casual or superficial comments rather than informative and critical remarks (Wu, 2006, pp. 132-133; Xie et al., 2008, p. 23). In collaborative feedback environments, this may even lead to an adverse chain effect: if a peer did not thoroughly engage in critical thinking, their partner would not do so either (Xie et al., 2008, p. 23). To avoid this, teachers should “prompt and confirm deeper engagement and thinking” (ACUE, 2020). To exemplify, they could reply to individual student posts and invite further responses by dropping specific questions and suggestions (ACUE, 2020; cf. Hernandez et al., 2017, p. 110). They might also ask the learners to identify similarities and differences between the solutions that others have uploaded (if multiple peers are involved) and their own work. Whenever the teachers notice certain difficulties, they should try to assist the students in finding a solution on their own or otherwise clarify important concepts and questions (ACUE, 2020). They might also summarize recurring aspects or challenges and make them available to the whole class (cf. ACUE, 2020).

Teacher feedback after peer feedback: Overall, it becomes clear that teachers take on a key role in scaffolding the peer review activity (Woo, Chu, & Li, 2013, p. 302). At the same time, teachers are a central source of feedback for the learners and “provide expertise” by confirming students’ thinking or clarifying misconceptions (ACUE, 2020). Hence, peer review should rather be regarded as a complement to teacher feedback, not as its replacement (Chang, 2016, p. 86; Hyland & Hyland, 2006, p. 91; cf. Hattie, 2009, p. 187).

2.2.4 Utilizing feedback

Beyond seeking and providing feedback, learners must be able to process and utilize feedback information (Molloy et al., 2020, pp. 531, 535; Winstone & Carless, 2020, p. 6). This presupposes the ability to understand the feedback message (Sadler, 2010, p. 535). In the end, “[f]eedback can only be effective when the learner understands the feedback and is willing and able to act on it” (Price et al., 2010, p. 279; cf. Carless, 2020; Hartung, 2017, p. 202; Hattie & Clarke, 2019, pp. 51, 174; Nash & Winstone, 2017, p. 8). However, learners should not blindly follow the suggestions, but actively reflect on them to arrive at their own conclusions (Molloy et al., 2020, p. 532; cf. Lee & Ho, 2021, p. 131). This “mindful processing” (Winstone & Carless, 2020, p. 42) might require them to ask for further clarification if they are not sure about their interpretation of the feedback message (Molloy et al., 2020, p. 532). Active processing hence involves self-reflection and further communication with the feedback providers (Jug et al., 2019, p. 248), but also decision-making regarding the implementation of the feedback (Winstone & Carless, 2020, p. 42).
For this final step, learners need sufficient opportunities to actually use the feedback information (Gigante et al., 2011, pp. 206-207; Winstone & Carless, 2020, p. 46) as well as “mechanisms […] to show how they had applied feedback” (Price et al., 2010, p. 282). One common possibility is to ask learners to revise their work by implementing the comments they have received. The revision could then be accompanied by a letter (or any other format) in which the students explain the changes they have made or the reasons for not adopting them (Ferris, 1997, p. 331; Winstone & Carless, 2020, p. 84). They could also reconsult the self-evaluation that they had submitted together with their assignment and feedback request (see section 2.2.2). Alternatively, the learners might be given a concrete task or question for reflection that they should answer after the reception of feedback. To scaffold the procedure, some of the following guiding questions can be used (questions 1-5 adapted from Barton et al., 2016, p. 5; questions 3a, 3b and 6 adapted from Boaler, 2016, cited in Hattie & Clarke, 2019, p. 134):
(1) To what extent does the teacher/peer feedback correspond to your self-evaluation?
(2) What is your (main) learning gain from the feedback?
(3) Are there any aspects you were not aware of beforehand?
a. What new words/terms/concepts/strategies/… have you learned when solving the task or when engaging with the feedback? Please give a concrete example for illustration and explain it in your own words.
b. Tell me about a mistake or misconception that you have become aware of. What did you learn from it?
(4) What actions will you take in response to the feedback?
(5) What, if anything, is unclear about the feedback? These are the open questions that I want to pose to my teacher/peer: …
(6) If you don’t have a question, how could you apply the feedback to a similar task?

In digital settings, the revisions might become directly visible to everyone if the “track changes” function is activated in a document (see section 3.1). The learners could also use a checklist or assessment rubric to highlight the points they have worked on. Care must be taken, though, that the rubric does not lead learners to think of feedback as “a list of things to be done (ticked off)” (Nicol & Macfarlane-Dick, 2006, p. 209, with reference to Sadler, 1983). Instead, they might be asked to summarize the feedback in their own words (Jug et al., 2019, p. 246) and to exemplify their learning gain from the feedback (cf. Nicol, 2010, p. 508). This can also be done in very simple ways, e.g. by writing a “one-minute paper” (Svinicki, 2001, p. 19) or by assessing the usefulness of the feedback in brief comments on (colored) slips of paper in the face-to-face classroom (Vogt & Froehlich, 2018, p. 140). In digital environments, learners could insert comments or sticky notes in a cloud document (see section 3.3) or respond to quick live polls that inquire into their engagement with the feedback (see section 3.10). Moreover, a chat tool can be utilized to enable quick exchanges, for instance as part of the Google Docs interface (Wood, 2021, p. 5; see section 3.7). Asynchronously, the students might record an audio, video or screencast file in which they demonstrate their understanding of and response to the feedback (e.g. the “Video Feedback Loop” by McDowell, 2020b, p. 21; 2020c, p. 131; see sections 3.11, 3.12 and 3.13).
In a more elaborate manner, the learners could develop an action plan for their further learning path based on the feedback they have obtained (Hartung, 2017, p. 209, citing Beaumont et al., 2011, pp. 675, 677; Molloy et al., 2020, p. 534). Alternatively, they may discuss the feedback contents in small groups and develop an action plan together (Nicol, 2010, p. 508). Digital applications, such as Google Keep, could be utilized by the learners in order to note down their next steps as a kind of checklist, mindmap or comprehensive revision plan (Rottermond & Gabrion, 2021, p. 41; cf. section 3.3). Likewise, e-portfolio tools can be helpful for learners to assemble, organize and compare the feedback they have received, to reflect on it and carve out plans for their future learning (cf. Winstone & Carless, 2020; see section 3.15). These plans can be exchanged and discussed with the teacher or with peers, either in blended-learning settings or virtual environments, e.g. by means of videoconferencing and screensharing (see section 3.14). At the same time, the e-portfolio could be considered as a kind of “longitudinal feedback journal” (Ajjawi & Boud, 2018, p. 1108) that serves as a log of their learning journey. There are thus various ways in which “digital technologies can be utilised to mediate feedback related dialogues that may enhance feedback uptake and literacy processes” (Wood, 2021, p. 3), as the present book will show. Clearly, teachers (or peer assessors) need to respond to the learners’ application of the feedback, which may in turn trigger further feedback loops and feedback exchanges. Hence, this final phase may likewise lead to a new “Dialogic Feedback Cycle” (Beaumont et al., 2011, pp. 674-675), as visualized in Figure 6.

Figure 6: Dialogic feedback cycle (stages: feedback is requested or actively sought; constructive feedback is provided; discussion about whether feedback is understood; feedback is applied whenever it is considered useful; participants are encouraged to engage in further dialogue)

In sum, all the steps and competencies that have been outlined so far contribute to a reconceptualization of “assessment […] as an active process done with rather than to students” (Cartney, 2010, p. 556, original emphasis). For digital or technology-enriched environments, some additional skills are relevant, which will be reviewed next.

2.3 Digital literacies

Undoubtedly, digitalization affects almost all areas of life and its importance is likely to increase in the foreseeable future (Redecker & Punie, 2017, p. 12). Despite this, many educational institutions seize only a small fraction of the possibilities afforded by digital technologies. This does not necessarily have to do with a limited availability of technological devices and programs, but with attitudinal aspects (cf. Blume, 2020) as well as insufficient digital competencies of the various stakeholders (cf. Kluzer & Pujol Priego, 2018, p. 46; König, Jäger-Biela, & Glutsch, 2020; Monitor Lehrerbildung, 2021, p. 2).
At the outset, it should be noted that the literature shows terminological inconsistencies regarding the use of terms such as technological competencies, media competencies and digital competencies as well as “Internet literacy, ICT [Information and Communication Technology] literacy, media literacy and information literacy” (Ferrari, 2012, p. 16). Current frameworks often refer to “digital literacies” as a cover term, encapsulating many of the aforementioned aspects. This notion and relevant frameworks will be discussed in the present section.

2.3.1 Defining digital literacies

Clearly, becoming digitally literate is a very complex and continuous endeavor. Digital technologies keep on changing and afford new types of interaction and communication as well as of knowledge representation, access and creation (Ware, Kern, & Warschauer, 2016, p. 307). Texts can become enriched by multimedia elements, which, in turn, may lead to the development of new genres and discourse structures as well as language forms (Ware et al., 2016, p. 307). These developments necessitate new conceptualizations of literacy that go beyond the reading and writing of paper-based texts (Ware et al., 2016, p. 307). Ware et al. (2016) defined digital literacies as the “reading and writing on electronic devices and the Internet, [which] broadly includes the knowledge, skills, and practices that people engage with when they read and write in electronic environments” (p. 307). The term “literacies” in the plural thereby stresses the multiplicity of forms and purposes that reading and writing can have in the digital age (Ware et al., 2016, p. 307). Crucially, however, digital literacies nowadays extend beyond the written mode and capture oral and multimodal ways of expression as well (The New London Group, 1996, pp. 60-61). Various semiotic resources and technological tools can be utilized to create products which carry “traces of [a] multivocal and multimodal remixing of signs” (Ware et al., 2016, p. 310). Thus, not only linguistic competence is important, but “symbolic competence with multiple semiotic modes” (Ware et al., 2016, p. 313) in a context-sensitive manner (p. 322). Hence, literacy was increasingly seen as “multiple, dynamic, dialogic, and situated” (Ware et al., 2016, p. 308) and the importance of multiliteracies was stressed (The New London Group, 1996). The pedagogy of multiliteracies is based on the concept of “design”, i.e. people’s “adaptive meaning-making capacity” (Nelson & Kern, 2012, p. 53). To cite Kress (2004), “design asks, what is needed now, in this one situation, with this configuration of purposes, aims, audience, and with these resources, and given my interests in this situation” (p. 49, original emphasis). Overall, The New London Group (1996) differentiated between six design elements (pp. 78-80, 83, examples slightly adapted):

(1) Linguistic (e.g. grammar, vocabulary, use of stylistic devices, information structures as well as local and global coherence patterns),
(2) Visual (e.g. images, illustrations, colors, video, page layouts and screen formats, including perspective, foregrounding and backgrounding etc.),
(3) Audio (e.g. music, voice, sound effects),
(4) Gestural (e.g. body language, facial expressions, sensuality, feelings and affects),
(5) Spatial (e.g. geographic meanings, environmental space),
(6) Multimodal (setting the previous five design elements into dynamic relations).
The latter point, multimodality, refers to the interconnections between the previous ones, “the social practice of making meaning by combining multiple semiotic resources” (Siegel, 2012, p. 671). Certainly, not only the creation, but also the interpretation of multimodal design requires various competencies from teachers and learners. To facilitate understanding, Mayer and colleagues suggested several multimedia principles. Based on Paivio’s (1986) dual-coding theory, they assumed that two separate channels are at work in the human mind: a verbal/auditory one and a visual/pictorial one (Mayer, 1997, p. 4; 2002, p. 27; 2005a, p. 33; 2006, pp. 3, 41, 44, 46-48; Mayer & Moreno, 2003, p. 44). As each channel only has limited capacities for processing information (Mayer, 2006, pp. 48-50), there is a danger of cognitive overload. For instance, if learners need to pay visual attention to reading an on-screen text and viewing an animation at the same time, they may not know where to focus their attention (Mayer & Moreno, 2003, pp. 45-46; cf. 1998, p. 313; Mayer, 2006, pp. 139-140). This phenomenon has been termed the “split-attention effect” by Sweller (1999, cited by Mayer & Moreno, 2003, p. 45). To prevent this, the words could be presented as audio narration instead (Mayer, 2006, p. 146; Mayer & Moreno, 1998, pp. 312, 318; 2003, p. 46).

Even though research in that field has not yet focused on the particularities of feedback, some of their principles might nevertheless be relevant. For example, the so-called “modality principle” emphasizes the usefulness of presenting the information in different modalities (Mayer, 2002, p. 28; 2006, pp. 134-146). In the ideal case, there will be a balance and complementary relationship between visual and auditory information, “produc[ing] an effect that is more than the sum of its parts” (Einstabland & Letnes, 2010, p. 11, cited by Mathisen, 2012, p. 106; see also Vincelette & Bostic, 2013, p. 271). Furthermore, redundant and irrelevant information should be excluded from any multimedia presentation, as captured by the “redundancy principle” (Mayer, 2006, pp. 147-160). Related to that, the “coherence principle” asks for “[a] concise presentation [which] allows the learner to build a coherent mental representation - that is, to focus on the key elements and mentally organize them in a way that makes sense” (Mayer, 2006, p. 133). This can also be supported by “spatial contiguity” (cf. Mayer, 2006, pp. 81-95) and “temporal contiguity” (pp. 96-112). These principles state that visuals should be presented in close spatial and temporal proximity to the corresponding words. Hence, a careful coordination or synchronization of visual and auditory information is important (cf. Stannard & Sallı, 2019, p. 461). Furthermore, it would be helpful to present contents in small chunks, as is stressed by the “segmenting principle” (Mayer, 2005d, p. 6). In that respect, organizational cues serve to emphasize the structure of, for example, video contents in general (Brame, 2016, p. 2) or of problem-solving paths in particular (De Koning, Tabbers, Rikers, & Paas, 2009, p. 120). Finally, personalization is crucial, as captured by the “personalization principle” (Mayer, 2002, p. 29; cf. 2005b, pp. 6-7; 2005c, p. 201). The learners should get the impression that the material has specifically been designed for them and their needs (Brame, 2016, p. 4).
At the same time, this may build and enhance social relationships, which is also essential for successful feedback interactions (cf. e.g. Chong, 2021).

2.3.2 Developing digital literacies

Similar to feedback literacies (see section 2.2), teachers are seen as enablers of learners’ digital literacies. Hence, in order to develop learners’ digital literacies, educators need to be digitally literate themselves (Redecker & Punie, 2017, p. 4). Several frameworks have therefore been suggested that aim at building teachers’ digital competence or digital literacy. Three of them will be presented in the current section: TPACK, UNESCO ICT and DigCompEdu.

The abbreviation TPACK stands for Technological, Pedagogical, And Content Knowledge, i.e. the combination of subject content, pedagogical as well as technological knowledge that forms “an integrated whole, a ‘Total PACKage’” (Thompson & Mishra, 2007, p. 38). It is a framework for developing teachers’ knowledge about the effective and integrated use of technology (Mishra & Koehler, 2006). In that regard, it outlines seven interrelated dimensions (see Koehler, Mishra, & Cain, 2013, pp. 14-17; Mishra & Koehler, 2006, pp. 1026-1030):

(1) Content Knowledge (CK): knowledge about the subject matter,
(2) Pedagogical Knowledge (PK): knowledge about teaching and assessment methods and processes,
(3) Technology Knowledge (TK): knowledge about the different types of technologies available that evolve and shift continuously,
(4) Pedagogical Content Knowledge (PCK): knowledge about teaching processes for particular content areas,
(5) Technological Content Knowledge (TCK): knowledge about technology for specific contents,
(6) Technological Pedagogical Knowledge (TPK): knowledge about the influence, affordances and constraints of various technologies in teaching,
(7) Technological Pedagogical Content Knowledge (TPACK): knowledge about the integrated use of various technologies for the teaching of different content areas.

The framework stresses the importance of a context-sensitive technology integration. Contextual variables include, for instance, institutional conditions, sociocultural factors, time constraints, the learners’ as well as teachers’ competencies and the available equipment (cf. e.g. Harris & Hofer, 2011, pp. 213, 222, 224, 225; Mishra & Koehler, 2006, p. 1029). Therefore, it is necessary to situate knowledge of technology within content and pedagogical knowledge instead of treating technology separately from or as an add-on to content and pedagogy (Mishra & Koehler, 2006, pp. 1024-1025).

While TPACK uses the general term “technology”, the UNESCO ICT Framework for Teachers foregrounds “Information and Communication Technologies” (ICT) (UNESCO, 2018). It is conceptualized as a guide for teacher trainers and educators who aim to foster pre- and in-service teachers’ continuous professional development with regard to technology integration (UNESCO, 2018, pp. 8, 11). It offers a comprehensive set of 18 competencies that are organized into three successive stages of (1) knowledge acquisition, (2) knowledge deepening and (3) knowledge creation. They span six areas of teachers’ professional practice (UNESCO, 2018, pp. 8-10):

(1) Understanding ICT in Education Policy,
(2) Curriculum and Assessment,
(3) Pedagogy,
(4) Application of Digital Skills,
(5) Organization and Administration,
(6) Teacher Professional Learning.
For all aspects, it provides detailed descriptions, performance indicators and sample activities. In the end, the aim is to empower teachers and students to use ICT in a competent, inclusive, innovative and responsible manner in line with the 2030 Agenda for Sustainable Development (UNESCO, 2018, p. 7).

The third framework that is reviewed here is of particular relevance for educators in Europe. DigCompEdu is the short form of the “European Framework for the Digital Competence of Educators” (Redecker & Punie, 2017, pp. 4, 8). It addresses educators at all levels of education in formal and non-formal learning contexts. In analogy to the Common European Framework of Reference for Languages (CEFR), a progression model is proposed that comprises six stages. For educators, these are the following: A1: Newcomer, A2: Explorer, B1: Integrator, B2: Expert, C1: Leader, C2: Pioneer (Redecker & Punie, 2017, pp. 9, 27-31). The different stages were inspired by Bloom’s revised taxonomy of learning outcomes that range from “remembering” and “understanding” to “applying” and “analyzing” as well as to “evaluating” and “creating” (Anderson et al., 2001; see Redecker & Punie, 2017, p. 29). The framework specifies a set of 22 competencies that are grouped into six areas (Redecker & Punie, 2017, p. 8):

(1) Professional engagement: organizational communication, professional collaboration, reflective practice, digital continuous professional development,
(2) Digital resources: selecting, creating and modifying as well as managing, protecting and sharing digital resources,
(3) Teaching and learning: teaching, guidance, collaborative learning and self-regulated learning,
(4) Assessment: assessment strategies, analyzing evidence, feedback and planning,
(5) Empowering learners: accessibility and inclusion, differentiation and personalization, actively engaging learners,
(6) Facilitating learners’ digital competence: information and media literacy, communication, content creation, responsible use, problem solving.

For the present book, the fourth area is particularly relevant, as it deals with “digital technologies and strategies to enhance assessment” (Redecker & Punie, 2017, p. 16). This can either be accomplished by augmenting existing assessment strategies or by innovating new ones (Redecker & Punie, 2017, p. 21). With the available tools becoming ever more diversified, awareness of the distinct affordances as well as training in their functions would be crucial for teachers and learners (cf. Chang, 2016, p. 102). The following sections will therefore provide an overview of digital feedback methods before they are discussed in more detail in chapter 3.

2.3.3 Questions/Tasks (for teachers and students)

Knowledge, reflection and application:

(1) Choose one of the following tools for self-assessing your digital literacy:
a. The Digital Competence (DigComp) Wheel (available in different languages; offers a spider web visualization of one’s digital competencies): https://digital-competence.eu/dc/
b. DigCompEdu self-assessment tools for teachers:
■ https://digital-competence.eu/digcompedu/survey/qid-8430/ (analogous to the DigComp Wheel)
■ https://ec.europa.eu/eusurvey/pdf/pubsurvey/132740?lang=EN&unique= (currently PDF only; until early 2022, an interactive version was available as well)
■ https://joint-research-centre.ec.europa.eu/digcompedu/digcompedu-self-reflection-tools_en (newest version of the DigCompEdu self-reflection tool since early 2022)
c. Digitalskillsaccelerator.eu for enhancing students’ digital skills: https://www.digitalskillsaccelerator.eu/learning-portal/online-self-assessment-tool/
d. Ikanos test for citizens, students and teachers based on the DigComp framework: https://test.ikanos.eus/index.php/566697?lang=en
e. eLene4work (questionnaire for the evaluation of one’s soft skills, including digital skills; for project information see http://elene4work.eu/): http://sa.elene4work.eu/selfassessment.php
(2) Do the results mirror your own perceptions? What was surprising to you?
(3) What do you want to do next in order to increase your digital competence? With what area do you want to start?

Modification of the task for in-class use:

(1) Everyone in your course is assigned a number that corresponds to a particular self-assessment survey for digital competence. (You may additionally distinguish between the different educational institution types in the DigCompEdu survey.) Every student then completes the assigned survey on their own. Next, ask your students to do the following steps.
(2) Look at your survey results and reflect on them (see questions above).
(3) Then get together with the other students who completed the same survey. Talk about the survey and its results. Discuss the following questions:
a. To what extent do the survey results match your own perceptions?
b. What was new to you?
c. What are the strengths of the survey?
d. Do you feel that essential aspects/dimensions are missing from the survey? What would you add or change in the survey?
(4) Afterwards, new (jigsaw) groups will be formed so that the different surveys can be compared. You may start with small groups of about three persons before larger groups are formed. Discuss the above-mentioned questions and evaluate the surveys in terms of their usefulness and potential contexts for using them.

2.4 Digital feedback terminology and overview

Digital feedback is still a relatively young term and is used alongside many other related terms. It comprises manifold ways of providing and receiving learning-oriented comments by using technologies. Further common cover terms are “technology-enhanced” (e.g. Henderson & Phillips, 2014) and “technology-enabled” feedback (Winstone & Carless, 2020, p. 60). Sometimes, the tools that are used to create and receive feedback have given rise to alternative labels, such as computer-mediated feedback or mobile-assisted feedback. While these notions accentuate the devices that are utilized for feedback exchanges, the terms “online feedback” or “web-based feedback” foreground the virtual environment in which these feedback interactions take place. Moreover, the more general label e-feedback (electronic feedback) had long been in use before terminological preferences seemingly shifted to digital feedback. The aim of the present section is to categorize different digital feedback methods and to delimit the scope of the ensuing chapter. Before that, the readers are invited to reflect on their own practices.
Questions/Tasks before reading the section

Knowledge and reflection questions:

(1) In what ways have your feedback practices changed during the Covid-19 pandemic and the closures of educational institutions?
a. Have you switched to digital methods? Which ones?
b. Have you continued to use analog feedback methods (such as correcting learners’ written solutions in print or handwritten format via postal mail)?
(2) What feedback opportunities would you have liked to seize (but haven’t)?
(3) Why haven’t you seized them?
a. Was there a lack of equipment and technological infrastructure? (On your part? On the part of your learners? On both sides?)
b. Weren’t you sure about how to utilize the equipment?
c. Weren’t you sure about how to design feedback appropriately in digital ways (digital didactics)?
d. Did you feel that there wasn’t enough support from your school or colleagues or somebody else?
e. Weren’t you sure whether your students would be able to utilize the digital feedback?
f. Weren’t you sure about how to foster digital feedback skills among your students?
g. What else?

2.4.1 Technology-generated and technology-mediated digital feedback

A frequent distinction is the one between technology-generated and technology-mediated feedback. Technology-generated (or computer-generated) feedback is typically automated (Ene & Upton, 2018, p. 1), as it has been pre-configured by a software programmer. For instance, it comprises learning apps and games (cf. Budge, 2011, p. 343) as well as feedback given by humanoid robots (see Handke, 2020). Even though the feedback has previously been programmed by a human who has written codes and algorithms, it is commonly not considered as human-generated feedback because “teachers’ responsibility for giving feedback is reduced” (Sherafati, Largani, & Amini, 2020, p. 4592). By contrast, technology-mediated (or computer-mediated) feedback is classified as human-generated feedback (Ene & Upton, 2018, p. 1) since it requires personal involvement beyond or aside from programming. The tools that can be used for such purposes demand more or less sophisticated technological competences from the users, i.e. teachers and learners, as will be outlined in chapter 3 (cf. e.g. Ali, 2016, p. 108; Ghosn-Chelala & Al-Chibani, 2018, p. 149). What is more, the technologies and media to convey feedback have diversified and changed over the past decades, from feedback via cassette tapes, CD-ROMs, TV and computers to smartphones and other interactive devices (Gikandi, Morrow, & Davis, 2011, quoted by Evans, 2013, p. 85). In technology-mediated feedback, the provider utilizes such technological devices as well as software and apps in order to create feedback messages in written, oral and/or visual ways (cf. the review by Sherafati et al., 2020, p. 4592). To exemplify, feedback can be transmitted via e-mail, text chats (Biber et al., 2011, p. 13) and electronic documents, for instance as comments and corrections in a text editor (e.g. Microsoft Word or Pages), via web conferences, by means of audio files, screencasts or other video formats (see e.g. Ali, 2016, p. 108; Chang et al., 2018, pp. 406, 411; Rottermond & Gabrion, 2021). Such technology-mediated feedback, i.e. feedback that assessors can create on their own, constitutes the focus of this book.
Studies have confirmed that human-generated feedback is usually more helpful than automated feedback, which is quite often highly standardized and limited in detail. Accordingly, it has been recommended to use computer-generated feedback only as a supplement to feedback created by humans. This was shown, for instance, by Sherafati et al.’s (2020) research, which investigated the effectiveness of the feedback and learners’ satisfaction with it (pp. 4591, 4606-4607). Further studies of automated feedback indicated that it seems more suitable for testing than for providing in-process support to learners. For example, Tärning (2018) reviewed 242 learning apps used in Swedish primary schools and categorized the feedback they provided. Most apps (77 %) delivered only verification feedback, i.e. information as to whether the user’s response was correct or not (see section 2.1.2). Only 12 % provided some kind of elaborative feedback, mostly hints about solving the tasks, whereas only two out of the 242 provided explanatory feedback (Tärning, 2018, pp. 268, 271). About 55 % of the reviewed apps offered some encouragement to the learners, either “in the form of applause, cheering, balloons and stars” or similar (47 %) or as very general person-related comments, such as “good work”, “perfect”, or “amazing, you did it” (53 %) (Tärning, 2018, p. 274, emphasis omitted). According to Hattie and Timperley (2007, p. 90), such superficial feedback that addresses learners at the self-level is not really helpful for making further progress (see section 2.1.1). The more valuable feedback at the task, process or self-regulation level (Hattie & Timperley, 2007, pp. 86-87, 90), however, was not given (Tärning, 2018, pp. 247, 275). Moreover, there was hardly any opportunity for the learners to try again; instead, the correct solution or a score was shown (Tärning, 2018, pp. 275-279). Tärning (2018) therefore argued that “most apps miss[ed] the opportunity of treating the learner as an active and constructive being who would benefit from more nuanced feedback” (p. 248).

To improve the automated feedback provision, Narciss has worked on an interactive tutoring feedback (ITF) model that is concerned with tutorial feedback strategies (e.g. Narciss, 2013). Tutorial feedback strategies were defined as a combination of “formative elaborated feedback with tutoring and mastery learning strategies” (Narciss, 2013, p. 8). Hence, learners are not only made aware of gaps in knowledge, understanding or application, but also receive hints, explanations, examples and strategic advice (Narciss, 2018, p. 11). They are thus encouraged to actively search for solutions and construct knowledge (Narciss, 2018, p. 11). In a later paper, she referred to her model as an Interactive-Two-Feedback-Loop (ITFL) model (Narciss, 2018) to highlight its two core components, i.e. the learner’s internal feedback loop and an external feedback loop from another feedback source (pp. 9-13).

To summarize this section, research has shown that computer-generated feedback has mostly served (summative) scoring and testing purposes instead of guiding the learners in formative ways (Ware, 2011, quoted by Evans, 2013, p. 85). While huge efforts are being made to improve the automated feedback provision, it still seems that human-generated feedback caters for learners’ needs more effectively.
Certainly, automated feedback can fulfill important functions, but at present it might best be regarded as a supplement to feedback by human assessors. How this can be accomplished with technological means will be the objective of the remaining chapters and sections. In that regard, a frequent distinction has been made between synchronous and asynchronous e-feedback, which will be explained next.

2.4.2 Synchronous and asynchronous digital feedback

Digital feedback can be exchanged in synchronous and/or asynchronous ways (Ahmed, McGahan, Indurkhya, Kaneko, & Nakagawa, 2021, p. 293). Some technologies typically allow for the provision of synchronous feedback, whereas others rely on asynchronous communication (cf. Ali, 2016, p. 108; Chang et al., 2018, p. 407; Ghosn-Chelala & Al-Chibani, 2018, p. 149). Synchronously, feedback can be provided in real time, e.g. in web meetings or through live corrections in a cloud document. Moreover, chats and messengers are typically associated with synchronous communication despite the short time delay (sometimes called quasi-synchronous). Finally, feedback can be given in a time-delayed manner through numerous asynchronous digital methods. To exemplify, feedback can be exchanged via e-mail or written comments in a learning management system (LMS). Electronic files, such as Word documents with visible “track changes” and comment bubbles, could be attached to offer more detailed feedback.

Digitally recorded feedback is another important type of asynchronous digital feedback. Three main subtypes have been discussed in the previous literature: audio feedback, video feedback (talking head), and screencast feedback (cf. Phillips et al., 2017, p. 364; Ryan et al., 2016, p. 2). The terminology is, however, inconsistent: Sometimes “video feedback” was used as a cover term for both asynchronous screencast and talking-head video feedback as well as for synchronous video conferences in which feedback is provided (Hartung, 2017, p. 206). By contrast, Fang (2019) utilized screencast feedback as the superordinate term, comprising not only videos of screen movements with or without the addition of a talking head, but also talking-head videos (p. 1) as well as “synchronous virtual meetings with screen-sharing components as screencasting for feedback” (Fang, 2019, p. 106). In the book at hand, “talking-head video feedback” will be distinguished from “screencast feedback” before possible combinations are discussed, e.g. as part of videoconferences.

Many learning management systems, such as Moodle, Canvas, ILIAS or Blackboard, directly integrate a variety of these feedback options (Hartung, 2017, p. 206), but the scope of available functions differs from platform to platform. The choice of technologies for feedback purposes, however, should not be driven by their availability (techno-centric view), but by their usefulness for reaching a particular learning objective (didactics-first principle; cf. Harris & Hofer, 2011, p. 214; KMK, 2016, p. 51; Wannemacher, Jungermann, Scholz, Tercanli, & von Villiez, 2016, p. 5). Consequently, teachers should be able “to make informed decisions on how to take advantage of the affordances of technology” while being aware of their possible constraints (Thompson & Mishra, 2007, p. 38). In fact, each feedback method offers distinct affordances.
For instance, Chang (2012) found that synchronous face-to-face and digital modes allowed for social interaction and negotiation with the partners (pp. 72-73). Conversely, the asynchronous mode gave students sufficient time for reflection and the delivery of detailed feedback, for example by using the “Track Changes” tool for sentence-level corrections (Chang, 2012, p. 73). Synchronous modes might be particularly beneficial for the early stages of a drafting process, whereas asynchronous modes could be more suitable for later stages, e.g. in the writing process (Chang, 2012, p. 74). Others likewise stressed that synchronous oral exchanges (and online chatting; see Ene & Upton, 2018, p. 10) appeared to be valuable for addressing global, higher-order concerns (see section 2.1.2), but also for summarizing the essence of feedback messages that had previously been given in other ways (Ene & Upton, 2018, p. 10). Moreover, personal factors, such as preferred learning styles or the expertise in a particular field (e.g. grammar), can be influential (Chang, 2012, pp. 73-74). In addition, it needs to be borne in mind that many asynchronous and synchronous digital tools exist that can be utilized for various purposes, which is why further research is required.

Eventually, different methods can (and should) be combined functionally. For instance, in synchronous feedback provision, text-related comments could be inserted into a collaborative document (e.g. Google Docs), while a chat tool may enable quick exchanges regarding the organization of the workflow and the discussion of responsibilities (Strobl, 2015, cited by Chang et al., 2018, p. 416). These combinatory aspects will be considered at the end of each methodological chapter and reviewed more extensively in the general discussion. A preview of the chapter structures, contents and interconnections will be given next.

2.4.3 Overview of digital feedback methods discussed in this book

Even though technologies open up many opportunities, most digital feedback is still rooted in the written paradigm (Kay & Bahula, 2020, p. 1890). Accordingly, several of the reviewed feedback methods are written in nature. The presentation will begin with these methods, but then expand to the multiple other ways of digital feedback provision. Figure 7 gives an overview of the different digital feedback methods that will be discussed in more detail in the remainder of this book. The figure distinguishes between synchronous and asynchronous methods, with fluid boundaries between them, since the synchronicity partly depends on the users’ response times. Moreover, the figure differentiates between methods that tend to be unimodal and those that tend to be multimodal. Overall, however, the categorization should not be regarded as fixed, as unimodal methods can quickly turn into multimodal ones, depending on new technological developments and various combinatory possibilities.
Figure 7: Overview of digital feedback methods included in this book (ranging from synchronous via synchronous/asynchronous to asynchronous: automated writing evaluation (AWE), video conference feedback, chat feedback, live polls/audience response systems (ARS), cloud editor feedback, wiki feedback, forum feedback, e-mail feedback, text editor feedback, audio feedback, video feedback, screencast feedback, blog feedback, e-portfolio feedback and survey feedback)

A short explanation may serve as a quick guide for the readers. Written electronic feedback has so far been the most frequently used type of digital feedback. At the same time, this notion is highly diversified because it captures numerous ways of providing written feedback in electronic ways. These are typically asynchronous in nature (e.g. annotations in a text editor), but sometimes approximate synchronous exchanges, such as in chats or collaborative writing. As Chang et al. (2018) explain, written electronic feedback “usually involves the use of online and offline text editors, often with review features (e.g., MS Word and Google Docs track changes and comment bubbles) and may also include the use of email, discussion boards, course management systems and blogs” (p. 408). Moreover, automated corrections and suggestions can be added, for example by using the pre-installed spelling and grammar checker of text editors or by installing additional plug-ins or browser extensions. The written mode can be combined with and thus enriched by further media, e.g. through integrated audio clips or through screen recordings of the commenting process. However, these media may also be employed as stand-alone methods.

Similar to written feedback, recorded media (audio, video and screencast feedback) enable a later reconsultation as well as easy sharing with parents and tutors, for instance. Consequently, these persons may offer further support to the learners, while gaining insight into the assessment process and learning progress. In that respect, audio files are relatively quick to create and easy to disseminate; however, there is only the voice of the assessor without any visual support. Video feedback, by contrast, comprises audio and visual information at the same time. It offers multiple communicative benefits, which are not available in text-based or audio feedback. Notably, screencasts, i.e. audio-visual recordings of the screen coupled with voice comments, are advantageous. They may additionally include a talking-head video of the assessor’s face, for example by using a split-screen approach.

Many videoconferencing solutions nowadays permit combinations of these various methods, e.g. feedback via chats, text pads, talking-head video and voice as well as feedback via screensharing. In addition, students and assessors may share (a link to) a collaborative document (e.g. Google Docs or wiki) during a videoconference and work on it synchronously. Furthermore, several web-conferencing applications enable live voting so that immediate feedback can be collected from everyone. Alternatively, external polling tools can be utilized. Sometimes, however, a more comprehensive survey would be valuable to gain deeper and more structured insights into students’ understanding. Moreover, e-portfolios can be conducive to tracking learners’ progress over time. Likewise, they are beneficial for fostering students’ self-assessment skills.
It thus becomes clear that several feedback methods can be combined synergistically. Ultimately, the selection of a particular method or several methods should depend on the learning objective and the learner group. To help teachers and students make the most suitable choices, the next chapters will identify the affordances and limitations of the different digital feedback methods. Moreover, each chapter will contain recommendations for their practical implementation.

Depending on the emphasis of prior research, the descriptions of some digital feedback methods will focus on teachers as the feedback providers and learners as the feedback recipients. This, however, is solely due to the prevalence of that feedback direction in the prior literature. Clearly, any feedback method can be used by teachers and students alike. Just like many teachers, students too need to be introduced to the methods as well as be granted sufficient opportunities for practice before they are able to implement them successfully. As with feedback literacy in general (see section 2.2), teachers should thus enable learners to utilize digital feedback methods so that feedback exchanges can take place in all directions by using appropriate media and modalities. The recommendations provided by this book can therefore be used by teachers in order to familiarize their students with the digital feedback methods. At the same time, advanced students (e.g. university students) may directly utilize the information to practice digital feedback themselves.

Finally, it is crucial to bear in mind that digital feedback is not restricted to digital teaching settings. Rather, digital feedback can be implemented in face-to-face as well as online learning (Evans, 2013, p. 85) and thus also in hybrid and blended courses. It is the teachers’ task to create suitable feedback designs for particular learning environments (Nash & Winstone, 2017; Winstone & Carless, 2020). The book aims to provide theoretically and empirically grounded practical advice for the selection and implementation of digital feedback.

3 Digital feedback methods

This chapter introduces fifteen digital feedback methods. It will start with mainly written methods before moving on to more complex multimodal ones. While the order of the individual methods follows an inherent logic, each of the fifteen subchapters can nevertheless be read independently based on the readers’ interests and needs. Each subchapter will follow the same structure:

(1) Definition of the feedback method and alternative terms/variants,
(2) Contexts of use, purposes and examples,
(3) Advantages of the feedback method (for students and teachers, respectively),
(4) Limitations/disadvantages (for students and teachers, respectively),
(5) Required equipment,
(6) Implementation (how to),
(7) Advice for students,
(8) Combinations with other feedback methods,
(9) Questions/tasks (for teachers and students).

In addition, a short handout for each of the fifteen digital feedback methods is provided online in the e-library. The handouts serve as short summaries that can be utilized in workshops and seminars, for instance as part of pre-service teacher education or in-service training.
Furthermore, readers are invited to navigate the interactive presentation by Schluer (2022), which offers a condensed but comprehensive overview of the different methods and their interrelationships (available at https://tinyurl.com/DigitalFeedbackOverview).

3.1 Written electronic feedback in (offline) text editors (Text editor feedback)

3.1.1 Definition and alternative terms/variants

Written electronic feedback is frequently provided by using the commenting and review features of offline and online text editors as well as of further platforms, tools and apps (Chang et al., 2018, p. 408). A rather similar definition was proposed by Clark-Gordon, Bowman, Hadden and Frisby (2019), who likewise stressed the use of commenting functions and track changes in word processing software and learning management systems (LMS). Given the prevalence of written feedback as compared to other modalities, many scholars simply referred to superordinate notions such as “computer-mediated feedback” when talking about text editor feedback (e.g. Kouakou, 2018, p. 9). “Text editor feedback” is, however, not a common term in the literature, although written electronic feedback via a text editor differs from written e-feedback that is provided in forums, wikis, emails etc., as the next sections will show (e.g. sections 3.3 to 3.8). Similarly, we find further terms, such as “text feedback” (e.g. Borup, West, & Thomas, 2015) or “on-script annotation” (Soden, 2017, p. 2), alongside “written feedback” in many studies. Some even put specific emphasis on the delivery of corrective feedback rather than metalinguistic explanations or feed-forward suggestions (AbuSeileek, 2013b). In that respect, AbuSeileek (2013b, p. 320) distinguished between the automated correction functions that are offered by the word processing software and the manual corrections that assessors can make by using the track changes and commenting functionalities of the software. Automated functions (see section 3.2) can be utilized by the learners for self-review, while assessors can use them to complement their feedback, e.g. as part of screencast feedback (see section 3.13). Comments and track changes, by contrast, are typically provided by an external person, either the teacher or other learners (e.g. AbuSeileek, 2013b; Clark-Gordon et al., 2019; Ene & Upton, 2014).

To summarize, feedback given in offline text editors can be defined as an asynchronous feedback method in which assessors use word processing programs with annotation and editing features to create written digital feedback. It is one of the most widespread forms of e-feedback (cf. the reviews by e.g. Chang et al., 2018, p. 408; Elola & Oskoz, 2016, p. 60).

3.1.2 Contexts of use, purposes and examples

Written electronic feedback has been a common practice in numerous disciplines (see e.g. Clark-Gordon et al., 2019). Most frequently, though, it has been employed for feedback on written assignments, especially in large classes (Clark-Gordon et al., 2019). Apart from teacher-to-student feedback (e.g. Ene & Upton, 2014; 2018; Kim & Emeliyanova, 2021; Rodina, 2008), it can also be utilized for peer feedback purposes (e.g. AbuSeileek, 2013b; Ho & Savignon, 2013). In that regard, a combination of the two appears to be more effective (Al-Olimat & AbuSeileek, 2015, p. 27), i.e. starting with peer feedback and complementing it with instructor feedback (see section 2.2.3).
Overall, written electronic feedback seems to be an extension of handwritten feedback, which still predominates in many classrooms (cf. e.g. Ferris, 2014, p. 16; Sprague, 2017, p. 64). Those assessors who prefer handwriting or find it useful for feedback on certain assignment types might alternatively resort to the digital inking feature of tablet PCs (e.g. Brodie & Loch, 2009; Fisher, Cetl, & Keynes, 2008; Lee & Cha, 2021; Lee & Lim, 2013). With a digital pen, they can annotate electronic documents in a way that resembles handwritten paper corrections (Brodie & Loch, 2009, p. 766). At the same time, it reduces paper shuffling and allows for a faster turnaround time (Brodie & Loch, 2009, p. 766), e.g. by sending the annotated document to the learner via email (see section 3.8) or by uploading it onto the LMS. It might be particularly useful for annotating graphs or diagrams as well as mathematical notations (Fisher et al., 2008). Digital inking is thus a possible variant of typed feedback in a text editor, but could likewise be part of screencast feedback, for instance (see section 3.13). It appears to have great future potential, but has received little research attention so far. Digital inking will therefore only be sparsely treated in the next sections. Instead, the main focus will be placed on typed electronic feedback comments and track changes in offline text editors and similar applications. As it is the first written digital feedback method that is introduced in this book, some general characteristics of written e-feedback will be explained alongside the particularities of text editor feedback.

3.1.3 Advantages of the feedback method

Written electronic feedback can be advantageous for learners and teachers for reasons of familiarity, accessibility, transparency and specificity. Generally speaking, written feedback is still a very common and thus familiar form of feedback for learners (cf. e.g. Fish & Lumadue, 2010; McCarthy, 2015, p. 160; Saeed, 2021, p. 15). Many of them find it sufficient (Mathieson, 2012, p. 149; cf. Chronister, 2019, p. 44); some even consider it as more “official” and “formal” than other feedback modes (McCarthy, 2015, p. 163). Indeed, if the topic is not complex and thus does not require visual presentation, written text feedback can suffice (Adiguzel, Varank, Erkoç, & Buyukimdat, 2017, p. 248). On-script comments help to easily locate a problem and provide an explanation about the error (Chang, N. et al., 2012, p. 11) or to directly show the correct form (Cavaleri et al., 2019, p. 12). The latter is particularly useful for revising the reviewed text since the changes can be immediately implemented (cf. Hartung, 2017, p. 206). Furthermore, the comments can be much richer in detail than hard-copy annotations because more space is available (Chang, N. et al., 2012, p. 10; Rodina, 2008, p. 110; Silva, 2012, p. 3). For handwritten or printed assignments, comments are usually written in the margins to the left or right of the text (marginal feedback), in-between the lines of the draft (interlineal feedback) or as a summative comment below the paper (Silva, 2012, p. 3; see also Soden, 2017, p. 2). This can cause problems of legibility for the feedback recipients (Chang, N. et al., 2012, p. 3; Clark-Gordon et al., 2019), which could be pre-empted by using electronic commenting, though (as reviewed by Bahula & Kay, 2020, pp. 6535-6536).
Moreover, the electronic commenting functions in the text editor enable an integration of supplemental resources (see also implementation below in section 3.1.6). To exemplify, assessors might include hyperlinks to relevant materials that could thus be accessed immediately by the students (Clark-Gordon et al., 2019; see also Rodina, 2008, p. 110).

The written mode of electronic commenting also allows teachers to “reuse common feedback comments by copying, pasting, and then modifying comments to fit specific students’ work” (Borup et al., 2015, p. 175). For this purpose, they can save feedback phrases, explanations and suggestions in a separate document or database (Rottermond & Gabrion, 2021, p. 40; see implementation section 3.1.6). This stands in contrast to the uniqueness of videos (Borup et al., 2015, p. 175) or handwritten feedback, for instance. As one lecturer confessed:

I started to realize that “Hey, there are a lot of patterns of things that I am saying.” So I would just copy my explanations and keep them in a separate document. I would then just paste them over and make specific comments where I felt they were needed. (Borup et al., 2015, p. 175)

This re-use of comments can be eased through comment banks in cloud-based systems (Rottermond & Gabrion, 2021, p. 40), as will be explained in section 3.1.6. At the same time, it also permits instructors to provide feedback in a more organized manner and thus to deliver it more efficiently (Borup et al., 2015). The time-saving can be further enhanced if the assessors are comfortable with typing. It allows them to produce more comprehensive feedback within a short period of time (Chang, N. et al., 2012, p. 10; see also Clark-Gordon et al., 2019; McCarthy, 2015, p. 164). In addition, before passing the documents to the learners, they can reflect on and modify their comments as needed, whereas handwritten comments cannot be changed easily. Finally, there are also environmental reasons for using electronic commenting rather than handwritten feedback, as it produces less waste (Clark-Gordon et al., 2019) and saves trees and money (Chang, N. et al., 2012, p. 14).

The next point, accessibility, is closely related to transparency and specificity, as it not only refers to document access but also to the clarity of the comments that are made. First of all, learners and lecturers alike do not need any special equipment except for a text editor or app that can be accessed on their computer, laptop, tablet or iPad, smartphone or other mobile device (Chang, N. et al., 2012, p. 8; see equipment in section 3.1.5). The documents can then be exchanged rather effortlessly at a convenient time and location (Chang, N. et al., 2012, p. 8; Clark-Gordon et al., 2019). As compared to several other digital feedback methods, the file size is comparatively small (McCarthy, 2015, p. 164) so that a dissemination via email will not consume too much server space or incur high download costs. Alternatively, the submissions and review comments can be organized conveniently in an LMS. Furthermore, the electronic format facilitates plagiarism scans (Clark-Gordon et al., 2019; see also section 3.2.3) and often leads to a quicker return of the reviewed papers (Brodie & Loch, 2009, p. 766; Chang, N. et al., 2012, p. 9; McCarthy, 2015, p. 164). Moreover, it helps to keep a lasting record of students’ performance (Clark-Gordon et al., 2019).
Students can reconsult this asynchronous written feedback multiple times on later occasions (Edwards et al., 2012, p. 105), which stands in contrast to e.g. face-to-face feedback. If the written feedback is clear enough, teachers and learners do not need to invest time in scheduling individual meetings. Some students even feel uncomfortable in one-on-one meetings with their instructor, notably in a face-to-face situation (cf. Chang, N. et al., 2012, p. 3). In Borup et al.’s (2015) study, they sometimes even valued the efficiency of text feedback higher than the affective benefits of the video format (see section 3.12), for instance (p. 161).

Another advantage as compared to video feedback and other digitally recorded feedback types is the ease of navigation and localization in the document. As Letón, Molanes-López, Luque and Conejo (2018) noted, “[t]extual presentation allows learners to reread relevant passages, skip unimportant ones, and adjust the reading pace to individual cognitive needs” (p. 189). Consequently, many learners found it easier to navigate or skim through the document, locate the problematic passages as well as to revise surface-level features (Borup et al., 2015, p. 174; Howard, 2018, p. 243; Mathieson, 2012, p. 149; McCarthy, 2015, p. 162; Schilling & Estell, 2014, p. 35; Silva, 2012, pp. 1, 9-10). Moreover, the corrections can be done in a stepwise manner without having to watch an entire video (cf. Silva, 2012, p. 10). Learners thus highlighted the accessibility of the written comments as opposed to the onerous re-playing of video files to locate particular passages (Kay, Petrarca, & Bahula, 2019, p. 1912; see also Lee & Bailey, 2016, p. 145). Instead, they were able to read the comments and suggestions at the corresponding point of occurrence (Chang, N. et al., 2012, p. 10; Mathieson, 2012, p. 149; Silva, 2012, p. 10). Accordingly, text feedback was often perceived as organized, specific and efficient (Borup et al., 2015, p. 161; see also McCarthy, 2015, p. 160). Some also valued the possibility of printing out the text feedback (Edwards et al., 2012, p. 105; Mathieson, 2012, p. 149; McCarthy, 2015, p. 164) and comparing it to the original submission (Langton, 2019, pp. 41-42). Clearly, this direct comparison can also be accomplished by the document comparison functions of the word processor (cf. Cunningham, 2017b, p. 49; see section 3.1.7).

When assessors use the track changes function of a text editor, their changes become transparent to the learners (Elola & Oskoz, 2016, p. 69). The feedback recipients can easily detect the erroneous word or phrase and view its corrected version or reformulation alongside it (AbuSeileek, 2013b, pp. 320, 330) - provided that they are familiar with the review features of the text editor (see Cunningham, 2017b, p. 63). Beyond insertions and deletions at the level of individual words or phrases, assessors can also move around entire sentences and paragraphs to suggest structural changes (Yohon & Zimmerman, 2004, p. 221). In addition, comments can be added that specify the type of error (cf. Elola & Oskoz, 2016, pp. 65-66) and offer further information if needed. As an alternative or complement to error codes, color coding could be utilized, i.e. one color would represent a specific assessment criterion, e.g. yellow for grammar, blue for content and orange for word choice (e.g. Kouakou, 2018; see also section 3.1.6).
Due to its explicitness at the level of form (Elola & Oskoz, 2016, pp. 65-66), learners found text editor feedback particularly helpful for formal or mechanical revisions at specific locations in the text (cf. Saeed, 2021, pp. 10-12; Silva, 2012, p. 11). They may simply accept the suggested changes, which constitutes a time-efficient way of revising (Silva, 2012, p. 10). Similarly, the students in Bakla’s (2020) study regarded written feedback as “the most practical [method] because there were fewer steps to access the feedback” (p. 117). This easy access, efficient viewing possibility as well as its concise and edited content was highly appreciated by the students (Borup et al., 2015, p. 173; see also the respondents in Edwards et al., 2012, p. 105; cf. Langton, 2019, p. 41). However, as most of the reported improvements centered on mechanical or surface-level aspects (cf. the review by Ene & Upton, 2018, pp. 2-3), there are some limitations that need to be addressed.

3.1.4 Limitations/disadvantages

Overall, the learning gain from written electronic feedback via text editors is disputed for several reasons that will be outlined below. While facilitating localization (section 3.1.3), the on-script nature of written feedback is a major limitation because comments are usually inserted in a sequential manner, making it hard for the learners to discern the relative importance of the individual comments in relation to each other as well as their potential interrelationships. To exemplify, there might be recurring mistakes across several pages of a document, but these are interspersed with several remarks about other aspects. Aside from this lack of global overview and guidance, written feedback provided via text editors often merely focusses on surface-level and mechanical aspects, such as spelling, punctuation and grammar (McCarthy, 2015, p. 155; Morra & Asis, 2009, quoted by Vo, 2017, p. 25). In turn, macro-issues of content, organization and fluency are neglected (cf. Skehan, 1998, cited by Fukuta, Tamura, & Kawaguchi, 2019, p. 3; see also Anson, 2015, p. 377; Grigoryan, 2017b, p. 452). Accordingly, learners tend to readily make those revisions that refer to the surface level of a text, but that do not require deeper thinking (Thompson & Lee, 2012). Quite often, students blindly accept the corrections and suggestions, especially when the track changes function was utilized (Clark-Gordon et al., 2019; Laing, El Ebyary, & Windeatt, 2012, p. 156).

Moreover, the relatively short nature of the comments is seen as problematic for two major reasons. First, they can convey an authoritative tone, which might have a negative impact on the socioaffective dimension of the learner-teacher relationship. Second, the brevity often leads to problems of comprehensibility due to their ambiguity. These two factors will be dealt with in more detail below, as they oftentimes result in learners’ low responsiveness to the provided feedback (for an overview of potential reasons see e.g. Zamel, 1985; also cited by Stannard, 2008, p. 17; West & Turner, 2016, p. 400; see also Duncan, 2007).

As regards interpersonal relations, analyses of written electronic feedback showed that assessors were primarily concerned with error corrections and thus the provision of negative feedback (96 % in the study by Ene & Upton, 2014; see also the review by Hyland & Hyland, 2006, p. 84).
Furthermore, the language in which the negative feedback was conveyed was highly directive and authoritarian (Cavaleri et al., 2019, p. 10; Cunningham, 2017b; 2019b; Ene & Upton, 2014; as reviewed by Hyland & Hyland, 2006, p. 84). By contrast, video-based modes of feedback provision included a substantially higher amount of praise, suggestions and explanations (Cavaleri et al., 2019, p. 10; Cunningham, 2017b). Consequently, students might be upset by the "impersonal" (Clark-Gordon et al., 2019), "angry or unfriendly tone" implied by the written feedback and may additionally find it "vague and confusing" (Cruz Soto, 2016, p. 64). This might even be perceived as face-threatening (Soden, 2017, p. 3), which is why assessors need to formulate their comments carefully (see section 2.1.3). As Soden (2017) put it, "the danger of repeated negative or critical feedback" is that it might "reinforce low self-esteem and low motivation in poorly performing students" (p. 2, with reference to Juwah et al., 2004, Värlander, 2008, and Wingate, 2010). Moreover, the sheer number of comments and corrections can be distressing for the feedback recipients, as has been reported in several studies (e.g. Bakla, 2017, p. 324; Elola & Oskoz, 2016, p. 69; Ferris, 2003, cited by Silva, 2012, p. 11; cf. the reviews by Ali, 2016, p. 107, and Ghosn-Chelala & Al-Chibani, 2018, p. 148). The participants noted that the reception of an excessive amount of corrections or "red ink comments" (participant in O'Malley, 2011, p. 30) can be shocking (Elola & Oskoz, 2016, p. 69) and "intimidating" (participant in Rybakova, 2020, p. 505). As a result, they may feel anxious, stressed, overwhelmed and discouraged (Langton, 2019, p. 53; Mathieson, 2012, p. 149; Zhang, 2018, pp. 22, 26; see also the review by Cunningham, 2017b, p. 34; 2019a, p. 223). The other and related problem is that the written comments are typically rather short and may thus bear greater potential for non- or misunderstanding (Clark-Gordon et al., 2019). The cryptic and ambiguous nature of written feedback has in fact been highlighted by numerous scholars (cf. e.g. Cunningham, 2017b, p. 13; Grigoryan, 2017b, p. 452; Nicol, 2010, p. 507; Thompson & Lee, 2012; West & Turner, 2016, p. 401). Moreover, many of the remarks appeared to be rather generic and not tailored to the individual student (Duncan, 2007, p. 273; see also Weaver, 2006, pp. 387-388). What is more, the terse comments usually presuppose a complex understanding of the underlying thematic or grammatical concepts (cf. Duncan, 2007, pp. 273, 274, 277; Higgins, Hartley, & Skelton, 2001, p. 272). However, especially students in lower grades need more explanation until the underlying concepts are clear to them, e.g. with regard to what constitutes an 'academic style' (Duncan, 2007, pp. 273, 274) or a 'critical analysis' (p. 277; see also Hall, Tracy, & Lamey, 2016; Weaver, 2006). Due to the conciseness of the comments, it is thus likely that some of the requested changes will remain nebulous to the learners if no sufficient explanation is provided (cf. Jones, Georghiades, & Gunson, 2012, p. 604; Mathieson, 2012, p. 149; Zhang, 2018, p. 21). In other words, learners could be unsure as to what they should do (Elola & Oskoz, 2016, p. 68), which may lead to faulty revisions and frustration (cf. Ghosn-Chelala & Al-Chibani, 2018, p. 154; Hall et al., 2016).
Others might exclusively focus on the grade that they have attained instead of engaging in the revision process at all (Duncan, 2007, pp. 272, 275; Higgins et al., 2001, p. 270; cf. the review by Cranny, 2016, p. 29102). Accordingly, learners may complain that the written comments lack precision (Sheen & Ellis, 2011, p. 600) and personalized support (cf. Crawford, 1992; Stannard, 2008a; both cited in Mathisen, 2012, p. 99). Therefore, Weaver (2006, pp. 384, 387-388) requested more guidance on the part of the assessors as to how the feedback needs to be understood, so as to avoid misinterpretations by the students and a low responsiveness to feedback comments in subsequent learning and revisions. If the written modality is inappropriate for this, it could become necessary to schedule office hours to clarify the feedback comments, which would increase students' dependency on the lecturer (Vo, 2017, p. 67) and cause additional time investments for teachers and students alike (cf. Fang, 2019, p. 119; Woodard, 2016, p. 30). However, the provision of written feedback is oftentimes already time-consuming by itself if done properly (e.g. Cavaleri, Di Biase, & Kawaguchi, 2013, p. 8; Clark-Gordon et al., 2019). Some respondents also argued that it was more time-intensive than handwritten feedback (Brodie & Loch, 2009, p. 765) or screencast feedback (Silva, 2012, p. 7; see also section 3.13). In particular, when teachers have to assess an excessive amount of student work, this not only consumes much time, but may also lead to hand fatigue and wrist pain (Kay et al., 2019, p. 1909; Sprague, 2017, p. 142). Others even feared burnout through the tedious process of repeatedly typing the same or similar corrections (Fang, 2019, p. 91). Especially in large classes, the production of comprehensive written feedback might not be feasible (Fang, 2019, p. 124). To save time and energy, some assessors may simply provide an "occasional tick or cross" on the paper or give an unspecific comment such as "good work" (Butler, 2011, p. 100). Such superficial or highly generic (Law, 2013, p. 329; Ryan, Henderson, & Phillips, 2019, p. 1510; Zamel, 1985) and unspecific comments (Rybakova, 2020, p. 508) are usually not helpful for the learners. These problems might be alleviated if feedback banks are used, as recommended in the implementation section 3.1.6. Another possibility would be the voice dictation feature of Microsoft Office, which transforms spoken language into written text. This way, hand strain from excessive typing can be avoided. On the other hand, if handwriting via digital ink is utilized, problems of legibility may arise (see McLaughlin, Kerr, & Howie, 2007, pp. 337-338; McVey, 2008, p. 43). However, this could be remedied by using handwriting recognition tools, even though these might be slower and thus frustrating for the assessors (McLaughlin et al., 2007, p. 334), at least initially. Moreover, although digital ink is often perceived as more personal and human by the learners (McVey, 2008, pp. 41-42), it is still a form of written feedback and thus restrained in its "personal touch", notably because of the "lack of social interaction" (Chang, N. et al., 2012, p. 3). Especially with asynchronous written feedback, learners can neither directly comment on it nor ask the assessors open questions. However, this is often necessary for several reasons.
First, due to its brevity, written feedback seldom contains explanations (Cruz Soto, 2016, p. 64) or feed-forward information that the students could use in their future work (Duncan, 2007, pp. 272, 278). Furthermore, they cannot directly clarify their questions with the assessor (cf. e.g. Saeed, 2021, p. 16; Vo, 2017, p. 59), and thus students often fail to implement revisions successfully (Clements, 2006, as well as Nurmukhamedov & Kim, 2010, quoted by Thompson & Lee, 2012). Hence, with asynchronous written feedback, teachers could be inclined to engage in transmissive rather than in dialogic feedback practices (Heron-Hruby, Chisholm, & Olinger, 2020, pp. 85-86), or at least it might create such an impression among the students. This would deprive learners of the opportunity to improve their learning process and outcome (cf. Holmes & Smith, 2003, cited by Mathisen, 2012, p. 99). Asynchronous written feedback therefore cannot replace synchronous exchanges, e.g. interactive face-to-face or digital meetings. To exemplify, some learners favor dynamic chats over text editor feedback for affective and interactional reasons (cf. the review by Ene & Upton, 2018, p. 3; see also section 3.7). Even asynchronous digital methods, such as talking-head videos (section 3.12) or screencast feedback (section 3.13), are often perceived as conversational (cf. the review by Ene & Upton, 2018, p. 3; Silva, 2012, p. 12), although they are not truly dialogic (see section 2.1.7). What is more, the success of the written feedback process might be obstructed by accessibility issues. Teachers and students alike need access to computers or other mobile devices, the internet and a text editor or app (Brodie & Loch, 2009, p. 765; Chang, N. et al., 2012, p. 9). Another problem with text editor feedback is software incompatibility. For example, different versions of the same word processor (e.g. Microsoft Word) or distinct text editors (e.g. Pages, Open Office) and operating systems (Mac, Linux, Windows) may cause technical problems (Silva, 2012, p. 8). In the worst case, assessors or students might not be able to open the document at all; in other cases, the formatting (e.g. font and page layout) could be problematic (Silva, 2012, pp. 8, 13). If the written remarks are exclusively provided in an online editor or in the comment section of a course platform, these problems might be pre-empted, though (see section 3.3). On the other hand, the fear of data rights infringements is likely to be higher when online programs are used. In older studies, participants sometimes even reported that they generally distrusted the electronic delivery of feedback (Bridge & Appleyard, 2008, as cited in Chang, N. et al., 2012, p. 3). With the greater prevalence of digital media, this has probably decreased. Nonetheless, staff (and student) reluctance is a common problem of new, digital feedback methods. This does not only pertain to the shift from paper-based to electronic written commenting (Brodie & Loch, 2009, p. 765), but also to more general anxieties associated with technology-mediated communication (Clark-Gordon et al., 2019). For instance, some cited health concerns about screen use (Clark-Gordon et al., 2019). Others, who identified as kinesthetic learners, preferred to physically touch the reviewed paper and write notes on it (Chang, N. et al., 2012, pp. 9, 12). Quite often, however, resistance and anxiety result from unfamiliarity with the digital alternatives.
For example, teachers might not know how to deliver electronic feedback effectively in a word processor (cf. Brodie & Loch, 2009, p. 765). Likewise, students could be unaware of the text editor functions and the way they had been utilized for feedback purposes (Silva, 2012, p. 13). To instantiate, learners might not know how to handle the reviewing features and change mark-up views, for example to compare the original version with the reviewed version; they may not know how to close or respond to comments in the margin and how to accept or reject track changes (cf. AbuSeileek, 2013a, p. 204; Cunningham, 2017b, p. 63; Silva, 2012, p. 13). Therefore, a stepwise introduction and opportunities for practice are essential for learners and teachers alike. For this reason, the next sections will be devoted to practical advice.

3.1.5 Required equipment

The required equipment for providing written electronic feedback is rather simple. As hardware, assessors need a laptop, tablet or desktop PC, and sometimes a mobile phone might even be sufficient. In terms of software or platforms, many alternatives exist. If the institutional LMS (e.g. Moodle, Blackboard or Canvas) has an integrated annotation tool, assessors may choose to use it. It often allows for the provision of written feedback and the insertion of further media (such as audio and video comments). Otherwise, an ordinary word processor is sufficient, such as Microsoft Word (https://www.microsoft.com/en-ww/microsoft-365/word) or Pages (https://www.apple.com/pages/). If a student has submitted a document in an alternative file format, assessors could convert it into a PDF file (e.g. if a .jpg file was submitted) or directly open an existing PDF document. With Adobe Acrobat (https://www.adobe.com/acrobat.html), especially in the Pro version, assessors can resort to a range of annotation tools, including comment bubbles, colored highlights, underlining, strikethrough and so on. Comment functions can likewise be used in other Office products, such as Microsoft PowerPoint and Google Slides (https://www.google.com/slides/about/). The next sections will concentrate on the implementation in the offline version of text editors, notably Microsoft Word, whereas online applications will be dealt with in section 3.3.

3.1.6 Implementation (how to)

Preparatory guidance and familiarization (teacher and students): As mentioned by e.g. Silva (2012) and Cunningham (2017b; 2019a), many students were unable to revise their assignments effectively because they were not familiar with the features of the text editor, such as Microsoft Word (see section 3.1.7 below). Of course, this presupposes that the instructors know how to use the commenting and track changes features of Microsoft Word or other text-editing software. In that respect, Rodina (2008) supplied a detailed description of the procedure. Even though this advice dates back some time already, it still appears applicable and will be complemented by updated suggestions whenever relevant. In general, learners should be introduced early enough to the reviewing functions as well as to the guidelines for electronic submissions and file management (Yohon & Zimmerman, 2004, p. 229). In face-to-face and online teaching, it would make sense to share the screen and explain the procedure to the students with the help of a sample document.
Moreover, learners should be granted sufficient time for open questions and practical application to ensure their understanding (see section 2.2.1). Especially if students work on different operating systems and with different editing programs or versions thereof, the similarities and differences should be discussed and clarified (see Silva, 2012, in section 3.1.4). This way, software incompatibilities or formatting problems could be avoided. Despite such careful familiarization, learners and teachers might nevertheless encounter unforeseen problems. It is therefore recommended to possess a certain level of frustration tolerance and to demonstrate flexibility (Rodina, 2008, p. 114). In particular, when they try out electronic submissions and commenting for the first time, "patient support" would be indispensable (Rodina, 2008, p. 114).

Submission and download of assignments: In addition to the content requirements for an assignment, students need to learn about the formatting regulations and the submission procedure. In that respect, Rodina (2008) as well as Yohon and Zimmerman (2004) suggested helpful practices, especially for teachers who must manage a high number of assignments. First, they should hand out specific formatting guidelines that detail the font type, font size and margin settings (Yohon & Zimmerman, 2004, p. 229). Quite often, though, standard fonts and margins are sufficient, while the particularities of print submissions, such as double spacing or wide margins, are usually no longer necessary in electronic format (Rodina, 2008, p. 107). Instead, a comment margin will be created automatically by the text editing program as soon as comments are inserted. However, it would make sense to have students name their electronic files in a consistent manner (Rodina, 2008, p. 107; Yohon & Zimmerman, 2004, p. 229). The file name could consist of the keyword from the task, followed by the writer's last name, e.g. "application_studentlastname.docx" (cf. Rodina, 2008, p. 107). The assessors will then insert a shorthand at the end of the file name for the reviewed version, while the writers will extend it with a revision note afterwards (for details see below). To avoid unnecessary mistakes regarding spelling, punctuation, grammar and word choice, students should be advised to use the spelling and grammar checker functions of the word processor (Yohon & Zimmerman, 2004, p. 229; see section 3.2 on AWE). In that regard, it would be important to select the right language for the checking tools, e.g. French if an assignment is written in French as a foreign language (Rodina, 2008, p. 107). At the same time, learners should be aware of faulty corrections that might be suggested by the program (see section 3.2). Moreover, teachers might need to remind students to "avoid using the password protection function" of the text editor (Yohon & Zimmerman, 2004, p. 230). Further, students and teachers alike should make sure to run up-to-date antivirus software on their computers (Yohon & Zimmerman, 2004, p. 229). If email is used as the dissemination medium, all parties should (automatically) scan the emails and attachments before opening them (Yohon & Zimmerman, 2004, p. 231). The reception of numerous emails could, however, cause chaos on the instructors' part, especially if they teach numerous classes or even several students in different classes. Therefore, it would be important for students to specify the course name in the subject line (see section 3.8.6). A more comfortable solution would be to use the task submission and review functions of the institutional LMS, e.g. Blackboard or Moodle (Rodina, 2008, p. 108; see e.g. Ene & Upton, 2014, p. 84). After the submission, assessors (teachers or peers) download all the assignments to their computers or open them in an online editor, app or LMS. The track changes and commenting functions should be activated immediately afterwards. For downloads, Rodina (2008, p. 109) suggested saving an additional copy by writing one's initials at the end of the file name. An example would be "application_studentlastname_reviewerinitials.docx" for the reviewed document, e.g. "application_Miller_JS.docx". These documents should be saved in a designated place on the assessors' hard drive (cf. Yohon & Zimmerman, 2004, p. 231) unless an online editor or app is used (see section 3.3). For assessors who handle many submissions, this renaming step can even be automated (see the sketch below).
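The following is a minimal sketch of such an automation, assuming Python, a local folder of downloaded .docx submissions and the reviewer initials "JS"; the folder names and initials are illustrative choices, not part of Rodina's (2008) advice.

# Minimal sketch: save a reviewer-tagged copy of each downloaded submission,
# following the naming convention described above.
# Assumptions (illustrative only): submissions lie in ./submissions and
# review copies go to ./reviewed; the reviewer's initials are "JS".
from pathlib import Path
import shutil

REVIEWER_INITIALS = "JS"
submissions = Path("submissions")
reviewed = Path("reviewed")
reviewed.mkdir(exist_ok=True)

for doc in submissions.glob("*.docx"):
    # e.g. "application_Miller.docx" -> "application_Miller_JS.docx"
    target = reviewed / f"{doc.stem}_{REVIEWER_INITIALS}{doc.suffix}"
    shutil.copy2(doc, target)
    print(f"Saved review copy: {target.name}")

Assessors would adapt the folder names and initials to their own filing system; the same pattern could later be reused for the students' "rev" copies.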
Deciding on a feedback strategy and reviewing features: Planning, reflecting, setting a focus: Next, the assessors need to decide on a feedback strategy, given the many reviewing functions that are commonly offered by a word processing program. In view of the specific learning objectives and assessment criteria, assessors should tailor their feedback to the students' performance (Chang, N. et al., 2012, p. 19). To this end, it would be helpful to read the submission first before starting with the commenting and correction process (Chang, N. et al., 2012, p. 19). Depending on the amount and types of errors, instructors should set a focus for their feedback message and utilize the different features of the text editor in a purposeful manner. As prior research has detected a tendency for teachers to provide a lot of negative corrective feedback in the written electronic mode (e.g. Ene & Upton, 2014; review by Hyland & Hyland, 2006, p. 84; see section 3.1.4), assessors need to pay particular attention to a balance of positive and negative feedback. Moreover, the explicitness or implicitness of the corrections, questions and suggestions should be adapted to the learners' understanding (cf. section 2.1.3; e.g. Ho & Savignon, 2007, cited by Vo, 2017, pp. 13-14). In terms of reviewing features, assessors can choose one or several of the 5 Cs in Figure 8.

Figure 8: The 5 Cs of feedback in text editors: C1 track Changes; C2 Color coding (highlighter tool, font color); C3 Comments (marginal or inline); C4 Correction codes (footnotes or endnotes, autocorrect phrases, comment banks); C5 Concluding comment

To use C1, i.e. the "track changes" function, assessors need to activate it as soon as they open the submitted document (Rodina, 2008, p. 107). It can be found in the review tab of Microsoft Office, for instance (Osborn & Wenson, 2011, p. 1). With this function, assessors can directly change a text and make these modifications visible to others (Dirkx, Joosten-ten Brinke, Arts, & van Diggelen, 2021, p. 191; see e.g. Chang et al., 2018, p. 409; Fish & Lumadue, 2010; Mathieson, 2012, p. 147; Silva, 2012, p. 6). For example, this comprises deletions, insertions and the movement of text to a different position in the document (cf. Yohon & Zimmerman, 2004, p. 221).
These changes can be easily detected by the students, as they appear in a different color (one for each reviewer; see Clark-Gordon et al., 2019, Table 1; Dirkx et al., 2021, p. 191; Ho & Savignon, 2013). Rodina (2008) argues that these direct corrections are particularly suitable for "complex problems that […] students [cannot] resolve without direct help" from others (p. 109), such as syntactical and lexical errors. It should be noted, though, that the track changes by themselves do not provide metalinguistic explanations for the corrections (AbuSeileek, 2013b, p. 320). Consequently, learners would need to "reflect upon the changes", try to understand them and then decide whether they want to accept them or not (Dirkx et al., 2021, p. 191). Further explanations and advice can be inserted in comment bubbles or as inline comments within square brackets (cf. Rodina, 2008, p. 109) or within a series of asterisks (Yohon & Zimmerman, 2004, pp. 223-224), as will be explained below. Aside from that, the appearance of the in-text track changes can be adjusted. By selecting the corresponding options in the track changes menu, users can convert the inline track changes into comment balloons appearing in the margin (Osborn & Wenson, 2011, p. 2). In addition, there are multiple options regarding the display of the changes, i.e. whether the changes should be indicated by a simple markup (typically a red mark next to the corresponding line) or in an exhaustive manner ("all markup" in Microsoft Word), whether only selected types of changes should remain visible (e.g. only showing text manipulations, but no changes in formatting), or none at all ("no markup"). Eventually, writers can also view the originally submitted draft ("original") in order to trace the changes in the commented version and conduct draft comparisons. Overall, it would be important to make students aware of these different markup options of the track changes feature, as shown in Figure 9 (see section 3.1.7).

Figure 9: Track changes options in Microsoft Word

Since the direct changes and correction marks often appear in red by default, assessors may wish to change the standard correction colors, as e.g. recommended by Rodina (2008, p. 107). In Microsoft Word, this can be done in the review tab by opening the advanced settings ("preferences") of the markup options (see Figure 9 above). Assessors could then select blue, for instance, as the standard color to avoid a potentially discouraging effect of the red marks (cf. Semke, 1984). Beyond that, the systematic use of a consistent color system (C2) may ease fast recognition of the error type or praise in electronic text feedback (see Langton, 2019, p. 39), especially for visual learners. This means that assessors would either change the font color of the pertinent passages in the document or use the highlighter (or shading) tools of the word processor. In that respect, yellow could stand for grammar problems, blue for spelling mistakes, green for word choice, and so on (Langton, 2019, p. 38). When assessors work on a tablet PC, they could likewise use the digital pen to create hand-drawn highlights in different colors. Moreover, a tablet PC is suitable for providing digital handwritten feedback (Brodie & Loch, 2009), for instance by writing error codes into the document. Such a system of error codes (see e.g. Elola & Oskoz, 2016, p. 63) is, however, useful and limiting at the same time (see section 3.1.4).
By contrast, comment bubbles (C3) permit more elaborate explanations (e.g. Dirkx et al., 2021, p. 191; Yohon & Zimmerman, 2004, p. 221) and an integration of hyperlinks to external resources (Harper, Green, & Fernández-Toro, 2018, p. 284; McLaughlin et al., 2007, p. 339). To insert a comment, assessors would mark the pertinent passage in the text and then click on "New comment" in the review tab of the text editor (Osborn & Wenson, 2011, p. 2; see Figure 10).

Figure 10: Adding comments in Microsoft Word

In these comment bubbles, assessors can provide implicit and/or explicit feedback, such as by giving explanations, raising questions or making suggestions (Dirkx et al., 2021, p. 191; Yohon & Zimmerman, 2004, p. 222). They can also use them for positive feedback (Yohon & Zimmerman, 2004, p. 222) and for embedding external resources (Harper et al., 2018, p. 284; McLaughlin et al., 2007, p. 339). For example, these could be hyperlinks to helpful websites that might be drawn on repeatedly (cf. McLaughlin et al., 2007, p. 332) or specific references to textbook sections (Rodina, 2008, p. 110). To categorize the contents of the comment bubbles, users may likewise color them with the help of the highlighter tool. Furthermore, the feedback dialogue can be continued by the learners, as they may directly respond to the questions and suggestions in the comment section. The commenting process can be sped up by configuring autocorrection codes in the word processor (C4) (Rodina, 2008, pp. 110-111, 115-116; Yohon & Zimmerman, 2004, p. 226). This way, assessors would only need to type the shorthand of the code, while the full explanation would be added automatically. To determine the reusable snippets, they need to go to the "File" menu of the Microsoft Word document, click on "Options", navigate to the "Proofing" tab and then select "AutoCorrect Options" (Windows). Mac users, in turn, would need to click on "Preferences" in the Word menu, go to "Tools" and choose "AutoCorrect". A new window will open in which the users can specify a shorthand and the full explanation. Yohon and Zimmerman (2004) recommend starting a correction code with an asterisk followed by a specific letter combination, e.g. "*av" for "use of active voice" or "Replace passive voice with active voice" (p. 226). Thus, whenever assessors type "*av" into the document, the full explanation will be inserted automatically (Yohon & Zimmerman, 2004, p. 226). Rodina (2008, p. 110) alternatively suggests using the equals sign or any other infrequently used sign (with the frequency of the signs depending on the discipline). For her field of teaching French as a foreign language, Rodina (2008, pp. 115-116) provided a comprehensive list of common autocorrection codes in the appendix of her publication. In addition, she put the full explanations into square brackets (followed by arrows) in order to set them off from the rest of the text (Rodina, 2008, pp. 110-111, 115-116). This is particularly useful for in-text comments. Apart from autocorrection codes, assessors can re-use comments in still other ways. For example, they can set up a comment bank in a separate document that can be utilized across different editors and platforms (Leibold & Schwarz, 2015, p. 42; Rottermond & Gabrion, 2021, p. 40).
However, the effectiveness of text documents for this purpose is disputed, since the comments cannot be organized and drawn on easily (Mandernach, 2018, pp. 8-10; Rybicki & Nieminen, 2012, p. 256). Alternatively, some LMS and software, such as Google Classroom, Google Keep or Emended, enable a more effective organization and re-utilization of comments (Rottermond & Gabrion, 2021, p. 40). Emended (http://emended.com/) is a cloud-based feedback system that allows assessors to create a category system with feedback comments (Rybicki & Nieminen, 2012; see also Chang et al., 2018, p. 409). These can be applied to any submitted electronic assignment (Chang et al., 2018, p. 409; see also McLaughlin et al., 2007, p. 332), similar to the coding procedures in sophisticated qualitative data analysis programs such as MAXQDA. This way, assessors do not need to retype the comments, but can simply click on them and drag them to the pertinent passage (Rybicki & Nieminen, 2012, p. 256). The comments can be rich in detail, e.g. containing explanations, examples, external links to further resources and suggestions for improvement (Chang et al., 2018, pp. 409, 419; Rybicki & Nieminen, 2012, p. 256). Furthermore, teachers (and learners) may easily obtain an overview of the relative frequency of error types by looking at the code frequencies within the category system (cf. Rybicki & Nieminen, 2012, pp. 255, 257). In addition, teachers can track the changes made by the learners when they resubmit their paper based on the feedback they have received (Rybicki & Nieminen, 2012, p. 257). In that respect, the program automatically highlights the modifications that were made, and so assessors can check whether the comments have been understood and addressed by the learners (Rybicki & Nieminen, 2012, p. 257). There are thus many enhanced benefits as opposed to text commenting in word processors (Rybicki & Nieminen, 2012, pp. 256-258). However, teachers and learners alike would need access to the platform in order to use it. In addition, receiving default comments from their teachers could be perceived as demotivating by learners over time (Chang et al., 2018, p. 409). Moreover, the system has been primarily designed for written feedback, and so the advantages of oral feedback cannot be seized since the oral mode has not been integrated so far. One possible alternative would be Google Keep (https://keep.google.com/), which allows for the organization of multimedia resources, such as texts, hyperlinks, images (e.g. motivating badges, stickers or animated GIFs), videos and voice messages (Bell, 2018; see implementation in cloud section 3.3.6 below). Furthermore, other apps or databases could be utilized that facilitate the organization and retrieval of recurring comments (see also section 3.3 on cloud documents). Even a very simple, self-made comment bank can already save typing time, as the sketch below illustrates.
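At its core, a comment bank is nothing more than a table of shorthand codes and full comments. The following minimal sketch assumes Python; the codes and comment texts are invented examples in the spirit of the "*av" convention above, not items from the cited publications.

# Minimal sketch: a reusable comment bank with shorthand codes, mirroring
# the autocorrection idea ("*av" etc.) described above.
# All codes and comment texts are invented for illustration.
COMMENT_BANK = {
    "*av": "Replace passive voice with active voice.",
    "*frag": "This is a sentence fragment; add a main verb.",
    "*src": "Please support this claim with a cited source.",
    "*good": "Well-structured paragraph with a clear topic sentence!",
}

def expand(shorthand: str) -> str:
    """Return the full feedback comment for a shorthand code."""
    return COMMENT_BANK.get(shorthand, f"Unknown code: {shorthand}")

# Example: expand the codes an assessor noted down while reading.
for code in ["*av", "*src"]:
    print(f"{code} -> {expand(code)}")

Keeping the bank in one place also makes it easy to balance the entries, e.g. to ensure that praise comments such as "*good" are represented alongside corrective ones.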
Returning the reviewed assignment: Apart from the many possibilities for inserting in-text comments, general overviews or evaluative summaries are helpful for the students (C5, i.e. concluding comments). They can be written at the beginning or end of the reviewed document, for instance (cf. Chang, N. et al., 2012, p. 19). As another alternative, a completed assessment rubric (see section 2.1.2) could either be copied into the text document or handed out together with the reviewed assignment. The files would then be attached to an email (Rodina, 2008, p. 109) or uploaded onto the course cloud or LMS (e.g. Ene & Upton, 2014, p. 84). Likewise, the email or LMS comment section might contain a general summary for the attached document (see section 3.8 on email feedback). When disseminating the feedback, assessors should also request a revision from the students (Rodina, 2008, p. 113) or any other feedback response (see section 2.2.4). Especially if they work with offline editors, the students should save the document as a new version by including "rev" (for revision) at the end of the file name, as in "application_Miller_rev" (Rodina, 2008, p. 113). As soon as the learners open the document, they could turn off the track changes function (Rodina, 2008, p. 113) so that their implementation of the comments will not be tracked (unless assessors want to see it). They should then work through the document and make all necessary changes (Rodina, 2008, p. 113) or note down open questions to engage in further discussion (see section 3.1.7 below). However, they should not blindly accept all changes, e.g. by using the "Accept all" function, but make a conscious decision for each correction or suggestion (Yohon & Zimmerman, 2004, p. 227). Finally, the students are asked to upload or send their completed revision (Rodina, 2008, p. 113) as part of the formative assessment process. Assessors may also request a reflective paper (or recording; see section 2.2.4) from the students in which they explicate their learning gain from the feedback and revision process. Optionally or alternatively, a peer review could be conducted, either before, while or after the teacher comments on the document. To exemplify, Rodina's (2008, p. 112) students simultaneously received peer and teacher feedback to consider the suggested changes more deeply and seriously. Clearly, the implementation of peer reviews requires multiple skills from the students, as outlined in section 2.2.3 in more general terms. Some more specific hints can also be found in the subsequent advice for students.

3.1.7 Advice for students

Learners usually demonstrate "varying degrees of computer literacy" (Silva, 2012, p. 8; cf. Bush, 2020, p. 10). Therefore, it is crucial to support students in the use of the different word processing features so that they can give feedback themselves as well as make sense of the edits and comments they receive (see e.g. Cunningham, 2017b, p. 63; 2019a, p. 236; Howard, 2018, pp. 140, 143, 210). Some students may have difficulties in accepting comments in Word, in converting Microsoft Word files, saving a document from a template (Silva, 2012, p. 8) or in working with hanging indents, headers, page numbers and margin settings (p. 13). As Silva (2012) explained, many students did not know how to get rid of the comments:

As a result, students opened their original document, making changes in the original, while reviewing the teacher comments in a separate window. This revision practice made it impossible for students to view any punctuation corrections I made (they were in red but were very small and nearly impossible to see), which explains why students failed to address numerous punctuation problems in their revisions. (p. 13)

She further argued that "[i]f students had known about the "Accept" function in Microsoft Word, they could have completed all their revisions in one digital space without managing multiple windows or monitors" (Silva, 2012, p. 13).
Similarly, Howard (2018) noted that some students struggled with the "Track changes" feature when obtaining their corrections (pp. 215, 225-226, 248). Likewise, Cunningham noticed that students had difficulties in relating the comment bubbles to their text (2017b, p. 66; 2019a, pp. 237-238) or simply closed them without addressing the corrections in their papers (2017b, p. 63; 2019a, p. 236). In the same vein, some respondents in Cruz Soto's (2016) study "had trouble managing editing features of their word processing software" (p. 42). Therefore, learners obviously first need an introduction to the most important reviewing features of their text processing program (Cunningham, 2017b, p. 63; 2019a, p. 236; Yohon & Zimmerman, 2004, p. 227). As a first step, the feedback recipients should know how to access the reviewed document. If they encounter technical problems, for example if the file cannot be opened, they should notify the assessor promptly. Once the document is opened, the learners should check the track changes settings of the word processor (cf. Yohon & Zimmerman, 2004, p. 227). They might want to switch the view between their originally submitted document, the revised document with all visible markups, selected markups only or simple markups. Alternatively, they could open their original draft in a separate window and compare it to the commented version, either manually or by using the document comparison tool. Of course, they could also make use of the possibility to print out the text feedback (Edwards et al., 2012, p. 105; Mathieson, 2012, p. 149; McCarthy, 2015, p. 164) and compare it to the original submission (Langton, 2019, pp. 41-42). This might be helpful for later reference or for flicking through the feedback document (Edwards et al., 2012, p. 105). However, this would entail printing and paper costs. For these and other reasons, familiarity with the corresponding functions of the word processor is crucial. Next, students should not blindly accept all suggested track changes, but consider each one carefully in order to learn from them (Yohon & Zimmerman, 2004, p. 227). While working through the comments, the feedback recipients should note down their open questions, e.g. in a separate document or as a direct response to a comment bubble. In case learners prefer taking handwritten notes, they could either print out the annotated assignment or use another piece of paper. To save resources, they might also want to use the handwriting function of their tablet or iPad. The learners would then discuss their open questions and further remarks in a subsequent classroom meeting or during the (online) office hours, via a re-upload or email, for example. Ultimately, the learners should be able to implement the feedback, either in a revision (if requested) or in a subsequent task (see section 2.2.4). If there is some delay between the reception of the feedback and a similar assignment, it would be advisable for the learners to revisit the feedback before engaging in the new task. Once learners are familiar with the reviewing, editing and commenting features of a word processor, they may also use them for peer feedback purposes (see e.g. AbuSeileek, 2013b; Ene & Upton, 2014; Ho & Savignon, 2013; Rodina, 2008, pp. 111-112). Beyond that, they can utilize comments and tracked edits as well as further functions in order to self-assess their writing.
To instantiate, the spelling and grammar checkers would be helpful to eliminate typos, punctuation errors and grammatical mistakes (see AWE in section 3.2). Furthermore, they can use comment bubbles to write memos to themselves in order to revisit them at a later point (e.g. whether they should mention a particular argument or example in the current or in a subsequent paragraph). They might also use the comment function in order to respond to the reviewer's remarks or to ask the instructor questions. Finally, with the track changes they can create visible traces of their revisions for their own or for their teacher's later perusal.

3.1.8 Combinations with other feedback methods

Feedback provided via text editors can be combined with many further modes and methods. Especially the combination of automated (AWE; see section 3.2) and manual corrections in a word processor appears to be profitable for learners and teachers alike. For teachers, it is a time-saver; for students, it is a useful tool for monitoring their text production and for correcting mechanics prior to draft submission. To exemplify, in the study by AbuSeileek (2013b), the "combination of the track changes and word processor feedback" resulted in a better writing performance than using only one of the two ways (p. 330). In addition, AWE and/or text editor feedback can be combined with audio feedback (section 3.11), video feedback (section 3.12) and screencast feedback (section 3.13). Some platforms and programs even afford the direct integration of audio or video comments into the reviewed document. For example, in Microsoft Word and Adobe PDF, audio comments can be attached to particular words or passages (see section 3.11). Likewise, some LMS, including Moodle, Canvas or Blackboard, offer a variety of annotation tools that exceed the written modality, i.e. voice and video comments. On the other hand, if the reviewed text drafts are distributed to the writers via email, then email feedback can complement the written feedback in the text document (section 3.8). Aside from these asynchronous modes, the text annotations can be shown or performed during a web conference, which would represent another possible combination of feedback methods (see section 3.14). Similarly, synchronous chats (section 3.7) would enable further interaction, discussion and clarification of the feedback in the text document (cf. Ene & Upton, 2018). Alternatively, assessors may discuss the feedback with the learners during a face-to-face conference (Silva, 2012, p. 3).

3.1.9 Questions/tasks (for teachers and students)

Knowledge and reflection questions:
(1) For what types of tasks might text editor feedback be well suited?
(2) What functions and features of the text editor program are useful for feedback provision?
(3) What are possible disadvantages of text editor feedback for learners?

Application questions/tasks:
(4) In this chapter, you learned about the different tools of text editors that you can utilize in order to review an assignment. Take a text draft by one of your students or peers and consult (or list) relevant assessment criteria.
a. Think about a color scheme that you consider appropriate and apply it to the text draft.
b. What other tools do you use (comment bubbles, track changes)?
c. Before sending the text to the student, reflect on your use of the different text editor tools.
For this, it might be useful to return to the reviewed text on another day. Think about the recipient's perspective: Are your comments clear enough? Are the track changes discouraging? Is the color coding consistent or maybe even distracting? Change your feedback if necessary.

3.2 Automated writing evaluation (AWE)

3.2.1 Definition and alternative terms/variants

Automated Writing Evaluation (AWE) systems rely on the use of Artificial Intelligence (AI) and Natural Language Processing (NLP) techniques with the aim of providing automated feedback and corrections to the users (review by Barrot, 2021, p. 1). AWE is the only technology-generated method that is described in this book, but with the intention of using it in a human-mediated manner (see below). As the name indicates, AWE has mostly been applied to writing, notably essays, but more recently also to other written genres (Cotos, 2018, p. 2). The software can analyze and evaluate various aspects of writing, such as language errors in terms of mechanics, grammar, word choice, usage and style, etc. (Cotos, 2018, pp. 1-2). Some programs may also indicate errors or recommendations beyond the phrase and sentence level, e.g. concerning idea development, discourse structure and rhetoric (Cotos, 2018, p. 2). Automated Writing Evaluation (AWE) has been referred to by many names in the past decades (Ariyanto, Mukminatien, & Tresnadewi, 2021, p. 41; Azmi, Al-Jouie, & Hussain, 2019, p. 1736). Sometimes the labels have been used interchangeably, sometimes with a different accentuation. For instance, we find "Computer-Based Text Analysis (CBTA)" (Ariyanto et al., 2021, p. 41) and "Online Automated Feedback (OAF)" (Cheng, 2017, p. 18), with one notion highlighting the device and the other stressing the online interface. The idea of correction and grading is foregrounded in terms such as "Automated Written Corrective Feedback (AWCF)" (Barrot, 2021, pp. 2-3) and "Automated Essay Grading (AEG)" (Azmi et al., 2019, p. 1736). The prevalent focus on essays becomes manifest in several further labels, such as "Automated Essay Evaluation (AEE)" (Ariyanto et al., 2021, p. 41) or "automatic evaluation of essays" (Azmi et al., 2019, p. 1736) as well as "Automated Essay Scoring (AES)" (Zhang, 2021, p. 170). Altogether, AWE and AES seem to be the two most common notions in the literature. Cotos (2015) used "Automated Writing Analysis" (AWA) as a cover term for the two (p. 199), with AES primarily serving summative assessment and AWE being useful for formative assessment (p. 200). This distinction has, however, not reached universal agreement. In other papers, AES is simply described as another name for AWE (e.g. Zhang, 2021), with AWE having both the functions of providing summative assessment scores and formative corrective feedback (e.g. Cotos, 2018, p. 2; Waer, 2021). Similarly, Bai and Hu (2017) stated that AWE offers "quantitative assessments and qualitative diagnostic feedback" (p. 67). Nevertheless, the scoring idea captured by the term "Automated Essay Scoring (AES)" outlines a central purpose of these systems. Especially the first generations of AWE aimed at grading and benchmarking learners' performance by calculating summative scores to assess writing quality (see the review by Hazelton, Nastal, Elliot, Burstein, & McCaffrey, 2021, p. 43). This is reflected in definitions of AWE systems as tools that analyze texts and measure features "quantitatively" (Cotos, 2015, p. 199) to produce "holistic scores" (Cotos, 2018, p. 1) or specific "count[s]" of grammatical errors (Bridgeman & Ramineni, 2017, p. 62) or of any other "measurable assessment criteria" (Cheng, 2017, p. 18).
As such, AES systems assist decision-making, notably by testing agencies rather than by teachers in the classroom (Cotos, 2018, p. 2; cf. Hazelton et al., 2021, p. 43). Examples include the "Intelligent Essay Assessor" or the "e-rater®" by the Educational Testing Service (Hazelton et al., 2021, p. 43). By contrast, a newer generation of AWE systems sets a focus on guiding writers during the composition process (cf. the review by Hazelton et al., 2021, p. 43). This is reflected in names such as the "Writing Mentor" (see Burstein, Riordan & McCaffrey, 2020; Hazelton et al., 2021), "ProWritingAid" (Ariyanto et al., 2021) or "Virtual Writing Tutor" (John & Woll, 2020, p. 185). Accordingly, authors such as Burstein, Riordan and McCaffrey (2021) explained that AWE feedback "provides writers with actionable guidance during the writing process by suggesting corrections and potential improvements" (p. 342). The emphasis on formative feedback also becomes evident in names such as "Formative Automated Writing Evaluation (fAWE)" (Hazelton et al., 2021, p. 39). Instead of scores, these systems give feedback by e.g. underlining or highlighting errors or giving metalinguistic hints to support the writers (AbuSeileek, 2013b, p. 320; Hazelton et al., 2021, p. 39). Similarly, Wang and Bai (2021) concluded that "AWE systems not only serve as scoring engines but also as Computer Assisted Language Learning (CALL) tools for users" (p. 68). Overall, we may thus identify two major functions of AWE systems: calculating achievement scores for testing purposes and providing formative feedback for writing support, e.g. through self-correction. Moreover, educators can use the AWE functionalities to speed up the correction process, as they do not need to correct every linguistic mistake, but can concentrate on higher-level aspects. These advantages, but also the still-prevailing limitations of AWE, will be synthesized in sections 3.2.3 and 3.2.4. For now, we may define AWE feedback as a technology-generated feedback method relying on computational analyses of (written) language that can serve scoring purposes, but can also support learners and teachers during correction processes (including self-correction) through error indication in direct (supplying a correction) and indirect ways (underlining, metalinguistic hints).
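To make the distinction between direct and indirect error indication concrete, the following minimal sketch uses the open-source LanguageTool checker via the third-party language_tool_python package. The tool choice and the sample sentence are illustrative assumptions; the book does not prescribe any particular software.

# Minimal sketch: direct vs. indirect AWE-style error indication with
# LanguageTool (assumes "pip install language-tool-python" and a local
# Java runtime, which the package needs in order to run the checker).
import language_tool_python

tool = language_tool_python.LanguageTool("en-US")
text = "She have went to the library yesterday."  # invented learner sentence

for match in tool.check(text):
    flagged = text[match.offset : match.offset + match.errorLength]
    # Indirect feedback: point to the error and give a metalinguistic hint.
    print(f"Check '{flagged}': {match.message}")
    # Direct feedback: supply a concrete correction where one is available.
    if match.replacements:
        print(f"  Suggested correction: {match.replacements[0]}")

Printing the hint before the correction mirrors the pedagogical idea above: learners first get a chance to self-correct before seeing the supplied solution.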
3.2.2 Contexts of use, purposes and examples

As the name already indicates, AWE (automated writing evaluation) is mainly concerned with written assignments (Cotos, 2018, p. 2). Similarly, the term AES (automated essay scoring) refers to a written genre, namely essays in particular. The scope of assignments can, however, be widened to other genres (cf. Cotos, 2018, p. 2) as well as to other file types in which text elements are found, such as PowerPoint presentations or chat messages (see section 3.7). Not all file types work with every software package, though (cf. section 3.2.4). Moreover, most of the programs have been specifically designed for evaluating essays, with other genres being relatively unexplored (Cheng, 2017, p. 18). While AES and AWE can be applied to many subjects, these systems were originally developed for writers in English-speaking countries (Jingxin & Razali, 2020, p. 8335). Many studies therefore examined the usefulness of AWE for native speakers (Cotos, 2018, p. 3). By now, AWE systems have also frequently been investigated in the field of foreign language writing instruction (cf. Jingxin & Razali, 2020, p. 8335). There is also software that has been created for specific groups of EFL (English as a foreign language) learners (e.g. Pigai for Chinese learners; Bai & Hu, 2017, p. 67) and for the learning of other languages, such as the Spanish Writing Mentor (see section 3.2.5). Due to continuous improvements in AI and NLP, research keeps flourishing, and programs for many different languages might be developed and improved in the future. Based on the survey of the literature, it appears that many AI researchers highlight the potential of AWE, whereas educators (and many other users) do not seem to be fully convinced of its features. In the end, AWE should only be viewed as a supplement to human feedback and other feedback methods, not as a replacement (cf. e.g. Ariyanto et al., 2021; Bai & Hu, 2017; Wilson & Czik, 2016). The next sections will therefore outline the reported advantages and limitations based on a review of current research.

3.2.3 Advantages of the feedback method

It has often been claimed that automatic corrections contribute to a reduction of teachers' workload when marking students' papers (e.g. Palermo & Wilson, 2020, p. 94; Wilson & Czik, 2016, quoted in Ariyanto et al., 2021, p. 43). Notably, for lower-level aspects of spelling, punctuation and grammar, AWE seems to work fairly well, so that teachers can use it for form-focused feedback (Miranty & Widiati, 2021, p. 127). They would thus have more time to provide personalized feedback on higher-order issues, such as content, structure and argumentation (Miranty & Widiati, 2021, p. 127; see also Dikli & Bleyle, 2014; Lai, 2010; Palermo & Wilson, 2020; Warschauer & Grimes, 2008; Wilson & Czik, 2016). AWE can consequently be used as a complement to personalized feedback in text editors (e.g. through comment bubbles) or to any other on-site or online feedback method that is deemed suitable for a particular purpose (see section 3.2.8 for suggestions). Since AWE tools help to gain an overview of recurring problems, class time could be seized effectively, with a discussion focusing on the most frequent issues instead of time-consuming individual feedback (Ariyanto et al., 2021, pp. 44-48). AWE is also a time- and cost-saver for large-scale assessments (Zhang, 2021, p. 174; see also Li, Link, & Hegelheimer, 2015; Waer, 2021; Wilson & Roscoe, 2020; Zhai & Ma, 2021). Moreover, AWE is time-saving for the students because it can help learners self-assess (Balfour, 2013). They obtain immediate feedback about a wide range of error categories (Barrot, 2021, p. 3; see also Cheng, 2017; Palermo & Wilson, 2020, p. 102; Waer, 2021, p. 3) so that they can check their papers instead of having to wait for teacher feedback (cf. Ariyanto et al., 2021, p. 45). It allows them to "work at their own pace" (Waer, 2021, p. 5) as well as to access the feedback whenever they want to, independent of time and place restrictions (Dembsey, 2017, p. 89). Furthermore, learners can consult the feedback without having to fear a face-to-face evaluation by their teachers or peers (Waer, 2021, p. 5). Affective factors were frequently cited as well. AWE appears to be more neutral since it is unaffected by "the mood or current state of mind of the teacher, such as exhaustion from checking papers and attitude towards the students" (Barrot, 2021, p. 3).
Potential biases and inconsistencies by human raters are thus reduced (Zhang, 2013). Moreover, AWE can be a "confidence builder" for students (Ariyanto et al., 2021, p. 45; cf. Barrot, 2021, pp. 15, 17; Chen & Cheng, 2008, p. 108). Both high- and low-achieving students may use it to check their writing, which can lead to increased autonomy and motivation (Bai & Hu, 2017, p. 69; Chen & Cheng, 2008, p. 108; Cheng, 2017; Grimes & Warschauer, 2010; Palermo & Wilson, 2020; Zhai & Ma, 2021; Zhang, 2021, p. 175). They might even be willing to invest more "time in solving problems in their writing" (Wilson & Roscoe, 2020, p. 92), as AWE provides ample opportunities for revising (Wilson & Czik, 2016). This way, it may even help to decrease writing apprehension (Waer, 2021, p. 16; see also Miranty & Widiati, 2021). To avoid becoming "slaves of the AWE system", learners need to be "capable of [making] independent judgement" regarding the suggested revisions (Bai & Hu, 2017, p. 77). For this, they need to possess a relatively high language proficiency, though (cf. the participants in the study by Bai & Hu, 2017; see also limitations below in section 3.2.4). Overall, AWE tools may identify a broad range of error types (Barrot, 2021, p. 3), e.g. concerning spelling, punctuation, grammar, sentence variation, vocabulary and word usage, style, text complexity and structure (Balfour, 2013, p. 42). In that regard, the color-coding systems that some AWE software (e.g. Grammarly) employ can help learners recognize different kinds of errors (Barrot, 2021, pp. 4, 14; Cotos, 2015, p. 214; Miranty & Widiati, 2021, p. 135). This color-coding serves as an input-enhancement technique to facilitate noticing (Cotos, 2015, p. 214). Apart from error indications and corrections, many programs also offer metalinguistic explanations (Barrot, 2021, pp. 3-4; see e.g. Palermo & Wilson, 2020, p. 70), which learners can utilize for their revisions in a self-regulated manner (Palermo & Wilson, 2020, p. 101). For instance, information about stylistic and structural aspects may ultimately help students to write in compliance with expected standards and conventions (Chen & Cheng, 2008, p. 108). Apart from individual indications of errors and suggestions in the submitted text, many AWE programs offer an overall score as well as information about the distribution of different error types (see e.g. Palermo & Wilson, 2020, pp. 69, 71). This kind of diagnostic feedback can assist the learners in identifying the areas they still need to work on. It may thus support them in their self-regulated learning (Barrot, 2021, p. 15). Moreover, they can use this feedback report as a foundation for further consultation with their teacher or tutor (see section 3.2.5). Even word processors can provide valuable corrective feedback to writers. In Microsoft Word, for instance, errors are underlined, and learners can click on them in order to obtain alternative suggestions (cf. AbuSeileek, 2013b, p. 320). Moreover, the in-built "Editor" of the most recent Word version shows an overview of different error categories and suggestions for improvement. Usually, though, AWE tools have a broader range of functions than text editing programs and operate more reliably. Many of them even offer a plagiarism detection feature (e.g. PaperRater, Grammarly). This feature flags those parts of a text that might be counted as plagiarism and therefore asks the users to add a proper citation (Barrot, 2020, p. 2).
For teachers, this is a useful feature to check the originality of students' papers; for students, it is helpful to monitor their source use (cf. Barrot, 2020, p. 2). Given the variety of features, AWE software can thus provide a high quantity of feedback, which can be both an advantage and a drawback (see limitations in section 3.2.4). On the positive side, it could lead to a higher learning gain. Especially regarding mechanics and other lower-level aspects, AWE tools are relatively reliable and useful for learners (e.g. Palermo & Wilson, 2020, pp. 95-96). For instance, Bai and Hu (2017) found that revisions improved by "100 % for mechanics, 57 % for grammar and 42 % for collocations" (p. 77). Further studies likewise noted that the grammar, word choice, punctuation and spelling functions can be valuable for both high- and low-achieving students (cf. Ariyanto et al., 2021, pp. 45-48; Li, Feng, & Saricaoglu, 2017; Liao, 2016; Saricaoglu, 2019; Saricaoglu & Bilki, 2021). By contrast, others argued that learners with a high proficiency may benefit more from AWE, whereas low performers might gain more from teacher feedback (Ariyanto et al., 2021, p. 49; see also Zhang, 2020). This is reasonable because a certain degree of language knowledge and language awareness seems to be required to judge the correctness of the AWE feedback (cf. Ariyanto et al., 2021, p. 49). Teachers may therefore incorporate AWE into their classes for focus-on-form activities in order to promote linguistic accuracy (John & Woll, 2020, p. 170). Indeed, the comments provided by several AWE programs can encourage metalinguistic reflection about learners' performance and facilitate the noticing of errors and mistakes (cf. Barrot, 2020, p. 4; 2021, pp. 5-7, 17). Nevertheless, low-level students could likewise have positive attitudes towards AWE and may find it useful (Cotos, 2018, p. 4) even if it is not fully effective for their writing. As part of the writing process, AWE seems to be suitable for the early stages of drafting as well as for later stages of revision (Cotos, 2018, p. 4). Its ultimate effect on the quality of texts might, however, be rather small, given that higher-order aspects cannot yet be reliably assessed by AWE programs. Accordingly, studies have found "small to moderate effects on […] writing quality" (Wilson & Roscoe, 2020, p. 91; see also Stevenson & Phakiti, 2014). Its effectiveness appears to depend on several factors, for instance on its successful integration into an instructional context (Ariyanto et al., 2021, pp. 41-42; Wilson & Roscoe, 2020, p. 118) as well as on learners' proficiency. Before we move on to recommendations regarding the implementation of AWE, though, the commonly reported limitations will be surveyed next.

3.2.4 Limitations/disadvantages

In the previous section, we have seen that AWE can assist teachers in the marking process and learners in the process of self-correction and reflection. However, a certain degree of language knowledge appears necessary in order to check the correctness of the suggested changes. Otherwise, applying AWE suggestions can introduce new errors. Apart from the cognitive prerequisites, affective challenges of AWE were mentioned frequently. As AWE programs analyze a variety of error types, learners may quickly receive a huge amount of feedback. Accordingly, they might feel overwhelmed (Barrot, 2021, p. 15) and discouraged (Cheng, 2017; Cotos, 2018, p. 6).
They could feel overburdened because they do not know how to prioritize and how to proceed with the revision (Cotos, 2018, p. 6). Since the emphasis of AWE is on error detection, learners might be demotivated further due to a lack of praise and positive feedback (Dembsey, 2017, p. 89). Ultimately, they may even refrain from engaging in the revision and further learning process (Cheng, 2017). Negative perceptions of AWE feedback also stem from its imprecision regarding certain error types. On the one hand, not all errors are recognized by the program (Bai & Hu, 2017, p. 68; see also Dikli & Bleyle, 2014). Chen, Cheng and Yang (2017), for instance, conducted a comparison of AWE and instructor feedback and found that AWE detected fewer errors. On the other hand, AWE software may overcorrect errors (Barrot, 2021, p. 3; see John & Woll, 2020, for a more detailed comparison of error detection accuracy). If students notice false corrections, they might distrust AWE (Liaqat, Munteanu, & Demmans Epp, 2021, p. 660). For instance, a student in Liaqat et al.’s (2021) study felt confident in grammar and doubted that the errors indicated by the program were correct (p. 660). Analogously, other linguistic areas might be assessed inappropriately. To exemplify, some software “overemphasizes the use of transition words” and “ignores coherence and content development” (Chen & Cheng, 2008, p. 104; see also Wilson & Roscoe, 2020). What is more, the feedback is often based on conventional language patterns, which makes it less suitable for creative writing and other unconventional ways of expression (Chen & Cheng, 2008, p. 104). Rather, it seems to promote the use of formulaic language (Stevenson & Phakiti, 2014; Wilson & Roscoe, 2020). When utilizing AWE, students might even “consciously or unconsciously adjust their writing to match the assessment criteria of the software” (Cotos, 2015, p. 206; see also Zhang, 2021, p. 175). Likewise, the language of the feedback could be too formulaic and generic to be helpful for the learners (Chen & Cheng, 2008, p. 101; cf. Jiang & Yu, 2020, p. 2; see also Grimes & Warschauer, 2010; Ranalli, 2018). As a result, they might not know what to do and may need to ask the teacher for help and further explanations (Chen & Cheng, 2008, p. 101). This, in turn, would increase the teacher’s workload (Chen & Cheng, 2008, p. 101). Otherwise, the application of AWE feedback could lead to false revisions. Indeed, some students might be tempted to blindly accept all AWE corrections and suggestions “without actually trying to find out whether it is true or not” (student cited by Ariyanto et al., 2021, p. 47; cf. Barrot, 2021, p. 17). In particular, students with a lower proficiency do not yet seem to be capable of judging the automated feedback critically (Ariyanto et al., 2021, p. 47). This is highly problematic because AWE feedback can be inaccurate (e.g. Bai & Hu, 2017, p. 73; Chen et al., 2017; Miranty & Widiati, 2021, p. 135). For instance, Dodigovic and Tovmasyan (2021) remarked that “Grammarly has approximately one third of inaccuracies […] in addition to errors it ignores” (p. 83; cf. Chen et al., 2017, p. 126, for the “Correct English” system). Additionally, the language variety might have an impact, as the software seems to work better with US English than with other varieties (Nova, 2018) or dialects. Moreover, the proficiency level could be influential.
For low- and medium-quality essays, Wang and Bai (2021) reported an agreement of more than 80 % between the AWE scores and a human rater, whereas it was lower than 40 % for high-quality essays (p. 778). Higher-order aspects and “sophisticated syntactic errors” (Wang & Bai, 2021, p. 778) might be specifically challenging for the programs. The AWE analyses and suggestions could therefore be of limited use only (Chen & Cheng, 2008, p. 97; cf. the review by Bai & Hu, 2017, p. 69). Some claimed that it could be useful for writing the first draft, but not for further revisions (Chen et al., 2017, p. 102; see also Cotos, 2018). Ariyanto et al. (2021) observed that the students with high proficiency only applied AWE to their first drafts in order to eliminate minor errors before submitting their drafts to their teachers (pp. 46-47). Low-achievers, by contrast, not only used AWE for their first drafts, but also for later drafts to reduce the number of errors (p. 47). However, low-achieving students may even make new grammar errors when they attempt to revise their content based on the AWE feedback (Huang & Renandya, 2020). Learners might also be more inclined to concentrate on surface-level aspects during their revisions rather than “giving sufficient attention to meaning” (Chen & Cheng, 2008, p. 95) and argumentative quality (Cotos, 2015, p. 206). Feedback on meaning and argumentation would, however, be needed by the students as soon as they have reached a certain level of linguistic proficiency (Chen & Cheng, 2008, pp. 106-107). Furthermore, AWE can evaluate neither the factual correctness of the papers (Cotos, 2015, p. 206) nor the level of learners’ creative expression (Waer, 2021, p. 4). Ultimately, the use of AWE software could even discourage creative writing if students rely on the “pre-programmed essay prompts” (cf. Cotos, 2015, p. 206). Overall, it appears that AWE is most useful for immediate corrections of surface-level aspects (Chen et al., 2017, pp. 125, 127), but not for improvements of fluency, text structure (Lv, 2018, p. 194), argumentation and creative expression. Moreover, the long-term effects of AWE feedback have not been established yet (Ariyanto et al., 2021, p. 48). Furthermore, many applications have been created for feedback on a writing product rather than for in-process writing support (Strobl et al., 2019). Related to that, the social dimension is critical for feedback and learning. Even though some programs utilize an avatar (e.g. The Writing Mentor, cited by Burstein et al., 2021, p. 337; see section 3.2.5) or resort to conversational language when giving metalinguistic explanations (e.g. the Research Writing Tutor presented by Cotos, 2015, p. 214), AWE cannot fully replace interactions with humans. Several scholars have therefore criticized the non-interactive nature of AWE (Gao, 2021) and its detachment from the social and communicative aspects that are typical of real-world contexts (Cotos, 2015, p. 206; see also Vojak, Kline, Cope, McCarthey, & Kalantzis, 2011; Wilson & Roscoe, 2020; Zhang, 2020). They cautioned that AWE might dehumanize the writing class by removing the human aspect (Warschauer & Ware, 2006; see also Chen & Cheng, 2008, p. 96), that it “lacks human pedagogical touch” and ignores “individual differences” (Barrot, 2021, p. 3, based on Ranalli, 2018). The feedback is often not perceived as personal or sufficiently tailored to the individual student (Dembsey, 2017, p. 89; cf. Cotos, 2018, p. 6).
Consequently, it may not be as facilitative in terms of uptake and understanding as teacher feedback (Li, 2021, p. 9). Nevertheless, some teachers worried that AWE could threaten their jobs (Chen et al., 2017, p. 125), although AWE will probably never be capable of understanding a text the way humans do (Graesser & McNamara, 2012, cited in Balfour, 2013, pp. 42-43). Likewise, other scholars concluded that “AWE alone cannot improve students’ writing skills” (Zhai & Ma, 2021, p. 3; cf. Jingxin & Razali, 2020, pp. 8336-8337; Palermo & Wilson, 2020). Rather, it should be reasonably combined with other feedback methods, including teacher feedback, as will be discussed in the subsequent sections 3.2.6 to 3.2.8. There are also some software-specific aspects that need consideration. First of all, each program differs in its scope of functions and in the size of the corpus it draws on (e.g. Bai & Hu, 2017, p. 77). Some of them do not provide sufficient explanations about the errors (Ariyanto et al., 2021, pp. 48-49). Furthermore, several systems had originally been designed for L1 (native/first language) writers and might thus not be able to cater for the specific needs of L2 (second/foreign language) learners (Cotos, 2015, p. 206). Quite often, the feedback is given in English only, which makes it hard to understand, especially for beginning learners (Jingxin & Razali, 2020, p. 8337). Likewise, more AWE systems need to be developed for languages other than English (Strobl et al., 2019). Finally, some AWE software can be very expensive (John & Woll, 2020, p. 172) and merely offers some basic functionalities in the free version, which are of limited use to learners and teachers. The next section will therefore give an overview of some AWE programs and their features.

3.2.5 Required equipment

To use an AWE or AES application, a desktop, laptop or tablet PC is recommended, but for shorter written passages the use of mobile phones might likewise be feasible. In past studies, a wide variety of programs has been utilized. Moreover, it is likely that several new tools will be developed in the future. For summative assessment, the following AES systems and engines were cited: Intelligent Essay Assessor (IEA), Educational Testing Service’s (ETS) e-rater® (https://www.ets.org/erater/about), C-rater (Burstein et al., 2021), AutoScore (Xie, Chakraborty, Ong, Goldstein, & Liu, 2020), Accuplacer (https://accuplacer.collegeboard.org/), MI Write (formerly known as PEG Writing; https://www.miwrite.net/; Palermo & Wilson, 2020; Wilson & Roscoe, 2020), MyAccess! (https://www.myaccess.com/; based on the IntelliMetric AES system; Hussein, Hassan, & Nassef, 2019, p. 6) and Pigai (http://www.pigai.org/; Bai & Hu, 2017; Gao, 2021; Jiang & Yu, 2020; Jingxin & Razali, 2020; Wang & Bai, 2021).
In turn, the AWE software that has mainly been employed for formative assessment includes Grammarly (https://www.grammarly.com/; Barrot, 2020; 2021; Miranty & Widiati, 2021; Nova, 2018), Jukuu (http://www.jukuu.com/), Criterion (https://criterion.ets.org/criterion/default.aspx; Burstein, Chodorow, & Leacock, 2003; 2004; Saricaoglu & Bilki, 2021), ProWritingAid (https://prowritingaid.com/; Ariyanto et al., 2021), Turnitin (https://www.turnitin.com/), PaperRater (https://www.paperrater.com/; Reynolds, Kao, & Huang, 2021), Correct English (https://www.correctenglish.com/; Chen et al., 2017), WriteToLearn (and other tools available at https://www.pearsonassessments.com/), Writing Mentor (https://mentormywriting.org/; Burstein et al., 2021; Hazelton et al., 2021), Revision Assistant (https://www.revisionassistant.com/), Writing Roadmap (https://www.roadmapwriters.com/), Virtual Writing Tutor (https://virtualwritingtutor.com/), iWrite (https://iwrite.org/) as well as the spelling and grammar checking functions of Microsoft Word. Overall, the functional coverage of the Microsoft application appears to be low, making it of more limited use than other programs (John & Woll, 2020, p. 185). However, the added value sometimes only pertains to the paid versions of AWE programs, whereas the free alternatives have a limited functional scope. For instance, when comparing the tool descriptions of Microsoft Word and Grammarly (https://www.grammarly.com/plans), we observe that MS Word and the free version of Grammarly concentrate on grammar, punctuation, spelling and clarity, with only one additional feature mentioned for Grammarly Free, which is tone of delivery. Beyond that, the premium version of Grammarly analyzes fluency, formatting, formality level, consistency in spelling and punctuation, inclusive language use and politeness as well as compelling word choice and sentence variation. It also offers a plagiarism screener (Barrot, 2020, pp. 2, 4). To access these additional features, users would need to pay a monthly fee, though (https://www.grammarly.com/plans). The free and paid versions of Grammarly are compatible with iOS and Windows operating systems on several devices and can be accessed online with different web browsers (Chrome, Safari, Firefox, Microsoft Edge) (Barrot, 2021, p. 4). They can thus be utilized in any convenient place with internet access (Nova, 2018; see also Guo, Feng, & Hua, 2021, p. 5). Furthermore, Grammarly works as an add-on in several applications, including Word (Miranty & Widiati, 2021, p. 135; see also Guo et al., 2021, p. 5). Users can select from different varieties of English (American, British, Canadian, Australian) (Barrot, 2021, p. 4). To help learners distinguish between error types, Grammarly underlines them in different colors (Barrot, 2021, p. 4; Miranty & Widiati, 2021, p. 135). Red stands for spelling, punctuation and grammar; purple for tone, formality and politeness; blue for issues with clarity and conciseness; and green for suggestions that might make the text more engaging (Barrot, 2020, p. 3; 2021, p. 4). Moreover, the users can obtain a performance report that contains the word count and an overall score based on the indicated corrections and suggestions (Barrot, 2021, p. 4).
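To illustrate the color-coding principle in a hands-on way, the following toy sketch maps the four feedback categories reported for Grammarly above onto underline colors. It is purely hypothetical demonstration code: the spans and categories are hand-made for the example, whereas the real products detect the issues themselves and render them in their own interfaces:

```python
# Toy emulation of category-to-color underlining (not any product's actual code).
CATEGORY_COLORS = {
    "correctness": "red",     # spelling, punctuation, grammar
    "delivery": "purple",     # tone, formality, politeness
    "clarity": "blue",        # clarity and conciseness
    "engagement": "green",    # suggestions for more engaging wording
}

def underline_html(text: str, issues: list[dict]) -> str:
    """Wrap each flagged span in a colored <u> tag. Spans are processed from
    back to front so that earlier character offsets remain valid."""
    for issue in sorted(issues, key=lambda i: i["start"], reverse=True):
        color = CATEGORY_COLORS[issue["category"]]
        span = text[issue["start"]:issue["end"]]
        text = (text[:issue["start"]]
                + f'<u style="text-decoration-color:{color}">{span}</u>'
                + text[issue["end"]:])
    return text

draft = "Their going to adress the comittee tommorow."
flags = [{"start": 0, "end": 5, "category": "correctness"},    # "Their"
         {"start": 15, "end": 21, "category": "correctness"}]  # "adress"
print(underline_html(draft, flags))
```

The design point is the input-enhancement idea noted above (Cotos, 2015): the color is a presentation layer on top of the error detection, so learners can see at a glance which kind of problem a flagged span belongs to.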
The Writing Mentor™ (https://mentormywriting.org/) developed by the Educational Testing Service (ETS) likewise categorizes errors in distinct colors and provides a feedback report at the end. This report can be downloaded and shared with the instructor for further consultation. The Writing Mentor works as a Google Docs add-on (section 3.3) and aims to help writers improve their writing. To personalize the AWE comments, the feedback is given by “Sam”, a non-binary persona with a gender-neutral name (Burstein et al., 2021, p. 337). The Writing Mentor add-on also exists for Spanish as the target language (https://mentormywriting.org/es.html; see e.g. Cahill et al., 2021, pp. 116-119). While the interface of many AWE tools was originally designed for an English-speaking audience, the particular needs of different groups of foreign-language speakers have not been considered by all programs. Pigai (http://www.pigai.org/), however, has been specifically developed for the needs of Chinese learners of English as a foreign language (Bai & Hu, 2017, p. 69; Jingxin & Razali, 2020, p. 8337). “Pigai” is the Mandarin Chinese word for “correction” (Jingxin & Razali, 2020, p. 8337). The program provides feedback in Chinese and considers errors that may come from interlanguage transfer (Jingxin & Razali, 2020, p. 8337). It appears to be particularly helpful for Chinese students with a relatively low EFL proficiency. On the other hand, it does not seem to capture all types of errors in a reliable manner, e.g. regarding collocations, syntax and logic (Gao, 2021, p. 327).

3.2.6 Implementation (how to)

As a prerequisite, teachers should understand the strengths and limitations of AWE in general and of the specific software (Bai & Hu, 2017, p. 79; Li, 2021, p. 10). Furthermore, they should make their choices based on the tool’s affordances in relation to the learning goal, the learners’ language proficiency as well as further pedagogical factors (cf. Bai & Hu, 2017, p. 79; Cotos, 2018, pp. 5-7; John & Woll, 2020, p. 186). Instead of trying to replace other feedback methods with AWE, teachers should integrate the different methods in a meaningful way (Ariyanto et al., 2021; Li, 2021, p. 10). Certainly, teachers need to familiarize themselves with the program (Ariyanto et al., 2021, p. 44) before clarifying the purposes and procedures with the learners (cf. Cotos, 2018, pp. 5-6). They should explain the functions of AWE software and encourage students to try them out. Beyond that, teachers should “offer explicit instruction on how to respond to AWE feedback” regarding different error types, such as mechanics, grammar, content and organization (Zhang, 2020, p. 12). Moreover, the students “should be made aware of an AWE system’s limitations”, for example concerning collocations or syntax (Bai & Hu, 2017, p. 79). In that regard, teachers may draw learners’ attention to the additional use of online corpora (e.g. the British National Corpus or the Corpus of Contemporary American English), grammar books and (online) dictionaries (Bai & Hu, 2017, p. 79). These will help them to judge the correctness of the suggestions and clarify potential confusions (Bai & Hu, 2017, p. 79; Barrot, 2021, p. 9). Hence, writers should always carefully and critically consider the automated feedback before editing their drafts (Ariyanto et al., 2021, p. 48).
In fact, most scholars do not support the exclusive use of AWE for providing feedback, but state that it should be “a supplement to writing instruction rather than […] a replacement of writing teachers” (Chen & Cheng, 2008, p. 98; see also Ariyanto et al., 2021; Cotos, 2015; Palermo & Wilson, 2020). Overall, it should be “integrated into an instructional framework that focuses on the writing process and the writing product” (Waer, 2021, p. 4). This may include combinations with instructor feedback and peer feedback, which also resonates with the social nature of learning in general and writing in particular (Jingxin & Razali, 2020, pp. 8338-8339; cf. Bai & Hu, 2017, p. 79). To exemplify, Jingxin and Razali (2020) designed a framework for using Pigai with EFL students, which could also be applicable to other AWE software (p. 8340). It combines learners’ independent use of the AWE software with peer and instructor feedback (see also Bai & Hu, 2017). In a first step, the pre-writing stage, the learners discuss a particular topic with their fellow students (Jingxin & Razali, 2020, p. 8340). Next, they compose their first draft before they apply the AWE program to it. Based on the automated feedback, the students begin revising and editing their draft. Afterwards, peer reviewers give feedback on these second drafts. The peer feedback is used for a third draft that is submitted to their teachers. While teachers monitor the entire writing process of the students, they also provide feedback themselves (cf. section 2.2.3). This feedback is utilized for the final revision (draft 4) that is handed in for grading by the teachers and scoring by the AWE program (Jingxin & Razali, 2020, p. 8340; the sequence is restated schematically in the sketch below). In addition, there are automated tracking systems that help users “track whether revisions [were] made in response to the feedback received” (Cheng, 2022, p. 6). This may help teachers understand the impact of their feedback and the AWE comments, but can also assist the students in implementing the revisions (Cheng, 2022, p. 21). In order to apply the feedback, though, learners need to assess the usefulness of the AWE corrections and advice. In that respect, Wilson and Roscoe (2020) found that students “made gains in independent writing performance” when AWE was combined with “evidence-based strategy instruction” (p. 91). They modeled general self-regulatory procedures, such as goal setting and self-evaluation, as well as specific strategies for argumentative writing (Palermo & Wilson, 2020, p. 82). In addition, they introduced the AWE system to the students, i.e. NC Write, which is a version of MI Write (Palermo & Wilson, 2020, p. 68). Overall, the integration of AWE was found beneficial as part of iterative cycles of writing practice and feedback (Palermo & Wilson, 2020, pp. 93-94, 101-102). Similarly, Ariyanto et al. (2021) incorporated AWE into a multi-stage writing process with guidance by the teacher (p. 41). Overall, teacher support thus appears to be crucial, at least until students are able to self-monitor their revisions based on their language knowledge and the AWE suggestions (cf. Bai & Hu, 2017, p. 71). During the familiarization and practice stages, teachers should review the suggested corrections of the AWE program together with the learners (Ariyanto et al., 2021, p. 44).
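The multi-draft sequence by Jingxin and Razali (2020) outlined above can be restated schematically. The following sketch simply encodes the stages as data; the labels are a paraphrase for illustration, not the authors’ own terminology:

```python
# Schematic restatement of the Jingxin & Razali (2020) framework (labels paraphrased).
WRITING_CYCLE = [
    ("pre-writing", "discuss the topic with fellow students"),
    ("draft 1", "compose the first draft, then run the AWE program on it"),
    ("draft 2", "revise and edit on the basis of the automated feedback"),
    ("draft 3", "revise again after receiving peer feedback"),
    ("draft 4", "final revision after teacher feedback; handed in for grading "
                "by the teacher and scoring by the AWE program"),
]

for stage, activity in WRITING_CYCLE:
    print(f"{stage:>11}: {activity}")
```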
Reviewing the corrections together also helps teachers identify the most common errors that their students still grapple with and gives them the opportunity to address those errors that the software did not detect (Ariyanto et al., 2021, p. 44). Moreover, they should discuss misclassified errors in order to raise learners’ awareness of potential program flaws and limitations (cf. Ariyanto et al., 2021, p. 45). Many AWE tools allow their users to ignore individual suggestions and corrections (see e.g. ProWritingAid displayed in Ariyanto et al., 2021, p. 44). Students should therefore be encouraged to verify the correctness of the changes by resorting to relevant resources. Some tips for learners will be suggested next.

3.2.7 Advice for students

Before students use AWE software, they should become aware of the features and limitations of different tools. Moreover, they need to recognize the general limitations of AWE and therefore carefully consider the suggestions that are offered instead of blindly adopting them. Some of them might be overcorrections that would distort the text or create a different meaning (cf. Barrot, 2021, p. 9). A strategic and critical use of the corrections and suggestions thus appears crucial (Ariyanto et al., 2021; Bai & Hu, 2017; Guo et al., 2021; John & Woll, 2020, p. 185). Due to bad experiences with the software or a general skepticism toward technology, some learners may even be reluctant to employ AWE software or anxious about doing so. With sufficient scaffolding and practice, they might eventually develop more “trust in AWE feedback and proficiency in utilizing the system” (Zhai & Ma, 2021, p. 19). In the end, AWE can be useful, at least for certain kinds of errors and mistakes. It could even help students before and during the writing phase, not only at the end of it (Cotos, 2018, p. 6). As Cotos (2018) explained, the software “Criterion offers a planning tool containing different templates for planning strategies, and Folio offers interactive practice passages for revision and engaging animated tutorials” (p. 6). Likewise, some plagiarism prevention programs can assist the writing process in addition to checking plagiarism in the finalized draft. For instance, the AWE system PaperRater (https://www.paperrater.com/) has an integrated plagiarism checker to help students monitor the originality of their written work. Moreover, the “Similarity” tool offered by Turnitin (https://www.turnitin.com/products/similarity) displays thematically related texts that students can use in their further writing process (Weßels, 2021, p. 1018). Some programs even offer concrete suggestions for reformulations based on existing texts to assist students in the production of unique texts (e.g. Speedwrite, cited by Weßels, 2021, p. 1018). Apart from paraphraser tools, summarizer tools have also been developed that propose a summary of the scanned text (both tools are offered by QuillBot, as quoted in Weßels, 2021, p. 1018). In a recent article, Weßels (2021) discussed several of these webtools and add-ons (for Microsoft Office or Google Docs) in a critical manner, since students might create texts without engaging with the literature thoroughly. This is a huge challenge for assessors because learners’ independent text production cannot be evaluated on the basis of the final submitted text. Teacher-mediated in-process support is therefore essential for this and other reasons, as outlined above (section 3.2.6).
At the same time, it hints at a possible combination of AWE with other feedback methods, which will be treated in turn.

3.2.8 Combinations with other feedback methods

Except for experimental purposes, AWE is rarely used alone, but typically in combination with other feedback methods. For example, as shown above (section 3.2.6), AWE can become a part of instructor and peer feedback (Bai & Hu, 2017) as well as a source of self-feedback. Generally, additional teacher feedback is considered valuable when AWE (and also peer review) is utilized. Teachers might, for instance, discuss the AWE feedback with the students in order to clarify open questions and eliminate false corrections by the program (Ariyanto et al., 2021, p. 44). Especially for low-proficiency or low-performing learners, automated feedback should definitely be complemented by instructor feedback (Ariyanto et al., 2021, p. 49; Tansomboon, Gerard, Vitale, & Linn, 2017) until they are confident and capable enough to critically evaluate AWE on their own. However, also for teachers, AWE can be a useful supplement to their normal feedback practices. They might experience relief due to a reduced workload because the tools mark several error types that would be tedious to identify when checking papers manually. Scholars have therefore suggested employing AWE for “form-focused feedback such as grammatical accuracy and mechanics” so that teachers have more time to evaluate the content and organization (Miranty & Widiati, 2021, p. 127; see also Ariyanto et al., 2021, p. 48; Cotos, 2018, p. 5; Palermo & Wilson, 2020; Warschauer & Grimes, 2008; Wilson & Czik, 2016). For the latter, they could utilize comment bubbles in the text editor to provide additional personalized feedback (see section 3.1), but also several other feedback methods. To exemplify, they might use email feedback (see section 3.8), audio feedback or video feedback (sections 3.11 to 3.13) to supply comments on higher-order aspects of writing. Several applications are available as add-ons to offline text editors (see Grammarly above; https://www.grammarly.com/office-addin) to facilitate the combined use of automated and manual corrections. Similarly, some add-ons or AWE software, such as the Writing Mentor™ (see section 3.2.5), can be used in Google Docs (cf. Burstein et al., 2018, pp. 285, 288) so that teachers and peer assessors can insert additional comments or delete misclassified errors. The next section will look more closely at Google Docs and other cloud documents to pinpoint their potential for digital feedback provision.

3.2.9 Questions/tasks (for teachers and students)

Knowledge and reflection questions:
(1) Have you ever used AWE consciously? Do you know how to turn it on and off in your word processing program?
(2) What are the advantages of AWE for learners and teachers?
(3) Why is AWE often combined with other feedback methods? Please explain and give suggestions for possible combinations.

Application questions/tasks:
(4) Try out an external app or browser extension for AWE. You may choose the free trial version in order to assess its usefulness for your own and your students’ purposes.

3.3 Feedback in cloud editors (cloud-based editing applications)

3.3.1 Definition and alternative terms/variants

Similar to an offline text editor (see section 3.1), online documents can be used for the provision of feedback.
The affordance of online editing software is that several persons can edit and comment on the same document simultaneously (e.g. Shintani & Aubrey, 2016). This synchronous feedback is typically text-based, but may include other modes as well, such as voice and video (see section 3.3.6). The feedback comments can be accessed instantaneously or in a time-delayed manner. Hence, feedback in collaborative online documents allows for both synchronous and asynchronous exchanges, as indicated by the cloud and the dashed and solid arrows in the picture. However, there does not seem to be an established term yet to describe this type of feedback. Mostly, researchers and practitioners either foreground the name of the app (notably Google Docs, but also Padlet and others) or resort to superordinate terms, such as “synchronous corrective feedback” (SCF) (Shintani & Aubrey, 2016, p. 296). The latter was then paraphrased as feedback via “online simultaneous editing software” (Shintani & Aubrey, 2016, p. 296). With this term, the immediacy of the feedback is stressed, although the comments can also be processed asynchronously. Others, in turn, emphasize the collaborative nature of these online editors or cloud documents by describing them as a “Social Annotation (SA) tool” (Hedin, 2012, p. 1), “online social annotation tool” (E, 2021, p. 5), “collaborative writing program” (Aubrey, 2014, p. 47), “collaborative online writing tool” (Zaky, 2021, p. 65), “online collaborative feedback” (Zaky, 2021, p. 67) or “corrective feedback in computer-mediated collaborative writing” (Storch, 2017, p. 66). More generally, Saeed and Al Qunayeer (2020) called it “interactive e-feedback” that is provided “through Google Docs” (p. 1). Given the varying scope of functions of different online tools for collaborative or interactive feedback, it makes sense to specify the name of the tool; however, “feedback via Google Docs” seems to be too restrictive and dependent on a particular service provider. The technology on which these apps rely is cloud storage that is accessible to multiple persons (cf. Wood, 2019, pp. 56-57). For this reason, Wood (2019) termed it “cloud-based text editor feedback” (p. 54). A catchier term does not yet seem to exist. On the other hand, the term “text editor” excludes various other applications, such as digital whiteboards or walls that can be used for similar purposes, as for instance Miro or Padlet (see e.g. Pearson, 2021). Likewise, other Google products permit commenting, e.g. Google Slides. For now, feedback in cloud editors or cloud-based editing applications will be defined as an interactive, collaborative online feedback method in which several persons can exchange text-based, but also multimodal comments in synchronous and asynchronous ways. In the remainder of this chapter, mainly online text editors will be discussed, notably Google Docs, because most research appears to have been conducted on it, in contrast to other applications. Google Docs shares many of its functions with the offline version of e.g. Microsoft Word, which was presented in section 3.1. The basic affordances and limitations of text editors for feedback purposes will therefore not be repeated here. Instead, those characteristics will be outlined that are peculiar to online apps which allow for synchronous, collaborative and (mostly) text-based feedback exchanges. In that regard, related tools will be considered as well, e.g. Padlet.
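Before moving on to concrete contexts of use, the definition above can be made tangible with a purely illustrative data model of a comment thread; all names are invented and do not mirror any particular provider’s internals. The point is that one and the same structure serves synchronous and asynchronous exchanges alike:

```python
# Illustrative data model of a comment thread in a cloud document (names invented).
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Reply:
    author: str
    text: str
    created: datetime

@dataclass
class CommentThread:
    author: str
    text: str
    created: datetime
    anchor: tuple[int, int]              # character span the comment is tied to
    replies: list[Reply] = field(default_factory=list)
    resolved: bool = False               # set when the author clicks "resolve"

thread = CommentThread("peer", "Consider a stronger topic sentence.",
                       datetime.now(), anchor=(120, 178))
# The reply can arrive seconds later (synchronous) or days later (asynchronous);
# the stored thread is identical in both cases.
thread.replies.append(Reply("author", "Thanks, rephrased.", datetime.now()))
thread.resolved = True
```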
3.3.2 Contexts of use, purposes and examples

Collaborative online editors and other apps appear to be particularly suitable for supporting learners in the process of completing a task, for instance when drafting a text. Due to their collaborative functionalities, they have often been employed for peer feedback purposes (e.g. Aydawati, 2019; Ebadi & Rahimi, 2017) in addition to teacher feedback (e.g. Saeed & Al Qunayeer, 2020; Shintani & Aubrey, 2016; Yim, Zheng, & Warschauer, 2017). Hence, while text editor feedback in offline environments is mostly associated with teacher-to-student feedback, it is more diversified for online editors. Depending on the assignment type, different apps might be chosen. For example, Google Docs is mainly used for feedback on written assignments; Google Slides seems suitable for feedback on presentation slides; and online notice boards, such as Padlet, can be used for brainstorming (e.g. Atwood, 2014, p. 12) and categorization tasks, but also for feedback on numerous multimedia elements. At the same time, different types of media, such as websites, videos and file attachments, can be utilized as resources for feed-forward suggestions. All of these elements can be posted onto the digital wall and arranged or re-structured rather freely (depending on the settings one has chosen). By contrast, the comments and attachments in Google Docs and Google Slides usually need to be connected to a particular passage, word or phrase in the online file. In Padlet, in turn, the comments cannot yet be attached to words within a sticky note, but only to the entire note itself. Further advantages and challenges will be discussed in more detail below.

3.3.3 Advantages of the feedback method

Online text editors share several advantages with offline word processors (see section 3.1), such as the possibility to insert comments and to track changes (Alharbi, 2020, p. 238). However, in contrast to written comments in offline editors as well as many other digital feedback methods, e.g. email (section 3.8) or recorded feedback in audio or video formats (see sections 3.11 to 3.13), feedback via online editors and collaborative apps allows for simultaneous editing and commenting by several persons (cf. e.g. Alharbi, 2020; Aubrey, 2014, p. 47). This way, all contributors can immediately see the changes and respond to them. They could even create drafts together in real-time, such as in Google Docs or in simpler pads, as for instance ZUM-Pad (see section 3.3.5). Moreover, they can obtain feedback from several assessors at the same time and respond to these comments quickly afterwards. Likewise, teachers can respond to the students’ replies or revisions and check the corrections, making the feedback more conversational (Aubrey, 2014, p. 52). Hence, online text editors (such as Google Docs) can facilitate interactive feedback exchanges among assessors and authors through the commenting and reply functions (Saeed & Al Qunayeer, 2020). They enable a type of conversation in the cloud (Fuccio, 2014, p. 216). In addition, learners and teachers are able to trace the dynamic changes with the version history feature of e.g. Google Docs (Alharbi, 2020, p. 235; see also Fuccio, 2014). This way, teachers can also monitor the changes students have made to their drafts (Aubrey, 2014, p. 47). As the documents are saved automatically every few seconds, there is hardly any data loss (Fuccio, 2014, pp.
224-225; see also Aubrey, 2014, p. 47). This is another significant advantage of the cloud technology as compared to offline documents stored on a local hard drive, for learners and teachers alike. It is not only user-friendly, but also environmentally friendly as well as cost-effective: Users do not have to purchase an external drive, such as a USB stick (Aubrey, 2014, p. 47), and there is no need to use paper for the completion, review or revision of the assignments (p. 52). Another affordance as compared to handwritten feedback is the easy localization of the comments that are directly connected to specific passages in the document (Fuccio, 2014, p. 209). Similar to text editors, the reviewers also do not have to worry about space restrictions and the legibility of their handwriting (Fuccio, 2014, p. 209). Unlike offline editors, such as Microsoft Word, the users do not need to be concerned about different software versions that could lead to compatibility problems (see section 3.1.4). However, Google Docs also allows them to download the assignment so that it can be re-opened in an offline editor such as Microsoft Word. Due to the synchronization in the cloud, the turnaround is typically faster than in asynchronous offline text editors (Aubrey, 2014, p. 47). This is not only time-efficient for the learners and teachers (Aubrey, 2014, p. 52; cf. Fuccio, 2014), but it can also lead to a higher engagement on the part of the learners. These tools therefore appear to be highly suitable for formative feedback and interactive in-process support as well as multiple rounds of feedback exchanges. However, the feedback can also be provided and accessed in a time-delayed manner (e.g. Aubrey, 2014, p. 45; Ebadi & Rahimi, 2017, p. 794) so that specific meeting times do not need to be agreed on. Hence, beyond the synchronous affordances, the documents are flexibly available for asynchronous editing and commenting (Ebadi & Rahimi, 2017, p. 794). Those who have been granted access to the document do not need to edit it simultaneously (Heath, 2017, p. 160), but can do so at a time and place that is convenient for them (Damayanti, Abdurahman, & Wulandari, 2021, p. 229; Fuccio, 2014, pp. 217, 220; Wood, 2019, p. 105). Thus, in contrast to the ephemeral nature of synchronous exchanges in face-to-face or videoconference meetings (section 3.14), assessors (especially peer reviewers) have more “time […] to think deeply about what parts to edit and how to edit” or comment on (Ebadi & Rahimi, 2017, p. 807). Likewise, the learners “have more time, and less pressure, to […] process and respond to the feedback” (Sullivan, 2020, p. 669), since the comments can be revisited repeatedly. Feedback exchanges can consequently be collaborative in nature while being “staged asynchronously over time” (Wood, 2019, p. 105). While some considered this motivational and helpful (Wood, 2019, p. 105), it is nevertheless advisable to set deadlines, not only for the peer comments, but also for the replies and revisions. Otherwise, a long time lapse might decrease motivation or disrupt interactions. Another affordance and potential source of motivation is the multimodality of the comment types. Apart from text, assessors can record and import voice or video feedback in several applications (Saeed & Al Qunayeer, 2020, p. 9; see sections 3.11 to 3.13).
Furthermore, it is possible to easily integrate further useful resources through file uploads or hyperlinks to websites (Wood, 2019, p. 125). This is rather similar to the functions of e.g. Microsoft Word in offline usage (see section 3.1), but the virtual environment of the online apps may facilitate the actual retrieval of these resources. Especially for the insertion of recurring explanations and advice for individual students in Google Docs, the use of a flexibly available comment bank is valuable for the assessors. To exemplify, they may organize resources (text, hyperlinks, videos and audio messages) in the Google Keep app that can be utilized together with Google Docs (Bell, 2018; see implementation section 3.3.6 below). For the learners, the aforementioned affordances may lead to positive perceptions and higher levels of engagement. Moreover, the similarity of Google Docs to popular offline word processors can be seen as conducive to the editing process (Ebadi & Rahimi, 2017, p. 808; see also Fuccio, 2014; Heath, 2017; Hedin, 2012). In addition, the timeliness of the comments had a motivational impact, as learners felt supported emotionally, which may even encourage the formation of learning communities (Wood, 2019, p. 92). They appreciated the effort others had put into reviewing their work, which made them more motivated to act upon the feedback (Wood, 2019, pp. 91-92). All in all, the immediacy of the commenting appears to “encourage frequent touches of a draft”, which would also foster a process view of writing among the students (Fuccio, 2014, p. 225). Several drafts and their revisions may consequently be regarded as a natural part of the writing process. Likewise, the view of feedback can change from the unilateral teacher-to-student transmission to interactive, bi- or even “multidirectional” and “multi-turn discussions” (Wood, 2019, p. 41; cf. Aubrey, 2014, p. 52; see section 2.2). The instant reply possibilities might encourage several feedback cycles between the peers and/or the instructor (cf. Wood, 2019, p. 80). These can range from “multiple-mini feedback cycles” (Wood, 2019, p. 125) to more extended discussions. Students may therefore feel as if they were having a conversation with their teacher (Aubrey, 2014, p. 52) and/or their peers (Wood, 2019, p. 90). This way, feedback was not so much perceived as criticism, but as in-process support with the aim of helping them improve (Wood, 2019, p. 90; cf. Fuccio, 2014, p. 225). In that respect, learners appreciated the possibility to ask questions and clarify the feedback contents in order to arrive at a common understanding as well as to come up with suggestions for the revisions (Wood, 2019, pp. 83, 125). They can even directly tag specific persons (e.g. the teacher) in the comments in order to seek their help and opinions (Wood, 2019, p. 85). To exemplify, the feedback dialogue may start with a feedback request in which the learner solicits feedback (Wood, 2021, p. 9; see section 2.2); it can be continued by multi-turn exchanges about the feedback contents as well as by requests for checking the implementation of the feedback in the revision (cf. Wood, 2019, p. 125). With Google Docs, for instance, this process is facilitated by the automatic notification feature, i.e. as soon as somebody has provided a comment, responded to it or resolved it, the writer or assessor, respectively, will receive an email notification (Aubrey, 2014, p. 51; Wood, 2019, p. 106).
Assessors can thus immediately check whether or how their feedback has been implemented and might leave additional suggestions if deemed necessary (Aubrey, 2014, p. 51). Moreover, teachers may “monito[r] the quality of the peer-reviewers’ comments”, e.g. by moving from one document to another (cf. Sullivan, 2020, p. 669), and give further advice in the comments or in a chat (Aubrey, 2014, p. 49; see section 3.7). Overall, it appears that feedback via online collaborative apps has positive effects on students’ learning gain (e.g. Alharbi, 2020, Aydawati, 2019, and Ebadi & Rahimi, 2017, regarding Google Docs; Pearson, 2021, using Padlet for instructor feedback). For example, Alharbi (2020) counted that “out of 837 feedback comments, 520 comments were responded to by the students through reply comments” (p. 237), i.e. roughly 62 %, which indicates a high degree of interactivity and responsiveness to the feedback comments. Similarly, Ebadi and Rahimi (2017) observed that online peer editing had a significant effect on EFL learners’ academic writing skills in the short and in the long run as compared to peer feedback in the classroom (p. 805). Likewise, Aydawati (2019) discovered an increase in writing performance among 70.8 % of the students when Google Docs was used for peer review (p. 69). The synchronous and asynchronous affordances can, on the one hand, prompt learners to revise an error immediately and, on the other hand, help them consolidate the new knowledge by applying it to further instances of similar sentences (Shintani, 2016, p. 533). Hence, apart from the interactive dimension of meaning negotiation, these tools could also promote “higher levels of […] metalinguistic awareness” (Yim et al., 2017, p. 541), since the asynchronous availability gives learners time to think about the structure and practice it in the revisions (Shintani, 2016; Shintani & Aubrey, 2016). Put differently, the feedback might encourage them to engage in self-corrections of the same structure in subsequent sentences (Shintani, 2016, pp. 527-528), which may ultimately lead to greater independence as part of self-regulated learning.

3.3.4 Limitations/disadvantages

Depending on the application that is chosen, it might be necessary for teachers and students to create an account before they can engage in collaborative feedback practices. For example, to use Google Docs, they need to set up a Google account (Firth & Mesureur, 2010, p. 4). Some may consider this an additional burden (cf. Aubrey, 2014, p. 53) because the app is not directly integrated into their LMS. It should be noted, though, that they do not necessarily have to create a new Google email address, as some might believe (Aubrey, 2014, p. 53), but they can likewise sign up for a Google account with an email address from another service provider. However, some could consider Google (or any other external provider) dubious with regard to data protection, as it might not fully align with the policies of their educational institution. Access to the technology might be challenging in further ways as well. In particular, the use of synchronous online tools requires a stable internet connection. Otherwise, students could encounter problems (Aubrey, 2014, p. 53; Firth & Mesureur, 2010, p. 4; Lin & Yang, 2013, p. 87). Moreover, some may fear that the cloud space might not be sufficient (Firth & Mesureur, 2010, p. 3), especially when they use it extensively - also for several other purposes beyond the specific classroom context (e.g.
for creating back-ups of their smartphone). At the time of writing, the free storage space of Google Drive was 15 GB, which should be more than sufficient for drafting and feedback purposes. To be on the safe side, teachers may advise students to set up a new Google account with their institutional email address so that this drive space is reserved for learning purposes only and kept separate from their private cloud space (see section 3.3.6 below). Beyond that, the scope of functions may differ from tool to tool. Some necessitate the creation of documents in the online space, whereas others allow for later uploads. For example, students can write an assignment in the local text editor of their computer and upload it later on as soon as their first draft is completed (cf. Fuccio, 2014, pp. 222-223). This way, the possibilities for in-process support are reduced, though. In addition, learners might encounter conversion problems when uploading their locally produced document onto Google Drive. Notably, the font and the formatting of paragraphs and tables in texts could look different, and transition effects in presentations might be lost (Firth & Mesureur, 2010, p. 4). Similarly, if students download a document from Google and open it in their local word processor, these problems may occur (Fuccio, 2014, p. 224; see also Hedin, 2012). In many cases, though, the conversion from Google Docs into Microsoft Word is rather trouble-free (Firth & Mesureur, 2010, p. 4). These issues can arise before and after the feedback process, respectively. During the online feedback exchanges, further challenges could emerge. Crucially, the feedback might not always be clearly visible as such, e.g. if the commentators forget to activate the “track changes”, “comment” or “suggest” mode. Beyond that, each user can edit or delete the others’ content, so there is a danger of data loss unless the application saves a version history at regular intervals. Moreover, some contributors might be unsure as to when they should insert feedback comments or continue with their writing. As reported by Fuccio (2014), for instance, some learners “stopped writing while they waited for feedback” (p. 223). In that respect, the teacher should give clear instructions regarding the procedure. Especially if a rather flexible arrangement of comments is possible in applications such as Padlet, learners might need more concrete advice. Otherwise, the authors could lose track of the many notes that are pinned to the screen instead of using the space in a more coordinated manner (cf. Atwood, 2014, p. 12). In fact, the ease and immediacy of commenting might lead to a large number of unconnected and spontaneous remarks, which could decrease their value and usability for the authors. Likewise, due to the online synchronicity, teachers might feel pressured to respond instantaneously, which could lead to high time investments. The next sections will therefore give some advice.

3.3.5 Required equipment

In order to use a synchronous online tool, a reliable internet connection is necessary. In terms of hardware, any device that allows for the insertion of text edits and comments can be chosen, e.g. a desktop computer, laptop, tablet PC or smartphone (with a sufficiently large display). Moreover, all writers, collaborators and assessors need to utilize the same app or platform, i.e. a specific text editor or notice board.
Among the plethora of tools, Google Docs (https://www.google.com/docs/about/) has so far been most commonly selected in the reported studies (e.g. Alharbi, 2020; Aubrey, 2014; Aydawati, 2019; Lee & Ho, 2021, and many others). It is “a free web-based collaborative writing program” that is connected to Google Drive and thus requires a Google account (Aubrey, 2014, p. 47). Similarly, other products from the Google family can be used, such as Google presentation slides (e.g. Sullivan, 2020), spreadsheets, drawings (cf. Zaky, 2021, p. 69) and Google Jamboard (https://jamboard.google.com/). The latter combines various types of contents, including texts, tables, pictures and mind maps that can be created collaboratively. Hence, feedback could be provided through the direct modification of the contents. In order to facilitate further exchanges, a chat window can be opened in Google Docs (see section 3.7 on chat feedback), but it may also be used together with the Google Meet app (see section 3.14 on videoconferences). Moreover, it offers a sticky note function in connection with the Google Keep plug-in. Sticky notes, in turn, are the central feature of the online notice board Padlet (https://padlet.com/; see Atwood, 2014; Pearson, 2021; Sari, 2019). This online wall allows the users to post sticky notes and multimedia elements in a rather freely arrangeable manner. This, however, also depends on the specific settings and the template that is selected. Apart from text, these notes may contain file attachments, hyperlinks, pictures and so on (Atwood, 2014, p. 11). In addition, each element can be liked, ranked and/or commented on by others so that different possibilities for feedback arise. It is even possible to create voice and screen recordings directly in the app (currently limited to five minutes of recording time). Hence, combinations with audio and screencast feedback (sections 3.11 and 3.13) are facilitated. Furthermore, all elements can be re-arranged and linked through arrows across the board. However, in order to use the full range of functions, a registration is required. Another downside is that the free version is limited to three walls, but this number can be increased by purchasing a license or by recruiting new Padlet users. This tool seems to be particularly suitable for feedback on brainstorming or mind-mapping activities, i.e. feedback during creative tasks or at early stages of a (writing) project. Miro (https://miro.com/) is another application for creating complex mind maps and flexible arrangements on an online board. Similar to Padlet, different media contents can be inserted, including pictures, hyperlinks and videos. Likewise, it offers several pre-configured templates. In Miro, space appears to be infinite, but at the same time easy to navigate due to the zooming function. What is more, the contents can be rated and commented on, thus enabling feedback from others. In addition, it offers an agenda/schedule function as well as a presentation mode, screensharing and chats. Invited users need to register, though. Moreover, a paid plan is required if users wish to access a broader range of functions and an unlimited number of boards. In contrast to these more complex editors and apps, simpler alternatives also exist. To exemplify, ZUM-Pad (https://zumpad.zum.de/) offers users a blank web page for collaborative writing and editing (see e.g. Wampfler, 2019).
Group members can be easily added, as there is no need to register. In the synchronous content creation and feedback process, distinct colors stand for different users. Moreover, chats are possible. In addition, assessors can monitor the process, including deleted content that can be accessed in the “Time Band” function. However, no other media, such as tables or images, can be integrated. Furthermore, there are applications that allow for annotations on websites, such as Hypothes.is (https://web.hypothes.is/). It is available as a browser extension and as an LMS app for Moodle, Blackboard, Canvas and others. This social annotation tool seems particularly suitable for feedback on online assignments, e.g. the creation of an e-portfolio (section 3.15), a blog (see section 3.6) or any other website. Assessors can highlight parts of the website content and comment on them (E, 2021, p. 7). These comments can be enriched by tags in order to categorize the contents. Furthermore, users may search the comments for keywords as well as respond to them, so as to enable feedback dialogues. Apart from written annotations, images and videos can be integrated. Moreover, the visibility settings can be adjusted to specific groups. Since late 2021, it also enables annotations on PDF documents that are stored in Google Drive, OneDrive, Canvas and Blackboard. Hence, we see that many online collaborative tools are available already and that their number as well as their scope of functions is likely to grow further in the future. In the following description of possible implementations, the focus will be placed on Google Docs due to its popularity and its breadth of review functions. Also, suggestions for feedback in Padlet will be provided. Overall, however, more research on the use of these and other tools for feedback purposes is needed.

3.3.6 Implementation (how to)

The following suggestions will be exemplified by Google Docs and Padlet, respectively. The procedure might need to be adjusted for other applications, but several parallels can surely be detected across different platforms and service providers. Some educational institutions or LMSs might even have their own tools for online collaborative feedback, so teachers are advised to contact their institutional tech advisor. First of all, it might be necessary for teachers and learners to set up an account for the specific tool that is to be used. For these purposes, the institutional mail addresses should preferably be utilized so that educators can easily add their students to a particular collaborative document or to their Google Classroom, for instance. Otherwise, the students would need to share their personal mail addresses with the instructors. Next, the learners either set up individual collaborative documents, e.g. Google Docs (Ebadi & Rahimi, 2017, p. 794), or the teacher does this for the students (Saeed & Al Qunayeer, 2020, p. 5). The students then start typing their assignment into the cloud document. Alternatively, they may write their assignment in their offline text editor first before uploading the finished draft to the online editor (Saeed & Al Qunayeer, 2020, p. 5). Thus, writers would not be dependent on a stable internet connection while producing the draft, but would only need it for the upload or conversion into Google Docs. However, they should keep in mind that some formatting specifications might get lost during the conversion process (e.g.
Firth & Mesureur, 2010, p. 4; see section 3.3.4). Subsequently, they need to share the document with the teacher and their fellow students (Aubrey, 2014, p. 49; Ebadi & Rahimi, 2017, p. 794). In Google Docs, the authors would click “share” and then either enter specific email addresses or create a link for sharing (“Get link”). Furthermore, they need to click the down arrow and choose appropriate settings, i.e. for access as a “commenter” or “editor”, but not as “viewer” only (see Figure 11). Editors will be able to use the full range of editing tools, whereas commenters will not be able to change the original file, but may just leave comments. In most cases, “editor” would thus be the adequate choice.

Figure 11: Sharing options in Google Docs

If the link is shared with specific email addresses only, the checkbox “notify people” should be selected to inform them about the document. In classroom contexts, it would be advisable to share the document with such restricted access (by entering the mail addresses of the teacher and the peers). The document will then appear in their Google Drive. In other cases, the creator of the document needs to send the shareable link to the collaborators and teacher, respectively. As a next step, it seems reasonable to give students the chance for peer review before receiving instructor feedback (see section 2.2.3). Before engaging in the feedback process, though, the students need to be introduced to the specific review or commenting features of the selected online program. Google Docs, for instance, allows direct edits as well as commenting. Inserting comments is fairly easy because (peer) assessors simply need to mark the corresponding passage in the document and then click on “Insert comment”. The comment bubbles may contain written remarks as well as hyperlinks to further resources. Aside from commenting, assessors can suggest text edits by “us[ing] the ‘Suggesting’ function in Google Docs” (Lee & Ho, 2021, p. 126). If several peer reviewers are involved, it would be a good idea to agree on an annotation (e.g. color-coding) system, similar to feedback in an offline word processor (see section 3.1). The affordance of online editors, however, is that learners may also chat with each other and the teacher synchronously (Damayanti et al., 2021, p. 229). Likewise, teachers can use the chat to provide in-process support to the peer reviewers (Aubrey, 2014, p. 49; see also section 2.2.3). Beyond that, the teachers could monitor the students’ feedback and progress at any time by accessing the different student documents on Google Drive. As soon as (peer) assessors insert comments or reply to them, “Google Docs automatically sends an email” to inform the document owner (and contributor) about it (Aubrey, 2014, p. 51; see also Wood, 2019, p. 106). They can then directly respond to the comment (Lee & Ho, 2021, p. 131), which might increase further engagement. The learners are instructed to carefully read the comments and edits instead of blindly accepting them (cf. Lee & Ho, 2021, p. 131). For suggested edits, they can choose between “accept” and “reject”. For remarks in comment balloons, they can use the “reply” function (analogously to Microsoft Word), for instance by thanking the contributor for the suggestion or by asking for clarification (see Figure 12).

Figure 12: Comment options in Google Docs

Alternatively, they could also discuss these comments in person in a follow-up meeting (Lee & Ho, 2021, p. 126).
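As a side note for technically inclined readers, the access levels from Figure 11 can also be granted programmatically. The sketch below uses the Google Drive v3 API via the google-api-python-client package; the OAuth credential setup is omitted, and the document ID and email address are placeholders:

```python
# Hedged sketch: grant a collaborator access to a Google Doc via the Drive v3 API.
# "creds" must come from Google's OAuth flow (omitted here); IDs are placeholders.
from googleapiclient.discovery import build

def share_document(creds, doc_id: str, email: str, role: str = "commenter"):
    """The role values "reader", "commenter" and "writer" correspond to the
    viewer, commenter and editor options in the sharing dialog."""
    drive = build("drive", "v3", credentials=creds)
    return drive.permissions().create(
        fileId=doc_id,
        body={"type": "user", "role": role, "emailAddress": email},
        sendNotificationEmail=True,  # the "notify people" checkbox
    ).execute()
```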
The teacher should then check the revisions and potentially ask for further modifications (Aubrey, 2014, p. 51; cf. Saeed & Al Qunayeer, 2020, p. 5) or additional rounds of peer feedback (Ebadi & Rahimi, 2017, p. 795).

Instead of or in addition to text-only comments, it is recommended to seize the hyper- and multimedia functions of the online tool whenever appropriate. Thus, (peer) assessors may insert hyperlinks to useful external resources or to their own voice or video comments. They may even set up a comment bank, for instance by integrating Google Keep, as suggested by Bell (2018). In this app, reviewers can collect and organize helpful resources as well as frequently recurring comments (e.g. comprehensive explanations). These may contain hyperlinks to websites and file uploads, such as text PDFs, handouts, presentation slides, video and audio comments as well as pictures. As badges, the latter may function as external rewards for the students (Bell, 2018). The other elements typically serve as in-process support and can be imported into the learners' assignment. Assessors could also color-code these different resources in Google Keep. To insert them into Google Docs or Google Slides, the users simply need to click on "Google Keep" in the toolbar on the right-hand side of the document and select the relevant note from their Google Keep space.
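The underlying idea of such a comment bank is independent of any particular app. As a purely illustrative sketch, the recurring comments can be thought of as tagged entries that are retrieved by keyword; the categories and comments below are invented examples.

```python
# Illustrative sketch of a tool-independent comment bank: frequently
# recurring feedback comments are stored with tags and retrieved by keyword.
comment_bank = [
    {"tags": ["structure"], "text": "Consider adding a topic sentence that previews the paragraph."},
    {"tags": ["sources"], "text": "Please cite the original study here (see the course style guide)."},
    {"tags": ["praise"], "text": "Clear, well-signposted argument in this section. Well done!"},
]

def find_comments(keyword):
    """Return all stored comments whose tags or text contain the keyword."""
    keyword = keyword.lower()
    return [entry["text"] for entry in comment_bank
            if keyword in entry["text"].lower()
            or any(keyword in tag for tag in entry["tags"])]

for text in find_comments("structure"):
    print(text)
```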
Similarly, a Padlet wall can serve as a comment bank because it likewise allows for an easy organization of comments and resources in different multimedia formats. It is not directly connected to Google Drive, though. However, Padlet is useful for feedback on brainstorming and categorization tasks as well as creative and even handwritten assignments. To exemplify, students might be asked to upload photos of their handwritten assignments onto a Padlet, e.g. their solutions to homework tasks for particular textbook units (Pearson, 2021; see https://en-gb.padlet.com/emanismael/30co71wpj76hzizi). Clearly, the learners would need an introduction to the tool (Pearson, 2021) if they had never used it before. Also, the class members should agree on certain guidelines for the technical realization of the feedback comments, as these can be effectuated in various ways in Padlet (written, audio or screencast format etc.).

Assessors can also reply to existing notes on the Padlet, e.g. by ranking, rating or coloring them or by leaving a comment that will be directly connected to the note. The appearance of these comments resembles chats, which may foster dialogues among the students (cf. Imperial College London, 2021). In addition, assessors could create new notes by double-clicking somewhere on the Padlet wall (Atwood, 2014, p. 11). Preferably, the new note is inserted directly next to the idea they would like to comment on. To accomplish this, the note can be moved to the most fitting position on the board or it may be connected to a more distant place via arrows. Moreover, assessors can upload files from their computer to the Padlet (Atwood, 2014, p. 11), such as PDFs, voice and video comments (cf. Burns, 2019). They could even create audio and screencast feedback spontaneously by using Padlet's built-in voice and screen recorder (sections 3.11 to 3.13). Furthermore, they can insert hyperlinks to other online resources, e.g. further references about the topic or tasks for consolidating the knowledge about it.

Padlet allows for adjustments of the privacy settings in various ways. Users may share a link to a public or password-protected Padlet; they can restrict the access to (specific) Padlet users only; and they can define the rights from viewing to editing mode, either by only being allowed to add new contents or by being able to change and delete existing notes as well. The link to the Padlet can be sent to specific users or it can be shared as a QR code or even via Google Classroom or some other platform. Moreover, the link can be customized to some extent by adding a keyword to it in Padlet itself or by utilizing an external URL shortener (cf. Burns, 2019), such as Bitly (https://bitly.com/) or TinyURL (https://tinyurl.com/app/). These external tools can likewise be applied to shorten the unwieldy hyperlinks of Google documents.
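For a whole class set of links, the shortening could even be automated: TinyURL, for instance, has long offered a simple creation endpoint that returns the shortened address as plain text. A minimal sketch (the document link is a placeholder):

```python
# Minimal sketch: shorten an unwieldy document link via TinyURL's
# simple creation endpoint, which returns the short URL as plain text.
import urllib.parse
import urllib.request

def shorten(long_url):
    api = "https://tinyurl.com/api-create.php?url=" + urllib.parse.quote(long_url, safe="")
    with urllib.request.urlopen(api) as response:
        return response.read().decode("utf-8")

print(shorten("https://docs.google.com/document/d/EXAMPLE_DOCUMENT_ID/edit"))
```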
Depending on the settings, students can also contribute to a Padlet without having to sign in (Atwood, 2014, p. 11); however, they would then not be able to use the full range of functions. It is thus possible to give feedback anonymously in Padlet (Sari, 2019, p. 51). However, this may sometimes lead to shallow comments only (cf. Sari, 2019, p. 52). Furthermore, it might hinder interaction with the feedback providers to clarify open questions (cf. Sari, 2019, p. 52). Others see the anonymity positively, as it allows them to give more honest remarks (Imperial College London, 2021) and to deal with the comments in a more objective manner (cf. Sari, 2019, p. 52). Overall, students, including first-time users, commonly find Padlet easy to operate (Sari, 2019, p. 50). However, as with any other tool that is utilized for the first time for a particular purpose, it would be important to reflect on it together with peers or in the whole class (Zaky, 2021, p. 83; see section 2.2.1). Some important remarks regarding the students' perspective will be compiled next.

3.3.7 Advice for students

Most of the above recommendations also apply to learners because collaborative applications readily invite their contributions, especially in peer review activities (see section 2.2.3). Clearly, learners first need to be familiarized with the concrete online tool that is utilized (Zaky, 2021, p. 82), e.g. the writing and review functions in Google Docs (see Lee & Ho, 2021, p. 130) or the tool Padlet (e.g. Pearson, 2021). Specifically, they should seize the affordances of the interactive comment functionalities by actively inserting and/or responding to the comments and suggestions (e.g. in Google Docs). Beyond the text format, they might also resort to voice or video comments in the online application (e.g. in Padlet). Furthermore, they should ask questions if anything remains unclear. For instance, they might chat or schedule face-to-face or video conferences with their instructor or peers during or after the work in the collaborative documents. This already hints at possible combinations of feedback methods that will be briefly surveyed next.

3.3.8 Combinations with other feedback methods

Because of the numerous functions that many online collaborative tools offer, they may exceed the written mode by including voice and video comments, for instance (see sections 3.11 to 3.13). These comments can be recorded and retrieved asynchronously. However, due to the real-time affordances of the online tools, they might also be combined with other synchronous feedback methods, such as screensharing in a videoconference (section 3.14) or feedback chats with the peers or the instructor (section 3.7; e.g. Aubrey, 2014, p. 49). The videoconferences could alternatively be scheduled after the provision of online feedback in Google Docs or other applications in order to discuss the feedback further (cf. Fuccio, 2014; Saeed & Al Qunayeer, 2020).

3.3.9 Questions/tasks (for teachers and students)

Knowledge and reflection questions:
(1) Is feedback in cloud documents an asynchronous or synchronous feedback method?
(2) In what ways are cloud applications suitable for a combination of feedback methods? Give reasons and examples.
(3) Would you be comfortable working with a Google application?

Application questions/tasks:
(4) In one of your classes, you ask your students to brainstorm about a potential project they would like to conduct in small groups. After a first brainstorming phase, they should provide peer feedback on the topics they have suggested.
a. What steps do you take to familiarize your students with the procedure?
b. What application(s) do you recommend for this purpose?
c. How do you evaluate the learning process and outcome?

3.4 Feedback in online discussion forums (Forum feedback)

3.4.1 Definition and alternative terms/variants

Online discussion forums (ODF) have become an integral part of many LMS. Quite often, they are used to raise and discuss questions that the students are concerned with or that the teacher has initiated. Consequently, feedback on these questions and discussions is crucial. ODF feedback can be defined as an asynchronous and mainly text-based digital feedback method that takes place in conversational threads (Pedrosa-de-Jesus & Moreira, 2012, p. 57). In that regard, the collaborative nature of knowledge construction in ODFs is often stressed (Pedrosa-de-Jesus & Moreira, 2012, p. 67). This can either be peer feedback, e.g. students giving feedback in reply posts to their peers, or be combined with instructor feedback, with the instructor joining the discussion and providing feedback. Quite often, the teachers' feedback is rather seen as a final closing feedback to students' assignments or as directive feedback that intends to moderate students' feedback and discussions (Rochera et al., 2021; see section 2.2.3).

3.4.2 Contexts of use, purposes and examples

ODF feedback can be used for both peer feedback (e.g. Ekahitanond, 2013) and instructor feedback (e.g. Rochera et al., 2021). Due to its social, collaborative nature, it is more likely to be employed for peer feedback, with instructor feedback rather being a supplement to or conclusion of the discussion thread. ODFs have mostly been used for written feedback on written assignments in a variety of subjects, such as EFL (Ekahitanond, 2013), biology (Pedrosa-de-Jesus & Moreira, 2012), psychology (Hara, Bonk, & Angeli, 2000; Rochera et al., 2021) and education (Vonderwell, 2003).
3.4.3 Advantages of the feedback method

ODF feedback can be advantageous for learners, especially during peer review activities. The students learn to actively seek feedback by asking their peers and/or teacher questions, which can result in higher-level learning (Pedrosa-de-Jesus & Moreira, 2012, p. 68). The learners engage in knowledge co-construction and may appreciate the social and collaborative nature of learning in online environments and beyond (Pedrosa-de-Jesus & Moreira, 2012, pp. 57, 67; cf. Ekahitanond, 2013, p. 260). Due to its asynchronous nature, the students have "more opportunities to interact with each other" than in the classroom alone, but also "more time to reflect, to think, and to search for further information before making a contribution to the discussion" (De Wever, Schellens, Valcke, & Van Keer, 2006, cited by Pedrosa-de-Jesus & Moreira, 2012, p. 57). The learners can think more deeply about the contributions they would like to make and formulate them more carefully (Vonderwell, 2003, p. 86). Likewise, the peers have more time to reflect on the feedback they have received and to incorporate it into their drafts or revisions (Ekahitanond, 2013, p. 260). The use of ODFs may thus foster students' skills of critical inquiry and critical thinking (Pedrosa-de-Jesus & Moreira, 2012, p. 58, with reference to Garrison, Anderson, & Archer, 2001).

As compared to offline environments, even shy students could feel more open to formulate a forum post that is directed at their peers or instructor (students cited in Ekahitanond, 2013, p. 258, and Pedrosa-de-Jesus & Moreira, 2012, p. 69; Vonderwell, 2003, p. 82). Forums may thus lead to more interaction among the students and with the teacher (Pedrosa-de-Jesus & Moreira, 2012, p. 73), especially in distance learning. This contributes to creating a learning community with an active exchange of feedback and ideas. Another possibility would be to use the "like" (or "helpful") and "dislike" ("not helpful") functions that many forums offer. Hence, if several students encounter the same problem, have a similar question or find a feedback comment helpful, they could click on the "thumbs up" button to indicate its importance, relevance or urgency. In some forums, learners can also mark a forum entry as a "favorite" to retrieve important entries more easily. Eventually, the students' questions and discussions help the instructors to fine-tune their teaching to the learners' needs (Pedrosa-de-Jesus & Moreira, 2012, p. 57, referring to Dillon, 1986, and Pedrosa-de-Jesus, 1991).

However, to make ODF feedback a successful endeavor, teachers should monitor the students' contributions and likewise offer feedback or direction when needed (Ludwig-Hardman & Dunlap, 2003, cited by Rochera et al., 2021, p. 4; cf. section 2.2.3). Since the start page of forums usually gives an overview of those threads that have been discussed most frequently and most recently, teachers may navigate to those threads that have not yet been discussed thoroughly in order to encourage further contributions. On the one hand, this already points to a potential challenge (see next section); on the other hand, it hints at an important aspect of the implementation procedure that will be proposed in section 3.4.6.

3.4.4 Limitations/disadvantages

The oftentimes delayed and mainly text-bound feedback has been cited as one of the main limitations of ODF feedback.
As there is no direct interpersonal interaction and usually a "lack of visual communication cues" (Hara et al., 2000, p. 116, quoted by Pedrosa-de-Jesus & Moreira, 2012, p. 57), messages can easily be misunderstood, thus requiring "careful construction" (Vonderwell, 2003, p. 86). This, in turn, is time-intensive (Ekahitanond, 2013, p. 258), which is why some learners may refrain from contributing to the ODF, especially in non-graded assignments (Vonderwell, 2003, p. 83). In addition, some could feel "uncomfortable about interacting" with (rather unknown) peers (Vonderwell, 2003, p. 82), whereas others might be afraid to ask their teacher questions in openly visible threads.

Teachers especially may find it hard to give timely and sufficient feedback to many student posts (Pedrosa-de-Jesus & Moreira, 2012, p. 72). Often, the number of posts increases as a deadline approaches, which could overburden teachers (Pedrosa-de-Jesus & Moreira, 2012, p. 72). Responding to students' questions, but also moderating the peer feedback discussions of students, makes substantial demands on teachers' time resources (Saadé & Huang, 2009, p. 88, cited by Pedrosa-de-Jesus & Moreira, 2012, p. 57). There might be many different themes in one thread or parallel themes in separate forum threads (Pedrosa-de-Jesus & Moreira, 2012, p. 67). This makes it hard to keep track of all posts and respond to them as needed. In addition, the contents and quality of the learner posts can be negatively affected by foregoing student comments (cf. Pedrosa-de-Jesus & Moreira, 2012, p. 67), which would require a quick response. Overall, it appeared that the teacher presence has a substantial impact on the development of the students' discussions and feedback (Pedrosa-de-Jesus & Moreira, 2012, p. 67), which is why strategies for ODF feedback are urgently needed. Suggestions for the implementation will therefore be presented below.

3.4.5 Required equipment

In terms of equipment, a hardware device is needed that allows for internet access, such as a desktop PC, laptop, tablet PC or mobile phone. Regarding the required software, typically the forum tool of the LMS is utilized, e.g. the discussion forum of Moodle or Blackboard. However, alternative software also exists that can be used for such purposes, for instance the "Knowledge Forum" (https://www.knowledgeforum.com/Kforum/products.htm) that was employed by Rochera et al. (2021, p. 6). It offered additional functions as compared to an ordinary text thread. To exemplify, it allowed for a "graphic representation of the posts and the relations between them in discussion threads" (Rochera et al., 2021, p. 6). Moreover, it was possible to label the posts in order to structure them and to attach notes with further comments as well as hyperlinks to external resources (Rochera et al., 2021, p. 6). Also, many further forum apps nowadays enable the attachment of files to provide further (including multimodal) resources.

3.4.6 Implementation (how to)

The implementation of ODF feedback depends on its purpose, since ODFs might either deal with a particular task or topic or constitute a space for open comments.
For instance, instructors may set up a forum in their LMS for students' questions that could arise at any time; they might also create a FAQ forum in which they respond to "Frequently Asked Questions", and they can offer a "Help and Tips" forum in which they provide in-process support for particular assignments or general information they want to share (Vonderwell, 2003, p. 80). These forums may serve as a supplement to feedback that is provided in other ways or as formative assistance during project work. However, forums may also be set up for the specific purpose of feedback provision on certain assignments, most commonly as part of a peer review activity. For any kind of implementation, preparation and familiarization are essential, for instance regarding the purposes and procedures of peer feedback in general and in ODFs in particular (cf. Rochera et al., 2021, pp. 3-4; see section 2.2.1).

The forum discussions could either be initiated by the students themselves or be pre-structured by the teacher (Blackboard Help, 2020). At any rate, the discussion page(s) should be given a meaningful title (Blackboard Help, 2020) and students should be asked to insert their comments into the right thread to keep a better overview of the many topics that may unfold. When writing a post, the contributors can utilize different editing functions to format the text, attach files, include hyperlinks or embed multimedia if the tool offers these features (Blackboard Help, 2020). Learners may thus choose to either type a text directly into the forum or attach a longer assignment or feedback file to their post. In any case, when formulating the post, the contributors should write as clearly as possible to avoid misunderstandings. For feedback comments, the wording should not be too harsh, as the written modality already has an inherent negativity bias (cf. section 3.8). If they feel that they need to modify their post, they should utilize the editing or deletion functions (cf. Blackboard Help, 2020). Moreover, as the ODFs can be read by peers and teachers, the contributors should be clear about the addressee (Rochera et al., 2021, p. 4). In some forums, they may even directly tag relevant persons in order to address them.

Beyond that, any course member should make sure in the settings of the app or LMS that they will be notified about new posts. This notification function enables timely responses to posts, i.e. at a moment when the feedback would still be relevant and not overridden by other comments or questions (cf. Rochera et al., 2021, p. 4). Apart from that, they can usually gain an overview of the forum contributions on the start page of the forum section in their LMS. To provide feedback, the peers and teachers should use the "reply" function for the pertinent post. Sometimes, though, it might be reasonable to create a new thread to discuss an issue in more detail. In that respect, Rochera et al. (2021) distinguished between chain effects and cluster effects in ODFs. Chain effect means that the discussion occurs in a single thread, whereas cluster effect means that a new thread is started (Rochera et al., 2021, p. 19). The former may result in a deep discussion of a specific topic (chain), while the latter (cluster) "extends the discussion" to other, related topics (Rochera et al., 2021, p. 19).
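Where the LMS exposes its forum through a web-service API, such replies can, in principle, also be posted programmatically, for example to distribute the same organizational notice to many threads. The following minimal sketch assumes a Moodle installation with web services enabled and uses Moodle's web-service function mod_forum_add_discussion_post; the site URL, token and post ID are placeholders.

```python
# Hedged sketch: reply to a forum post through Moodle's web-service API.
# Assumes web services are enabled by the LMS administrator and that the
# token has permission to use mod_forum_add_discussion_post.
import requests

MOODLE_URL = "https://moodle.example.edu/webservice/rest/server.php"  # placeholder
TOKEN = "YOUR_WEBSERVICE_TOKEN"  # placeholder

def reply_to_post(post_id, subject, message):
    params = {
        "wstoken": TOKEN,
        "wsfunction": "mod_forum_add_discussion_post",
        "moodlewsrestformat": "json",
        "postid": post_id,   # the forum post being replied to
        "subject": subject,
        "message": message,  # forum posts may contain HTML
    }
    response = requests.post(MOODLE_URL, data=params)
    response.raise_for_status()
    return response.json()

reply_to_post(42, "Re: Draft 1", "<p>Thank you. Could you clarify your second point?</p>")
```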
To acknowledge the helpfulness of specific responses, students can additionally click on the "thumbs up" button to give a fast reply. Beyond that, this button could also be used to signal the importance of a problem so that other peers or teachers may take a closer look at those posts and contribute to them. Eventually, questions might also be marked as solved or the corresponding threads could be closed.

Overall, teachers have a central role in facilitating the forum discussions (ACUE, 2020; Pedrosa-de-Jesus & Moreira, 2012, p. 67; Rochera et al., 2021; cf. section 2.2.3). They may "push the debate further" (Rochera et al., 2021, p. 18) or deeper content-wise and provide organizational help. As Rochera et al. (2021) observed, teacher feedback has a notable impact on the forum discussions "in terms of the number of students who responded to the feedback, the number of posts they made, and the number of days during which these posts were made" (p. 15). In the end, this likewise has an effect on the students' level of satisfaction with the ODF feedback. Based on the book by Boettcher and Conrad (2016), the Association of College and University Educators (ACUE, 2020) suggested tips for teacher actions during different phases of ODF feedback. This resonates with Rochera et al.'s (2021) recommendation "to vary the type of feedback in correspondence with students' needs at different moments of the discussion" (p. 21). At early stages, teachers should briefly respond to the first posts to encourage further contributions by others (ACUE, 2020). At later stages, they should "prompt […] deeper engagement and thinking" by replying to individual student posts and stimulating further discussion (ACUE, 2020). At some point, though, they also need to "provide expertise" to clarify misconceptions, summarize recurring themes or close a thread and introduce subsequent learning activities (ACUE, 2020). This aligns with the general advice given in section 2.2.3.

3.4.7 Advice for students

Since ODFs have mostly been used in peer feedback settings, the specific advice for students has already been part of the implementation procedure described in section 3.4.6. In particular, they need to decide whether their comment would be most fitting in a general FAQ thread or in an existing topic-bound thread, or whether it would be best to open a new thread. Care should also be taken regarding the wording, as asynchronous written feedback can easily be misunderstood or perceived negatively (Vonderwell, 2003; see also section 3.8 on email feedback). They might also tag specific persons directly to request responses from them. Moreover, they should seize the formatting and upload options of the ODF whenever suitable for their feedback purposes. At the same time, the uploads or hyperlinks may result in combinations of digital feedback methods that will be briefly addressed next.

3.4.8 Combinations with other feedback methods

In the surveyed literature, hardly any explicit combination of ODF feedback with other feedback methods was found. However, it can easily be combined with them, especially if the forum is part of an LMS and/or allows for multimedia attachments. This way, forum feedback could be supplemented by e.g. text editor feedback (section 3.1) and recorded feedback (sections 3.11 to 3.13). Usually, this is done by uploading these files to the forum post or by inserting a hyperlink to a resource with further information.
Lee and Bailey (2016), for instance, uploaded talking-head video feedback as a media comment in an ODF. Moreover, forum discussions can be followed up by synchronous meetings, either in the face-to-face classroom or in a videoconference (see section 3.14).

3.4.9 Questions/tasks (for teachers and students)

Knowledge and reflection questions:
(1) Have you ever posted in an ODF? For example, as a student in the course forum of your LMS or in your private or professional life (e.g. as a computer user asking for help or as an expert in a subject matter)?
(2) In what ways and why should instructors participate in the forum discussions?
(3) Could ODF be used to combine different feedback methods?

Application questions/tasks:
(4) Your institutional LMS has an integrated ODF tool. You noticed that your students do not really use it. What could you do to encourage feedback interactions in the ODF? Try it out in one of your classes and ask your students about their perceptions and reservations regarding ODFs.

3.5 Feedback in wikis

3.5.1 Definition and alternative terms/variants

Wikis are online collaborative platforms consisting of several interlinked pages that are the result of the joint contributions of several users (Demirbilek, 2015, p. 212; cf. Kemp et al., 2019, p. 149). The concept of wiki co-editing was originally proposed by Leuf and Cunningham (1994, cited by Lin & Yang, 2011, pp. 88-89). The term itself derives from the Hawaiian word for "quick", which hints at the rapid nature of knowledge co-creation (Lin & Yang, 2011, p. 88). In addition to texts, wikis often enable the insertion of multimedia elements, such as images, graphics, videos and audio files (Demirbilek, 2015, p. 212; see also Ma, 2020, p. 198). Wikis can thus encourage creativity in collaborative content creation (Demirbilek, 2015, p. 212). At the same time, the co-editing function serves as a quality assurance mechanism, as the contents are enhanced through peer review. Depending on the settings, everyone who has access to the wiki can modify its contents in anonymous or non-anonymous ways (Kemp et al., 2019, p. 149; see also the review by Vahedipour & Rezvani, 2017, p. 113). The change log typically shows the history of all modifications that were made by the contributors (Kemp et al., 2019, p. 149). Feedback is thus an important underlying principle of wikis, which is achieved through co-editing, but also through reflection on the modified contents (cf. Demirbilek, 2015, p. 212).

This is similar to Google Docs and some other cloud applications (see section 3.3), since they could likewise be used for synchronous and asynchronous content creation and editing. However, wikis are usually (semi-)publicly visible (cf. Hyland, 2003, cited in Lin & Yang, 2011, p. 95), whereas Google documents are mainly intended for a more restricted audience. Moreover, the purpose is somewhat different, as wikis often center on the definition or explanation of a certain term (cf. Wikipedia). Depending on the objective of a learning activity and the tool features that are needed for accomplishing it, teachers may thus choose the most fitting method. The next section will briefly survey possible contexts of use regarding feedback provision in wikis.

3.5.2 Contexts of use, purposes and examples

Wiki writing relies on social interactions to produce a webpage that is intended for and accessible by a larger audience, e.g. the classroom, the school or the general public (cf. the review by Lin & Yang, 2011, pp. 89-90, 95).
In most publications on wikis, the content creation function was foregrounded, with the focus being placed on collaborative learning through group work. This, however, only captures one basic function of wikis, i.e. co-constructing texts and other elements (Ma, 2020, p. 198). Even though feedback is essential during knowledge co-construction, there is only a small body of literature devoted to the use of wikis to provide feedback, particularly peer feedback.

In that regard, researchers have distinguished between two types of social interactions and peer feedback, respectively. On the one hand, there is participatory feedback given by peers working in the same group during collaborative writing in a wiki (e.g. Kemp et al., 2019; Lin & Yang, 2011). It relies on intra-group interaction between members of a specific group who create contents together (Ma, 2020, p. 198). On the other hand, inter-group interaction takes place between members of different groups who are nevertheless part of the same learning community (Ma, 2020, p. 198). Here, another student or group provides feedback on a different wiki (e.g. Demirbilek, 2015; Gielen & De Wever, 2012). This would occur after intra-group feedback, i.e. after a group has finalized a wiki about their project, topic or key term. The main difference is thus between group-internal feedback and feedback from an external group. So far, however, inter-group interaction has received less attention than intra-group interaction (Ma, 2020, p. 198), even though external feedback may bring in novel perspectives and thus new possibilities for meaning negotiation and creation (Ma, 2020, p. 203). Moreover, "most of these studies were conducted at the tertiary and secondary levels and covered a range of different subject disciplines" (Woo et al., 2013, p. 280). This included disciplines such as EFL (Lin & Yang, 2011; Ma, 2020; Vahedipour & Rezvani, 2017), education (Demirbilek, 2015; Gielen & De Wever, 2012), psychology, medicine, geography, engineering, information science (reviewed by Woo et al., 2013, p. 280) and public administration (Hu & Johnston, 2012). Although the literature to date has only examined written peer feedback in wikis, it is also possible to provide other kinds of feedback there, such as audio and video, by using the uploading function (cf. Kemp et al., 2019, p. 151; Reich, Murnane, & Willett, 2012, p. 10; see sections 3.11 to 3.13). However, this has not yet been researched sufficiently (see combinations in section 3.5.8).

3.5.3 Advantages of the feedback method

Both teachers and learners usually find wikis relatively easy to use (Kemp et al., 2019, p. 151; Vahedipour & Rezvani, 2017, pp. 120-121) and appreciate their functional scope. Users (here: assessors, including peer reviewers) can edit information easily, similar to the "track changes" tool in a word processor (Lin & Yang, 2011, p. 89; see sections 3.1 and 3.3). The authors may review these modifications in the edit history and thus compare their own writing with the edited information (Lin & Yang, 2011, p. 95). They might also undo the changes or reactivate a foregoing version in the wiki record (Kemp et al., 2019, p. 151). Depending on the application that is used, the contributors can enrich a text with images and documents (Demirbilek, 2015, pp. 218, 220) as well as audio and video files (Kemp et al., 2019, p. 151), which makes the wiki a versatile tool for many assignment types (Reich et al., 2012, p. 10) and multimodal assessment purposes (see sections 3.11 to 3.13 on audio, talking-head and screencast feedback).
Decisions on the mode may depend on the learners' needs and the level at which the feedback is addressed. Generally, feedback in wikis seems suitable for both higher-level (content, organization) and surface-level (formal aspects, mechanics and other lower-level comments) revisions (Woo et al., 2013, pp. 302-303).

In some wikis, the collaborative work and assessment can take place synchronously or asynchronously (Kemp et al., 2019, p. 151), while others only permit delayed instead of simultaneous access (depending on the settings). Beyond that, the feedback providers can access the wikis anytime (Woo et al., 2013, p. 302) and anywhere (Kemp et al., 2019, p. 153), but they could also work at the same computer in the classroom (Kemp et al., 2019, p. 151) and discuss the feedback orally (see combinations in section 3.5.8). Working at their own time and pace is advantageous (Colomb & Simutis, 1996, cited by Kemp et al., 2019, p. 151) as long as it does not lead to any delays for the group members or feedback recipients.

In fact, social interaction is key to the success of a wiki, with peer feedback resulting in the negotiation and co-construction of meaning (Kemp et al., 2019, p. 151; Lin & Yang, 2011, p. 91). The learners thereby adopt the different social roles of feedback providers and recipients (Kemp et al., 2019, pp. 159-160), both within and across groups (see Ma, 2020), argue for and against points, develop ideas, suggest alternatives and advance their knowledge (review by Lin & Yang, 2011, p. 91). As with other interactive peer approaches (see section 2.2.3), this can help them develop content and language knowledge as well as critical thinking skills (Demirbilek, 2015, pp. 218, 221; Kemp et al., 2019, p. 157). For instance, in terms of grammatical accuracy, a significant effect of wiki feedback was found as compared to traditional handwritten feedback (Vahedipour & Rezvani, 2017, p. 111). Beyond a learning gain, affective aspects are frequently cited, e.g. positive attitudes and the motivating nature of wiki peer feedback (Gielen & De Wever, 2012, p. 592; Kemp et al., 2019, p. 159; Ma, 2020, pp. 212-213). Closely connected to the affective dimension is the social, community-building potential of wikis (cf. the review by Demirbilek, 2015, p. 212; Kemp et al., 2019, p. 153). Due to the time- and space-independent nature of online wikis, the community may even continue beyond normal class times (Demirbilek, 2015, p. 218). The community feeling can have an important motivating function because the students realize that they write for an audience that extends beyond the teacher (Lin & Yang, 2011, p. 95; Vahedipour & Rezvani, 2017, p. 120) and even beyond their own class if the wiki is publicly available online.

At the same time, the learners' contributions serve as feedback to the teachers (Hu & Johnston, 2012, p. 499), who can scaffold their support accordingly (Woo et al., 2013, p. 303). Teachers can review the record of all changes in the history function of the wiki, including the feedback the peers have provided (Kemp et al., 2019, p. 152; see section 3.3 on Google Docs). They may intervene at any point, e.g. during the writing and peer review process, or provide summative feedback at the end (Kemp et al., 2019, p. 159; Woo et al., 2013, p. 303).
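If the wiki platform offers an API, as MediaWiki installations do, this history can even be read out automatically, e.g. to gain a quick overview of who contributed what and when. A minimal sketch, assuming the requests package; the wiki URL and page title are placeholders:

```python
# Hedged sketch: read the recent revision history of a wiki page through the
# MediaWiki API. The wiki URL and page title stand in for an institution's
# own MediaWiki installation.
import requests

API_URL = "https://wiki.example.edu/w/api.php"  # placeholder

def recent_revisions(page_title, limit=10):
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": page_title,
        "rvprop": "user|timestamp|comment",
        "rvlimit": limit,
        "format": "json",
    }
    data = requests.get(API_URL, params=params).json()
    for page in data["query"]["pages"].values():
        for rev in page.get("revisions", []):
            # Each entry shows who changed the page, when, and their edit summary.
            print(rev["timestamp"], rev["user"], "-", rev.get("comment", ""))

recent_revisions("Group_3_Project_Glossary")
```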
Hence, wikis may contain teacher feedback in addition to peer feedback in various ways. This is similar to many other interactive feedback methods, such as peer review in general (see section 2.2.3) or in Google Docs in particular (section 3.3). The same is true of the disadvantages that will be sketched below. On the one hand, this results from the overlap in features of the different collaborative applications; on the other hand, it has to do with the lack of research that specifically focuses on feedback in wikis.

3.5.4 Limitations/disadvantages

In the reviewed studies, the learning gain from peer review in wikis varied and seemed to depend on the quality of the feedback provided (Gielen & De Wever, 2012, p. 592). Prior training on effective peer review therefore seems important (see section 2.2). Still, not all students appreciated peer review in wikis because their content can be changed - or even deleted - by others quite easily (Ma, 2020, p. 201), which undermines their own voice. Conversely, some peer reviewers are hesitant about editing others' texts because they find it too "invasive" (Ma, 2020, p. 202) or because they do not feel expert enough (student cited in Lin & Yang, 2011, p. 99; cf. Kemp et al., 2019). They would thus rather add information than edit or delete existing contents (students in the study by Karasavvidis, 2010, cited in Peled, Bar-Shalom, & Sharon, 2014, p. 580). In particular, if their names are visible to the others (Demirbilek, 2015, p. 219), students might be too worried about damaging or losing face (Lin & Yang, 2011, p. 97). As a consequence, they would provide praise or agreement rather than criticism (Lin & Yang, 2011, p. 99; cf. Peled et al., 2014, pp. 580-581). A possible solution would be to use pseudonyms (Lin & Yang, 2011, p. 99) or to choose an app that permits anonymous contributions (cf. Ma, 2020, p. 214). This, however, would make it hard for teachers to evaluate individual student participation (Trentin, 2009, quoted by Ma, 2020, p. 199; see also Ma, 2020, p. 204).

Finally, there are two interrelated factors, technology and time, that could negatively affect feedback in wikis. Students might feel challenged by the wiki interface and its features (Demirbilek, 2015, p. 219; Lin & Yang, 2011, p. 97). Especially if there is no auto-saving function (Demirbilek, 2015, p. 219; Lin & Yang, 2011, p. 97), data might be lost, which can lead to frustration. Teachers should also ensure that the wiki editing interface is available in a language their students understand (Demirbilek, 2015, p. 219). At any rate, training in the use of the tools is crucial, but this requires time investments on the part of the teachers and students (Demirbilek, 2015, p. 220; Lin & Yang, 2011, p. 97). The next sections will therefore deal with the technological and pedagogical prerequisites.

3.5.5 Required equipment

In order to provide feedback in wikis, a device such as a desktop PC, laptop, tablet or smartphone is needed (Kemp et al., 2019, p. 149). Quite often, wikis are a basic component of the institutional LMS. Otherwise, an external platform would be needed, such as Wikispaces (Demirbilek, 2015), Wetpaint (no longer available, but used by Lin & Yang, 2011), PBworks (http://pbworks.com/education, utilized by Woo et al., 2013) or Google Sites (see Ma, 2020). For instance, Google Sites has three core functions, "i.e. creating pages, editing pages and commenting on pages/posts" (Ma, 2020, p. 204).
For feedback, the latter two are relevant, as well as the revision history (Kemp et al., 2019, pp. 150-151, with reference to Storch, 2011; see also Ma, 2020, p. 204), as will be explained in more detail below.

3.5.6 Implementation (how to)

To alleviate potential barriers (see above), it is crucial to train the students in peer review via wikis (Demirbilek, 2015, p. 222; Hu & Johnston, 2012, p. 501; Lin & Yang, 2011, p. 96). In that regard, learners should become familiar with the three main reviewing functions that most wikis offer (see Kemp et al., 2019, pp. 150-151, with reference to Storch, 2011). First, editing comprises the insertion, modification and deletion of text and other materials. Second, the discussion or commenting function enables social interactions without directly changing or correcting a text. Thus, the peers could negotiate meanings and structures or insert suggestions as comments. Finally, the history function helps the learners to keep track of older versions and all changes that were made (Kemp et al., 2019, pp. 150-151, with reference to Storch, 2011).

If the peers are insecure regarding the wiki functions and the feedback they give or receive, teachers may choose to adjust the wiki settings and offer help (cf. Ma, 2020, p. 213). In terms of privacy settings, teachers should regulate member access to create a safe space for the groups (Kemp et al., 2019, p. 151). In that respect, students might need to register for the platform (see Lin & Yang, 2011, p. 93), but in most cases this is unnecessary because wikis are already a part of the LMS. Additionally or alternatively, the settings can be changed to anonymous contributions if the students fear potential face threats (cf. Demirbilek, 2015, p. 219; Lin & Yang, 2011, p. 97; Ma, 2020, p. 214). Furthermore, the peers might resort to the discussion/comment feature rather than changing the peer's text directly by editing it (Ma, 2020, p. 202).

To monitor the students' progress and to prompt or provide further feedback, teachers are advised to compile a list (or separate wiki) of all links to the learners' wikis so that they can quickly navigate to them (Kemp et al., 2019, p. 151). This meta-wiki might also be used by the students to access their peers' wikis or even to ask the teacher questions (Kemp et al., 2019, p. 151). Overall, the teachers' presence can stimulate further contributions and revisions (Woo et al., 2013, p. 303), both during intra-group and inter-group feedback. During intra-group interaction, teachers could encourage students to actively communicate within the group. Regarding inter-group feedback, teachers can invite individual students or groups to comment on each other's work (Lin & Yang, 2011, p. 94). Apart from group projects, individual students may also create a wiki on their own so that only external (and not group-internal) feedback will be relevant for them. Based on the peer comments, each student or group would revise their wiki project afterwards.

3.5.7 Advice for students

Since peer review is a key principle of wikis, the advice that was given in the foregoing section is relevant for students and will therefore not be repeated here. Most importantly, they need to be familiar with the particularities of peer feedback in wikis and the functions that the platform offers to them for this purpose. For instance, if they are hesitant about inserting direct edits into a wiki, they might opt for the discussion function (Ma, 2020, p. 202), which is similar to commenting in an offline or online text editor (see sections 3.1 and 3.3).
Likewise, the scope of features offered by an app affects the range of possible combinations with other feedback methods, as will be surveyed next.

3.5.8 Combinations with other feedback methods

As many wikis allow for the integration of various multimedia materials (Kemp et al., 2019, p. 151), reviewers might upload audio or video files as feedback sources (see sections 3.11 to 3.13). This would help overcome the constraints of the written modality. Moreover, learners could schedule chat sessions (see section 3.7) or videoconferences (section 3.14) to clarify open questions and discuss the wiki contents or its editing history. Likewise, they could work in the same room (Kemp et al., 2019, p. 151) and even at the same computer, notably during intra-group feedback. That way, immediate oral feedback would be possible as well.

3.5.9 Questions/tasks (for teachers and students)

Knowledge and reflection questions:
(1) Have you ever used wikis yourself? If so, for what purposes? How would you describe your experience?
(2) When would you prefer wiki feedback to other digital written feedback methods? Give reasons.
(3) What is the difference between intra- and inter-group feedback in wiki writing?
(4) What advantages do wikis offer for peer feedback? What limitations do you see?

Application questions/tasks:
(5) Your teacher asked you (and two of your fellow students) to write a wiki entry about a central term that was recently discussed in your class. You received an introduction and a manual about wiki writing from your teacher and you feel ready to start. What steps do you and your fellow students take? Discuss the procedure with them and try it out.

3.6 Feedback in blogs

3.6.1 Definition and alternative terms/variants

Blogs are the short form of weblogs, i.e. logs of the web (Çiftçi, 2009, p. 22; Nguyen, 2012, p. 13; Rostami & Hoveidi, 2014, p. 227; Zhang, Song, Shen, & Huang, 2014, p. 670), and showcase personal, often journal-like entries in reverse chronological order (Gedera, 2012, p. 19). With the help of a blogging application, authors can publish information about a particular topic, including personal thoughts and reflections, on the web. Apart from text, these web entries can contain pictures, hyperlinks, audio and video files etc. (Huang, 2016, p. 42; Taylor et al., 2007, cited by Rostami & Hoveidi, 2014, p. 227). They are frequently described as "online diaries" (Dippold, 2009, p. 18) or "online journals" (Çiftçi & Kocoglu, 2012, p. 62), i.e. "a web-based form of the traditional journal" (Novakovich, 2016, p. 17) or a "publicly accessible personal journal" (Webopedia, cited by Nguyen, 2012, p. 13). However, the genre has become more diversified nowadays (Dippold, 2009, p. 18).

Many blogs are devoted to a particular topic, with the authors presenting themselves as experts in the field (Dippold, 2009, p. 18). Blogs are therefore useful for self-(re)presentation (Dippold, 2009, p. 33), which distinguishes them in some ways from social networks (Huang, 2016, p. 38) and from the idea of collaborative content creation via wikis (see section 3.5). Furthermore, the blog contents are often more personal than in wikis. Nevertheless, the bloggers' aim is to reach a wider community, who can respond to the blog entries by contributing comments (Çiftçi, 2009, p. 22).
Similar to the comment function of forums (see section 3.4), threads may evolve in which the users exchange their views (cf. Çiftçi, 2009, p. 22). Compared to forums, which are often integrated into an LMS and connected to specific course contents, the audience of blogging is usually a wider one (cf. Novakovich, 2016, p. 16). Moreover, the blog authors may modify their original blog posts in response to the readers' feedback (cf. Gedera, 2012, p. 19), whereas forum posts are typically kept in their original version. Apart from the functions of self-expression and self-reflection (Dippold, 2009, p. 33), there is thus also a social motivation to blogging (Gedera, 2012, p. 19; Zhang et al., 2014, p. 670).

Feedback in blogs or blog feedback has alternatively been called "blog-based" (Rostami & Hoveidi, 2014; Zhang et al., 2014), "blog-assisted" (Hernandez et al., 2017) or "blog-mediated" feedback (Novakovich, 2016). It consists of asynchronous (Huang, 2016, p. 39; Rostami & Hoveidi, 2014, p. 228), mostly written, digital comments that are posted in response to a person's blog entry (cf. Bilki & Irgin, 2021, p. 156). This feedback often takes the form of a written paragraph which resembles the prose-like commentary at the end of an assignment (cf. the examples by Bilki & Irgin, 2021, p. 149). A blog feedback sample from an educational context can be found in Hernandez et al. (2017, pp. 116-118). It is structured according to specific assessment criteria and outlines the strengths and weaknesses as well as recommendations to the blog authors.

3.6.2 Contexts of use, purposes and examples

Blog feedback is not only useful for educational settings, but also for social interactions in personal or commercial contexts (cf. Campbell, 2003, cited by Gedera, 2012, p. 19). The blog contents can be updated frequently and be made publicly available (Gedera, 2012, pp. 19-20). Likewise, the feedback comments will stay visible to everyone who has access to the blog.

Its potential for peer feedback has been stressed and researched in different geographical spaces. In most of the surveyed literature, blogs were mainly used to provide peer feedback on writing (e.g. Dippold, 2009, pp. 18-19), notably in ESL (English as a second language) or EFL (English as a foreign language) settings (e.g. Çiftçi & Kocoglu, 2012; Nguyen, 2012). Furthermore, Sayed (2010) used blogs to provide peer feedback on business management students' persuasive writing. However, the blog functions are not limited to writing, as they may contain hyper- and multimedia materials, such as videos, images and file attachments (cf. Hernandez et al., 2017, p. 104). Even though blogs were typically used for peer review, teacher feedback is likewise essential. On the one hand, teachers serve as mediators of the blog-based peer feedback (Çiftçi, 2009, p. 22; cf. the general advice in section 2.2.3 and the implementation in section 3.6.6). On the other hand, they likewise provide feedback to the individuals or groups who created the blogs (Çiftçi, 2009, p. 43; see also Huang, 2016, p. 38). The advantages and challenges for students and teachers will be outlined next.

3.6.3 Advantages of the feedback method

Since not only the blog contents, but also the blog comments are visible to a wider audience, bloggers and reviewers tend to put more effort into creating their contributions and into communicating clearly (Sayed, 2010, pp. 55, 61; Zhang et al., 2014, pp. 677-678).
They may thus take the blogging task more seriously because they address a wider and more authentic readership than simply the tutor or teacher (student cited in Zhang et al., 2014, pp. 676-677; see also Huang, 2016, p. 44; Novakovich, 2016, p. 16; Sayed, 2010, p. 62). This is similar to wikis (see section 3.5), but many further advantages likewise apply to (digital) peer feedback in general (see section 2.2.3). For instance, it was stated that the bloggers appreciated the opportunity to receive feedback from multiple persons and perspectives (Novakovich, 2016, p. 16; cf. Xie et al., 2008, p. 20). In several studies, the students therefore felt very motivated to create their blogs (Çiftçi, 2009, p. 45) and were looking forward to others reading the contents (Zhang et al., 2014, p. 679) and commenting on them (student cited in Zhang et al., 2014, p. 677). Peer feedback was then seen "as evidence of attention from the intended readers" (Zhang et al., 2014, p. 677). In that regard, the timeliness (Zhang et al., 2014, p. 677) as well as "the amount and quality of the feedback learners received from other students" may even "influenc[e] their motivation to post comments" themselves (Dippold, 2009, p. 22, citing Catera & Emigh, 2005). This interactive exchange can consequently boost their confidence in their abilities as well as their motivation to write and improve further (Gedera, 2012, p. 28; Zhang et al., 2014, p. 677; see also Dippold, 2009, p. 30; Hernandez et al., 2017, pp. 105, 138; Sayed, 2010, p. 61). It may also contribute to enhanced relationships and interactions in the classroom (Kitchakarn, 2013, p. 160) and beyond (Greer & Reed, 2008, cited in Pham & Usaha, 2016, p. 727).

Even if the students do not share the same physical space, they are able to collaborate due to the place- and time-independent nature of this asynchronous digital feedback method (e.g. Çiftçi & Kocoglu, 2012, pp. 63, 76). Learners appreciated this flexibility, also because they can work at their own pace (Gedera, 2012, p. 27; see also Huang, 2016, pp. 43-44; Kitchakarn, 2013, p. 160; Sayed, 2010, p. 55). They "have more time to consider what to write and to formulate their responses" (Kitchakarn, 2013, p. 155). As soon as they have posted their comments, these will be immediately visible to the bloggers, who, in turn, can access them at their convenience, anytime and anywhere (Zhang et al., 2014, p. 677).

Furthermore, as all contributions in blogs have a date and time stamp (Gedera, 2012, p. 19), they serve as a useful record for teachers and learners alike (Hernandez et al., 2017, p. 115). Teachers can monitor the students' contributions and intervene when necessary (Çiftçi, 2009, p. 22; Çiftçi & Kocoglu, 2012, pp. 62-63; cf. sections 2.2.3 and 3.6.6). Learners can witness their own progress (student cited in Zhang et al., 2014, p. 678; see also Hernandez et al., 2017, p. 115) and reflect on their learning (Gedera, 2012, p. 28; Nguyen, 2012, pp. 19-20). Notably, "through exposure to a multitude of opinions and through awareness of writing for a wider audience, blogs also foster critical thinking, because learners need to reflect on the possible reactions of others to their postings" (Dippold, 2009, p. 19; cf. Çiftçi, 2009, p. 45; Hernandez et al., 2017, p. 106; Huang, 2016, p. 44; Sayed, 2010, pp. 58, 61). Further learning gains are as follows.
Researchers have observed that blogging and blog peer feedback can lead to improvements in learners' drafts (Çiftçi, 2009, p. 71; Çiftçi & Kocoglu, 2012, p. 77; Pham & Usaha, 2016, pp. 740-741) and writing skills (Hernandez et al., 2017, p. 115; Kitchakarn, 2013, pp. 152, 160; Rostami & Hoveidi, 2014, p. 228; Sayed, 2010, p. 61), for instance in terms of grammar and vocabulary use (Gedera, 2012, p. 26; Nguyen, 2012, p. 17). Moreover, improvements in spelling (Rostami & Hoveidi, 2014, p. 233) and other mechanical aspects were noted, as well as in content and organization (Hernandez et al., 2017, pp. 115, 125, 133). Likewise, the quality of the feedback seemed to improve (Novakovich, 2016, p. 23). As compared to paper-based feedback, there appeared to be "a significantly higher number of directive comments" (Novakovich, 2016, p. 25) that aimed at helping the authors improve through suggestions. Apart from the immediate learning gain from the comments, the feedback may serve as a model for the learners' own feedback contributions (Sayed, 2010, p. 58). They also found their peers' blogs helpful, as they could learn from good examples, for instance regarding the structure of the blog (Huang, 2016, p. 41; Nguyen, 2012, pp. 17, 19). Hence, there can be multi-faceted learning gains from blog-based peer review (cf. section 2.2.3 on peer feedback).

Not only the learning gain, but also further factors had an impact on students' positive attitudes towards blog-based peer feedback (Kitchakarn, 2013, pp. 152, 159). They emphasized its easy access and usability (e.g. Çiftçi & Kocoglu, 2012, p. 75; Dippold, 2009, p. 32). To provide feedback, they do not need any specific software knowledge, but simply click on the "comments" feature in order to type in their text (Sayed, 2010, p. 55) and potentially include hyperlinks to helpful resources (cf. Gedera, 2012, p. 22; Sayed, 2010, p. 55, citing Jones, 2006). Another advantage for teachers and learners is that most blogs are free to use (Zhang et al., 2014, p. 679) or available at very low cost (Sayed, 2010, p. 55) and that it is easy to set them up (Sayed, 2010, p. 55; see also Wu, 2006, p. 136). However, there are certain limitations, as the next section will show.

3.6.4 Limitations/disadvantages

Most of the limitations of blog-based (especially peer) feedback were affective in nature, but there are also several challenges arising from the technology itself. First of all, the asynchronous environment of blog feedback means that comments usually cannot be followed up immediately (Dippold, 2009, p. 32). Moreover, the blog entries cannot be edited directly by the feedback providers, which stands in stark contrast to the wiki interface (Dippold, 2009, p. 32; see section 3.5) or to word processing software (Bilki & Irgin, 2021, p. 156). On the one hand, this helps to preserve the author's voice and autonomy (cf. Pham & Usaha, 2016, p. 742); on the other hand, it makes the provision of specific feedback very difficult (cf. Huang, 2016, pp. 43-44). There is hardly any possibility for easy cross-referencing to concrete passages of the blog post (Dippold, 2009, p. 28). Accordingly, surface-level corrections, such as for spelling and punctuation, but also for grammar, are difficult to realize (cf. Huang, 2016, p. 41). Rather, blog comments might be more suitable for macro-level feedback on content and organization (see e.g. Bilki & Irgin, 2021, p. 156) as well as for meaning negotiation (Huang, 2016, p. 44).
The teachers' workload was sometimes cited as another concern: Since blog tools usually do not offer a revision history of all changes, teachers would need to compare the first drafts with the revisions in order to detect changes and improvements (Zhang et al., 2014, p. 680). The monitoring thus becomes difficult and may lead to a higher workload as compared to wikis (section 3.5) or word processors (sections 3.1 and 3.3), for instance.

Further limitations cited in the literature seem to result from general challenges with peer review (see section 2.2.3). To exemplify, students were anxious about sharing their blog posts with their peers (student cited in Hernandez et al., 2017, p. 113), but also about receiving (Dippold, 2009, p. 25; Kitchakarn, 2013, pp. 159-160) and writing feedback comments (Çiftçi, 2009, p. 109; Nguyen, 2012, p. 18). By contrast, the students in Gedera's (2012) study did not feel embarrassed or disappointed about peer feedback (p. 26). Moreover, several comments were social, informal (e.g. using abbreviations and emoticons) and superficial (e.g. "I agree", "good job") rather than informative and critical (Wu, 2006, pp. 132-133; Xie et al., 2008, p. 23; cf. Nguyen, 2012, p. 18). In addition, an adverse chain effect was observed: If a peer did not thoroughly engage in critical thinking, neither would their partner (Xie et al., 2008, p. 23). Overall, much seems to depend on appropriate training and familiarization with feedback and peer feedback in particular (see section 2.2.1).

However, the very nature of blogging itself could make it less suitable for peer feedback in educational settings. In blogging, the reflection on personal experiences is often foregrounded (cf. Dippold, 2009, p. 33), but it is made publicly available at the same time (Sayed, 2010, p. 58). Correspondingly, some students may avoid writing about certain topics and views if they think that their perspectives will not be shared by others (Xie et al., 2008, p. 23). They may hence be more conservative in their writing in order to avoid ridicule (Xie et al., 2008, p. 23). This, in turn, might not be conducive to their self-reflection (Xie et al., 2008, p. 23) and self-confidence. On the other hand, a few students in Zhang et al.'s (2014) study were likewise aware of the public display of their blog, which motivated them to put more effort into their writing in order to receive favorable feedback (pp. 677-678).

Apart from peer feedback, authors might, however, receive unsolicited comment spam if their blogs are visible to the public (Wu, 2006, p. 136). This stands in contrast to forums or other feedback methods that are integral to a password-protected LMS (Novakovich, 2016, p. 26). Teachers and students may therefore raise doubts concerning the security and privacy of their contributions (Huang, 2016, p. 42; Pham & Usaha, 2016, p. 726). Finally, some "blog services require registration before users can make comments on others' blog posts" (Zhang et al., 2014, p. 680). If external software is used, all learners and the teacher would need to set up a new account (Zhang et al., 2014, p. 680). Some service providers even delete comments, for example if they contain "politically sensitive words" (Zhang et al., 2014, p. 680).
The choice of adequate software could thus be decisive, which leads us to the required equipment for blog feedback.

3.6.5 Required equipment

As with most other methods, blog feedback works with different hardware devices, such as a desktop PC, laptop, tablet PC or mobile phone. To work on the blog entries and feedback comments, internet access would be needed to open the blog in a web browser (cf. Çiftçi & Kocoglu, 2012, p. 75; Wu, 2006, p. 136). Blog software is a built-in tool of some LMS (Dippold, 2009, p. 19), but in most of the previous studies, an external application had been utilized, notably Blogger (https://www.blogger.com/; see e.g. Çiftçi, 2009; Gedera, 2012; Kitchakarn, 2013; Wu, 2006; Xie et al., 2008). It "allowed users to publish [and edit] their posts easily, add videos and photos, access […] archived entries, and customize their templates" (Çiftçi, 2009, p. 59). Another common service provider is WordPress (https://wordpress.com/), which was more recently used by Bilki and Irgin (2021). Suggestions regarding the single steps for peer review in blogs will be given next.

3.6.6 Implementation (how to)

First of all, a purposeful choice of a blogging theme is crucial because this might have an impact on students' emotions when writing their blogs and blog feedback (see disadvantages in section 3.6.4). The topic should not be too personal or controversial, but this also depends on the specific context in which the blog feedback activity is implemented. Dippold (2009), for instance, suggested creating "an online self-presentation page through which students can present themselves to potential employers" (p. 34). Similarly, one writing exercise in Wu's (2006) study was to compose a job-related paragraph (p. 137). This way, blogs in educational contexts not only fulfill the blog's original function of "self-presentation and self-expression" (Dippold, 2009, p. 33), but also lift it to a more professional level that is relevant for the students at the same time. Accordingly, their intrinsic motivation in the blog writing and blog feedback activity may increase. In addition, "the choice of blog services is critical" (Zhang et al., 2014, p. 680) because some external providers might require registration or have other disadvantages (see section 3.6.4). If possible, "a safe password-protected […] environment" should be created (Kitchakarn, 2013, p. 158) and precautions should be taken in order to avoid "unsolicited comment spam" on the students' blogs (Wu, 2006, p. 136). Access to blogs might be regulated and facilitated through a meta-blog, a meta-wiki (see section 3.5.6) or an entry on the course page of the LMS (cf. Huang, 2016, p. 40). This way, the teachers and students can navigate to the individual students' blogs quite easily (the lack of such easy navigation had been criticized by e.g. Xie et al., 2008, p. 24). Next, the learners should be familiarized with the blogging application (Bilki & Irgin, 2021, p. 145; Huang, 2016, p. 40). In that regard, instructors may also show several blogs for illustration (Huang, 2016, p. 40). Moreover, teachers should provide in-process support during the peer activity (Dippold, 2009, p. 32; Pham & Usaha, 2009, p. 8; Xie et al., 2008, pp. 23-24; see section 2.2.3). The actual peer review phase usually starts after each student (or each group; see Zhang et al., 2014, p. 681) has created their blog. Usually, the names of the commentators are visible (Sayed, 2010, p. 60), but sometimes it could make sense to conduct the peer review anonymously (student cited in Nguyen, 2012, p. 18).
However, it is also possible to start with intra-group peer review if several students create a blog collaboratively (Kitchakarn, 2013, p. 158; see the procedure for feedback in wikis described in section 3.5.6). Most often, however, either inter-group feedback (Zhang et al., 2014, p. 681) or individual blog feedback was conducted, which aligns with the role of blogs as personal presentations. The external reviewers can only use the comment function to provide feedback (Çiftçi, 2009), whereas group-internal feedback may already occur in the blog editing program itself. To solve the issue of only being able to provide general feedback, the reviewers may copy sentences from the blog to the comment section to indicate the errors (teacher in Huang, 2016, p. 43; student in Nguyen, 2012, p. 20). Alternatively, blog feedback can be combined with other feedback methods to provide direct corrections for language errors (see also section 3.6.8). To instantiate, Dippold (2009) utilized a printout for specific feedback (p. 30). In a digital environment, it would likewise be feasible to create a PDF of the blog or to utilize a web extension in order to give detailed feedback in written or multimodal ways (see e.g. the social annotation tool Hypothes.is in section 3.3.5). After the feedback has been exchanged and clarified, the students would use the edit function to revise their blog drafts (Çiftçi, 2009; Huang, 2016, p. 40; Sayed, 2010, p. 60). Altogether, there can hence be several rounds of (peer and teacher) review and revisions (see Bilki & Irgin, 2021, p. 146). The final round mostly centers on editing surface-level aspects (Hernandez et al., 2017, p. 110). Lastly, the students would publish the finalized version of their blog (Sayed, 2010, p. 60).

3.6.7 Advice for students

In this section, no further advice for students will be presented because it has already been part of the peer feedback procedure that was described above.

3.6.8 Combinations with other feedback methods

While blogs can be rich in multimedia (Hernandez et al., 2017, p. 104; see also Huang, 2016, p. 42), the comment function is often restricted to text only. However, via hyperlinks, additional resources can be integrated that may direct the blogger to external websites or video platforms or to a shared cloud space. This way, the reviewers could share various materials, including personalized feedback in written, audio or video format (see sections 3.11 to 3.13). For example, the reviewers may print a PDF of the blog or copy its contents into an offline or online text document (see sections 3.1 and 3.3) in order to annotate it with specific comments. This document would then be shared with the blogger in addition to the more general or global comments that are typically written in the comment section of blogs. Moreover, some browser extensions exist that enable the creation of website annotations (section 3.3.5) or feedback videos (section 3.12). These, in turn, can be shared through file uploads in the blog posts or by inserting the link to the video server in the comment box. To facilitate collaboration, including on specific aspects of writing, blogging could be preceded by collaborative writing in a wiki (Dippold, 2009, p. 32; section 3.5) or in a cloud document (see section 3.3). In other words, students may prepare their blog post in a wiki (or other collaborative tool), which would also ease intra-group feedback.
In the end, blog-based reviews could be followed up by other methods, such as videoconferences (section 3.14) or feedback in the face-to-face classroom (cf. Pham & Usaha, 2009, p. 8; 2016, p. 731). This way, not only may learners gain a deeper understanding, but teachers can also learn about the students' progress and the reasons for making certain changes while omitting others in their revisions (Wu, 2006, p. 136).

3.6.9 Questions/tasks (for teachers and students)

Knowledge and reflection questions:
(1) Are you familiar with any blog platform? Have you used it yourself already or do you enjoy reading blogs written by others?
(2) Regarding your students' privacy, what might be a better alternative to public blogs?
(3) List two disadvantages of blog feedback for both students and teachers. What could you do in order to overcome them?

Application questions/tasks:
(4) Compare feedback in forums, wikis and blogs.
a. Identify their similarities and differences. You may create a table to gain an overview.
b. When would you use which method? Add a column to the table in which you specify potential contexts of use.
c. Try out the method for at least one of the purposes/contexts that you have listed and evaluate the success. Is there anything that you would do differently next time?

3.7 Chat/Messenger feedback

3.7.1 Definition and alternative terms/variants

Chats are usually classified as a "synchronous computer-mediated communication" (SCMC) method (Arroyo & Yilmaz, 2018, p. 944; Bower & Kawaguchi, 2011, p. 41; Dao, Duong, & Nguyen, 2021, p. 1; Samani & Noordin, 2013, p. 46; Ziegler & Mackey, 2017, p. 80), even though response times are often slightly longer than in spontaneous talk ("quasi-instantaneous" according to Wigham & Chanier, 2013, p. 260). Quite aptly, chat communication was characterized as "[c]onversations in slow motion" by Beauvois (1998; cf. Lai & Zhao, 2006, p. 102). With advanced typing skills, voice dictation and the sending of audio messages, the communication is almost instantaneous, though; hence the term instant messenger or "instant messaging software" (Arroyo & Yilmaz, 2018, p. 944; see also Liang, 2010, p. 46; Soria, Gutiérrez-Colón, & Frumuselu, 2020, p. 800). Assessors can consequently provide rapid feedback via chats and messenger apps. These may exceed the written format and include multimedia attachments as well. Mostly and traditionally, however, chat feedback included text messages only, which is why many scholars called it "synchronous text-based communication" (Udeshinee, Knutsson, Barbutiu, & Jayathilake, 2021, p. 176) or "text-based SCMC" (Bower & Kawaguchi, 2011, p. 42) or considered it a type of "written oral-like conversation" (Lai & Zhao, 2006, p. 102). Ziegler and Mackey (2017) also stressed its interactional affordances by calling it "interactional feedback in synchronous computer-mediated communication" (p. 80), which, however, also comprised video chats. Similarly, in some publications SCMC refers not only to text chats, but also to voice, video and multimodal chats (e.g. Dao et al., 2021, pp. 2, 4). Other common terms are "online chats" or "online chatting" (Avval, Asadollahfam, & Behin, 2021), which appear to be progressively replacing the "computer-mediated" characterization because chats can in fact be used on multiple devices. While chatting was previously done on desktop PCs or laptops, the devices are more mobile nowadays, e.g. smartphones or tablets.
However, with the recent rise of videoconferencing applications for educational, professional and private purposes during the Covid-19 pandemic, chats have turned into a common component of live exchanges during web conferences, which are typically used on personal computers, laptops or tablets rather than smartphones (see also section 3.14). Overall, chat feedback could be defined as synchronous messaging via online applications that serves to support learners through a rather instantaneous exchange of (corrective) information and recommendations.

3.7.2 Contexts of use, purposes and examples

Chats enable a quick exchange of information and therefore seem suitable for feedback in multiple directions. They were frequently used for peer feedback (Chang, 2009; Dao et al., 2021; Liang, 2010), including corrective feedback during e-tandem exchanges between native speakers and non-native speakers (Akbar, 2017; Bower & Kawaguchi, 2011; Liu, 2015), but also for instructor feedback (Avval et al., 2021) and formative student feedback to the teacher (Chen, 2021). In most of the reviewed literature, text chats were implemented in language learning contexts, even though they could be useful in other disciplines as well. To instantiate, Baumann-Wehner, Gurjanow, Milicic and Ludwig (2020) used chat feedback for outdoor learning in mathematics. Regarding foreign language learning, some early research investigated online text chats between native speakers and language learners, notably corrective feedback from native speakers to non-native speakers in e.g. e-tandems. However, such feedback turned out to be very limited, as the primary focus was on communication rather than grammar or vocabulary instruction (Bower & Kawaguchi, 2011, p. 59). The results of such experiments therefore often did not turn out to be satisfactory (e.g. Liu, 2015; Loewen & Erlam, 2006). Another direction is feedback given by the teacher in language courses. In Samani and Noordin's (2013) study, for example, the teacher gave synchronous corrective feedback while the students solved specific grammar exercises. A similar approach was taken by Soria et al. (2020), where students received a video via WhatsApp and had to answer questions about it, with the teacher providing feedback. Hence, chat feedback functioned as in-process support while learners engaged in a task. A different, sequential approach was chat feedback after the provision of asynchronous feedback. For instance, in Ene and Upton (2018), the students first submitted writing assignments. Next, the teacher gave written feedback in the text editor Word (see section 3.1) before discussing it with the students in the chatroom of the LMS. Chats thus helped to clarify open questions regarding the previously provided feedback. Similarly, in Chang (2009), peers first exchanged their assignments via the MSN Messenger before they supplied peer feedback via text chats. In older studies (e.g. Liu & Sadler, 2003), the electronic feedback exchanges were sometimes performed in a computer lab, though. Since the students sat in the same room, they could have talked orally to each other, thus producing a rather artificial set-up. By contrast, with the recent rise in popularity (and oftentimes necessity) of videoconferencing in remote learning, chat feedback has become an integral part of live exchanges in other modalities.
Notably, if chats are part of a videoconference, assessors could use the text chat in order to type a correction or hint into it without interrupting the flow of a conversation or presentation that is taking place (e.g. Guichon, Bétrancourt, & Prié, 2012, p. 187; Wigham & Chanier, 2013; see section 3.14). Likewise, learners can formulate a feedback request in the chat if they do not understand something, for instance. Due to the online environment, hyperlinks to useful resources with further information could be shared easily to provide guidance for the learners. Moreover, learners might use private chats to interact with their peers or teachers if they are afraid of losing face when asking a question publicly. These and additional advantages will be surveyed in more detail below.

3.7.3 Advantages of the feedback method

In most prior research, text chats had been investigated, while nowadays multimodal chats have become more prevalent (cf. Dao et al., 2021, p. 2). The discussion will therefore center on the benefits of text chats, but also point at potential affordances of multimodal chats. First of all, the slightly delayed nature of chats affords learners more time for planning their messages, reflecting on them (Chang, 2009, p. 57; Razagifard & Razzaghifard, 2011, p. 13; Satar & Özdener, 2008, p. 605; Udeshinee et al., 2021, p. 176) and understanding them (Dao et al., 2021, p. 20). They might hence be better able to formulate clear feedback messages in response to foregoing messages (Dao et al., 2021, p. 20). Moreover, the learners can reconsult the messages any time, not only the feedback from others (Chang, 2009, p. 57), but also their own contributions (cf. Liu, 2015, p. 146). Overall, this gives them an "opportunity to notice, analyze and internalize the corrective feedback" (Sauro, 2009, p. 111) or to engage in self-corrections (Bower & Kawaguchi, 2011, p. 43; Lai & Zhao, 2006, p. 112; Liu, 2015, p. 146; Sotillo, 2010, p. 365). Accordingly, researchers lauded the additional "processing and planning time" (Razagifard & Razzaghifard, 2011, p. 5; Sauro, 2009, p. 110) of chats as compared to live interactions (cf. Arroyo & Yilmaz, 2018, p. 944; Lai & Zhao, 2006, p. 102). However, chats also enable instantaneous corrections (Arroyo & Yilmaz, 2018, p. 967; cf. Honeycutt, 2001, pp. 48, 50) if not much planning time is needed. This is valuable for learners' in-process support. Moreover, if used in an audio- or videoconference, teachers or peers could provide corrections or suggestions in the text chat without interrupting the flow of communication (Avval et al., 2021, p. 212; Guichon et al., 2012, p. 187; Wigham & Chanier, 2013, p. 274). At the same time, learners may feel that they have a voice in the classroom. They can contribute in the chat whenever they want to and do not have to wait until the teacher allows them to speak. Some learners might also be too anxious to participate orally (AbuSeileek & Rabab'ah, 2013, p. 49; Satar & Özdener, 2008, p. 606; Udeshinee et al., 2021, p. 187), for example because they are not yet fluent in the target language (cf. Udeshinee et al., 2021, p. 188). On the other hand, the written modality gives them the time to think about formulating their message and it may therefore be "suitable for lower proficiency learners" as well (Satar & Özdener, 2008, p. 606; cf. Dao et al., 2021, p. 19).
Moreover, chat communication appears to be more anonymous and less face-threatening than live interactions, which may encourage greater participation among the students (cf. AbuSeileek & Rabab'ah, 2013, p. 55). Consequently, they might also be less afraid of pointing out mistakes or addressing problems (Chang, 2009, p. 57). For this, they may even use the private chat function of a videoconference to provide peer and teacher feedback, which helps them save their own and others' face. Altogether, researchers have therefore argued that "SCMC may promote more equal participation […] or reduce anxiety" (Ziegler & Mackey, 2017, p. 89, based on a review of studies; e.g. Satar & Özdener, 2008, p. 606; Udeshinee et al., 2021, p. 187). Furthermore, in contrast to the ephemeral character of oral interactions, text chats offer an enduring written record of the feedback (e.g. Honeycutt, 2001, p. 52; Lai & Zhao, 2006, p. 112; Sauro, 2009, p. 100). As a result, learners can access and re-read the feedback whenever and wherever they want to (Arroyo & Yilmaz, 2018, p. 944; Chang, 2009, p. 57; Liang, 2010, p. 57; Sotillo, 2010, p. 367; Udeshinee et al., 2021, p. 185). In messenger apps, chats are typically saved automatically, whereas live chats in videoconferences need to be saved manually by the users. There usually is a "save" button; otherwise, the students may select the text and copy it into a separate document. If this does not work, they could take a screenshot of the feedback chat. This permanency and flexible availability of text chats serve as a record of the learners' (linguistic) development and can encourage deeper reflection about errors (Lai & Zhao, 2006, p. 112; Sotillo, 2010, p. 367), but also about feedback discourse as such (Liang, 2010, p. 57). Hence, chats "combine the […] benefits of the spoken and written forms of communication" (Bower & Kawaguchi, 2011, p. 43) and may additionally contain multimedia attachments. For example, users could record audio and video feedback and disseminate them via messengers (see also sections 3.11 to 3.13). These multimodal chats (Dao et al., 2021, p. 2) can also include pictures and emojis that especially younger students may find engaging (Soria et al., 2020, p. 797). If only typing is possible, then smileys could be reproduced by means of emoticons, such as ":-)". Furthermore, the texts themselves may contain hyperlinks to websites with related learning resources. Thus, chat feedback is not necessarily restricted to texts, but can be complemented by external media and non-linguistic signs (Rassaei, 2017, p. 2). Moreover, the social presence of teachers and peers can be enhanced through chat feedback. Learners may feel that they "receive the individual attention of teachers" (Udeshinee et al., 2021, p. 184) and that they "build meaningful learning partnerships" with their peers (Sotillo, 2010, p. 364). For instance, Honeycutt (2001) noted a higher use of first- and second-person pronouns in chats as compared to email peer feedback (p. 43; section 3.8), which may testify to a greater personal involvement. Its conversational nature (Honeycutt, 2001, pp. 48, 50; Tare et al., 2014, cited in Udeshinee et al., 2021, p. 176) makes it more interactive, so that chat feedback fulfills not only informative or corrective, but also social functions. Indeed, scholars have observed that learners use chats for social talk during feedback activities (Chang, 2009, p. 51; see also Liang, 2010), which has a positive impact on their affect (cf. Liu & Sadler, 2003, p. 193; Murphy, 2011, cited by Ziegler & Mackey, 2017, p. 88).
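As a side note on the permanent record mentioned above: once a chat log has been saved as plain text, it can also be filtered automatically, for example to collect only the feedback turns of a session. The following minimal sketch in Python assumes an export in which each line has the form "HH:MM:SS Name: message" and that reviewers prefix their comments with an agreed marker such as "FB:"; both the line format and the marker are illustrative assumptions, since export formats differ between tools:

import re

# Assumed line format: "HH:MM:SS Name: message" (varies between chat tools).
LINE = re.compile(r"^(\d{2}:\d{2}:\d{2}) (.+?): (.+)$")

def extract_feedback(path, marker="FB:"):
    """Collect all chat turns whose message starts with the agreed marker."""
    feedback = []
    with open(path, encoding="utf-8") as f:
        for raw in f:
            match = LINE.match(raw.strip())
            if match and match.group(3).startswith(marker):
                time, name, message = match.groups()
                feedback.append(f"{time} {name}: {message[len(marker):].strip()}")
    return feedback

# Example: print only the feedback turns of a saved session.
for turn in extract_feedback("chat_export.txt"):
    print(turn)

In this way, the enduring record of the chat could be condensed into a feedback summary that learners revisit during revision.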
Learners' positive affect might also be due to their familiarity and positive experiences with chats from their free time (Udeshinee et al., 2021, pp. 184-187). As many students enjoy chatting (Avval et al., 2021, p. 212; Honeycutt, 2001, pp. 49-50), they may likewise find chat feedback very motivating in instructional contexts (Ahmed et al., 2021, p. 295; Soria et al., 2020, p. 806). To better connect with the learners (and their lifeworld), it could also complement traditional face-to-face teaching and not just digital courses (Ziegler & Mackey, 2017, p. 90; cf. Udeshinee et al., 2021, p. 185). Even in poorly equipped classrooms, most students have a messenger-compatible device so that chats will be feasible (Udeshinee et al., 2021, p. 185). Indeed, researchers noted a high engagement with the chat feedback contents, which led to some learning gains, such as (perceived) improvements in writing (Ahmed et al., 2021, pp. 212, 295; Chen, 2021, p. 76; Ene & Upton, 2018, pp. 9-10) or higher passing rates (Chen, 2021, p. 87). Presumably, this is because students have to process the chat messages more deeply than in several other methods: in contrast to direct corrections and edits in a text (see section 3.1) or cloud document (see section 3.3), the original writing is preserved, so that learners cannot simply accept ready-made corrections (e.g. through track changes), but need to transfer the ideas from the chat to their own draft. Furthermore, the chat makes the incorrect forms or the suggestions (visually) more salient as compared to oral interactions (Arroyo & Yilmaz, 2018, p. 944; Liu, 2015, p. 146; Razagifard & Razzaghifard, 2011, p. 5; Sauro, 2009, p. 100). Researchers have found that corrections and hints appeared to be more implicit in speaking (e.g. through a change in intonation; Lai & Zhao, 2006, pp. 112-113), whereas reviewers tended to formulate their feedback more explicitly in chats. For instance, they may directly ask "What is …?" if they want to clarify meaning (Lai & Zhao, 2006, pp. 112-113) or pinpoint corrections through typographic markers, such as through question marks or capital letters, as in "recEIved" (for correcting "recieved" [sic]) (cf. Guichon et al., 2012, p. 193). The visibility of the incorrect and/or corrected form can raise learners' awareness of the mistakes they have committed (Ene & Upton, 2018, p. 7; Samani & Noordin, 2013, p. 51). Hence, in chat feedback, noticing becomes more likely and focus on form is facilitated (cf. the reviews by Lai & Zhao, 2006, p. 104, and by Ziegler & Mackey, 2017, p. 83). For instance, Lai and Zhao (2006) compared text chats with face-to-face conversations and detected "20 % more noticing of negotiation of meaning in online chat" (p. 110). Moreover, they found that most of the recasts in online chats dealt with morphosyntactic aspects, whereas recasts about lexical items predominated in face-to-face interactions (Lai & Zhao, 2006, p. 111). This attests to a more fine-tuned focus on form that is mainly afforded by the asynchronicity (prolonged processing time) and availability (permanency) of the chats (Lai & Zhao, 2006, p. 112). As one participant in Lai and Zhao's (2006) study explained, "after typing, I can look at the sentence. After I saw the sentence, I noticed my mistake" (p. 112).
The same is true of peer review, as a student in Dao et al.'s (2021) research remarked: "In the text-chat, since we could see our messages, it was easier to notice the errors and thus we could correct each other" (p. 19, emphasis omitted). Similarly, Sotillo (2010) counted "more opportunities for error noticing and awareness of linguistic forms" in text chats as compared to voice chats (p. 363), but also confirmed that both offer opportunities for conscious reflection about language use. This may likewise result in self-corrections by the learners (Bower & Kawaguchi, 2011, p. 43; Lai & Zhao, 2006, p. 112; Liu, 2015, p. 146; Sotillo, 2010, p. 365). It appears that focus on form is more likely when chat feedback is employed synchronously while learners work on a certain task or make oral contributions. However, it is also suitable for higher-order aspects in some types of tasks. To exemplify, in brainstorming tasks, ideas can be exchanged and negotiated quickly (Liang, 2010, p. 55; cf. the review by Honeycutt, 2001, p. 52). Similarly, chats that are used as a follow-up to asynchronously provided feedback showed a stronger emphasis on higher-order concerns (Ene & Upton, 2018, p. 10). Overall, then, it seems that chat feedback might foster learners' critical thinking (cf. Ahmed et al., 2021, p. 295) regarding both higher- and lower-order aspects (cf. Avval et al., 2021, p. 212). While there seem to be many benefits of chat feedback ("linguistic, pragmatic, affective, and communicative benefits" according to Ziegler & Mackey, 2017, p. 90), several findings are still contradictory. One reason is that research is still scarce, especially concerning voice chats, video chats and multimodal chats (see the reviews by Ziegler & Mackey, 2017, p. 86, and Dao et al., 2021, pp. 2, 4-5). Beyond the modality, personal preferences and contextual conditions (such as task demands or the learning environment) can turn into important mediating factors, which necessitate further investigation (Ziegler & Mackey, 2017, p. 86). Further limitations will be addressed below.

3.7.4 Limitations/disadvantages

If chat feedback is limited to written text only, as it was in most of the existing studies, several challenges arise for the users. First of all, typing is usually slower than speaking, which may negatively affect the flow of the exchange (e.g. Chang, 2009, p. 59; Dao et al., 2021, pp. 19, 21; Ene & Upton, 2018, p. 8; Lai & Zhao, 2006, p. 112). Accordingly, text-based chat feedback was frequently characterized as time-consuming (Chang, 2009, p. 57; Dao et al., 2021, pp. 18, 21; Ene & Upton, 2018, p. 8). Especially learners and teachers with "[l]imited typing skills" (Lai & Zhao, 2006, p. 115) could feel discouraged and frustrated (see also Chang, 2009, p. 57, for peer feedback; Ene & Upton, 2018, for instructor feedback). What is more, they might feel pressured to give a swift response because a time lapse can be annoying for their interlocutors who are waiting for a reply (cf. Dao et al., 2021, pp. 19, 21). The time efficiency is further reduced if chat sessions are held after the asynchronous delivery of feedback in a different mode. For instance, peers (Chang, 2009, p. 57) and teachers (Spencer & Hiltz, 2003) reported that the scheduling of chat meetings was problematic or that nobody showed up at the agreed time.
With chatting via mobile messengers, though, learners and teachers would not be dependent on a chat appointment, but could flexibly send their messages at their own convenience (see section 3.7.3 on advantages). Still, the textual mode entails several restrictions regarding the level of detail and the resultant clarity of the messages as opposed to other modes (Chang, 2009, p. 56; Ene & Upton, 2018, p. 8). As typing is tedious, the texts tend to be rather short, so that the chat partners could easily misunderstand each other's messages (Chang, 2009, p. 56). In addition, if the chat feedback is given in a foreign language, further communication problems may arise (Chang, 2009, p. 59). Overall, there are both quantitative and qualitative differences between chat feedback and other methods. Regarding the quantity, Ene and Upton (2018) discovered a significantly lower amount of feedback and uptake from the feedback when chats were used as compared to asynchronous feedback in a text document (pp. 7, 10). Furthermore, others noted that chat feedback centered more on local aspects than on higher-order concerns (Chang, 2009, p. 58), probably because longer explanations would have been too time-consuming. By contrast, asynchronous modes free up further time for reflection and the formulation of more comprehensive advice. Similarly, Honeycutt (2001) remarked that students' peer responses in chats seemed to lack focus and reflection and that they tended to take chat feedback less seriously than email feedback (see section 3.8). This low level of detail may also result from the fact that references to specific passages of an assignment are problematic in chat feedback (Honeycutt, 2001, pp. 28-29). Assessors would need to copy and paste or type the pertinent passage into the chat in order to provide feedback on it (cf. Chang, 2009, p. 49). Similarly, cross-references to earlier chat sequences could be difficult. Quite frequently, there is an overlap of parallel contributions, which may cause "several intervening turns unrelated to the initial error" (Loewen & Erlam, 2006, p. 10; see also Sauro, 2009, p. 110). It would then again be necessary to quote the initial passage in order to refer back to it. Otherwise, learners may easily overlook the feedback that has been provided (cf. Sotillo, 2010, p. 364). However, many messengers, such as WhatsApp, nowadays make it possible to respond specifically to earlier chat sequences and thus to resume the discussion. Still, this "lack of adjacency" (Sauro, 2009, p. 110) and resultant "split negotiation" (Smith, 2003, cited by Giguère & Parks, 2018, p. 181) might be cognitively challenging for everyone who is involved (cf. Honeycutt, 2001, pp. 30-32), no matter whether it occurs in delayed or instant chats. Moreover, referring to concrete assignment passages costs time and effort and appears to be less convenient in chats than in emails, for instance (Honeycutt, 2001, pp. 44-45). This might also be due to the perceived fast-paced nature of chats, in which comprehensive cross-references would slow down the interactive exchange (Honeycutt, 2001, p. 45). Thus, references are typically given in a rather brief manner (e.g. by mentioning the page and paragraph number), but the recipients still have the possibility to ask questions if they are unable to find the passage promptly (Honeycutt, 2001, p. 47).
Apart from the comparison of electronic feedback in text chats or text editors, several researchers also contrasted text chats with video chats and face-to-face communication. Again, there were quantitative and qualitative differences between the modes. Quantitatively, Dao et al. (2021) identified 278 feedback instances in video chats as opposed to 69 in the text chat (p. 14). This difference applied to all feedback types that they had investigated (p. 15) and it reached statistical significance (p. 16). Moreover, video chat was preferred to text chat by most learners for several reasons (Dao et al., 2021, p. 16). Aside from the time-consuming typing and slow interaction speed, the learners complained about the lack of "facial expression and emotions" (Dao et al., 2021, p. 16). In video or face-to-face communication, by contrast, gestures, intonation and facial expressions provide helpful cues for clarifying meaning (Chang, 2009, p. 59; Dao et al., 2021, pp. 6, 16; Lai & Zhao, 2006, p. 113; Liu & Sadler, 2003, p. 221). In addition, the social presence conveyed by the visibility of the interaction partner helps to build rapport and create positive learning relationships (Dao et al., 2021, p. 1; see also Ene & Upton, 2018, p. 8). Furthermore, the chat mode itself might incline learners to engage in "more off-task or about-task interactions" than in review- and revision-related discourse (Chang, 2009, p. 58; cf. Honeycutt, 2001, pp. 43-44, 49-50; Liang, 2010, p. 57; Loewen & Erlam, 2006, p. 11). Likewise, when feedback was investigated as part of e-tandem exchanges, feedback sequences tended to be rare, as the participants focused more on communication than on correction (Bower & Kawaguchi, 2011, p. 59). Even though social interaction is important for learning, including peer support and collaborative writing (Liang, 2010), there might nevertheless be too many distractions, so that teachers could have difficulties "in keeping the students on task" (Loewen & Erlam, 2006, p. 10). Finally, there are a few more potential obstacles to consider. First, not all students across the globe might have (permanent) access to chats due to a lack of hardware or internet availability (Udeshinee et al., 2021, p. 185). As a result, assessors might refrain from sending large multimedia attachments in the chats (Soria et al., 2020, p. 807). Furthermore, for personal or cultural reasons, teachers, students or their parents could be against chatting if it is considered too informal or unethical (Udeshinee et al., 2021, p. 185). For instance, parents may fear that their children begin chatting with strangers if their teachers introduce chats as a communication tool. Beyond that, neither teachers nor students might want to share their private phone numbers for educational purposes, especially if commercial messengers, such as WhatsApp or Telegram, are utilized. In that regard, they might also fear infringements of data protection rights. Therefore, the use of a chat system that has been approved by the educational institution should be preferred. On the other hand, such a system might not be as handy or as frequently accessed as the ordinary messengers that teachers and students are utilizing anyway on their mobile devices. The next section will therefore survey some alternatives.

3.7.5 Required equipment

To provide feedback via chats, users need chat software and a device on which it can be installed. This could be a desktop computer, a laptop, a tablet or a smartphone.
However, some apps are restricted to e.g. mobile phones only, whereas others work on several devices and operating systems. Moreover, the available tools are developing and changing continuously, with new ones being launched and version updates being run regularly. Others are disappearing from the market, such as the Yahoo Instant Messenger that was cited by Lai and Zhao (2006), Razagifard and Razzaghifard (2011) as well as Sotillo (2010). Alternative chat tools are Skype (Arroyo & Yilmaz, 2018), Facebook (Dao et al., 2021; Liu, 2015), Twitter, Instagram, Telegram, WhatsApp (Soria et al., 2020), MSN Messenger (Bower & Kawaguchi, 2011; Chang, 2009; Liang, 2010), WeChat (Chen, 2021), Visu (Guichon et al., 2012) or the chat tools of virtual worlds (e.g. Second Life; Wigham & Chanier, 2013). Calvo, Arbiol and Iglesias (2014), for instance, compared fifteen commercial chat applications from different platforms and devices to evaluate their suitability for learning purposes, including for people with disabilities. WhatsApp fared quite well and was also preferred by students in other studies as compared to the LMS Blackboard (e.g. Bere, 2012, cited by Calvo et al., 2014, p. 252, and Soria et al., 2020, p. 801). However, chats are also a common component of videoconferencing applications, such as Zoom, Microsoft Teams or BigBlueButton (see section 3.14), and have become increasingly popular during remote teaching under Covid-19 conditions. Preferably, a program should be chosen that complies with the data protection regulations of the educational institution and is supported by its IT service. An integrated solution would be ideal, as it would work with learners' and teachers' institutional accounts, so that no private mail addresses or telephone numbers would need to be exchanged in order to chat with each other. Some tips for the implementation will be sketched next.

3.7.6 Implementation (how to)

Chats can be used to provide immediate feedback during oral or written interactions or after the submission of an assignment. However, when giving feedback in chats on previously submitted or displayed assignments (e.g. in a videoconference), localization might be challenging (see section 3.7.4). To ease the identification of particular passages in an assignment, assessors should clearly refer to them in the chat, e.g. by copying them as a quote into the chat before commenting on them (Chang, 2009, p. 49). Some messengers, e.g. the WhatsApp messenger, even allow for direct responses to earlier chat sequences. Generally, though, it has been argued that synchronous chats and instant messengers are particularly suitable for the provision of immediate feedback rather than delayed feedback (Arroyo & Yilmaz, 2018, p. 967). Accordingly, corrective feedback should be provided right when a mistake appears because back-references to earlier chat sequences might be hard to accomplish. For instance, in a video- or audioconference, corrections or suggestions could be typed into the chat while students are interacting (e.g. Guichon et al., 2012; see section 3.14). Teachers and peers can use a public chat, group chat or private chat, depending on what is considered most appropriate. To give feedback, they may add a question or simply a question mark after the erroneous phrase that has been copied into the chat.
In addition to or as an alternative to question marks as an error signal, capital letters could be utilized to pinpoint the erroneous parts more clearly (Guichon et al., 2012, p. 193). In several chat apps, it is also possible to insert emojis that express doubt or reflection, for instance. Analogously, smileys might be employed alongside positive feedback (Soria et al., 2020, pp. 804-805) or for clarifying the meaning of potentially ambiguous feedback. They could also be reproduced as an emoticon by typing the ":-)" symbol into the chat. Beyond this simple corrective information, many messengers enable the exchange of multimedia materials, such as images (Soria et al., 2020, pp. 804-805), voice messages, videos and other file attachments (PDFs, text documents, tables etc.). Moreover, hyperlinks to helpful learning resources can be shared and directly accessed in the online environment. Certainly, assessors may also type longer explanatory feedback after a task performance into a private or public chat if deemed relevant. Usually, though, written chat feedback tends to be rather brief. Depending on the type of error and learners' presumed prior knowledge about it, the feedback strategies can range from implicit to explicit. Samani and Noordin (2013) suggested using prompts and recasts, such as the following (pp. 48-49):

■ Recasts in which the assessor reformulates a phrase or sentence by using the correct form,
■ Prompts in which the assessor does not give the correction directly, but draws learners' attention to the erroneous part, for instance by asking for clarification, using metalinguistic information or simply by repeating the false passage together with question marks.

Razagifard and Razzaghifard's (2011) experiment showed that both recasts and metalinguistic corrective feedback were effective in the chatroom. However, those who obtained "feedback in the form of metalinguistic information outperformed the group that received […] recasts" (p. 12). By contrast, Sauro (2009) did not observe any significant differences between the two feedback conditions over time (p. 109). Further research therefore appears necessary to derive more specific recommendations for the implementation.

3.7.7 Advice for students

If students engage in peer review via chats after a task has been accomplished, it would be important for them to exchange the assignment first, either directly via the messenger or via the LMS (cf. Chang, 2009, p. 48). During live interactions, however, they should immediately send their comments via private or public chats in a videoconference application or messenger app. Apart from the general training in peer review, they need to be coached in the particularities of providing feedback in synchronous chats (Rollinson, 2005, quoted by Liang, 2010, p. 49; cf. Dao et al., 2021, p. 9). The guidelines would include some practical advice, for example regarding cross-references and time on task. They may also copy these guidelines into the chat (or another easily accessible space) to have them available during the peer feedback activity (Liang, 2010, p. 56; for some guiding questions see pp. 49-50). Notably, to comment on specific passages of a text, it would be advisable to copy the selected text passage into the chat within quotation marks (Chang, 2009, p. 49). If the messenger allows direct responses to foregoing chat parts, then this option should be preferred.
Finally, learners should avoid spending too much time on chat communication that is unrelated to the provision of feedback (Liang, 2010, p. 57), even though they might be inclined to do so because social talk in chats is familiar to them from their free-time activities. Turning off distractions would therefore be important.

3.7.8 Combinations with other feedback methods

Chat feedback can be combined with several other methods and is often a natural part of them already, e.g. in LMS or videoconferencing platforms. When students submit an assignment via the LMS, the assessor could utilize an asynchronous feedback method first before engaging in a chat about its contents. To instantiate, they may comment on a document with the reviewing features of their text editor (see section 3.1) and then schedule a chat appointment with the student (Ene & Upton, 2018, p. 4). In this synchronous modality, the learner can then ask questions for clarification and the teacher may explain the feedback in more detail if needed (Ene & Upton, 2018, pp. 4, 7, 9). Based on their comparative study, Ene and Upton (2018) argued that each method has different affordances, so that a combination would be sensible (pp. 9-11). Similarly, Chang (2009) concluded that asynchronous and synchronous written feedback could be beneficial for different stages of the drafting process (p. 60). Instead of using an offline word processor (or similar program), chats could also supplement feedback in an online text editor, such as Google Docs (cf. Dao et al., 2021; see section 3.3). In the chat, assessors can provide procedural support, e.g. by giving strategic or organizational tips, while specific comments pertaining to single text passages would be inserted into comment bubbles or via track changes in the cloud document itself. Moreover, a combination with AWE (section 3.2) might be possible, depending on the chat program and device that is used. Since chats are a common component of videoconferencing software (see section 3.14), corrections or feedforward advice could be typed into the chat window while learners talk to each other or present an assignment (cf. Guichon et al., 2012). In those settings, corrections in the chat seem to be particularly helpful for form-focused interventions (see also Chang, 2009, p. 60). Guichon, Bétrancourt and Prié (2012) even utilized a special annotation solution in Visu. Teachers were able to annotate specific parts of the conversation, and these annotations initially remained invisible to the learners (Guichon et al., 2012, p. 185). However, the learners were able to access them retrospectively in order to facilitate the ensuing feedback exchange (Guichon et al., 2012, p. 185). By contrast, the text chats in common videoconferencing applications directly show a time stamp, which can ease back-referencing if learner interactions are recorded. Furthermore, if learners do not want to see the direct corrections during their oral contributions, they might also decide to hide the chat window and access it later. They could save the text chat and use it as a basis for further discussion in a subsequent appointment.

3.7.9 Questions/tasks (for teachers and students)

Knowledge and reflection questions:
(1) For what purposes do you normally use chats or messenger apps?
(2) Please reflect on your usage of the different chat functions.
a. Do you prefer text chats or voice comments? What are their benefits and challenges, respectively?
b.
For what purposes do you consider typed comments or voice messages suitable?
c. Open one of your latest chats in your messenger app. Carefully analyze the way feedback was exchanged. What do you observe? What modalities and tools have you used? What would you do differently next time?
(3) Would you as a teacher (or student) consider sharing your private phone number for educational purposes? Give reasons for your answer.

Application questions/tasks:
(4) A fellow student posted a new comment into your class chat. On the one hand, you agree with some points; on the other hand, you find some aspects controversial. You therefore want to engage in further interaction with your fellow student. How do you proceed? Do you write into the class chat or do you send a private message? You know that your teacher appreciates interaction in the class chat, but you are not sure whether everything should be addressed in it.

3.8 E-mail feedback

3.8.1 Definition and alternative terms/variants

Email feedback can be defined as written asynchronous commentary that is exchanged online by using a mail service. It typically refers to feedback messages written in an email text, but it may also include file attachments of various kinds. Hence, it might be easily combined with other feedback methods (see section 3.8.8). Mostly, however, it is just the mail text itself, potentially with the reviewed assignment as an attachment (e.g. DeBard & Guidera, 2000, p. 222).

3.8.2 Contexts of use, purposes and examples

In contrast to chats, emails are commonly classified as an asynchronous feedback method (e.g. Bloch, 2002, p. 118), even though both of them are sent and received within a short time. However, emails are usually not associated with the immediacy and oftentimes informality of chats, but are rather seen as more formal, lengthier, letter-like and as not necessarily requiring an instantaneous response (cf. Honeycutt, 2001). As such, email feedback may resemble summative comments written at the end of an assignment, for instance. Beyond written text in the mail itself, files can be attached that contain further feedback contents, e.g. the reviewed document (Tafazoli, Nosratzadeh, & Hosseini, 2014, p. 357). Alternatively, the email may include a hyperlink to the online repository of the reviewed assignment in order to save space on the mail server. Moreover, hyperlinks could be inserted to direct the feedback recipients to further helpful resources or sample solutions (e.g. for exams see White, 2021). Overall, email feedback can be used for general feedback to a course or group (e.g. Keefer, 2020; White, 2021) or for personal feedback provided to individual students (Barton & Wolery, 2007; McLeod et al., 2019; Zhu, 2012). To exemplify, email feedback could be generic feedback after tests (Keefer, 2020; White, 2021) or personalized information about class performance. The latter may contain students' "cumulative grades for each class item, along with their contribution score up to that specific point of time" (Voghoei et al., 2020, p. 20). On the other hand, email feedback for formative purposes is common (e.g. Zhu, 2012), including progress feedback, e.g. in blended learning (see van Oldenbeek et al., 2019) or distance learning settings (cf. Huett, 2004, p. 35).
For instance, in Zhu (2012), "the teacher provided formative feedback to students with specific comments on the achievement of the students, the good points, the weaknesses and how to improve their work/assignment via email communications" (p. 84). While email feedback was often utilized for teacher-to-student feedback, it could likewise be employed for peer feedback among students (see for example Carswell, Thomas, Petre, Price, & Richards, 2000; Honeycutt, 2001; Motallebzadeh & Amirabadi, 2011) and colleagues (Clayton, 2018b), but also for student-to-instructor feedback (e.g. Bloch, 2002). Moreover, feedback loops can easily be established, e.g. by the student sending the assignment (and feedback request) via mail to the instructor, with the instructor attaching the reviewed files and asking further questions to be answered by the learner (cf. DeBard & Guidera, 2000, p. 222). As the contents and attachments of email feedback can be manifold, it has already been implemented in a variety of subject fields, such as psychology (Keefer, 2020; Smith, Whiteley, & Smith, 1999), English as a foreign language (De Coursey & Dandashly, 2015; Farshi, 2015; Hosseini, 2012; 2013), audit (White, 2021), engineering (Hassini, 2006), business (Hassini, 2006; Kurtzberg, Belkin, & Naquin, 2006; Nnadozie, Anyanwu, Ngwenya, & Khanare, 2020; Zhu, 2012), computer science (Voghoei et al., 2020), teacher training (Barton & Wolery, 2007; McLeod et al., 2019) and education (Yu & Yu, 2002). All in all, email appears to be a versatile tool for exchanging feedback messages of various kinds and for numerous purposes. The next section will offer a more specific description of the advantages before potential limitations will be reviewed.

3.8.3 Advantages of the feedback method

Email feedback is often received very favorably because most teachers and learners already use email routinely in their daily life (DeBard & Guidera, 2000, p. 221; cf. Huett, 2004, p. 35). It requires little investment in software or hardware (Honeycutt, 2001, p. 53; see section 3.8.5), is relatively "easy to use" and thus very "convenient" (Yu & Yu, 2002, pp. 117-118; see also Zhu, 2012, p. 86). In that regard, its speed but also flexibility of access are important factors (Huett, 2004, pp. 35, 42). On the one hand, it allows for relatively quick and immediate feedback (Hassini, 2006, p. 33; see also Nnadozie et al., 2020, p. 143; Yu & Yu, 2002, p. 121; Zhu, 2012, pp. 86, 88); on the other hand, it enables asynchronous and repeated access to the feedback contents (Huett, 2004, pp. 38, 42; Smith et al., 1999, p. 20). This "freedom from the constraints of location and time" (Huett, 2004, p. 42; Zhu, 2012, p. 87) is advantageous in several ways. First, neither learners nor teachers need to wait until the next class meeting or office hour to exchange feedback; they can do so instantaneously and independently of a particular location (Barton & Wolery, 2007, p. 56; Huett, 2004, p. 42; Nnadozie et al., 2020, p. 142; Zhu, 2012, pp. 87-88). Beyond that, email feedback is common in distance education, but it is also convenient for part-time students who cannot attend face-to-face consultations at any time (Hassini, 2006, p. 35). Email feedback has therefore been characterized as time- and cost-saving (Barton & Wolery, 2007, p. 56; De Coursey & Dandashly, 2015, p. 222; Yu & Yu, 2002, p. 121).
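Part of this time saving can come from automation: personalized class-performance mails of the kind described by Voghoei et al. (2020) could, for instance, be generated and sent programmatically. The following is a minimal sketch using Python's standard smtplib and email modules; the mail server, the credentials and the grade data are placeholders rather than a recommendation for a specific setup:

import smtplib
from email.message import EmailMessage

# Placeholder grade book; in practice this would be exported from the LMS.
students = [
    {"name": "Alex", "email": "alex@example.edu", "grade": "B+",
     "comment": "Clear structure; work on citing your sources consistently."},
]

# Placeholder mail server and teacher account.
with smtplib.SMTP("smtp.example.edu", 587) as server:
    server.starttls()
    server.login("teacher@example.edu", "app-password")
    for s in students:
        msg = EmailMessage()
        msg["From"] = "teacher@example.edu"
        msg["To"] = s["email"]
        msg["Subject"] = "Feedback on your current class performance"
        msg.set_content(
            f"Dear {s['name']},\n\n"
            f"Your cumulative grade so far is {s['grade']}.\n"
            f"Feedback: {s['comment']}\n\n"
            "Please reply to this mail if anything is unclear.\n"
        )
        server.send_message(msg)

Even with such automation, the wording of each comment still deserves care, given the negativity bias of written mail discussed in section 3.8.4.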
Email feedback also helps to disseminate large amounts of information in a quick and easy manner (Nnadozie et al., 2020, p. 143). In that respect, it seems to be beneficial for general feedback (Nnadozie et al., 2020, p. 143), but also for individualized support (cf. e.g. Zhu, 2012, p. 80). The contents can be consulted repeatedly, as "e-mails provide an electronic record" (Barton & Wolery, 2007, p. 56; see also Smith et al., 1999, p. 16). At the same time, teachers can use their email record to observe the progress and development of individual learners over time (Barton & Wolery, 2007, p. 56). As there is no pressure to write or respond to mails instantaneously, both teachers and students can take their time to structure and process the contents, respectively. Teachers (and peer assessors) have time to think about how they want to formulate and structure the feedback as well as edit their mail drafts before sending them (Honeycutt, 2001, pp. 48, 51; cf. Barton & Wolery, 2007, p. 56). Due to the reduced time pressure to provide the feedback, it turned out to be rather detailed, clearly structured and easy to understand as compared to traditional (hand-)written feedback and face-to-face feedback (Barton & Wolery, 2007, pp. 60-62, 66-69; Honeycutt, 2001, pp. 48, 52). Similarly, for the students, the pressure to process and respond to the feedback is reduced (Bloch, 2002, p. 118; Bond, 2009). They can take their time to read and reflect on the feedback (Honeycutt, 2001, p. 48), which might be particularly valuable for low-performing students or those with special needs (Smith et al., 1999, p. 23). Further, the written mode could ease communication with students who have heavy accents (Bloch, 2002, p. 118) or who tend to refrain from direct communication for cultural (cf. Powers & Mitchell, 1997, cited in DeBard & Guidera, 2000, p. 225) or personal reasons (cf. e.g. Yu & Yu, 2002, p. 122). Consequently, it has been argued that email feedback may cater for different student needs and rates of learning and that it encourages independent learning, active information processing as well as reflective thinking (Hanson & Asante, 2014, quoted by Nnadozie et al., 2020, p. 138; see also DeBard & Guidera, 2000; Yu & Yu, 2002, p. 122). Reflection indeed turned out to be the "predominant theme" in many studies about email feedback (Honeycutt, 2001, p. 48). This seemed to be particularly true for advanced learners (Honeycutt, 2001, p. 53), probably resulting from their higher metacognitive abilities. On the other hand, Zhu (2012) discovered that 50 % of the learners with high metacognition and 64 % of those with low metacognition reported an increased interest in the course due to the email feedback they obtained (p. 85). Similarly, Smith et al. (1999) remarked that the provision of email feedback to low-scoring students can greatly help them improve their academic performance (p. 23). Email feedback may thus be particularly useful for learners who need more time to process the feedback and reread it whenever needed (see also other asynchronous methods, e.g. feedback in text editors and chats). However, there also appears to be an increased potential for interaction (Barton & Wolery, 2007, p. 56; Huett, 2004, p. 42; Zhu, 2012, p. 87) and for building as well as maintaining contact with the instructor (DeBard & Guidera, 2000, pp. 222-223; Hassini, 2006; Zhu, 2012, p. 86) and with other students (Carswell et al., 2000, p. 42; Nnadozie et al., 2020, p. 143).
Especially when connected with audio feedback (see section 3.11), many students perceived "a greater sense of belonging", which is particularly useful in online environments (Woods & Keeler, 2001, p. 273). It can help to build and extend learning communities (Bloch, 2002, p. 119) and to "decreas[e] social isolation" (Huett, 2004, p. 42; Zhu, 2012, p. 87). It has therefore been concluded that email feedback "combines the positive feelings gained from social interaction with time for reflection" (Lamy & Goodfellow, 1999, quoted by De Coursey & Dandashly, 2015, p. 217). The lack of direct and visual contact might even be advantageous in certain ways (Lewis et al., 1997, cited by Hassini, 2006, p. 34; see also Vonderwell, 2003, p. 82). Due to their relative anonymity (Nnadozie et al., 2020, p. 138), emails could encourage more comments, questions and dialogue (Barton & Wolery, 2007, p. 56), but also more open criticism. In addition, the potential multimodality of email feedback has been foregrounded. Emails can include hyperlinks as well as file attachments of various kinds, such as the reviewed document or voice memos (Woods & Keeler, 2001; see also section 3.8.8). Moreover, videos and graphics can be shared as well as other supplementary materials, e.g. a reading list or step-by-step guideline (cf. Yu & Yu, 2002, pp. 119-122). Whenever the reviewed task is attached, assessors should make concrete references to document passages to avoid ambiguities (Honeycutt, 2001, pp. 32, 41). This cross-referencing in emails is advantageous as compared to chat feedback (see section 3.7), as Honeycutt (2001) had discussed. It may ultimately be more helpful for the revision process, which leads us to a review of the presumed learning gain from email feedback. In Honeycutt's (2001) research, the peer reviewers concentrated more on the task and took the email feedback more seriously than chat feedback (pp. 26-27, 51). Moreover, "the organization and elaboration of e-mail comments" as well as their asynchronous availability were regarded as helpful in the revision process (Honeycutt, 2001, pp. 51-52). Several further studies reported positive effects of email feedback, such as more active participation (Voghoei et al., 2020, pp. 21-22) and task completion (van Oldenbeek, Winkler, Buhl-Wiggers, & Hardt, 2019), improved fluency (Li, 2000, and Warschauer, 1996, cited by Bloch, 2002, p. 118) and writing skills (Huett, 2004, p. 42). Nevertheless, the disadvantages likewise need closer inspection, as will be done in the next section.

3.8.4 Limitations/disadvantages

The disadvantages of email feedback result from its mainly text-based nature, its limited social cues and interactional potential, the risk of misunderstandings, the perceived higher workload as well as the required technology and digital literacy. These limitations are typically discerned when email feedback is compared to on-site face-to-face feedback. To start with, similar to other text-based feedback methods, email feedback lacks further information cues coming from e.g. facial expressions, gestures, voice and intonation (cf. Glei, 2016; Hassini, 2006, p. 35; Huett, 2004, p. 39; Nnadozie et al., 2020, p. 141; Smith et al., 1999, pp. 16, 24). As there are no additional verbal or visual signals that could alleviate the pragmatic force of a message, the contents of email feedback tend to be perceived more negatively than in face-to-face communication (cf.
This often results in the “negativity bias” of email feedback: “every message you send gets automatically downgraded a few positivity notches by the time someone else receives it” (Goleman, cited by Glei, 2016). Hence, if the sender has a positive attitude towards an email, the recipient usually has a neutral attitude, and if the sender perceives the message as neutral, the recipient usually perceives it as negative (Glei, 2016; see also Kurtzberg et al., 2006, and their experiment). Even if emoticons are used, the text format usually cannot capture the same para- and non-verbal cues as face-to-face interaction (cf. DeBard & Guidera, 2000, p. 226).

Accordingly, email feedback often results in less interaction than interpersonal on-site contacts (Honeycutt, 2001, pp. 41-43). In that respect, Honeycutt (2001) noticed a lower amount of first- and second-person pronouns, which could create greater social distance in comparison to chat feedback (pp. 41-43; section 3.7). Email feedback was therefore frequently characterized as rather “impersonal” (Kurtzberg et al., 2006, p. 13; Nnadozie et al., 2020, pp. 141-142; see also Vonderwell, 2003). Kurtzberg et al. (2006) even claimed that electronic communication (compared to on-site oral feedback and traditional written feedback) is the least appropriate medium for conveying feedback due to its impersonality (p. 8). Contrary to the advantages stated above (section 3.8.3), some scholars argued that the lack of social cues and immediate interaction can hinder relationship-building with learners (see the review by Kurtzberg et al., 2006, pp. 6-8; cf. Vonderwell, 2003, pp. 83-84). Notably, students tend to feel uncomfortable if they are asked to interact by mail with other people they have never met before (Vonderwell, 2003, p. 82). In addition, email feedback appears to be specifically disadvantageous for auditory learner types (Huett, 2004, p. 38), just like other written feedback methods. In that regard, audio recordings could be a valuable supplement, but of course this does not replace live interactions (Woods & Keeler, 2001, p. 272). This is probably why Woods and Keeler (2001) did not find socio-affective benefits when audio was used to enrich text-based communication (p. 272).

Instead, email feedback is oftentimes perceived as a “one-way messag[e]”, for instance as compared to face-to-face communication and chats (Honeycutt, 2001, p. 43). There is no immediate interaction (Nnadozie et al., 2020, p. 141), but instead “the risk that not all students will seek to read the instructor’s messages” (Hassini, 2006, p. 35). Consequently, teachers do not know whether their learners read the feedback, how they react to it and whether they understand it in the intended manner or not (Zhu, 2012, p. 88). Due to its written and asynchronous nature, ambiguities may arise (Bloch, 2002, p. 119; De Coursey & Dandashly, 2015, pp. 223-224) that can neither be clarified immediately (Nnadozie et al., 2020, p. 141; Vonderwell, 2003, p. 84) nor with sufficient detail (Honeycutt, 2001, p. 50). Likewise, the spontaneity of live exchanges is lost (Smith et al., 1999, p. 24). In addition, assessors might be annoyed by the restricted formatting and review options as compared to text editors (see section 3.1) or handwritten annotations (including digital pens). These, however, would be useful for special notations or graphs in e.g. mathematics (cf. Hassini, 2006, p. 35).
For these reasons, email feedback is often perceived negatively (Bloch, 2002; see also Glei, 2016; Grenny, 2013). On the other hand, “special efforts” and time investments would be needed if assessors aim to reduce or avoid mis- or non-understanding as well as to heighten the interactivity of email exchanges (Zhu, 2012, p. 88; cf. DeBard & Guidera, 2000, quoted by Huett, 2004, p. 39). For one, assessors would need to “take special care with [their] wording” (Glei, 2016). Second, they should ensure easy localization for the students by making explicit references to concrete parts of the assignment (Honeycutt, 2001, p. 29).

While this is already time-consuming, further factors can increase instructors’ workload and stress level. Notably, they might feel pressured to check their emails “several times a day and sometimes on weekends” (Hassini, 2006, p. 35) to ensure timely feedback and interactive exchanges (similar to feedback in chats and cloud documents, for instance). The formulation of email replies to students’ specific questions would involve additional time investments (Brown et al., 2004, cited in Zhu, 2012, p. 88; Hassini, 2006, p. 35). In addition, the multitude of mails needs to be managed, which many teachers find overwhelming (De Coursey & Dandashly, 2015, p. 222). A system would thus be needed to keep track of the many mails and the concerns that are mentioned in them (see section 3.8.6). With regard to the quantity of mails, assessors might, however, tend to give rather quick and brief responses, which could be misinterpreted by the students. The danger then is that they could “be held accountable for the things put in writing”, which would, in turn, necessitate a “more considerate and cautious” wording (Zhu, 2012, p. 88). Furthermore, teachers need to bear in mind that emails usually cannot be recalled once they are sent (Sull, 2014, p. 1). Proofreading would therefore be crucial (Sull, 2014, p. 2; see implementation section 3.8.6).

All this costs time and effort, though, not only for the teachers, but also for the students. They likewise need to manage the mails and utilize their contents for further learning. This can similarly cause “stress and frustration” (Nnadozie et al., 2020) for the students. One important factor in that regard could be their insufficient digital literacy (De Coursey & Dandashly, 2015, p. 217). However, not only students might need support (Carswell et al., 2000; Nnadozie et al., 2020, p. 140), but also teachers (Yu & Yu, 2002, p. 123). De Coursey and Dandashly (2015), for example, noted that some teachers were not familiar with email or were even reluctant to use it (p. 222). This “techno-reluctance” (De Coursey & Dandashly, 2015, pp. 223, 225) may partly derive from negative prior experiences and the resultant belief that emailing would be too time-consuming. Other, and related, factors are lacking access (De Coursey & Dandashly, 2015, p. 225; DeBard & Guidera, 2000, p. 225; Yu & Yu, 2002, p. 123; Zhu, 2012, p. 88) and insufficient practice. The next sections will therefore specify the required equipment and outline suggestions for the implementation.

3.8.5 Required equipment

To provide and receive email feedback, only simple equipment is needed (Honeycutt, 2001, p. 53). In terms of hardware, a device is required that can be connected to the internet, such as a desktop computer, laptop, tablet or mobile phone. Moreover, users need to register for an email address in case they do not have one yet.
There are numerous email providers they can choose from, for instance Gmail, MSN, GMX, Yahoo, iCloud, Outlook and many others. Most of them can be used for free, but often with ads or limited functions in the free version. Users can then either directly use the web service online in order to write and read mails or install an email program or app that helps them better organize their in- and outboxes (e.g. Microsoft Outlook). Most companies and many educational institutions, especially in higher education, but also at schools, nowadays offer a free mail service. Such an institutional email address should be preferred for exchanging feedback, not only for data protection reasons (as compared to private mail addresses), but also to pre-empt potential delivery failures. If the institution uses an LMS, then mails can usually be sent directly from within the LMS so that no external mail program needs to be opened (Nnadozie et al., 2020, p. 139). This is convenient, also because the feedback providers and receivers can view the submitted assignment, the corrections, grades and further course materials in one place, i.e. the LMS (Nnadozie et al., 2020, p. 139). The contents of such email feedback and further steps of its implementation will be suggested below.

3.8.6 Implementation (how to)

Even though email is already ubiquitous in almost everybody’s life, there are nevertheless some preliminaries and particularities to email feedback that should be considered.

Preliminaries: First, regarding data protection, the instructors’ and students’ institutional email addresses should preferably be used. If the addresses are not directly visible for everyone in the LMS (or on another course page), then they should be exchanged at the beginning of a course to facilitate and encourage communication (Hassini, 2006, p. 32). At the same time, certain rules for email communication (such as availability and email etiquette) should be established (Hassini, 2006, p. 31). Especially if personal mail addresses are utilized, teachers may need to check whether the students actually receive these mails. This can be done by setting up an email list (Hassini, 2006, p. 32) and sending a message that everyone should respond to. In case students do not check their institutional mail addresses regularly, they should be reminded of the mail settings that exist for forwarding messages automatically to another mail address or to a mailing app. At the very outset, it should also be clarified whether the school (Hassini, 2006, p. 32) or the pupils’ parents have any reservations against the use of emails for educational purposes (analogously for other digital media). Teachers should also “[a]ddress the issue of virus protection” and explain how to use filters to avoid spam on the one hand and mislabeled spam on the other hand (Huett, 2004, p. 41).

Mail management: Moreover, as email feedback can quickly lead to a mass of mails, a unified system for organizing and archiving them would be important (Huett, 2004, p. 40). In the email software, users should have the possibility to organize mails into different folders (e.g. one per course) and to search for mail contents within and across these folders (cf. Hassini, 2006, pp. 32, 39). Assessors are advised to turn on the automatic grouping feature that categorizes mails according to their subject line for easier organization. To that end, of course, it would be important to agree on subject line conventions and to use them consistently.
Hassini (2006) suggested using the course name and main topic of the mail, e.g. the name of the assignment, project or exam (p. 33). A sample subject line could read “MSci 331: assignment 1, Q2” for a message that will talk about question 2 of assignment 1 of the MSci 331 course (Hassini, 2006, p. 31). If mails are sent from within the LMS (see e.g. Nnadozie et al., 2020, p. 139), the emails will sometimes by default contain the course title as part of the subject line.

Some LMS also allow for the construction of personalized emails based on templates (Voghoei et al., 2020, pp. 19-20). Alternatively, assessors may create their own repository of email templates that can be drawn on flexibly. For this purpose, they may use an application such as Google Keep, as has been recommended for feedback in offline and online text editors above (sections 3.1 and 3.3). Another option would be to utilize the mail merge function of Word, Outlook or other programs (cf. Lloyd, 2021; Smith, 2020; White, 2021). In that respect, a spreadsheet (e.g. in Excel) needs to be created first that contains columns with the important student information required for the composition of the mails (Smith, 2020; White, 2021). This spreadsheet may include columns for

■ the students’ first name,
■ their last name,
■ their student ID,
■ their email address,
■ their results per task (out of the total score and/or in comparison with other students),
■ their overall mark for an exam or assignment,
■ their final course grade (White, 2021).

As becomes evident, this procedure appears to be particularly useful for post-exam feedback in which standardized mail templates can be utilized (for examples see Keefer, 2020, and White, 2021). However, it is also possible to insert a higher amount of personalized text in emails by using the mail merge system. In that regard, assessors would reserve spreadsheet columns for individualized open feedback comments (see Smith, 2020, including his video at https://www.youtube.com/watch?v=LxQVfvzbsqY). Detailed explanations about the mail merge function can also be obtained from Lloyd (2021) at https://m.wikihow.com/Mail-Merge-in-Microsoft-Word, but also from many other online tutorials. A minimal script-based sketch of such a mail-merge workflow is additionally given at the end of this section.

Language and structure: Regarding the structure of those mails, McLeod et al. (2019) provided a helpful sample mail for further modification (p. 197). It resonates with the general structural recommendations that were outlined in section 2.1.4. Similarly, not only in terms of structure, but also regarding language use, the sample mail by White (2021) contains helpful formulations that can be adapted to one’s own context. Likewise, Clayton (2018b) offers several language suggestions as well as a feedback email writing exercise.

Especially due to the lack of social cues, assessors need to be very “careful and clear” when composing their messages “to avoid ambiguity and unwanted consequences” (Hassini, 2006, p. 35; cf. Vonderwell, 2003, p. 86). Grenny (2013) even suggested drafting the email twice: first to express the contents, “then read it slowly, imagining the other person’s face” and afterwards “re-write it with safety in mind.” Those passages that might sound too negative or ambiguous should be reformulated (Grenny, 2013). Moreover, emoticons or emojis could be added to clarify the meaning further (Huett, 2004, p. 41; cf. Grenny, 2013). Similarly, acronyms may convey social cues and informality (Huett, 2004, p. 41).
However, the use of capital letters (e.g. to highlight important points) should be avoided because this might feel like being shouted at by the writer (Hassini, 2006, p. 33). Generally speaking, emails, in particular feedback mails, should be formulated positively (Huett, 2004, p. 41). This does not only have to do with the fact that assessors can be held accountable for the content (Huett, 2004, p. 41), but also with the negativity bias of email communication (Glei, 2016; see section 3.8.4). Huett (2004) therefore recommends that assessors “keep email […] as warm, personal, friendly and positive as possible” (p. 41). To achieve a higher degree of personalization, the assessors may address the students by their first name in the greeting (White, 2021; see also Smith, 2020) and “repeat the name a few times” if the email is relatively long (Huett, 2004, p. 41).

Overall, teachers can choose to be either formal or informal when writing email feedback. Formal writing can help to create a sense of professionalism, but it may sound more distant, so that negative feedback could be perceived more harshly by the students. Informal writing, in turn, may help build better social and personal connections with the learners, making them feel more comfortable and relaxed with this type of feedback (cf. Huett, 2004, p. 41). At any rate, however, teachers and students should pay attention to spelling, grammar and typos in the emails to avoid misunderstandings and ensure a professional appearance (Huett, 2004, p. 41; Zhu, 2012, p. 88). In that regard, they might run the automatic spelling and grammar checker of their email software (Hassini, 2006, p. 33; see also section 3.2 on AWE) as well as proofread their message carefully before sending it (Sull, 2014, p. 2).

Attachments: Attachments may further help to disambiguate the contents and provide additional guidance. Assessors could attach the reviewed document with more detailed annotations (see also section 3.1). Similarly, it is possible to upload reviewed handwritten assignments as a scan or picture file. Furthermore, important handouts could be directly attached to the mail. Attention should be paid, though, to the file size of the attachments, as some service providers set up restrictions for receiving and sending emails. Therefore, emails could alternatively include hyperlinks to further resources, such as websites, the institutional cloud server or a relevant LMS site.

Further advice: If a student’s question and the corresponding teacher response are important or interesting for everyone in a class, the message can be sent to the entire course instead of the individual student only (Hassini, 2006, p. 32). For complex or severe issues, however, email might not always be the best form, which is why a (follow-up) consultation could be preferred (Grenny, 2013). While most students find these out-of-class contacts helpful (Yu & Yu, 2002, p. 123), too many emails can cause opposite reactions. However, if students send mails, assessors should reply to them in a timely manner (Sull, 2014, p. 2; Yu & Yu, 2002, p. 123), even if it is just a simple “thank you” (Huett, 2004, p. 40). Also, teachers might need to explain their policies regarding email turnaround times and availability during the term or semester, during the holidays and at the weekends (cf. Hassini, 2006, p. 32). Further advice for the students’ perspective will be sketched next.
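Before turning to the students’ perspective, the following minimal sketch illustrates the mail-merge workflow described under “Mail management” above. It is not the Word/Outlook mail merge itself (cf. Lloyd, 2021; Smith, 2020; White, 2021), but a script-based equivalent in Python. The file name feedback_assignment1.csv, the column names (first_name, email, task_results, overall_mark, comment) and the server settings are hypothetical placeholders that would need to be adapted to one’s own institution.

    import csv
    import smtplib
    from email.message import EmailMessage

    # Hypothetical placeholders: adjust to your institution's mail setup.
    SMTP_HOST = "mail.example-university.edu"      # institutional SMTP server
    SENDER = "instructor@example-university.edu"   # institutional sender address

    TEMPLATE = """Dear {first_name},

    here is your feedback on assignment 1:

    Results per task: {task_results}
    Overall mark: {overall_mark}

    {comment}

    Please reply to this mail if anything is unclear.

    Best regards
    """

    def build_messages(csv_path):
        """Create one personalized feedback mail per row of the exported spreadsheet."""
        messages = []
        with open(csv_path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                msg = EmailMessage()
                # Consistent subject line convention (cf. Hassini, 2006).
                msg["Subject"] = "MSci 331: assignment 1, feedback"
                msg["From"] = SENDER
                msg["To"] = row["email"]
                msg.set_content(TEMPLATE.format(**row))  # unused columns are ignored
                messages.append(msg)
        return messages

    if __name__ == "__main__":
        with smtplib.SMTP(SMTP_HOST, 587) as server:
            server.starttls()                    # encrypt the connection
            # server.login(SENDER, "password")   # credentials depend on the mail server
            for msg in build_messages("feedback_assignment1.csv"):
                server.send_message(msg)

Note how the subject line follows the convention suggested by Hassini (2006), so that the automatic grouping features discussed above can take effect on the recipients’ side as well.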
3.8.7 Advice for students

If students send a feedback request or feedback reply to their teachers (or peers), they should try to reduce ambiguities as much as possible. First of all, it would be important to “[s]pecify the course name and topic (lecture, assignment, exam, project, etc.) in the email subject” (Hassini, 2006, p. 31) or at the beginning of the running text. This contextualization is crucial because students and assessors tend to receive hundreds of mails per week and attend or teach several courses. Furthermore, the reference to a particular page, slide or subtask should be clear (Hassini, 2006, p. 31) so that feedback communication is facilitated. For discussions on the same topic, it is better to use the reply function of the email rather than to write a new one, because then it will be easier to review past conversations and exchanges (cf. Hassini, 2006, in section 3.8.6). Students especially should be careful not to delete feedback mails, because otherwise they would not be able to reconsult their contents for learning purposes on later occasions. Alternatively, they might save the emails to their hard drive, either directly in their mail program or manually by copying the contents to a text document, for instance.

When writing mails, learners should pay attention to their language use (spelling, punctuation, grammar), e.g. by turning on the auto-correct features of the mail program (Hassini, 2006, p. 33; Sull, 2014, p. 2; see AWE in section 3.2). It would be best to reread the mails before sending them to avoid mistakes and enhance the clarity of the message. In that regard, it might be helpful to adopt the recipients’ perspective while rereading the contents (Grenny, 2013; see section 3.8.6).

When waiting for the assessor’s feedback mail, students should check their spam filters well in advance so that the feedback message will not be classified as spam (Huett, 2004, p. 41). In that respect, they should add the instructor’s (and/or peers’) mail addresses to their safe senders list. In addition, they might want to check their spam folder to search for the desired mail.

As for the implementation of email feedback, learners should seize the possibilities for further discussion of the feedback contents that the assessor has offered. Quite often, this might be a reply to the feedback mail, but it could also be part of an office hour or the next classroom session (cf. Yu & Yu, 2002, p. 120). This links up with the numerous ways in which email feedback can supplement other methods, as will be outlined below.

3.8.8 Combinations with other feedback methods

Email feedback can be combined with several other feedback methods. For instance, corrected handwritten work can be added to the email as an electronic file attachment (as a photo or scan). This will make it more convenient for the teachers to refer back to it in the email, and for the learners it will make the message more comprehensible (Huett, 2004, p. 41). More straightforwardly, email feedback can be linked to electronic feedback in a text editor (see section 3.1) or in another file format. Specific comments could be directly inserted into the attached file, while the mail text offers general corrective feedback or advice (suggested by Honeycutt, 2001, p. 53; see for example Tafazoli et al., 2014, p. 357). Moreover, the mail text itself might be checked by using an AWE tool (section 3.2).
In addition, hyperlinks or further helpful documents can be attached to the mail as well as voice messages or videos. To save mail server space, these files can be stored on a cloud drive that is hyperlinked in the mail text. To exemplify, Woods and Keeler (2001) supplemented email feedback with audio feedback, whereas McLeod et al. (2019) combined it with video feedback. As Huett (2004) argued, these short audio or video clips could help instructors and students build better social relationships and construct a learning community more easily (p. 41). Moreover, the videos may showcase concrete samples of student work or suggest scenarios for further practice (cf. McLeod et al., 2019). The particularities of audio and video feedback will be outlined in sections 3.11 to 3.13.

Beyond these asynchronous methods, the email exchange can be followed up by synchronous consultations either on-site or in a videoconference. Teachers may mention these possibilities for further exchange at the end of their feedback mails (see section 3.8.6). Before we move on to feedback in videoconferences, though, survey feedback will be dealt with, because this is another, typically written and asynchronous feedback method. In contrast to the other methods, it usually goes from students to teachers, even though all methods can be used in any direction.

3.8.9 Questions/tasks (for teachers and students)

Knowledge and reflection questions:
(1) What are the advantages of email feedback as compared to conventional (hand-)written feedback?
(2) For what tasks and educational sectors might email feedback be particularly suitable? Please explain.
(3) What do teachers need to be aware of when providing email feedback?

Application questions/tasks:
(4) Due to its popularity, you are probably a regular user of emailing. You receive and send several emails on a daily basis. Sometimes, you might feel overwhelmed by the amount of mails you receive and by the time it takes to compose a clear and comprehensive email. Try to optimize your usage of emailing by taking the following steps:
a. First of all, try to get an overview of the types of mails you send and receive. You might create a spreadsheet for this. Classify your most recent mails (e.g. from last week) according to different contexts and directions of use (private vs. workplace; from/to friends or relatives, colleagues or fellow students, teachers or employers or others; one-directional, e.g. newsletters, or bi-directional or multi-directional group mails; one-turn exchanges or multiple turns). Note down the number of emails you have written or received in each of these categories.
b. Next, assess the importance of each email. You may use symbols or a ranking in a separate column. Which of these emails could you have ignored, and which should you have responded to (or maybe even replied to earlier or at a different time)?
c. Look more closely at the contents of the most important mails. Were any feedback interactions taking place in them? Analyze the way feedback was given. How do you feel about it (on the one hand about the feedback that you have received and on the other hand about the feedback that you have provided)? Do you think that these mails were clear and comprehensible enough?
d. Also inspect the length of these emails. How much time have you invested in writing your longest mails? Is there any other way that might have been more time-saving? Try it out next time.
3.9 Survey feedback

3.9.1 Definition and alternative terms/variants

In this and the subsequent section, different survey and polling methods are discussed, ranging from (asynchronous) course evaluation questionnaires (current section) to quick interactive (synchronous) live polls during lessons and presentations (next section, 3.10). For the purpose of course evaluations, instructors or institutions often design a survey that typically consists of several Likert-scale items, complemented by a few open questions, in order to obtain feedback from the students (Bir, 2017). Survey items may include questions about the course content, structure, time management, teaching style and student interaction, learning materials, such as PowerPoint slides, handouts or textbooks, as well as tutorial support (cf. Bir, 2017, and Haddad & Kalaani, 2014, including sample questions). The survey might be completed during class time in order to reach a high response rate (Bir, 2017). On the other hand, the asynchronous availability of online surveys would give the students more time to think about their responses outside of class time (Haddad & Kalaani, 2014, p. 8). Since these surveys mostly serve to elicit students’ opinions, they have alternatively been called “student feedback questionnaires” (SFQ) (Coffey & Gibbs, 2001; Kember, Leung, & Kwan, 2002).

3.9.2 Contexts of use, purposes and examples

Course evaluation surveys are usually administered by the teacher or institution at the end of a course or course unit, but may also be implemented at intermediate points to inquire into students’ perceptions and understanding. Based on mid-term or regularly conducted surveys, teachers are able to adjust the teaching contents and methods to the learners’ needs before the end of the term (cf. Bir, 2017; Haddad & Kalaani, 2014). Evaluation surveys at course end, by contrast, can only have an impact on subsequent courses, if at all (Haddad & Kalaani, 2014, p. 2).

Consequently, one main purpose of survey feedback is to improve the quality of teaching (Kember et al., 2002, pp. 411-412). It helps teachers (and institutions) identify potential areas for improvement which they could remedy in the future (Kember et al., 2002, p. 412). Beyond that, positive course evaluations have become increasingly important for successful job applications and decisions about contract renewals, tenure and promotion in academia (Kember et al., 2002, p. 412; cf. Bir, 2017). Quite often, it is thus not only “an implicit obligation” for personal and professional improvement, but “an explicit requirement” set by the university administrations (Kember et al., 2002, p. 412). Accordingly, survey feedback can be implemented in all disciplines as well as in face-to-face and online courses (cf. Bir, 2017).

3.9.3 Advantages of the feedback method

Most of the advantages that are cited in the literature constitute benefits for teachers. However, the ultimate aim would be to improve the teaching quality (Kember et al., 2002, pp. 411-412) in order to enhance the learning experience for the students. Surveys may therefore cater for a student-centered teaching environment (Haddad & Kalaani, 2014, p. 9) in which the learners’ perspective is actively sought and considered (cf. Kember et al., 2002, pp. 411-412). For this purpose, periodical surveys during a term as well as mid-term evaluations can be particularly helpful (Bir, 2017; Haddad & Kalaani, 2014).
The survey results are meant to help teachers identify the strengths and areas for improvement on a variety of dimensions (Bir, 2017; see also Kember et al., 2002, p. 412). At the same time, these questions may foster students’ metacognitive thinking and could help them take on more responsibility in shaping their learning environment (Haddad & Kalaani, 2014, p. 13).

While student feedback can be collected in a variety of ways during class time, online surveys offer several advantages in terms of access, availability and analysis. The “anywhere-anytime-access” (Vasantha Raju & Harinarayana, 2016, p. 5) gives students sufficient time for reflection (Haddad & Kalaani, 2014, p. 8) before sharing their thoughts about the course. The web-based interface allows them to provide their answers from several mobile devices in a user-friendly manner (Vasantha Raju & Harinarayana, 2016, p. 2).

Moreover, from the instructors’ perspective, the creation, administration and analysis of the surveys is fairly fast and convenient (Vasantha Raju & Harinarayana, 2016, p. 12). Administrators can reach a large population independent of geographical boundaries, which is particularly useful for remote and distance learning contexts (Vasantha Raju & Harinarayana, 2016, p. 12). Likewise, costs for paper, print and postal mail are eliminated (Vasantha Raju & Harinarayana, 2016, p. 2). It is even possible to address different target groups at the same time, for instance to conduct course evaluations from multiple perspectives, e.g. colleagues and students (Zierer & Wisniewski, 2018, p. 94). In the end, these can also be compared to teachers’ self-perceptions, such as with the app FeedbackSchule (see section 3.9.5).

Most programs support a wide range of question types (see section 3.9.5). Closed questions can be analyzed easily by the programs, oftentimes yielding a graphical illustration of the results immediately (Vasantha Raju & Harinarayana, 2016, p. 10). Open questions, in turn, often do not only provide insight into what teachers could improve, but also how they might do it (Bir, 2017). Frequently, the survey tools permit a direct export of the results into software packages for quantitative and qualitative data analysis (Vasantha Raju & Harinarayana, 2016, pp. 9-10). This may lead to a substantial time, cost and error reduction (Vasantha Raju & Harinarayana, 2016, p. 9) because the answers do not need to be copied or typed manually into an external program. Some survey applications, however, have inherent limitations that can be disadvantageous for certain purposes, as the next section will elucidate.

3.9.4 Limitations/disadvantages

Teachers and students might be dissatisfied with survey feedback, which mostly results from an unsuccessful integration of the feedback into the learning context and the institutional setting on micro- and macro-levels. On the micro-level of the individual classroom, it was reported that students might not take the surveys seriously enough, as they doubt that their responses could eventually improve teaching (Huxham et al., 2008, p. 676; Kember et al., 2002, pp. 416-417). Especially if they need to fill in surveys in several of their courses, they might think that it is just a formal procedure that will have no effect at all. It would therefore be important to discuss the survey results together with the students so that they can perceive their impact (cf. implementation below).
Otherwise, they might feel disappointed (Lake, Boyd, Boyd, & Hellmundt, 2017, p. 99) and could refrain from completing further surveys in the future. Even worse, students might see themselves more as “passive receivers” instead of “active seekers” of feedback (Winstone & Boud, 2019, p. 115) if their opportunities for feedback are restricted to one-way evaluation surveys only (Lake et al., 2017, p. 83). In that regard, a long time lapse between survey completion and changes in teaching procedures can aggravate this impression (cf. Winstone & Boud, 2019, p. 112). Quite often, no changes might be perceived at all if survey feedback is only sought at the end of the course (Haddad & Kalaani, 2014, p. 2). However, also for institutionally administered mid-term surveys, it sometimes takes a while until the evaluation results are communicated back to the teachers. They would thus have hardly any time left to adjust their teaching to the students’ needs (Winstone & Boud, 2019, p. 112) and thus to help them reach the course goals (Haddad & Kalaani, 2014, p. 2).

What is more, many evaluation surveys are standardized for an entire university or several institutions. Kember et al. (2002) cautioned that these questionnaires may lack the flexibility and focus that is needed to obtain usable results (pp. 421-422). Certainly, one standardized survey cannot represent the variety of teaching styles and contents that is found across several subject fields (Kember et al., 2002, p. 422). For instance, they often do not give sufficient room for appreciating innovative teaching designs that some instructors might have tried out in their courses (cf. Kember et al., 2002, p. 422; Winstone & Boud, 2019, p. 114). Consequently, Kember et al. (2002) suggest using surveys that can be easily customized by the instructors to accommodate various needs (p. 422). Likewise, standard questionnaires that are reused over several years may no longer mirror contemporary teaching approaches (Kember et al., 2002, p. 422). This could lead to inaccurate and disappointing evaluation results for the teachers (cf. Winstone & Boud, 2019, p. 114).

Apart from the actual teaching quality, there are several further factors that can cause low degrees of satisfaction on the students’ side. First of all, the survey items might not fully capture the intended construct, e.g. course satisfaction. Learners may have a different understanding of satisfactory courses (e.g. good classroom atmosphere) than teachers or institutions would have (e.g. good grades). Second, students who performed well in a course or perceived positive relationships are likely to give better evaluations than those who did not (as reviewed by Winstone & Boud, 2019, p. 114). Third, students’ attitudes towards evaluation surveys and their presumed impact (or lack thereof) can be influential, as has been described above. Fourth, the wording of survey items might be ambiguous (cf. Winstone & Boud, 2019, p. 113), which is particularly problematic for closed questions. In the end, ineffective survey designs can contribute to a “low response rate”, which is a problem for many surveys anyway (Vasantha Raju & Harinarayana, 2016, p. 11). Many participants also raise concerns about the anonymity of the surveys, especially in small classes (Bir, 2017). They fear that teachers could recognize them and form a negative perception of them.
Beyond that, further “privacy and security issues” have been voiced (Vasantha Raju & Harinarayana, 2016, p. 11; see also Farmer, Oakman, & Rice, 2016). In part, this also depends on the program that is used. For instance, a survey created with Google Forms might not comply with the data protection regulations in a specific country or institution (cf. Dooley, 2021).

Apart from that, teachers could feel controlled if their institutions conduct regular course evaluations and use the results as instruments for career decisions (Huxham et al., 2008, p. 676; Kember et al., 2002, p. 412). If these evaluations are done too frequently, staff may simply ignore the feedback instead of incorporating it into their teaching (Kember et al., 2002, p. 419). The motivation for using the survey feedback might be even lower if the institutions do not appreciate teaching quality as much as they value research or other activities (Kember et al., 2002, p. 421). Hence, both on the micro-level of the individual classrooms and on the macro-level of the institution or academic sphere, there are several factors that can have adverse effects. Suggestions for the successful implementation of survey feedback will therefore be outlined next.

3.9.5 Required equipment

Surveys can be created and completed by using a desktop PC, laptop, tablet or mobile phone. At least for the creation of evaluation surveys, a sufficiently large screen would be recommended. Common platforms and software include Google Forms (https://www.google.com/forms/about/; e.g. Haddad & Kalaani, 2014), Smart Survey (https://www.smartsurvey.com/), Free Online Surveys (https://freeonlinesurveys.com/), Qualtrics (https://www.qualtrics.com/; e.g. Lake et al., 2017), Kwik Surveys (https://kwiksurveys.com/), Quick Surveys (https://www.quicksurveys.com/) and Question Pro (https://www.questionpro.com/). In Germany, Lime Survey (https://www.limesurvey.org/de/), Survey Monkey (https://www.surveymonkey.de/) and Soscisurvey (https://www.soscisurvey.de/) are used frequently, and in school contexts, the app FeedbackSchule (https://www.feedbackschule.de/) has gained in popularity in recent years. Moreover, many LMS have an integrated survey tool that can likewise be utilized for such purposes. For standardized evaluations, the institution usually provides access to a survey link so that teachers do not need to create the surveys on their own.

Each of the applications has distinct affordances and shortcomings (e.g. regarding the maximum number of questions and respondents), as comparisons of free online survey platforms have shown (e.g. Farmer et al., 2016; Vasantha Raju & Harinarayana, 2016). For instance, Google Forms offers many functions and permits an unlimited number of surveys and respondents (Haddad & Kalaani, 2014, p. 4; Vasantha Raju & Harinarayana, 2016, p. 4). In addition, surveys via Google Forms are said to be very easy to create, administer and analyze (Haddad & Kalaani, 2014, p. 3). For creating surveys, it offers a user-friendly modular structure which permits teachers (or students) to add, delete and modify a wide range of questions and question types (Haddad & Kalaani, 2014, p. 4). Assessors can choose from a variety of pre-designed templates (see https://docs.google.com/forms/u/0/?ftv=1&tgif=c for the gallery of templates), which can be customized to their needs, for example in terms of colors, fonts and logo usage (Google Forms, 2022; Vasantha Raju & Harinarayana, 2016, p. 4).
Beyond text, they may insert images and videos in the question forms (Docs Editors Help, 2022). Moreover, as with other products from the Google family, Google Forms allows for collaborative work, i.e. surveys can be created together in a team in real time (Google Forms, 2022; Vasantha Raju & Harinarayana, 2016, p. 4). It also “supports logic branching”, which means that questions will be skipped or displayed depending on the foregoing answers (Haddad & Kalaani, 2014, p. 4; Vasantha Raju & Harinarayana, 2016, p. 4). To administer the finalized survey, teachers may share the link via email or embed the survey on their (course) website (Google Forms, 2022; Haddad & Kalaani, 2014, p. 4; Vasantha Raju & Harinarayana, 2016, p. 4). The survey can be completed at a PC or on mobile devices, including Android and iOS (Docs Editors Help, 2022). The survey creators can be notified whenever a survey response has been submitted so that they can immediately access the results that are stored in a spreadsheet on Google Drive (Haddad & Kalaani, 2014, pp. 3-4). The analysis is further facilitated by the statistical functions of Google Spreadsheet, but the results could likewise be exported to an external program (Haddad & Kalaani, 2014, p. 4). Hence, calculations and graphical representations may already be conducted online on Google Drive (Vasantha Raju & Harinarayana, 2016, p. 4).

Overall, Google Forms has often been praised for its user-friendliness and range of features, whereas data privacy has been identified as a potential downside. FeedbackSchule, by contrast, complies with the data protection regulations in Germany and is 100 % anonymous. The free version allows for an unlimited number of surveys, but only for 35 responses per survey. The app has been created in cooperation with reputable scholars in the field of feedback research. Many scientifically approved survey templates are therefore available (FeedbackSchule, 2022), but a free creation of individual items is possible as well. The survey can be accessed conveniently via a QR code (teaCh, 2019). Its most outstanding feature probably is the consideration and comparison of multiple perspectives (teaCh, 2019). The survey responses given by different target groups, e.g. feedback by pupils or colleagues (FeedbackSchule, 2022), can be compared to teachers’ self-assessment after they have completed the survey themselves. The app then offers a graphical illustration of the results and permits an export into Excel (FeedbackSchule, 2022). More general recommendations regarding the implementation will be given next.

3.9.6 Implementation (how to)

Apart from standardized evaluation surveys issued by the institution or quality control agencies, teachers are advised to create course surveys on their own in order to gather student feedback on a regular basis. These surveys can be customized to the particularities of each course and may constitute a foundation for discussion in the classroom. To create an online survey, teachers need to select an appropriate tool that they consider user-friendly and that nevertheless offers all the features which are deemed essential. The survey may contain rating scales, multiple choice questions, check boxes as well as open response fields (Vasantha Raju & Harinarayana, 2016, p. 5).
Furthermore, it can be enriched by pictures or other hyper- and multimedia elements if relevant. Commonly, (five- or seven-point) Likert scales are used that ask for students’ degree of (dis-)agreement with a battery of statements (Bir, 2017). These statements may refer to the contents and delivery of the courses, e.g. the teachers’ subject-matter knowledge, teaching methods, course materials and clarity (Bir, 2017). For each statement, the survey creators may insert a “not applicable” (“N/A”) option (Bir, 2017), which is very helpful when surveys are re-used across several courses. For instance, the item “The room size was adequate for this course” would be “not applicable” in online courses, but important for face-to-face classes.

To clarify responses to closed questions and to gain deeper insights, open-ended questions would be valuable. Examples include “Please explain why you gave that rating” (Bir, 2017), “What did you like most about this course?” and “What could be improved in this seminar?” As they require more time to be completed (and analyzed), the number of open questions should be limited in evaluation surveys (Bir, 2017).

In addition, evaluation surveys often contain a section that collects some basic personal details from the students, such as their age or age range, their gender and their year of studies (Bir, 2017). This may be helpful to discover potential patterns in the survey data, but it could likewise constitute a threat to anonymity in smaller classes. Since evaluation surveys should strive to protect students’ anonymity to elicit honest and open responses (Bir, 2017), these questions should be kept to a minimum.

Certainly, the entire survey should be proofread and piloted before it is administered to the students. On the one hand, this piloting is important to eliminate ambiguities in the way questions are formulated; on the other hand, it is crucial to check the proper technical functioning in terms of data transmission and analysis.

To administer the survey, a hyperlink is usually generated by the program that needs to be distributed to the students, e.g. via email (Vasantha Raju & Harinarayana, 2016, p. 7) or the LMS. In some programs, this link can be customized to a certain extent so that it contains specific keywords instead of random letter and number combinations (e.g. via Soscisurvey). For end-of-term surveys, the response rates are sometimes rather low because students have either already completed the course requirements or because they are busily preparing for their exams. Bir (2017) therefore suggests that the evaluation survey might be completed during a course session. However, this could put too much pressure on some students, as they might feel observed by the instructor or because they need more time for reflection. For mid-term surveys, one possibility would be to restrict access to further course materials until the survey has been completed by the students (Bir, 2017). A less intrusive way would be to kindly remind the students of the survey if many of them have not yet submitted it even though the deadline is approaching. Teachers may also fill in the survey themselves and compare their own perceptions with those of the students later on (if the tool allows it, e.g. FeedbackSchule).

Afterwards, the teachers need to analyze the survey results. Some programs, such as Google Forms, present the responses in a format that is directly suitable for analysis (Vasantha Raju & Harinarayana, 2016, pp. 9-10).
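To illustrate what such an analysis may look like outside the survey tool itself, the following minimal sketch computes per-item means from an exported responses file. The file name course_evaluation.csv, the item wordings and the flagging threshold are hypothetical; the sketch assumes Likert answers coded 1-5 and “N/A” for the “not applicable” option discussed above.

    import csv
    from statistics import mean

    # Hypothetical Likert items; must match the column headers of the exported file.
    LIKERT_ITEMS = [
        "The course content was well structured",
        "The learning materials were helpful",
        "The room size was adequate for this course",
    ]

    def summarize(csv_path):
        """Return (mean rating, number of valid answers) per Likert item,
        skipping 'N/A' and blank responses."""
        ratings = {item: [] for item in LIKERT_ITEMS}
        with open(csv_path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                for item in LIKERT_ITEMS:
                    value = (row.get(item) or "").strip()
                    if value.isdigit():            # keep the 1-5 ratings only
                        ratings[item].append(int(value))
        return {item: (mean(vals), len(vals)) for item, vals in ratings.items() if vals}

    if __name__ == "__main__":
        for item, (avg, n) in summarize("course_evaluation.csv").items():
            flag = "  <-- review" if avg < 3.5 else ""  # arbitrary attention threshold
            print(f"{avg:.2f} ({n} answers)  {item}{flag}")

Skipping “N/A” answers item by item, rather than discarding whole responses, preserves as much of the students’ feedback as possible.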
Based on the results, the instructors should think about how they could adjust their course contents and teaching style to better accommodate the learners’ needs. Especially for mid-term surveys, the feedback should be reviewed promptly and discussed with the students (Bir, 2017). If student feedback is sought regularly, it will help to achieve learner-centered teaching (Haddad & Kalaani, 2014, p. 2). As Lake et al. (2017) put it, evaluation surveys may not only serve as a tool for data collection, but also as a tool for enhancing student engagement (p. 101; see section 3.9.3).

3.9.7 Advice for students

Students are advised to take course evaluations seriously even if they do not perceive an immediate change in teaching or course design, especially if the evaluation surveys are implemented at course end only. However, the surveys might help improve future classes held by the same instructor. It would therefore be important to provide honest opinions. Notably, students should dare to use the open comment space to elaborate on rating items that might have been ambiguous as well as to adduce further aspects that the closed items have not yet covered. This may refer to particularly innovative concepts and approaches of the seminar (see the limitation mentioned by Kember et al., 2002, pp. 421-422). Beyond that, students should let their instructors know whether there is anything they could improve during class time already, i.e. they do not need to wait until the entire seminar is over. If possible, they should also tell their teachers to what extent the course adjustments contributed to creating a better learning experience for them (e.g. modifications made as a response to the mid-term survey).

3.9.8 Combinations with other feedback methods

As surveys merely allow students to provide feedback at pre-defined points in time (mostly at the end of a seminar) and in a pre-structured manner, survey feedback should not be the only feedback method that is used in a course. More immediate feedback can be collected via online response systems, as will be explained in the next section. Furthermore, teachers may engage in live exchanges with their students at any point in time in order to inquire into their perceptions, understanding and opinions. Especially for small classes, an open in-class discussion is probably more effective than digital feedback via surveys, unless anonymity is a critical issue content-wise (Bir, 2017). Beyond that, the variety of methods suggested in this book should not only be used by teachers, but also by students to give feedback to their teachers and peers (see section 2.1.7). Hence, the completion of surveys and live polls should definitely not be the only opportunities for learners to engage in feedback practices.

3.9.9 Questions/tasks (for teachers and students)

Knowledge and reflection questions:
(1) Are there any standardized evaluation surveys at your institution? What purposes do they serve?
(2) Have you ever created course evaluation surveys yourself?
a. To what extent were they different from the standardized evaluation surveys at your institution?
b. Have you implemented them online or on-site?
(3) What are the limitations of end-of-term surveys?

Application questions/tasks:
(4) Check whether your institution recommends a particular survey tool. Otherwise, please select one that you find suitable for your needs. Make yourself familiar with its basic functions and start creating a simple online survey.
a. What are the advantages of the tool that you have chosen?
b. What challenges have you encountered? Ask your IT service or a colleague/fellow student for help instead of giving up directly. Maybe they have some suggestions for you.
c. As soon as you have finalized the survey, try it out with a small group.

3.10 Feedback via live polls (Poll feedback, ARS)

3.10.1 Definition and alternative terms/variants

While course evaluation surveys only convey feedback in a delayed manner, synchronous feedback can be collected by using audience response systems (ARS). They allow teachers to obtain more immediate feedback from the students and to make their lessons more interactive (Caldwell, 2007, cited in Chavan, Gupta, & Mitra, 2018, p. 465). Since it is mostly teachers who use ARS to gather student feedback, these systems have alternatively been referred to as student response systems (SRS) (Little, 2016). Further terms are classroom response systems (CRS), classroom feedback systems (CFS), learner response systems (LRS) or clickers (Mork, 2014, p. 127). With these applications, typically teachers pose questions to their students, who select an answer option on their devices, similar to the TV show “Who wants to be a millionaire?” (Mork, 2014, pp. 127-128). Formerly, (handheld) clickers had been utilized for that purpose, but they are increasingly being replaced by online student response systems (OSRS) that work on many internet-compatible devices, e.g. smartphones, tablets or laptops (Mork, 2014, p. 127). Apart from stand-alone software, live polls are nowadays a common component of many videoconferencing systems (see section 3.14), which is very convenient for online courses. The results of the voting are displayed shortly afterwards so that teachers can use them instantaneously in order to adjust their teaching and clarify potential non- or misunderstandings of the teaching contents (Caldwell, 2007, as cited in Chavan et al., 2018, p. 465).

3.10.2 Contexts of use, purposes and examples

ARS can be employed in a variety of fields, such as engineering (Chavan et al., 2018), biology (Voelkel & Bennett, 2014) and other STEM-based subjects (Evans, 2018), in the social sciences, including economics (Reinhardt et al., 2012) and law (Skoyles & Bloxsidge, 2017), as well as in the humanities, e.g. in second/foreign language education (Mork, 2014). To exemplify, Mentimeter was used with students from “an interdisciplinary background covering various fields: psychology, pedagogy, linguistics, social anthropology, social work, medicine, computer engineering, mathematics and biology” in Pichardo et al.’s (2021, p. 2) study during the Covid-19 pandemic.

However, polling tools, e.g. in apps or as part of videoconferencing software, are not only useful in remote teaching, but also in the face-to-face classroom. They help teachers gain immediate feedback from their students during a lesson, but learners may likewise use ARS in order to collect peer feedback, for example during their presentations. Furthermore, they can be utilized for quizzing purposes, i.e. to test the learners’ knowledge (Little, 2016, p. 2; Mork, 2014, p. 134), as well as for brainstorming, “idea generation and sharing” (Mork, 2014, p. 130). Overall, ARS appear to be well suited to engage learners of different ages at schools, colleges and higher education settings as well as in adult education (Pichardo et al., 2021, p. 3).
Further advantages will be summarized next.

3.10.3 Advantages of the feedback method

In general, ARS offer plenty of benefits to learners and teachers. They enable the immediate collection of student feedback on particular questions (Chavan et al., 2018, p. 465; Evans, 2018, p. 26; Mork, 2014; Pichardo et al., 2021; Vallely & Gibson, 2018). The results are displayed instantaneously (Pichardo et al., 2021, pp. 3, 10), which helps teachers gain an insight into students’ understanding (Evans, 2018, p. 29; Mork, 2014, p. 134; Pichardo et al., 2021, p. 5; Skoyles & Bloxsidge, 2017, p. 236). Teachers may adjust their further teaching accordingly, e.g. by explaining the concepts that have not yet been properly understood (Evans, 2018, p. 29) or by adjusting the pace of their teaching (Mork, 2014, p. 134; Pichardo et al., 2021, p. 5). At a later point, they could repeat their questions in order to assess the effectiveness of their explanations or modifications (Evans, 2018, p. 26; cf. Pichardo et al., 2021, p. 12). Likewise, at the start of a new teaching unit or topic, polls can be helpful to learn more about students’ prior knowledge (Pichardo et al., 2021, p. 5) so that teachers might skip some of their intended explanations and tailor their teaching to learners’ needs.

ARS have been found to be particularly useful in large classes (Pichardo et al., 2021, p. 12), especially those that adopt a lecture style of teaching (Evans, 2018, p. 29). Teachers can then gain a quick overview of students’ understanding or collect a diversity of opinions. Importantly, all students can participate at the same time, i.e. answering a question does not depend on the teacher selecting a particular person to respond (cf. Pichardo et al., 2021, p. 13). The method has therefore been described as “democratic and inclusive” (Pichardo et al., 2021, p. 12; cf. Evans, 2018, p. 25) and as facilitating a “teacher-student dialogue in relation to the teaching-learning process” (Pichardo et al., 2021, p. 11). Learners may thus feel “co-responsible” for the learning success (Pichardo et al., 2021, pp. 11, 13). As their knowledge, interests and preferences are acknowledged, ARS can be highly motivating (Mork, 2014, p. 132; Pichardo et al., 2021, pp. 11, 13; Skoyles & Bloxsidge, 2017, p. 236). This may lead to a higher engagement in the learning process and increased participation in the classes (Evans, 2018, p. 25; see also Mork, 2014, pp. 131-132; Skoyles & Bloxsidge, 2017, p. 236). The aspect of “active learning” (Mork, 2014, p. 128; Pichardo et al., 2021, p. 11) has therefore been stressed frequently.

The integration of polling and quizzing in the classes can encourage greater attention and focus on the course materials (Chavan et al., 2018, p. 468; Fitch, 2004, p. 76; Pichardo et al., 2021, pp. 5, 10). Students might already come to the classes better prepared than before (Mork, 2014, p. 133). Even if learners do not respond to all questions, they may benefit from reading the questions, thus identifying gaps in their knowledge. Moreover, they learn from seeing the results subsequently as well as from the discussions that follow (Skoyles & Bloxsidge, 2017, p. 236). At least indirectly, ARS can thus contribute to learning (Mork, 2014, p. 133). They may not only enhance self-assessment (Mork, 2014, p. 132; Pichardo et al., 2021, p. 6), but also peer reflection and group discussions and thus a more critical engagement with the learning contents (Pichardo et al., 2021, p. 11; cf. Mork, 2014, p. 133).
Apart from teacher-led questions, ARS tools can therefore be used to provide peer feedback on other students’ presentations, for instance (Mork, 2014, p. 133). For both peer- and teacher-initiated questions, the anonymity of ARS is seen as advantageous (Evans, 2018, p. 30; Pichardo et al., 2021, p. 10). Usually, only the total number of respondents and (the distribution of) their answers are shown in the app (Pichardo et al., 2021, p. 3; Vallely & Gibson, 2018, p. 4). This gives the learners the freedom to express their ideas without being blamed or censored (Pichardo et al., 2021, p. 10). Clearly, this brings along several affective benefits for the students. They do not need to be afraid of making mistakes in front of their teachers and peers (Little, 2016, p. 3; Pichardo et al., 2021, pp. 5, 10, 13). In particular, students who do not usually participate, e.g. shy students or learners with speech disorders, dare to contribute their answers and opinions (Mork, 2014, p. 131; Pichardo et al., 2021, pp. 5-6, 13).

Beyond that, polling via ARS was frequently described as enjoyable and “fun” by the learners (Pichardo et al., 2021, pp. 10-11, 13), which can be partly attributed to the novelty of the system (Mork, 2014, p. 132), but also to its game-like and playful nature (Pichardo et al., 2021, pp. 8, 10). Furthermore, its user-friendliness and practicality were emphasized (Mork, 2014, p. 132). As compared to hand-held clickers, online ARS are a cost- and time-saver because voting devices do not need to be purchased and distributed to the students in class (Little, 2016, p. 1). Teachers and learners only need a web browser with internet access in order to start a poll (Little, 2016, p. 1; Mork, 2014, p. 127; Pichardo et al., 2021, p. 2; see section 3.10.5). For this, they may use their own devices, e.g. a laptop or mobile phone (Little, 2016, p. 1; Rudolph, 2018, p. 35). If not everyone has such a device, students may pair up in groups in order to respond to a question (Pichardo et al., 2021, p. 11), which can lead to increased peer interactions in the classroom. In fact, the students and teachers do not even need to be in the same place (Mork, 2014, p. 132), which is beneficial for hybrid and remote learning contexts.

With most systems, e.g. Pingo and Socrative, the participants do not need to register and set up an account (Evans, 2018, p. 25; Mork, 2014, p. 136), but simply need to enter the teachers’ access code in order to open the poll on their devices (Little, 2016, p. 1; Mork, 2014, p. 128). Consequently, students generally found ARS easy to use (Mork, 2014, p. 135; Pichardo et al., 2021, p. 2; see also Vallely & Gibson, 2018) and teachers considered them “unobtrusive” during their lectures (Chavan et al., 2018, p. 466). Teachers are typically the only ones who need to sign up in order to create questions and conduct polls (Evans, 2018, p. 25). Usually, they likewise do not need to install any software on their devices (Pichardo et al., 2021, pp. 2-3). They are offered a variety of question types that they could utilize, such as single- and multiple-choice questions as well as open-ended questions (Evans, 2018, pp. 26-27). Quite often, the applications provide templates that can be used and modified (Evans, 2018, p. 27).
Furthermore, the questions can be easily re-utilized in subsequent seminars and some systems even permit teachers to share their questions with colleagues (Mork, 2014, p. 135, regarding Socrative). Apart from that, some applications offer additional features. For instance, the LaTeX compatibility of Pingo was foregrounded by Evans (2018, p. 25), which is considered useful for STEM subjects in particular. The oftentimes teacher-directed initiation of the polls is a critical point, though, which leads us to the discussion of further limitations.

3.10.4 Limitations/disadvantages

The mostly teacher-directed initiation of the polls has been criticized in the previous literature. With ARS, students give feedback only when the teacher allows them to do so (Chavan et al., 2018, p. 465), and the topics are typically determined by the instructor as well. However, apart from these “teacher-led comprehension checks” (Mork, 2014, p. 131), students may likewise create questions and pose them to their peers. In videoconferences, for example, students can be granted the right to moderate a session and start live polls, among other things. Further limitations result from the very nature of ARS. For instance, since polls usually require quick responses, students might accidentally press the wrong button (Mork, 2014, p. 131). With most apps, they cannot change their originally selected answer, which can be quite frustrating for the students (Vallely & Gibson, 2018, p. 6). It is thus possible that the poll does not accurately reflect learners’ knowledge. Apart from that, the actual impact of ARS on students’ learning gain is unclear (Mork, 2014, p. 129). If they are used too frequently, they may even have adverse effects on student engagement (Vallely & Gibson, 2018, p. 6). Moreover, since the feedback is often anonymous, teachers cannot relate the responses to individual students (Vallely & Gibson, 2018, p. 6), so they are unable to identify those who might need special support. On the other hand, if teachers spend a lot of time on clarifying concepts because a minority answered incorrectly, the progress of the entire class might be slowed down. The same is true of unexpected answer patterns that require further discussion (Pichardo et al., 2021, p. 13). This, however, should be seen as an asset, as it can lead to a deepened processing of the contents. Even though ARS polling is characterized as inclusive of all students’ opinions (see section 3.10.3), it might not be suitable for all of them. For instance, students with visual impairments or other special needs might not be able to read the questions and results properly or to answer fast enough before the countdown has expired (Pichardo et al., 2021, p. 13). Some of these challenges arise from the limited configuration options of the programs (Pichardo et al., 2021, p. 13). More generally, the space limitations for open-ended questions could be an issue for everyone (Pichardo et al., 2021, p. 15; Skoyles & Bloxsidge, 2017, p. 236). A final downside is that online ARS require internet-compatible devices as well as internet access. This might still be a challenge in some regions of the world if WiFi is not available (Pichardo et al., 2021, p. 16; see also Vallely & Gibson, 2018) or if students have to consume the mobile data of their private phones (cf. Mork, 2014, p. 128). Finally, some have argued that using phones in class might encourage phone addiction and distraction.
Further details on the required equipment and implementation will be specified below.

3.10.5 Required equipment

ARS can be used on desktop or tablet PCs as well as on laptops or mobile phones (e.g. Evans, 2018, pp. 25-26; Mork, 2014, p. 128). Quite often, no external app needs to be installed; instead, the ARS software can be directly accessed via a web browser (Mork, 2014, pp. 127-128; Pichardo et al., 2021, p. 3). Popular online ARS are Mentimeter (https://www.mentimeter.com/; e.g. Little, 2016), Pingo (http://trypingo.com/; Evans, 2018), Poll Everywhere (https://www.polleverywhere.com/) and Socrative (https://www.socrative.com/; Mork, 2014). Beyond that, many other applications exist that can be used for polling and quizzing purposes, such as Edkimo (https://edkimo.com/de/), Particify (https://particify.de/), Tweedback (https://tweedback.de/), Wooclap (https://www.wooclap.com/), Kahoot (https://kahoot.com/) and AnswerGarden (https://answergarden.ch/). Furthermore, polling is nowadays a common component of many videoconferencing systems, such as BigBlueButton (see section 3.14), and of LMS, e.g. the ILIAS plug-in called “Live Voting”. To exemplify, Pingo is a web-based ARS that was created for educational purposes by the University of Paderborn in Germany (Evans, 2018, pp. 25-26). Users register for a free account at http://trypingo.com/ and create their questions (Evans, 2018, p. 26). These questions can be categorized with the help of tags so that users can flexibly draw on them whenever a specific topic is addressed (Evans, 2018, p. 27). Several questions can be assigned to a survey session (Evans, 2018, p. 28) so that students may answer multiple questions in a row. They simply need to enter the access code at https://pingo.coactum.de/ to participate in the poll (Evans, 2018, p. 26). A countdown reminds the participants of the time that is left to submit their responses (Evans, 2018, p. 28). Afterwards, the results are visualized as a bar chart (e.g. for multiple-choice questions) or as a word cloud or list of comments (for open-ended questions). With the “highlight correct answer” option (Evans, 2018, p. 29), the right response will be shown in green. Moreover, questions can be asked repeatedly at different points in time and the answers can be compared to earlier rounds (Evans, 2018, p. 26). A Pingo tutorial is available at https://tinyurl.com/JSchluerPingo (Schluer, 2021b). For Mentimeter, the number of questions is limited in the free version (Little, 2016, p. 2; Pichardo et al., 2021, p. 15), but users may find it attractive because of the different question types it offers (multiple choice, ranking, open-ended questions etc.) (Pichardo et al., 2021, pp. 7-8). Furthermore, Mentimeter can be embedded into PowerPoint presentations, “allowing a seamless blend of lecture slides and interactive voting activities” (Little, 2016, p. 2). The content can thus be enriched by images, videos, citations and other information (Pichardo et al., 2021, p. 8). To give answers, the participants need to enter the access code at http://www.menti.com/ or scan a QR code from the presentation (Pichardo et al., 2021, p. 3).
Apart from the ordinary answer types, students can interact by using “an Instagram-style heart, a thumbs-up or thumbs-down in the style of Facebook’s ‘like’ or ‘dislike’ function, or a question mark if they find the presentation unclear” (Pichardo et al., 2021, p. 8). This might be very appealing for the learners (Pichardo et al., 2021, p. 8). More general advice regarding the implementation will be sketched in the following section.

3.10.6 Implementation (how to)

Creating polls requires competencies in formulating questions and selecting the most suitable question type for a particular purpose. As with all other methods, ARS can only be effective if employed purposefully (Mork, 2014, p. 134; Pichardo et al., 2021, p. 11) and if both teachers and students are sufficiently familiar with the method. When it is used for the first time, teachers should therefore familiarize the learners with the procedure, its objectives and the potential benefits for the students (Pichardo et al., 2021, p. 11). They might use some trial questions before the actual quizzing or polling phase starts (Skoyles & Bloxsidge, 2017, p. 237). To conduct such polls, teachers need to register in advance and create a set of questions that they want to draw on. The question types and the duration of the countdown should be chosen wisely (cf. Pichardo et al., 2021, p. 11). For instance, Fitch (2004) observed that sometimes even one second could make a difference (p. 76). After the students have submitted their responses, the teachers should display the overall results and discuss the answers together with the students (Mork, 2014, p. 128). If several participants gave a wrong reply, it will be necessary to talk about the topic in more detail, whereas teachers might move on to another topic or question if most of the responses were correct (Mork, 2014, p. 128). Especially if some students do not have a mobile device to take part in the poll, teachers may ask them to work together in small groups, discuss the questions and decide on an answer option (Pichardo et al., 2021, p. 11). If the polling does not work at all, teachers should have an alternative plan for continuing their session (cf. Skoyles & Bloxsidge, 2017, pp. 236-237). Some further advice for students is given next.

3.10.7 Advice for students

Students should come to lessons well-prepared in order to participate in live polls and to check their understanding. As soon as they identify gaps in their knowledge or understanding, they should try to recapitulate the contents and ask the teacher or peers for further support. During the live polls, the learners should pay attention to the time limit that is set for a particular question. If time allows, students do not need to respond instantaneously, but may think about the answer for a few seconds. This can reduce the chance of accidentally giving a wrong reply (see section 3.10.4). Moreover, they should try to incorporate ARS into their own presentations whenever suitable. This will help to make their presentations more interactive. At the end of their presentation, they might utilize an open-response question to quickly gather anonymous feedback from their peers. The collected feedback can then be discussed further with their teacher, which constitutes a possible combination of feedback methods (see below).

3.10.8 Combinations with other feedback methods

As users can commonly only give short answers via ARS, the feedback should be supplemented by other methods. In the first place, the voting results can be directly discussed in the classroom in order to initiate deeper feedback dialogues. Moreover, if ARS are merely used for quizzing purposes, assessors may provide more detailed feedback afterwards by choosing a different method.
Similarly, teacher feedback would be a valuable complement to peer feedback that was collected during student presentations by means of ARS. Depending on the time that is available, teachers could talk to the students in the same course session or provide feedback on a later occasion, for instance in a follow-up email on the same day (section 3.8). For more complex or comprehensive matters, recorded feedback could be a useful alternative. The next sections will introduce audio, talking-head video and screencast feedback as three possible variants of recorded feedback.

3.10.9 Questions/tasks (for teachers and students)

Knowledge and reflection questions:
(1) Have you ever participated in a live poll?
a. If yes, did you enjoy the procedure?
b. Was there anything that you would change?
(2) For what contexts of use would you consider live polls suitable?
Application questions/tasks:
(3) Please check whether the web-conferencing tool that you are currently using has an integrated polling function. If so, please try it out next time. You may start with a simple yes-or-no question in order to familiarize yourself and your students with the procedure.
(4) If there is no integrated polling function in your web-conferencing platform or if you consider it unsuitable for your purposes, search for an alternative tool. For instance, you might try out Pingo or Mentimeter.
a. What are the advantages of these tools?
b. What do your students think about them?
(5) Your teacher asks you to deliver a presentation about a topic or debate that is currently going on in the media. You want to make this an interactive presentation because your topic invites many different opinions.
a. Note down the questions that you would like to ask your fellow students.
b. Think about the kinds of answers you want to elicit (e.g. text response, numerical data etc.).
c. Select a suitable polling tool and familiarize yourself with its functions. Online manuals and video tutorials might be helpful in that regard.
d. Create the questions and practice the polling procedure together with a friend (or by opening the poll from the participants’ perspective in another web browser or device).
e. Check whether there is anything you still need to change. Afterwards, implement these interactive questions during your presentation in the classroom.
f. You may finish your presentation by collecting open-response feedback from your fellow students via ARS.

3.11 Audio feedback

3.11.1 Definition and alternative terms/variants

Audio feedback is an oral asynchronous feedback method in which an assessor’s voice is digitally recorded (cf. Bless, 2017; Renzella & Cain, 2020, p. 174). It has therefore been alternatively labelled as “recorded spoken feedback” (Bond, 2009, p. 1), “recorded oral feedback” (Solhi & Eğinli, 2020) or “recorded audio feedback” (RAF) (Heimbürger, 2018), with each term placing special emphasis on the recorded nature of the feedback. However, the literature also abounds with several other terms, such as “asynchronous audio communication” (AAC) (Oomen-Early, 2008), “aural feedback [in the form of a sound file]” (Gleaves & Walker, 2013, p. 252) or “podcasting feedback”/podcast feedback (France & Wheeler, 2007; Kettle, 2012). The use of the latter originates from the popularity of podcasts during the time when iPods first came into use in the mid-2000s.
By contrast, the terms “audiotape feedback” (Kirschner, van den Brink, & Meester, 1991) or “cassette-tape” feedback (Cryer & Kaikumba, 1987) represent the technological means that had been available in earlier decades. Audio feedback had already been in use from the 1960s onwards (Warnock, 2008, pp. 201-204), but the recording has switched to digital formats in recent decades. Clearly, the tape-recording of cassettes or the storage and dissemination of audio feedback via CD-ROMs was much more laborious and time-intensive than the easy storage and fast distribution that is afforded by cloud space and the internet nowadays. Moreover, modern applications not only permit a quick and convenient recording of audio files on computers, laptops, tablets and smartphones, but also an insertion of audio clips into various document types (such as text documents, PDFs or cloud applications). Hence, one may distinguish between recorded audio files that are separated from the reviewed document and those that are integrated into the reviewed document. Even though the latter option erases several shortcomings of the former, it has been less researched and implemented so far. On the one hand, this may stem from the relative recency of integrated audio recordings as compared to the common association of audio feedback with standalone audio files in past decades; on the other hand, it may result from an unawareness of the affordances that many programs offer (such as Microsoft Office or Adobe PDF software, but also some LMS tools, e.g. in Moodle). Accordingly, most of the research reported below used separate audio files; however, the affordances of integrated audio files will also be discussed.

3.11.2 Contexts of use, purposes and examples

Even though audio feedback can be used for communication between teachers and students as well as students and their peers (Middleton & Nortcliffe, 2010, p. 209), it has mostly been employed by teachers to give feedback to learners rather than in any other direction. This becomes evident in definitions such as “giving students comments in audio form” (Cavanaugh & Song, 2014, p. 123) or “instructor feedback that is spoken and recorded” (Bless, 2017, p. 9). Likewise, it has mostly been implemented in higher education settings (mainly undergraduate) rather than in school contexts. Overall, audio feedback appears suitable for summative and formative purposes (Hennessy & Forrester, 2014, p. 778; Rotheram, 2009, p. 22), either for the work of individuals or groups (Heimbürger, 2018, p. 107). Macgregor, Spiers and Taylor (2011) even claimed that it can be used for “feedback of all types” (p. 40). Indeed, a perusal of the literature attests to its usefulness for a variety of assignments in different disciplines, for example in sociology (Bond, 2009; King, McGugan, & Bunyan, 2008), biology (Bond, 2009; Cann, 2014; Chalmers, MacCallum, Mowat, & Fulton, 2014; Merry & Orsmond, 2008; Rawle, Thuna, Zhao, & Kaler, 2018), the health sciences (Bond, 2009), nursing (Gould & Day, 2013), business (Carruthers, McCarron, Bolan, Devine, & McMahon-Beattie, 2014; Chew, 2014), public policy (Bond, 2009), programming (Renzella & Cain, 2020), engineering (Heimbürger, 2018; Nortcliffe & Middleton, 2008), geography (Bond, 2009; Ekinsmyth, 2010), English (EFL and ESL) (Morra & Asís, 2009; Olesova, Weasenforth, Richardson, & Meloni, 2011; Xu, 2018) and other language courses.
This type of feedback has mainly been used to provide feedback on written tasks, such as essays (Cann, 2014), seminar papers (Carruthers et al., 2014) or written reports (Brearley & Cullen, 2012), but also for group presentations (Carruthers et al., 2014). It has also had a positive effect in pronunciation teaching (Yoon & Lee, 2009). Further advantages will be surveyed next.

3.11.3 Advantages of the feedback method

Students and teachers mostly perceive audio feedback very positively, especially when compared to written feedback. For example, in Stockwell’s (2008) study, 94 % of the engineering students agreed “that the audio feedback was a better experience than previous written feedback received” (p. 3). Similarly, all students in the survey by Brearley and Cullen (2012) found the feedback helpful (p. 28). The positive perceptions mainly resulted from the benefits of recorded speech. As speech is much quicker than writing or typing (Lunt & Curran, 2010, pp. 761-762; Orlando, 2016, p. 157), assessors tend to provide more feedback than in the written mode (e.g. Cavanaugh & Song, 2014; Ice, Curtis, Phillips, & Wells, 2007, p. 19; Lunt & Curran, 2010, p. 764; Merry & Orsmond, 2008; Stockwell, 2008). They can elaborate on ideas and thus provide “clearer [feedback], with less scope for ambiguity” (Bond, 2009, p. 2; cf. Cavanaugh & Song, 2014; McCarthy, 2015, p. 161). Also, students do not need to decipher handwriting (Bramley, Campbell-Pilling, & Simmons, 2020; see also Merry & Orsmond, 2008). Accordingly, learners found it easier to understand than written feedback (Butler, 2011, p. 103; Howard, 2018, pp. 167-168; Merry & Orsmond, 2008; Sipple, 2007). Moreover, assessors can modulate their voice to accommodate learners’ needs (Gould & Day, 2013, p. 562; Merry & Orsmond, 2008). The tone and pace of the voice can help the students understand the message better (McCarthy, 2015, pp. 162, 164; Merry & Orsmond, 2008, p. 8; see also Lunt & Curran, 2010, p. 764). Furthermore, in the case of language teaching, the oral modality immerses the students in target language input and helps them improve their pronunciation (Sheffield Hallam University, 2019). Consequently, audio feedback can convey “more than just words - tone, expression, pronunciation and emphasis all ad[d] to the depth of the communication” (McCarthy, 2015, p. 155, with reference to Middleton, Nortcliffe & Owens, 2009; cf. Gould & Day, 2013, pp. 556, 562; Merry & Orsmond, 2008, p. 4). Not only is the tone of the instructor often perceived favorably by the students (Cavanaugh & Song, 2014, p. 126; see also Marriott & Teoh, 2012), but there also appears to be a tendency among assessors to include more positive comments (Butler, 2011, p. 106) and more suggestions for improvement than in the written mode (Emery & Atkinson, 2009, in McCarthy, 2015, pp. 155-156; Merry & Orsmond, 2008, p. 4). This is definitely beneficial for future learning (cf. Ice et al., 2007, p. 19), not only content-wise, but also from a motivational perspective. Concerning the contents, learners may gain a better understanding of the reasons for the grade they have received and of concrete aspects they could work on (Butler, 2011, p. 107; cf. Rodway-Dyer, Dunne, & Newcombe, 2009, p. 63). Students often value its elaborate, narrative nature (McCarthy, 2015, p. 161; cf. the review by Henderson & Phillips, 2014, p. 5).
Furthermore, some learners regarded the “personal nature and the detail provided” as “evidence that the lecturer had carefully considered their work” (Rotheram, 2009, p. 23) and had invested time as well as effort in formulating the feedback (Hennessy & Forrester, 2014). Showing interest in the students’ work and care for their progress fulfills important motivational functions and can lead to higher engagement in the learning process (Rotheram, 2007). In online courses, this may strengthen the bond between teachers and students (Sipple, 2007, p. 31) and might even have community-building potential through increased teaching presence and decreased social distance (Ice et al., 2007, p. 19; see also Butler, 2011, p. 103; Gould & Day, 2013, pp. 560-561; Xu, 2018, p. 2). Consequently, learners favor the audio mode because it gives the feedback a caring and more “personal touch” (Rotheram, 2007, p. 8), especially as compared to “dead” texts (Bond, 2009, p. 2; see also Cann, 2014; Cavanaugh & Song, 2014; Gould & Day, 2013; Marriott & Teoh, 2012; Merry & Orsmond, 2008). Audio feedback was perceived as “almost like a mini consultation” (student cited by Butler, 2011, p. 103; cf. Howard, 2018, pp. 167-168). In contrast to face-to-face meetings, however, the students can consult the feedback “in the comfort and privacy of [their] own home” or from anywhere else they want to (Gould & Day, 2013, p. 561). Due to the asynchronicity, they do not need to feel the threat of “losing face” or of being “under pressure to react or explain” to the teacher (Bond, 2009, p. 2). Of course, this does not mean that the learners should not respond to the feedback message. On the contrary, they should seize the chance to reflect deeply on the feedback message and discuss it in a follow-up communication (see section 3.11.8). In that respect, another advantage of audio feedback is that it can be replayed various times (e.g. Bond, 2009, p. 2; Butler, 2011, p. 105) and may thus foster learners’ retention and uptake of the contents (Gould & Day, 2013, p. 561; Ice et al., 2007, p. 3). Hence, audio feedback captures the immediacy of oral communication but makes it permanently available for future reference (Lunt & Curran, 2010, p. 765). Learners can access the audio files on their computers (Cann, 2014; Carruthers et al., 2014), smartphones or tablet PCs (McCarthy, 2015, p. 161) whenever they want to. The oral modality is also particularly suitable “for students with reading and sight issues” (Lunt & Curran, 2010, p. 765; see also Bond, 2009, p. 2). Likewise, instructors find audio feedback convenient and easy to produce. For example, the software Audacity was regarded as “easy to download, understand and use” (Lunt & Curran, 2010, p. 761; cf. Butler, 2011, p. 105). Many also state that it is “fast to record” (McCarthy, 2015, p. 164; Ryan et al., 2016, p. 2) and that it may even get faster with practice (Rotheram, 2007). Overall, it has been found to be quicker than written feedback. For example, Lunt and Curran (2010) estimated that one minute of audio equaled about six minutes of written feedback. Ice et al. (2007) stated that audio feedback helped “reduce the time required to provide feedback by approximately 75 %” (p. 19) while the quantity of feedback provided increased by 255 %. However, there is also variation from lecturer to lecturer in terms of the way and the amount of feedback they provide in the oral mode (Gould & Day, 2013, pp. 562-563).
Crucially, the question arises of how much feedback is enough or even too much (Borup et al., 2015, p. 164). Adequate feedback literacy is thus important, for which some tips will be given further below. In addition, audio files can be shared easily (Ryan et al., 2016, p. 2), preferably by using an LMS because emailing may suffer from file size restrictions (Lunt & Curran, 2010, p. 761). If the feedback is recorded on a smartphone (see e.g. Chronister, 2019, pp. 36-37), it can be shared via a secure messenger (cf. Saeed, 2021, p. 8, on the use of the WhatsApp messenger for the distribution of audio and video feedback files; see also section 3.7). An additional time-saving factor could be the reduction of follow-up meetings, which might no longer be necessary if the audio feedback contains sufficiently detailed and comprehensible information (Bond, 2009, p. 3; see also Bramley et al., 2020; Kirschner et al., 1991). On the other hand, it is important to engage in feedback dialogues with the learners instead of considering audio feedback as a one-way route. In that regard, messenger apps or social media can turn into platforms for dialogic feedback and meaning negotiations (cf. Xu, 2018). Moreover, through the combination of chat feedback and audio feedback, they might encourage more instantaneous replies (Xu, 2018, p. 7). As a side benefit, the use of audio feedback can reduce physical problems, such as hand strain from excessive written corrections (Bond, 2009, p. 3). Similar to the learners, some instructors may feel more comfortable with this method because it is not directly face-to-face (Mellen & Sommers, 2003, cited by Silva, 2012, p. 4). The decisive factor, however, should be the usefulness of the method with regard to reaching the intended learning goal while catering for the learners’ needs. Due to the vastly positive perceptions by the students, many assessors in the reported studies intended to continue using audio feedback (Rotheram, 2009, p. 23), as it may contribute to making teaching more effective and students more satisfied (Bond, 2009, p. 3). However, there are also several limitations that will be examined subsequently.

3.11.4 Limitations/disadvantages

Several challenges of audio feedback arise from the technology itself and from a lack of didactic training in this method, but also because of time pressures and attitudes towards changing established habits. In early research, instructors were mainly worried about the equipment and the time it takes to produce audio files. During production, the batteries of dictaphones ran out rather quickly (Stockwell, 2008, p. 2) or the sound quality was perceived as rather poor (cf. e.g. Merry & Orsmond, 2008), either for technological reasons or because it was difficult to find a quiet place for the recording (Hennessy & Forrester, 2014, pp. 781, 786-787; cf. Butler, 2011, p. 105). In addition, the dissemination was seen as burdensome, especially if no LMS or cloud space was available and the distribution occurred via cassette tapes, CD-ROMs or email only. Several teachers reported that large files could not be attached to emails (Hennessy & Forrester, 2014, p. 781; Merry & Orsmond, 2008) or that it took a long time to upload (and download) them due to slow internet speeds (Cann, 2014, p. 37). Some therefore decided to split a clip into two parts or to reduce the file size in other ways (Cavanaugh & Song, 2014, p. 126).
Moreover, if assessors want to edit their audio files, the process might be prolonged even further (cf. Bond, 2009, p. 3). However, with advances in technology, the procedure has been sped up substantially. LMS and cloud servers typically provide sufficient storage space for the dissemination of audio feedback. Short audio clips can be inserted into documents as a replacement for written comment boxes, thus reducing the need to edit or re-record longer files (Bond, 2009, p. 3). Hence, not only may the technologies ease and accelerate the workflow for producing audio feedback, but the process may additionally become faster with practice (Bond, 2009, p. 3; see also Merry & Orsmond, 2008). On the other hand, despite training in the production of audio feedback, established habits and an unwillingness to change them can be a hindrance to its actual implementation (Ekinsmyth, 2010, p. 76). Staff might be worried about recording themselves and about possible student complaints (Ekinsmyth, 2010, p. 77) or simply lack the time to acquire new skills. To become literate in the technology (Cann, 2014) and its didactic realization, several skills need to be developed. For instance, the tone of voice is important to build up an engaging atmosphere. Otherwise, the audio commentary could be perceived as harsh, especially if the teacher’s frustration is conveyed through this medium (Rodway-Dyer et al., 2009, pp. 63-65). In Rodway-Dyer et al.’s (2009) study, this mainly resulted from the lecturer reading the written comments out loud when giving audio feedback (p. 65). Since written feedback is typically very brief and formal, it is hardly surprising that the tone was perceived as harsh by students. Students felt that such negative feedback was “harder to take” in audio format than in the written modality (Bond, 2009, p. 2; cf. Sommers, 2002, quoted by Warnock, 2008, p. 204). The personal feel of the recorded speech might thus make students feel uncomfortable or even upset (Ekinsmyth, 2010, p. 75). Hence, instructors’ communication skills are essential, “taking care both in terms of the choice of words and the way in which they are expressed” (Butler, 2011, p. 104). This also necessitates an appropriate structure (Rodway-Dyer et al., 2009, p. 65) with an adequate “balance of negative to positive feedback” (p. 68; cf. Comiskey, 2012; see section 2.1.4). Also, when producing audio feedback, learners’ limited attention span while listening to one-way audio comments needs to be borne in mind (Hepplestone, Holden, Irwin, Parkin, & Thorpe, 2011, p. 123). Moreover, further proof of the effectiveness of audio feedback as compared to written feedback is needed (Chalmers et al., 2014; Gleaves & Walker, 2013; Macgregor et al., 2011; Voelkel & Mello, 2014). The reduced effectiveness could be caused by “the separation [of the audio file] from the work being assessed, unlike comments that are written or typed in the margins” (Bond, 2009, p. 2; see also Brearley & Cullen, 2012; Heimbürger, 2018; Sipple, 2007). As Silva (2012) put it, “cognitive overload may occur due to the split-attention effect, which would result in less processing of essential information” (p. 4, with reference to Mayer, 2009). This makes it difficult for the learners to locate their mistakes in their documents and revise them (cf. Harper et al., 2018, p. 279; Howard, 2018, pp. 16-17, 197; Kerr & McLaughlin, 2008, p. 2; Rodway-Dyer et al., 2009, p. 63; Warnock, 2008, p. 213).
To spot the relevant passages, they would have to listen to the audio feedback repeatedly, which stands in contrast to skimming feedback in a written text (Cann, 2014; see also Xu, 2018). What is more, instructors tend to focus on macro-level dimensions (such as idea development and organization) when giving audio feedback, so that micro-level aspects (such as grammar, spelling, punctuation, word choice) might be neglected (Cavanaugh & Song, 2014; Ice, Swan, Diaz, Kupczynski, & Swan-Dagen, 2010; Kim, 2004). Also, embedding further resources, e.g. hyperlinks to websites or further learning materials, is not feasible. Thompson and Lee (2012) therefore argued that instructors would need to create written feedback in addition to the audio file in order to ease the revision process for the learners (see also the student attitudes reported in Rodway-Dyer et al., 2009, p. 64; cf. Carruthers et al., 2014, p. 9). For instance, the tutors in Lunt and Curran’s (2010) study annotated the written assignment and additionally created audio files by means of Audacity (p. 761). However, this procedure might entail additional time and effort (Thompson & Lee, 2012). A solution could be to use the text editor (Bakla, 2020; Warnock, 2008, p. 201) or a PDF program and insert audio comments at particular points in students’ submitted files (Chiang, 2009; Olesova et al., 2011, cited by Harper et al., 2018, p. 279). This way, localization would no longer be a problem (Orlando, 2016, p. 162; see also Howard, 2018, p. 261). While the students clearly favored this approach, this type of feedback delivery still turned out to be tedious for the tutors in Chiang’s (2009) study (cf. Still, 2006, p. 463, quoted in Warnock, 2008, p. 204). Moreover, audio comments that are inserted into specific documents may lead to a sequential processing by the learners. They might thus be unable to see the bigger picture, similar to feedback in text editors (see section 3.1.4). Therefore, the integration of audio and written feedback would help to move beyond “a single channel of information” (Ryan et al., 2019, p. 1510). Still, it would not include the instructor’s facial expressions and hand gestures (Borup, West, & Graham, 2012, p. 196) or any other visual elements (McCarthy, 2015, p. 164; cf. Olesova et al., 2011, p. 39; Saeed, 2021, p. 13). This, however, would be possible with talking-head video and screencast feedback, respectively (see sections 3.12 and 3.13). Apart from that, an immediate learner response is impossible (Ekinsmyth, 2010, p. 75), unlike in oral face-to-face conferences (Butler, 2011, p. 104) or phone calls, for instance. It has therefore been suggested to schedule follow-up meetings (Carruthers et al., 2014, p. 9) to clarify open questions or give the learners the chance to respond to the feedback (Ekinsmyth, 2010, p. 75). Certainly, this is true of any asynchronous feedback method. A combination with other feedback methods as part of an ongoing learning dialogue is consequently crucial (see section 2.2). Further practical tips will be given below.

3.11.5 Required equipment

Teachers and learners need technological equipment to produce and receive audio feedback, respectively (e.g. Butler, 2011, p. 105; McCarthy, 2015, p. 164). As compared to other digitally recorded feedback methods, the requirements are rather simple, though. First of all, assessors need a recording device.
This could be a PC, tablet or laptop with a (preferably external) microphone, an iPod or a smartphone (e.g. Bond, 2009, p. 2; Cann, 2014; Kettle, 2012; Nortcliffe & Middleton, 2011). Otherwise, any other digital voice recorder could be utilized (Bond, 2009, p. 2; Brearley & Cullen, 2012; Rotheram, 2007). The sound quality of modern devices is usually sufficient for feedback recordings, but the use of a headset (e.g. with the smartphone or computer) or another external microphone might enhance it further. In any case, assessors should do a trial recording of their voice before starting the feedback recording. For this, a quiet place is recommended (Butler, 2011, p. 105). Furthermore, software for audio recording is needed unless it is already preinstalled on the recording device. A free program that allows for audio recording and editing is Audacity (https://www.audacity.de/). It was frequently used in the reported literature (e.g. Killoran, 2013, p. 38; Lunt & Curran, 2010; Merry & Orsmond, 2008). If, in turn, assessors wish to insert audio comments into a document, then Microsoft Office (Bond, 2009), Adobe PDF (Oomen-Early, 2008) or other appropriate programs, such as Evernote or Google documents, need to be chosen (cf. Centre for the Enhancement of Teaching and Learning [CETL], 2020, p. 1). For Google documents, it might be necessary to install a browser extension, such as “Read & Write for Google Chrome” (see Sammartano, 2017). Likewise, social media, e.g. messenger apps (see section 3.7), could be utilized for the recording and sharing of audio feedback (Xu, 2018). Moreover, the storage space (either on the hard drive or on a secure server) should be sufficiently large. Some studies resorted to Google Drive (Gonzalez & Moore, 2018), Dropbox or other file hosting services (e.g. SoundCloud; Cann, 2014). The (private) link to the audio file can then be shared via email instead of sending the entire file as an email attachment (cf. Brearley & Cullen, 2012; Cann, 2014; Rotheram, 2009). At any rate, though, data protection rights need to be respected, which is why storage on a secure learning platform should be preferred. Some educational institutions nowadays offer their own secure cloud or LMS that allows for the dissemination of audio feedback (see e.g. the report by Bond, 2009). An integrated solution on the LMS would facilitate the entire procedure substantially, as the audio files could be directly associated with particular assignments and responded to by the students. Some LMS (e.g. Moodle) also enable the direct insertion of audio comments into specific parts of a submitted assignment, similar to the voice comments in Microsoft Word or Adobe PDF. If the LMS does not allow for audio commenting, an external tool (such as GradeMark by Turnitin) could be connected to the LMS or used separately (cf. Buckley & Cowap, 2013; Cann, 2014).

3.11.6 Implementation (how to)

A familiarization with the technological tools and devices is a prerequisite for their use. Their specific selection will depend on the purpose of the feedback, but is also conditioned by their availability at a particular institution. If an integrated solution in an LMS is available, assessors may use it for the production and dissemination of the audio feedback. Otherwise, a recording needs to be done offline and uploaded onto a secure server before sharing the link with the students (Cann, 2014). Alternatively, audio feedback could be sent via messenger apps (Xu, 2018).
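For assessors who prefer a scriptable workflow over a GUI recorder such as Audacity, the recording and conversion steps can also be automated. The following minimal Python sketch is offered as an illustration only; it assumes the third-party packages sounddevice, soundfile and pydub (none of which are mentioned in the literature reviewed above; pydub additionally requires ffmpeg for .mp3 export):

    # Minimal sketch: record a short voice comment from the default
    # microphone and export it as a compact .mp3 file (cf. section 3.11.6).
    import sounddevice as sd
    import soundfile as sf
    from pydub import AudioSegment

    SAMPLE_RATE = 44100   # sampling rate in Hz
    DURATION_S = 120      # keep clips short to limit file size

    # Record in mono and block until the recording has finished
    audio = sd.rec(int(DURATION_S * SAMPLE_RATE),
                   samplerate=SAMPLE_RATE, channels=1)
    sd.wait()

    # Save a lossless intermediate file, then convert it to .mp3
    sf.write("feedback.wav", audio, SAMPLE_RATE)
    AudioSegment.from_wav("feedback.wav").export("feedback.mp3", format="mp3")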
Another important distinction is between the creation of separate audio files on the one hand and audio comments that are directly inserted into students’ documents on the other hand (e.g. Bond, 2009; CETL, 2020, p. 1; Oomen-Early, 2008). They seem to be suitable for different kinds of contents. While document-internal audio clips may rather lead to micro- and meso-level comments, separate audio files might be more beneficial for macro-level issues (cf. Cavanaugh & Song, 2014; Ice et al., 2010; Kim, 2004). Especially for more comprehensive separate audio files, careful planning is crucial (cf. CETL, 2020, p. 1). Notably, for longer assignments or feedback messages, a script might need to be prepared, e.g. by taking some notes about the key aspects one wants to address and by structuring them in a learner-friendly way (cf. CETL, 2020, p. 1). In that regard, assessors can use the assessment criteria as a guide for structuring their feedback (Bond, 2009, p. 4; see section 2.1.2). Moreover, if a separate audio file is used (instead of document-internal clips), instructors should ask their students to add page and line numbers before submitting their written work. This way, back-references and the localization of passages will be eased (Brearley & Cullen, 2012, p. 31). The assessors would then mention these page and line numbers during the recording. However, this would not be necessary if document-internal voice comments are positioned at the exact location that is talked about. To insert an audio file in MS Word for Windows, assessors first need to click on the intended position in the text document. Then they go to “Insert”, “Object”, “Create from File” and browse to the audio file stored on their computer. They might want to select “Display as icon” as well as give the file a caption if desired. Otherwise, the original file name will be displayed (for further explanations see Wright, 2022). Some PDF programs, such as Adobe Pro, even permit a direct recording of audio clips in PDF documents. For this, users would go to the “Comment” tab and choose “Record Audio Comment”. An audio recorder will open as soon as they click on the intended position in the PDF. Assessors may either record their voice comment spontaneously or insert a previously recorded one by choosing “Browse” and the file location.
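Where no suitable PDF license is available, a comparable result can be scripted. As a hedged illustration, the following Python sketch uses the open-source PyMuPDF library (an assumption, not a tool mentioned in the literature above) to attach a previously recorded clip to a student’s PDF; note that, unlike Adobe’s native audio comment, the clip is added as a clickable file-attachment annotation rather than an embedded sound object:

    # Minimal sketch: attach a recorded audio comment to a student's PDF
    # at the passage it refers to. Assumes: pip install pymupdf; the file
    # names are hypothetical.
    import fitz  # PyMuPDF

    doc = fitz.open("student_submission.pdf")
    page = doc[0]  # choose the page the comment refers to

    with open("feedback.mp3", "rb") as f:
        clip = f.read()

    # Place a file-attachment annotation near the relevant passage
    page.add_file_annot(fitz.Point(72, 72), clip, filename="feedback.mp3",
                        desc="Audio comment on the introduction")

    doc.save("student_submission_annotated.pdf")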
Throughout the feedback, providers should try to keep an encouraging tone (CETL, 2020, p. 1), even though this might be hard when giving negative comments and when several works are assessed in a row. Moreover, they should speak in a clear manner and modulate their voice to draw learners’ attention to important aspects (CETL, 2020, p. 1). Overall, the feedback should not be longer than five minutes because otherwise students might get distracted or overwhelmed (Bond, 2009, p. 3; CETL, 2020, p. 1; Rotheram, 2009, p. 23). The audio feedback should therefore be kept as short as possible, also to avoid longer up- and download times as well as a high consumption of storage space (Cann, 2014, p. 39). Preferably, a common file format, such as .mp3, is chosen (Rotheram, 2007; 2009) so that students can also easily access the audio files on their mobile devices (Cann, 2014, p. 39). Before sending the files to the students, assessors should briefly review the contents and check the sound quality (CETL, 2020, p. 2). However, they do not need “to make a perfect recording” because stumbles, pauses and mishaps are natural parts of speech and make the feedback more human (Bond, 2009, p. 4). If assessors intend to utilize audio feedback for summative assessment, they should make sure that the administrative and quality assurance section of their institution accepts audio feedback (Rotheram, 2009, p. 23). Many educational institutions still require a written report for grading. When used for grading, though, the return of the grades should be separated from the audio feedback itself (Bond, 2009, p. 4; Cann, 2014, pp. 38-39) so that students will concentrate on its contents. Finally, teachers should make sure that the learners are familiar with the technology and know how to play and use the audio file. They might want to create “a user guide” for their students and demonstrate the procedure to them (Carruthers et al., 2014, p. 10). The manual might contain the advice that is given below.

3.11.7 Advice for students

The consulted literature hardly ever offered explicit advice for the recipients of audio feedback. However, such advice can be inferred from some reports and the author’s own experience. Before playing the audio file, the students should have received training in the use of audio feedback (Carruthers et al., 2014, p. 10). In case their teacher has prepared a manual (e.g. a written guide or video tutorial), they should consult it and follow the steps described in it. Learners should also show an open-minded attitude towards the feedback instead of being scared of it (see section 2.2.1 regarding the importance of a trustful and error-friendly atmosphere). The concrete steps of working with the audio feedback would then be as follows. For the reception of standalone audio feedback, students need to have a copy of their (printed or electronic) draft available in order to relate the teachers’ comments to concrete passages (Lunt & Curran, 2010, p. 766; Silva, 2012, p. 4). Furthermore, they might need internet access to load the feedback file. Next, they should find a quiet place to access the recording or use headphones in order to listen to it. They should have an audio player installed on their device and be familiar with its pausing function. This allows them to take notes or revise the document instantaneously while listening to the audio file. Learners can also replay the entire file or specific sequences as often as they want to (cf. Harper et al., 2018, p. 278; McCarthy, 2015, p. 162). For document-internal audio clips, learners should likewise know how to play them and listen to them in conjunction with the corresponding passage. In case they are confused or uncertain about the contents, they are advised to seize the opportunities for further communication with the instructor, e.g. via email (Brearley & Cullen, 2012, p. 28) or in a follow-up meeting (Carruthers et al., 2014, p. 9). Finally, as soon as they have implemented the feedback, it would be a good idea to re-consult it to check whether they have considered all points or to give reasons for not adopting certain suggestions.

3.11.8 Combinations with other feedback methods

Audio feedback can be used together with written feedback, which has been recommended in many reports (e.g. Bond, 2009; Brearley & Cullen, 2012). As shown above, a direct integration is possible by inserting voice comments into a document that also contains written feedback (see sections 3.1 and 3.3).
Moreover, written feedback can be handed out separately, e.g. in the form of a completed assessment rubric or any other written summary. The brief written remarks (for instance as bullet points) can then serve as a guideline for (producing and) processing the more detailed audio comments (Heimbürger, 2018, p. 114). Especially for learners who have difficulties in understanding the oral comments (e.g. due to hearing impairments or a low proficiency in the target language), transcripts could be created. Hence, the teacher (or even the students) might use speech-to-text software to obtain the transcripts (cf. Bond, 2009, p. 2; Rotheram, 2009, p. 23). Depending on the clarity of the recording and the quality of the software, such a transcript may, however, contain several errors and might need to be corrected manually by the teacher. Otherwise, it could at least serve as a rough orientation to navigate the contents of the audio file and locate specific passages when revising a submission (cf. Bahula & Kay, 2020, p. 6539; Langton, 2019, p. 54).
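As a hedged illustration of such a speech-to-text step, the following minimal Python sketch uses the open-source Whisper library (an assumption, since the studies cited above do not name a specific tool; it can be installed via pip install openai-whisper and requires ffmpeg) to generate a draft transcript of a recorded feedback file:

    # Minimal sketch: auto-transcribe an audio feedback file; as noted
    # above, the raw transcript will likely need manual correction
    # before it is handed to the students.
    import whisper

    model = whisper.load_model("base")        # small, CPU-friendly model
    result = model.transcribe("feedback.mp3")

    with open("feedback_transcript.txt", "w", encoding="utf-8") as f:
        f.write(result["text"])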
Apart from the combination with written feedback, there is often a need for follow-up exchanges, which can be organized in manifold ways, e.g. as follow-up meetings with the lecturer (Carruthers et al., 2014, p. 9), as chat or mail exchanges (Brearley & Cullen, 2012, p. 28) and many more.

3.11.9 Questions/tasks (for teachers and students)

Knowledge and reflection questions:
(1) In what ways can audio feedback be provided?
(2) For what assignments could audio feedback be useful?
(3) List possible reasons why learners might prefer audio feedback to written feedback.
(4) What do you need to consider when (a) producing and (b) receiving audio feedback?
Application questions/tasks:
(5) How would you begin an audio feedback recording (introductory sentence)? Give reasons for your choice.
(6) Your students have just begun to jot down first ideas for a project they would like to work on in the next lessons. They have uploaded their suggestions onto your course site in the LMS. As this is the stage of idea development, you feel that audio feedback would be a useful way to provide feedback to your students and help them refine their ideas. Please practice the provision of audio feedback for the document that your students have turned in.
(7) Imagine you have created a cloud document (e.g. Google Docs) and want to practice the provision of audio feedback with your students. How would you proceed? You may think of a peer feedback approach and of using a browser extension for inserting voice comments in a cloud document. Describe the steps you would take and feel free to practice its implementation with your students.

3.12 Video feedback (talking head, no screen)

3.12.1 Definition and alternative terms/variants

At the very outset, a note on terminology is due since the term “video feedback” has been used inconsistently in the previous literature (see the review by Mahoney, Macfarlane, & Ajjawi, 2019, p. 158). Several terminological variants exist, such as “video podcast feedback” (cf. Flood, Hayden, Bourke, Gallagher, & Maher, 2017, p. 100; Letón et al., 2018, p. 189), “vodcast feedback” (cf. Haxton & McGarvey, 2011, p. 19), “webcast feedback” (cf. Hewson & Poulsen, 2014) or “videomail feedback” (cf. Hase & Saenger, 1997). Here we note clear parallels with the alternative labels “audiocast” or “audiopodcast” for audio feedback in section 3.11.1. Moreover, in some papers, video feedback has been abbreviated as “veedback” (Soltanpour & Valizadeh, 2018, p. 126; Thompson & Lee, 2012). However, these terms were sometimes employed to refer to “screencast feedback” (Thompson & Lee, 2012). In fact, “video feedback” could be understood as a cover term for screencast feedback with or without a webcam (e.g. Mayhew, 2017, p. 181) as well as for standalone webcam recordings that exclusively show the screencaster’s face (e.g. Hall et al., 2016; Lamey, 2015). In that respect, one also finds the terms “screencast video feedback” (Ali, 2016, p. 108; Thompson & Lee, 2012) or “screencasting and/or video (SCV)” feedback (Honeyman, 2015). For instance, Fang (2019) utilized the label “screencast feedback” not only to refer to SCFB itself, but also to recorded talking-head videos as well as “synchronous virtual meetings with screen-sharing components as screencasting for feedback” (p. 106). On the other hand, video feedback was sometimes used as a cover term for both asynchronous screencast feedback and synchronous videoconferences via which feedback is provided (Hartung, 2017, p. 206). Honeyman (2015), by contrast, ascertained that “screencasting” typically refers to a recording of the computer screen, including mouse movements, typing and highlighting, that is accompanied by narration, whereas “video feedback” usually contains a webcam film (p. 1, with reference to Henderson & Phillips, 2014). In this book, we will try to separate these notions along these lines. Synchronous videoconferences will be discussed in section 3.14, whereas section 3.13 will deal with screencast feedback, in which the recording of the screen is predominant. The focus of the present section, though, will be on recordings of the assessor’s face. Therefore, to avoid the above-mentioned ambiguities, the term “talking head” video feedback has been proposed. It designates videos that exclusively display the assessor who is talking about the learner’s work (Mahoney et al., 2019, p. 158). Commonly, these videos only show the feedback providers’ face as well as their shoulders and occasionally their hands when gesturing (Henderson & Phillips, 2014, p. 6). For screencast videos in which a small talking head is included, Mahoney et al. (2019) suggested the term “combination screencast” (pp. 158-159), whereas “screencast feedback” referred to screen-captures only (see section 3.13). Similarly, Borup (2021) distinguished between three categories, i.e. “webcam video”, “screen recording” and “screen recording with webcam video”. Bahula and Kay (2020) likewise used the label “webcam video” for this type (p. 6536). However, the term “webcam” seems too restrictive here because talking-head video feedback can also be recorded by means of an ordinary camera, camcorder or smartphone camera. Accordingly, Lamey (2015) defined it as “videos [in which one can] see the professor speak into the camera about the student’s assessment and then make the video available to the student” (p. 692). All in all, many studies do not (yet) resort to such a uniform terminology. Complicating matters further, the term “video feedback” has been used in many different ways in different subject fields. For example, in disciplines such as music (e.g. Boucher, Creech, & Dubé, 2020), sports (e.g. Potdevin et al., 2018) and medicine (e.g.
Hawkins, Osborne, Schofield, Pournaras, & Chester, 2012; Pinsky & Wipf, 2000), video feedback means that the participants are filmed during a particular performance or interaction so that these videotapes can be analyzed micro-analytically with the educational aim of improving the performance further. It is also a helpful practice in teacher education (e.g. Stannard & Sallı, 2019, p. 462), for example during school internships (cf. Prilop, Weber, & Kleinknecht, 2020). The repeated playing of these tapes “allows a detailed analysis of a person’s behavior” (Fukkink, Trienekens, & Kramer, 2011, p. 46). Commonly, these tapes are used for “self-confrontation” (Fukkink et al., 2011, p. 47) and self-reflection (e.g. Ritchie, 2016; Tochon, 2008), but sometimes also for peer assessment (e.g. Hunukumbure, Smith, & Das, 2017). This, however, is not the focus of the present section. What is more, some scholars utilized the term “video feedback” for generic feedback that is directed at a larger group (Crook et al., 2012) instead of personalized feedback for individual learners. In such cases, it is often meant to serve as a kind of benchmark video and is thus highly similar to tutorial videos. In the current work, the term “video feedback” will mainly be employed for personalized feedback, either for submissions by an individual learner or by a group of learners. In the sense of “talking-head video feedback”, these videos will show the assessor’s head, facial expressions, shoulders and hand gestures, but not a screen display (Henderson & Phillips, 2014, p. 6; Mahoney et al., 2019, p. 158).

3.12.2 Contexts of use, purposes and examples

Video feedback has been used in several disciplines, such as education, business, the humanities and the natural sciences (cf. the review by Bahula & Kay, 2020, p. 6536; Crook et al., 2012). Therein, talking-head videos seem to be specifically suited for “feedback that doesn’t require you to show student work” (Borup, 2021, n.p.). Hence, this format is less appropriate for written assignments in which the focus is placed on “comment[ing] on grammatical and mechanical aspects” (Lamey, 2015, p. 692). Rather, and thus similar to audio feedback, it is helpful for discussing the intellectual content of an assignment (Lamey, 2015, p. 692) or for communicating a general impression of the students’ work (Crook et al., 2012, p. 393). For example, Hall et al. (2016) investigated talking-head video feedback in philosophy and ethics courses and felt that it helped “to give better, more explanatory feedback to students” (p. 14, draft version). Apart from the still more common feedback direction from instructors to students (e.g. Henderson & Phillips, 2015; Parton, Crain-Dorough, & Hancock, 2010), Hung (2016) employed talking-head videos for peer feedback purposes. Sixty English as a Foreign Language (EFL) learners were divided into 15 Facebook groups, with four members each (p. 93). First, everyone was asked to produce a 3-minute clip to respond to oral discussion questions (p. 93). Afterwards, they watched each other’s video clips within their Facebook group (p. 93). Finally, they were requested to create 2-minute video clips to provide feedback to their peers (p. 93). Most of them (91.67 %) agreed that “peer video feedback can better promote interaction among classmates” (Hung, 2016, p. 97). In that regard, the audiovisual nature of the video feedback created additional benefits as compared to audio only.
Further advantages will be outlined next.

3.12.3 Advantages of the feedback method

Notwithstanding the terminological inconsistency, it is generally argued that video feedback commonly leads to a higher feedback quantity and quality (see e.g. the review in Adiguzel et al., 2017, p. 239) as compared to written or audio feedback, for instance (e.g. McCarthy, 2015, p. 162). The greater richness in detail (Cann, 2007, cited by McCarthy, 2015, p. 156; Hall et al., 2016) results from the visual information that is shown in addition to oral information (Crook et al., 2012, p. 387; Ryan et al., 2016, p. 2). Learners see the body language and facial expressions of the feedback provider in talking-head videos (Borup et al., 2012, p. 201; Hall et al., 2016, p. 30; Lamey, 2015, p. 701; Ryan et al., 2019, p. 1510). This multimodal nature can make it easier for the learners to process the contents as it comes closer to their ordinary multi-sensory perception of the world (cf. Stephan et al., 2010, p. 139, quoted by Griese & Kirf, 2017, p. 236). Correspondingly, some students highlighted that video was useful to explain complex or difficult concepts (Borup et al., 2015, pp. 176-177). The students perceived the video comments as more understandable, especially since the multimodal cues helped reduce non- or misunderstandings (Borup et al., 2015, p. 177). However, in Borup et al.’s (2015) study, some instructors used screencast feedback in addition to talking-head videos, which may have distorted the findings. Likewise, in Lee and Bailey’s (2016) research, video clips were inserted into an online discussion forum (see section 3.4), which is another way of combining feedback methods (cf. document-internal audio clips in section 3.11). The non-native speaking students even found video feedback easier to understand than the written feedback, also because the instructors “spoke clearly and slowly enough” (Lee & Bailey, 2016, p. 145). Furthermore, the learners had the impression that the instructors’ explanations were more detailed (Hall et al., 2016; Marshall et al., 2020, p. 5) and elaborate in the video (Borup et al., 2015, pp. 176-177; Parton et al., 2010). In their research, Borup et al. (2015) indeed found that the word count of the video feedback was higher than that of the written feedback (p. 172; cf. section 3.11.3 on audio feedback). More precisely, the instructors tended to provide “more general and specific praise and more general correction than instructors using text” in the video format, while in the written mode “more specific corrections” were offered (Borup et al., 2015, p. 172). Moreover, as compared to written comments, video feedback tended to contain more feed-forward advice, “indicating how the next assignment can be improved” (Lamey, 2015, p. 694; cf. Ketchum, LaFave, Yeats, Phompheng, & Hardy, 2020, p. 95). In addition, more “relationship-building comments” were detected in the video feedback (65.82 %) as compared to the written feedback (51.70 %) (Borup et al., 2015, p. 172). To the students, the video feedback felt more “like a conversation”, with the instructor talking to them (Borup et al., 2015, p. 176; cf. Lee & Bailey, 2016, p. 144). They had the impression that the instructor cared about them (Marshall et al., 2020, p. 10), giving them emotional and affective support (Borup et al., 2015; Marshall et al., 2020, p. 5; cf. the review by Bahula & Kay, 2020, p. 6538).
There also appeared to be more praise in the videos, and even criticism sounded nicer to some students’ ears (Borup et al., 2015, p. 177). The students in Lee and Bailey’s (2016) study likewise appreciated the instructor’s friendly tone and kindness, which made them feel more comfortable and connected to the instructor (p. 144). This helps reduce the “negativity bias” of written feedback (see section 3.8) and is similar to audio feedback (section 3.11). In addition, though, seeing the lecturers’ facial expressions felt more genuine to the students (Borup et al., 2015, p. 177) than written or audio feedback. Video feedback can therefore be regarded as an effective tool for “personalized learning and attentive engagement” (Adiguzel et al., 2017, p. 239). Especially in online learning settings, the video may help to increase the instructors’ social presence and strengthen the rapport between lecturers and learners (cf. Bahula & Kay, 2020, p. 6536; Borup et al., 2012, p. 201; Espasa, Mayordomo, Guasch, & Martinez-Melo, 2019; Hall et al., 2016; Ketchum et al., 2020; Lamey, 2015, p. 696; Marshall et al., 2020, p. 5). The approach might be more in line with the idea of feedback being a “dialogue” or “conversation” (Ferris, 2014, p. 18). However, it should be borne in mind that the conversation in the video is only a “one-sided dialogue” (Lamey, 2015, p. 698) that needs to be followed up in a different way, e.g. by inviting the students to talk to the lecturer on a personal basis afterwards (p. 699). To summarize, both learners and teachers tended to perceive video feedback as “more conversational, supportive, and understandable than text feedback” (Borup et al., 2015, p. 178). It was seen as more “nuanced and sensitive” regarding corrections as well as “more personal and encouraging” (Borup et al., 2015, pp. 165-166, referring to Moore & Filling, 2012; Lee & Bailey, 2016, p. 144). The learners not only felt more motivated to engage in revisions (Bahula & Kay, 2020, p. 6537; Henderson & Phillips, 2015), but also considered their teachers to be more engaged in the feedback process (Hall et al., 2016). The instructors likewise perceived a change in their view of and approach to feedback (cf. Crook et al., 2012, p. 395), insofar as feedback “comments no longer felt like an exercise in defending a grade (i.e., justifying the evaluation) but rather providing valuable advice” (Henderson & Phillips, 2015, p. 63). The explanations in the videos helped the students to understand the contents better and to feel more confident as writers (Marshall et al., 2020, p. 5). Hence, video feedback might not only offer affective benefits, but also (long-term) learning gains. In that respect, the time factor is also worth considering: On the one hand, video feedback can be produced faster than written feedback (Hall et al., 2016; Henderson & Phillips, 2015) and simultaneously allows greater elaboration as compared to written comments (since speaking is usually faster than writing; see section 3.11.3). Especially instructors with poor typing skills considered video feedback more efficient than writing (Borup et al., 2015, p. 176). On the other hand, it also appears to be less time-consuming than one-to-one conferencing (Carvalho, 2020; Walters, 2020), while still being very personalized (Moore & Filling, 2012, p. 5).
Another affordance as compared to synchronous online or face-to-face conferences is that the students can re-watch the videos (Carvalho, 2020; Crook et al., 2012, p. 387). This remote accessibility is not only beneficial for part-time and distance learners (Crook et al., 2012), but it also enables the sharing of the feedback with learners, parents and tutors (Hynson, 2012, p. 54; Moore & Filling, 2012, pp. 10-11; Stieglitz, 2013, p. 59). Furthermore, the videos can be accessed flexibly on many devices, such as computers, smartphones and tablet PCs (McCarthy, 2015, p. 161). At the same time, this technological affordance also constitutes a central challenge, as will be further explained below.

3.12.4 Limitations/disadvantages

All in all, the disadvantages cited for talking-head video feedback can be clustered into two groups: first, the visibility of the instructor (only), and second, technological difficulties. The talking-head style creates a presence that can be challenging in several ways. On the one hand, even though video feedback conveys a sense of closeness with the instructor, “[t]here’s still an emotional divide that [video communication] doesn’t quite breach” (participant cited in Borup et al., 2012, p. 201). On the other hand, students might perceive it as too invasive and personal, which is why they experienced discomfort during its reception (Hall et al., 2016). For instance, some students in Henderson and Phillips’ (2015) study were initially anxious “about seeing the assessor’s face while receiving feedback” (p. 58), especially negative feedback. These initial concerns, however, typically faded rather quickly (Lamey, 2015, p. 698). Nevertheless, learners could have difficulties in concentrating on the contents of the videos instead of their delivery when seeing the talking head only. It may even result in nervousness, hesitancy and other negative emotions (as reviewed by Bahula & Kay, 2020, pp. 6536, 6539). Similarly, some instructors reported that “the video made it difficult […] to hide their disappointment” (Borup et al., 2015, p. 177) and frustration (Borup, 2021). Still, it is usually perceived as less harsh and more encouraging than (depersonalized) written comments (cf. Stieglitz, 2013, p. 59; see also Lamey, 2015, p. 696; Mayhew, 2017, p. 181). However, foreign language learners or international students might experience comprehension problems due to the pace of the spoken language and their limited listening skills (Kim, 2018, quoted by Bahula & Kay, 2020, p. 6538). Beyond that, students may have the impression that recorded video feedback inhibits further dialogue because they cannot directly reply to it (Hall et al., 2016; Lamey, 2015). Some instructors were not sure whether the students were watching the videos at all because they did not seem to value or utilize the comments (Ketchum et al., 2020). However, this problem can be solved by uploading the videos to an LMS or video server that enables further interactions, e.g. VoiceThread (https://voicethread.com/) or EduBreak (https://edubreak.de/). Moreover, a number of studies did not find significant differences in student performance compared to audio and written feedback (Espasa et al., 2019; see also Ketchum et al., 2020). All this may lead teachers and learners to conclude that video feedback was “physically and emotionally taxing, that it did not lead to change, that it was time wasted, and redundant” (Ketchum et al., 2020, p. 95).
In total, comments like these occurred relatively rarely. By contrast, most of the cited disadvantages centered on technological aspects, as will be summarized next. Notably, video feedback may require special equipment from the teachers and learners as opposed to written feedback. Accordingly, they would need time to become familiar with the technology (Crook et al., 2012), which might make some assessors more hesitant towards producing it (Cann, 2007). In addition, they could be too “worried about their personal appearance in the video” (Fang, 2019, p. 118). Moreover, they might be afraid of making mistakes, which would remain permanently visible. Consequently, some may refrain from video feedback altogether, whereas others would spend additional time on re-recording or editing their videos (cf. Borup et al., 2015, p. 176). In that respect, they might decide to buy video software, which would result in extra costs. The same is true of purchasing new hardware, such as a webcam or camcorder and tripod (cf. Crook et al., 2012; Parton et al., 2010; but see section 3.12.5). Apart from the recording devices and software, the instructors need to find a quiet place to do the recordings without interruptions (Borup et al., 2015, p. 175; Lamey, 2015, p. 695). Furthermore, the finalization of the recording might be slowed down during the rendering and dissemination processes because the file size of video clips is inevitably much larger than that of electronic written comments or audio feedback (McCarthy, 2015, p. 164). For instance, in Ketchum et al.’s (2020) study, “[m]ost mentioned that providing video feedback took too much time” (p. 92) and “that a five-minute video could equal a half-hour upload time” (p. 93; cf. Borup et al., 2015, p. 175). Similarly, a slow internet connection could impair the reception process by the learners (Lee & Bailey, 2016, p. 146; see the review by Bahula & Kay, 2020, pp. 6536, 6538). Therefore, assessors should try to keep the video file at an acceptable size (Silva, 2012, p. 3), e.g. by compressing the video (see the sketch below). Despite this, videos usually cannot be directly attached to mails due to file size restrictions (cf. Inglis, 1998, in Henderson & Phillips, 2014, p. 3), so that other solutions need to be found (see section 3.12.5). Furthermore, instructors need to pay attention to the file format so that students can play the videos on their devices (cf. the review by Bahula & Kay, 2020, p. 6538; e.g. Lee & Bailey, 2016, p. 146). Moreover, while text comments can be read without further equipment, listening to a video necessitates headphones or a private space (Borup et al., 2015, p. 173; cf. Hyde, 2013). In addition, video feedback requires students to take notes while watching the video, whereas in the written modality all corrections are already in the right place (Borup et al., 2015, p. 174). Similar to audio feedback (see section 3.11.4), the feedback recipients might have difficulties in remembering specific comments in the videos and in spotting them in their own submission (cf. Hall et al., 2016; Henderson & Phillips, 2015; Lamey, 2015; Orlando, 2016, p. 156). Likewise, some feedback providers stressed that they had to watch their videos again in order to “remember what [each student] needed to fix” (Borup et al., 2015, p. 176). These “linearity limitations” of the video format (Bahula & Kay, 2020, p. 6536) may thus result in additional time expenses (cf. Lee & Bailey, 2016).
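To make the compression step mentioned above concrete: the sketch below re-encodes a recorded feedback clip with the free command-line tool ffmpeg, called here from Python. It is a minimal illustration, not a procedure from the studies cited in this section; the file names, target resolution and quality factor are assumptions that assessors would adapt to their own material.

```python
import subprocess

def compress_feedback_video(src: str, dst: str, crf: int = 28, height: int = 720) -> None:
    """Shrink a feedback recording before uploading or sharing it.

    Assumes ffmpeg is installed and on the PATH. A higher CRF value
    yields a smaller file at lower visual quality (23 is the default;
    28 is usually still fine for a talking-head clip).
    """
    subprocess.run(
        [
            "ffmpeg",
            "-i", src,                    # input recording
            "-vf", f"scale=-2:{height}",  # downscale (e.g. to 720p), keep aspect ratio
            "-c:v", "libx264",            # widely playable H.264 video
            "-crf", str(crf),             # quality/size trade-off
            "-c:a", "aac",                # re-encode the audio track
            "-b:a", "96k",                # a modest audio bitrate suffices for speech
            dst,
        ],
        check=True,  # raise an error if ffmpeg fails
    )

# Hypothetical file names for illustration:
compress_feedback_video("feedback_raw.mp4", "feedback_small.mp4")
```

Re-encoding in this way typically shrinks a talking-head clip considerably and yields an .mp4 file that plays on virtually all student devices, which also addresses the file format concerns mentioned above.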
However, some tools (e.g. within the LMS) permit the direct integration of video clips into the submitted student work, so that the localization problems just described might be overcome (cf. the integrated voice comments in section 3.11). Furthermore, even though video allows for elaborate explanations, teachers cannot spend an infinite amount of time on one assignment since many of them need to be corrected (Silva, 2012, p. 3). Consequently, it might not be feasible in large classes (Borup et al., 2015, p. 179). Since the clips are unique, individual excerpts cannot be re-used by the teachers, whereas passages from written electronic feedback can be copied from paper to paper in the case of recurring mistakes (Borup et al., 2015, p. 175; see section 3.1). However, producing a video clip that synthesizes the most common errors could compensate for this shortcoming of individualized video feedback (see also section 3.13).

3.12.5 Required equipment

As with audio feedback, the production and dissemination of video feedback has become easier in the past years due to advances in technological developments. Usually, assessors no longer need to buy a camcorder, tripod and microphone (e.g. Crook et al., 2012; Hase & Saenger, 1997; Parton et al., 2010); instead, the video and audio quality of smartphones is oftentimes sufficient (Henderson & Phillips, 2014). Alternatively, assessors may use the external or internal webcam of their PC, laptop or tablet (Hewson & Poulsen, 2014). As with audio feedback, the sound quality might be enhanced by using an external microphone (see section 3.11.5). Attention should also be paid to adequate lighting (Martin Mota & Baró Vivancos, 2018, p. 35). Some instructors may even want to show a “decent” background, whereas others would not worry about showing the “natural setting” they work or live in (Fang, 2019, p. 119). Several recording tools nowadays offer virtual backgrounds that blur the real background or replace it by a static or animated picture (e.g. Loom or Photobooth, but also videoconferencing applications; see section 3.14). Certainly, instructors may also book a time slot in the university’s media studio instead of doing the recording in their own office or home, but this would require prior scheduling, extra effort and additional time (Fang, 2019, p. 120). Typically, computers, tablets and smartphones already have video recording software pre-installed so that videos can be recorded fairly easily. Otherwise, several apps (including freeware) exist that can be downloaded and installed on one’s personal devices for the purpose of recording and editing videos. To exemplify, the instructors in Hall et al.’s (2016) project utilized the app Photobooth, a Mac application for video recording. Alternatively, QuickTime could be run on a Mac (Marshall et al., 2020). On Windows computers, Windows Movie Maker used to be a popular program (Henderson & Phillips, 2015), but it has been replaced by the “Video Editor” in the latest Windows version. Beyond that, many further software programs exist so that assessors can choose one that suits their preferences. For instance, to utilize virtual backgrounds, the software Loom could be chosen (see section 3.13.5).
However, it would also be possible to record a video of oneself by opening a video call or webmeeting application, such as Skype, Zoom, Google Meet or BigBlueButton, without necessarily having a student join it (see section 3.14). As with all other feedback types, the dissemination should ideally occur via the institutional LMS (Cann, 2007; Hall et al., 2016) or by sharing a secure hyperlink to a cloud space or other server (cf. Marshall et al., 2020), e.g. via email (Ferguson, 2021). If recorded on a smartphone, assessors may also utilize a messenger app to disseminate the video files (Saeed, 2021, p. 8, on the use of the WhatsApp messenger; see also section 3.7). Generally, though, a secure platform should be selected. Some assessors also utilized YouTube for the distribution of the videos by choosing the “unlisted video” option that is not accessible to the general public (Hewson & Poulsen, 2014). The affordance of YouTube would be the automatic creation of transcripts, which can ease the navigation in the videos and their comprehension by foreign language students. The same goes for the video platform of the Loom app (https://www.loom.com/). Alternatively, a speech-to-text software could be utilized. The recording software Descript, for instance, already incorporates this functionality and additionally allows for the editing of the transcript and its corresponding audio.

3.12.6 Implementation (how to)

Planning ahead is important for the delivery of feedback, especially via video. For feedback videos, however, there is no need to write a detailed script; a few notes and a loose structure are sufficient (cf. audio feedback in section 3.11.6). For example, the assessors in Hall et al.’s (2016) study followed a simple script, consisting of a greeting, positive feedback, critical feedback, and an invitation to continue the discussion (see also Henderson & Phillips, 2014, p. 7; 2015, p. 56; cf. section 2.1.4). Regarding the specific contents, it was considered “helpful to fill out an assignment-specific worksheet or rubric for each paper, and to incorporate that into the basic script” (Hall et al., 2016, p. 32, draft version; see section 2.1.2). This way, instructors are more likely to refer to the assessment criteria in their video, which students can likewise use as a guide during the reception of the feedback. Nonetheless, learners should be additionally familiarized with the purposes (Hase & Saenger, 1997, p. 365) and particularities of talking-head video feedback (Hall et al., 2016). In that regard, teachers may e.g. show their students a sample video in advance (Hall et al., 2016) and “communicat[e] instructions for access” (Bahula & Kay, 2020, p. 6539; see section 3.12.7 below). With respect to the overall length of the video, it is generally recommended to “keep videos short and focused” (Carvalho, 2020, n.p.). Usually, a maximum of five minutes seems sufficient (Hall et al., 2016; Henderson & Phillips, 2015; cf. Cann, 2007). Regarding the manner, the assessors’ tone of voice plays a decisive role (Borup et al., 2015, p. 177), similar to audio feedback (see section 3.11). Hearing the lecturers’ tone of voice can help establish a relationship with the students because it is usually perceived as less harsh and more encouraging than (depersonalized) written comments (cf. Stieglitz, 2013, p. 59; see also Lamey, 2015, p. 696; Mayhew, 2017, p. 181).
Therefore, instructors should strive to deliver feedback in a friendly manner (Borup, 2021). Moreover, the feedback providers should “ensur[e] adequate audio and video quality” (Bahula & Kay, 2020, p. 6539) so that “the student can see the facial expressions and clearly hear the teacher” (Henderson & Phillips, 2014, p. 6; 2015, p. 54). However, there is no need for a high resolution (Henderson & Phillips, 2014, p. 6; 2015, p. 54), which would increase the file size unnecessarily. It is recommended to record the videos in a quiet, distraction-free environment with sufficient lighting and an appropriate camera angle (Hall et al., 2016, pp. 31-32, draft version). The camera should “focu[s] on the heads and shoulders of the teachers with enough space in the frame to allow some movement and capturing of hand gestures” (Henderson & Phillips, 2014, p. 6; 2015, p. 54). Its position should not make assessors appear to look down at the students (Hall et al., 2016, p. 32, draft version). Importantly, eye contact should be maintained with the viewer, which means that assessors should look directly at the camera instead of at the video of themselves that is displayed on the screen (Hall et al., 2016). Even though all this can be quite challenging, the resultant video does not need to be perfect. Instead, pauses and stumbles are natural and usually favorably received by the students (Carvalho, 2020; see section 3.11.6 on audio feedback). Depending on the age of the learners, teachers might even think about using a puppet at some points in the video to relax a potentially tense atmosphere (cf. Cann, 2007). Moreover, they may choose to use a virtual background to create pleasant surroundings and to hide details of the setting they find themselves in. Depending on the kind of assignment, it might be necessary for assessors to make a clear (oral) reference to specific pages or slides. In that regard, students could be requested to number the pages or paragraphs of their submissions (Hall et al., 2016; cf. audio feedback in section 3.11.6). For document-internal clips, though, this challenge can be overcome by inserting the clips next to the relevant passages. This would work by embedding or linking the video as an external object in the document (e.g. by going to “Insert”, “Object”, “Create from File” in MS Word for Windows). Since this might increase the file size substantially, an alternative solution would be to insert the hyperlink to the externally saved video (e.g. on a cloud space or video server) into the comment bubble of an offline or online text editor (see sections 3.1.6 and 3.3.6). Finally, the feedback providers should briefly review their video before sharing it with the students. The distribution process is facilitated by using a secure cloud, video server or LMS, so that teachers and learners do not need much space on their hard drives. Alternatively, dissemination via a messenger might be possible if free or low-cost internet access is guaranteed.

3.12.7 Advice for students

Students likewise need to familiarize themselves with the particularities of video feedback. To this end, assessors might have provided a sample video and a user guide, which the learners should consult before opening their video feedback file. For instance, they need to know where to find the video (e.g. LMS; cf. Bahula & Kay, 2020, p. 6538)
and should have their assignments “in front of them (physically or electronically)” when playing the video (Hall et al., 2016, p. 22, draft version). The learners might then take notes while watching the videos (cf. Borup et al., 2015, p. 174) or insert annotations in their assignments to ease the revision process (Hall et al., 2016). For this, they should be familiar with the pausing and rewinding functions of their media player (Hall et al., 2016). Moreover, the feedback recipients should definitely seize the chance to engage in further dialogue with their teacher (or peers) to discuss the feedback and clarify open questions (cf. the invitation suggested by Henderson & Phillips, 2014, p. 7; 2015, p. 56). It would also be a good idea to consult the video feedback again before writing a similar assignment. Finally, they should also consider producing video feedback themselves, e.g. as part of peer review activities (Hung, 2016), or to provide a feedback request/response to their instructors (see section 2.2.4).

3.12.8 Combinations with other feedback methods

In order to overcome the linearity and localization problems of talking-head videos, written feedback could be provided as a supplement, e.g. by means of a completed assessment sheet or by typing comments into the submitted assignment (Bahula & Kay, 2020, p. 6539). If the submission was an electronic one, teachers can use the “track edits” and “insert comments” functions of their word processor (Borup, 2021; see sections 3.1 and 3.3). Such a combination could, however, be directly accomplished when screencast feedback is used together with a webcam video (e.g. Mayhew, 2017; see section 3.13 below). Depending on the size of the displayed talking-head video, its effects might diminish, though. As Borup (2021, n.p.) cautions:

With most tools, […] the webcam video is fairly small, so it can be difficult for students to connect with you, if that is your purpose. Furthermore, if you are not careful, the webcam video can cover up portions of the screen that you are trying to describe.

Other tools, in turn, enable flexible adjustments of the different video components (screen and camera) so that they can be modified as needed (Borup, 2021). The next section will elucidate the purposes of screencast feedback to help assessors make well-founded decisions about the type of video-based feedback that would be most fitting.

3.12.9 Questions/tasks (for teachers and students)

Knowledge and reflection questions:
(1) Have you ever produced a video?
a. If so, for what purposes?
b. What equipment (hardware and software) did you use?
(2) In what ways can video feedback be provided? What are the advantages and limitations of the different presentation formats?
(3) What do you need to consider when (a) producing and (b) receiving video feedback?
(4) Compare audio and video feedback with each other. What are their affordances? What are their limitations? When would you use which method?
(5) Why is social presence stronger with video feedback than with digital written feedback?
Application questions/tasks:
(6) How would you begin a video feedback recording in order to establish rapport with your students? Think about visual aspects and content. Make some trial recordings in order to discuss the potential effects on the viewers.
3.13 Screencast feedback (screen only, talking head optional)

3.13.1 Definition and alternative terms/variants

Screencasts are recordings of a digital screen together with voice-over narration (e.g. Thompson & Lee, 2012, n.p.; Stannard, 2019, p. 62; Vincelette & Bostic, 2013, p. 258). Screencasts have alternatively been termed screencaptures since the screen is captured during the recording (cf. Betty, 2008, cited by Sugar, Brown, & Luterbach, 2010, p. 2). In contrast to podcasts (see section 3.11), the audio experience is thus enriched by visual components (cf. Hynson, 2012, p. 54). Moreover, as opposed to videocasts (section 3.12), screencasts do not necessarily incorporate a video of the speaker, but may do so optionally. This would usually be in a small window in one corner of the screen (e.g. Cheng & Li, 2020; Mayhew, 2017; Özkul & Ortaçtepe, 2017; Stannard & Sallı, 2019, p. 459). Most screencasts, however, do not contain a speaker’s video (see the review by Schluer, 2021c). Screencasting is a common practice for creating software tutorials and step-by-step instructions (Jones et al., 2012, p. 594; Kılıçkaya, 2016, p. 73; Séror, 2012, p. 106; Stannard, 2015). This usually includes cursor movements, highlighting, typing and demonstrating software-specific operations (Honeyman, 2015, p. 1). Apart from software tutorials, PowerPoint recordings with voice-over narration are common (cf. e.g. Arnold, Kilian, Thillosen, & Zimmer, 2018, p. 241). Further applications of screencasts in educational settings are explanations of grammar rules or marking schemes as well as descriptions of relevant websites (Stannard, 2015) or of step-by-step task instructions (Stannard & Sallı, 2019, p. 460). Moreover, in the past years, screencasting has increasingly been used for feedback purposes (e.g. Ali, 2016, p. 108; Jones et al., 2012, p. 594; Mathisen, 2012, p. 102; Séror, 2012, p. 106; Silva, 2012, p. 1; Stannard & Mann, 2018, p. 96; Stannard & Sallı, 2019, p. 465; Thompson & Lee, 2012; Warnock, 2008, p. 211). Screencast feedback (SCFB) can be defined as an asynchronous audiovisual feedback method in which assessors record their screen while commenting on an electronic assignment in oral and visual ways (Anson, 2015, p. 376; Bakla, 2020, p. 109; Brick & Holmes, 2010, p. 339; Mann, 2015, p. 162; Schluer, 2020b, p. 44; Vincelette & Bostic, 2013, p. 258). This may include highlighting, drawing, editing, correcting and using other annotations synced with oral commentary, e.g. explanations, demonstrations and suggestions for further learning (Henderson & Phillips, 2014, p. 5; Nourinezhad, Hadipourfard, Bavali, & Kruk, 2021, p. 3; Schluer, 2021d, pp. 166-173; 2021e). Jones et al. (2012) state that it gives the impression of a live marking session that is available for future use (pp. 593, 594; cf. Stannard, 2006, p. 18; Turner & West, 2013, p. 290). As a result, Canals, Granena, Yilmaz and Malicka (2020) described it as “a novel type of technology-mediated feedback which is delayed in time but which incorporates features of immediate feedback” (p. 200). As with video feedback (see section 3.12), many alternative but also overlapping terms exist. For example, some common notions are “audiovisual feedback” (Cavaleri et al., 2019; Hyde, 2013; Mathieson, 2012; Nourinezhad et al., 2021, p. 3; Warnock, 2008; Woodard, 2016)
or “AV feedback” for short (Kerr et al., 2016), “multimodal feedback” (Campbell & Feldmann, 2017; Elola & Oskoz, 2016; Lenards, 2017; Schluer, 2021d; Zhang, 2018), “multimodal electronic feedback” (Cunningham, 2017a) or else “multimodal screen feedback” (Tyrer, 2021). Sometimes, screencasting was called “vodcasting” (Haxton & McGarvey, 2011, p. 19), short for “video podcasting” (Letón et al., 2018, p. 189), or more generally “video feedback”, even though these terms are ambiguous (see section 3.12.1). Occasionally, more specific terms, such as “video grading” (Schilling & Estell, 2014, p. 28) or “online video feedback” (Turner & West, 2013), were utilized in the literature. In addition to the visual screen display and markups, SCFB often only conveys the assessors’ verbal and paraverbal information, but not their non-linguistic gestures and facial expressions. SCFB thus comprises all feedback videos that either show the screen alone or the screen in combination with a webcam video of the person who is speaking. Feedback clips that solely show the assessor’s face, by contrast, are termed “talking-head” video feedback, as was explained in section 3.12.

3.13.2 Contexts of use, purposes and examples

SCFB has already been employed in a great number of disciplines and for a variety of assignment types (for an overview see Schluer, 2021c). Most commonly, it was used in language learning contexts, notably for improving students’ writing skills (e.g. Ali, 2016; Bakla, 2020; Cunningham, 2017b; Grigoryan, 2017a; 2017b; Moore & Filling, 2012; Silva, 2017; Vincelette & Bostic, 2013). Mostly, it was implemented to foster English language competencies, but occasionally also competencies in other languages, such as Spanish (Elola & Oskoz, 2016; Fernández-Toro & Furnborough, 2014; Harper et al., 2018), German (Speicher & Stollhans, 2015) or Japanese (Langton, 2019). Moreover, it was integrated into teacher education and teacher training (e.g. Borup et al., 2015; Cheng & Li, 2020; Mathisen, 2012; Schluer, 2020b; 2021a; 2021d; Stannard & Mann, 2018). Apart from that, SCFB was utilized in many further disciplines, including social and natural sciences, e.g. business and management (Comiskey, 2012; Griesbaum, 2017; Jones et al., 2012; Kim, 2018; Marriott & Teoh, 2012; Phillips et al., 2017), psychology (Anson, 2018; Anson, Dannels, Laboy, & Carneiro, 2016; Chiang, 2009; McVey, 2008; Ryan et al., 2016), political science (Anson, 2015; Mayhew, 2017), law (Phillips et al., 2017), social policy and social work (Soden, 2017), physical education (Cavaleri et al., 2013; Cranny, 2016; Ryan et al., 2016), women’s studies and gender studies (Anson, 2018; Anson et al., 2016; Mathisen, 2012) as well as in medicine (Sprague, 2017) and nursing (Brereton & Dunne, 2016; Mathisen, 2012), mathematics (Ryan et al., 2016) and engineering (Dunn, 2015; MacKenzie, 2021; Ross & Lancastle, 2019). In most reports, SCFB was mainly used by higher education staff to provide feedback to students. In a few studies, peer-to-peer SCFB (Schluer, 2021a; Silva, 2017; Walker, 2017) or learner-to-instructor SCFB (Fernández-Toro & Furnborough, 2014; McDowell, 2020a; 2020b; 2020c; Speicher & Stollhans, 2015) was employed. Moreover, SCFB was primarily considered beneficial for formative assessment, i.e. in-process support (e.g. Kerr, Dudau, Deeley, Kominis, & Song, 2016). However, summative assessments can also be done via screencasting if institutional regulations allow it (e.g. MacKenzie, 2021; McCarthy, 2015; 2020).
Due to the predominant focus on writing skills, most reviewed assignments were written in nature. However, as the range of disciplines implies, SCFB can be utilized for numerous other task types. In fact, anything that can be displayed on a screen could be assessed by means of SCFB, such as electronic texts, presentations, simulations or websites (e.g. Borup et al., 2015, p. 179; Delaney, 2013, p. 299; MacKenzie, 2021, p. 220; Martínez-Arboleda, 2018, p. 39; Perkoski, 2017, pp. 45, 47, 51-52). For instance, in language teacher education, Stannard and Mann (2018) employed SCFB for online courses created by students that comprised various materials and documents, assignments, quizzes, polls, videos and discussion boards (p. 103). Even photographs of handwritten or handcrafted assignments as well as videos of real-world artefacts or interpersonal interactions can easily be used for SCFB. Overall, it appears to be especially useful if the reviewed contents are visual in nature, such as film projects (McCarthy, 2015, p. 162), three-dimensional architectural drawings (Comiskey, 2012), games (Law, 2013, p. 329) or other 3D computer animations (McCarthy, 2020). Similarly, O’Malley (2011) outlined its affordances in subjects like chemistry, where graphic visualization and dynamic illustration are crucial (p. 28). Beyond that, Canals et al. (2020) employed it for learners’ video-taped oral interactions in an online space, which enabled the instructors to make corrections without disturbing the flow of communication. Thus, it becomes clear that virtually “[a]ny document, image, website, or video file one may access digitally with a laptop, desktop, or smartphone can become the focal point of a screencast review” (Walker, 2017, p. 368). The reasons for this flexible use derive from its affordances, which will be discussed next.

3.13.3 Advantages of the feedback method

The multimodality of SCFB and its resultant suitability for a variety of assignment types and learner needs is a central affordance. Multimodality is “the social practice of making meaning by combining multiple semiotic resources” (Siegel, 2012, p. 671; see section 2.1.6). This integration of various semiotic resources (texts, pictures, animations, videos, gestures etc.) is what distinguishes SCFB from written feedback, audio feedback and talking-head video feedback because it can combine all of them. In contrast to multimodal videoconferences (see section 3.14), though, it is asynchronous feedback. Overall, the students (and instructors) in most prior studies felt that SCFB enhanced “the quality and quantity of the feedback” (West & Turner, 2016, p. 400; see also Alharbi, 2017, p. 246; Bissell, 2017, p. 6). Regarding the quantity, SCFB “provides students with more information […] than written corrective feedback” (Ali, 2016, p. 108) because speaking is faster than typing or writing (Orlando, 2016, p. 157; cf. Kerr et al., 2016, p. 20; Vincelette & Bostic, 2013, pp. 266-267; see also section 3.11 on audio feedback and section 3.12 on video feedback). For example, Anson et al. (2016) measured that a five-minute screencast contained 745 words (i.e. roughly 149 words per minute) as opposed to only 109 words in written feedback (p. 388). Similarly, others observed an average of about 140 to 160 words per minute (e.g. Kim, 2018, p. 47; Soden, 2016, p. 215; Stannard, 2019, p. 61).
The greater number of words, however, does not result in a linear increase in the number of comments, but rather in greater depth and elaboration of the distinct points that are raised (Soden, 2016, p. 221; cf. Cunningham, 2017b, pp. 52-53, 55; 2019a, pp. 231, 233). Thus, “markers not only say more, they explain more, and they explain more clearly” (Hall et al., 2016, n.p., original emphasis). Learners profit from the depth of explanation and specificity, which helps them understand what they need to revise, how they could do it and why (Cavaleri et al., 2013, p. 30; Özkul & Ortaçtepe, 2017, p. 870; Thompson & Lee, 2012). As opposed to written feedback, for instance, learners did not feel the need to ask the teacher any further questions about the feedback they received (Cruz Soto, 2016, pp. 61-62; Cunningham, 2017b, pp. 31, 57, 61-62, 147; 2019a, pp. 222, 233, 235). SCFB thus appears to afford the “ability to provide expansive feedback and detailed explanations in a time-effective manner” (West & Turner, 2016, p. 402). In that respect, the multimodality enhanced the specificity and clarity of the feedback message through a synchronized showing and telling (Anson et al., 2016, pp. 380, 395-396; Cunningham, 2017b, pp. 56-57; Elola & Oskoz, 2016, pp. 63-64, 70; Mathisen, 2012, p. 105; Vincelette & Bostic, 2013, p. 258). It helps learners to contextualize, locate and understand the aspects an assessor is talking about (as reviewed by Soden, 2017, p. 3; cf. Bissell, 2017, pp. 7-8; Martínez-Arboleda, 2018, p. 34). In that regard, assessors frequently move the cursor to the passage they are referring to (Ali, 2016, p. 108; Edwards et al., 2012, p. 97; Elola & Oskoz, 2016, pp. 63-64, 70; Thompson & Lee, 2012, n.p.) or use underlining, bolding, coloring, zooming as well as other visual highlighting techniques (Bakla, 2017, p. 323; Schluer, 2021d). Learners may even “see the transformation from incorrect to correct form” (Harper et al., 2018, p. 286) through real-time corrections, including the deletion, insertion and movement of text (Orlando, 2016, p. 159). The track changes function would then help keep all corrections visible and traceable for the learners (Harper et al., 2018, p. 277; cf. Dunn, 2015, p. 9; Sprague, 2016, p. 24; see section 3.1). Apart from simple on-the-spot corrections, SCFB seems to be particularly suitable for demonstrating procedures and solution paths (Jones et al., 2012, p. 593), e.g. for calculations or the formatting of citations, page numbers or hanging indents (Anson et al., 2016, p. 396). Furthermore, instructors can navigate to course documents, slides, tutorial videos or other web resources which offer further help (Séror, 2012, p. 111; cf. Anson, 2018, p. 25; Cunningham, 2017b, pp. 9-10; Jones et al., 2012, pp. 594, 597; Mathieson, 2012, p. 149). Overall, SCFB thus not only seems suitable for local, lower-order corrections (Özkul & Ortaçtepe, 2017, p. 869), but especially for global, higher-order aspects, “such as the thesis, research question, organization, and claims and evidence” (Silva, 2012, p. 9). It even appears to encourage assessors to give macro-level comments instead of mere proofreading or editing (Boone & Carlson, 2011, p. 20; Cavaleri et al., 2013, pp. 19-20; Howard, 2018, p. 223; Orlando, 2016, p. 160; Vincelette & Bostic, 2013, pp. 257, 267). Sometimes, this prioritization was also triggered by the restrictions in recording time of some screencasting software (Howard, 2018, p. 220; Vincelette & Bostic, 2013, p. 270; see section 3.13.5).
It helped assessors to stay succinct (Harper et al., 2018, p. 289; Woodard, 2016, pp. 31-32) instead of overwhelming the recipients (Walker, 2017, p. 366). Moreover, the visual presentation of SCFB allows learners to “re-vision (seeing again)” their paper from a different perspective (Thompson & Lee, 2012, n.p.). They gain insight into the assessors’ reasoning and the way they are seeing and marking their work (cf. Anson et al., 2016, pp. 394-395; Mathisen, 2012, p. 109; Turner & West, 2013, p. 293; Vincelette & Bostic, 2013, pp. 270-271; West & Turner, 2016, pp. 400, 406). Likewise, the students’ progress and teachers’ support will become more transparent to the parents and other caretakers as opposed to the ephemeral oral comments during parent conferences or the terse remarks written on reviewed papers. Consequently, the clarity and comprehensibility of SCFB was emphasized in virtually every study (e.g. Ali, 2016, pp. 106, 115, 117; Anson, 2018, pp. 34-35; Cunningham, 2017b, pp. 53-59; Harper et al., 2018; Moore & Filling, 2012, pp. 10-11; West & Turner, 2016, p. 406). In addition, SCFB could be enriched by webcam captures of the feedback provider’s face, i.e. include a talking head (see section 3.12; e.g. Cheng & Li, 2020, p. 5; Mayhew, 2017; Özkul & Ortaçtepe, 2017; Stannard & Sallı, 2019, p. 459; for an example see https://www.youtube.com/watch?v=4oeuzlN49bg). This may help build up relationships by strengthening the social and teaching presence of the instructor. Moreover, seeing the teacher’s face can clarify the feedback content further via paraverbal and nonverbal cues, e.g. their gestures and facial expressions (section 3.12.3; see the review by Killingback et al., 2019, p. 37; cf. Mathieson, 2012, p. 143; McCarthy, 2020, p. 180; Thompson & Lee, 2012). Furthermore, the oral modality brings benefits in addition to the visual strategies. The oral comments convey emotional color, praise and encouragement as well as a depth and subtlety that cannot be transported by written feedback alone (cf. Séror, 2012, p. 111; Silva, 2017, p. 334). For instance, Middleton (2011) emphasized that “the recorded voice is suited to emphasising key points, and to motivating, orientating, and challenging the learner, and for encouraging them to take time to reflect on their work” (p. 26). It can feel encouraging even in the case of critical comments (Edwards et al., 2012, p. 105; Moore & Filling, 2012, p. 11; Walker, 2017, pp. 364-365). Beyond that, Mahoney et al.’s (2019) research review elucidated that positive feedback appeared to be more prevalent in SCFB than in many other modes (p. 161; cf. audio feedback in section 3.11, video feedback in section 3.12 and chat feedback in section 3.7). Students might thus feel more motivated to engage in future work (Kim, 2018, pp. 41-42; Mathisen, 2012, p. 111) and more confident in revising their papers (Moore & Filling, 2012, p. 11). Hence, apart from the helpfulness of the information content (Ali, 2016, p. 115; MacKenzie, 2021, p. 225), the affective benefits of SCFB were frequently highlighted. Notably, the personal and individualized nature of SCFB was identified as one of its main advantages (Ali, 2016, pp. 106, 114, 117; Anson, 2015, pp. 378, 384, 386; Cheng & Li, 2020, pp. 12-13; Grigoryan, 2017a; Jones et al., 2012, pp. 594, 601; Marriott & Teoh, 2012, pp. 591, 593; Stannard & Mann, 2018; Turner & West, 2013, pp. 288, 292-293; Zhang, 2018, pp. 25-26).
Personalization gives students the “feeling that the video is being directed right at them, rather than at an unnamed crowd” (Guo, Kim, & Rubin, 2014, p. 46; cf. Özkul & Ortaçtepe, 2017, p. 872). It makes the learners feel that the teachers have “a genuine interest in their work” and progress (McCartan & Short, 2020, p. 22) and care about them by creating individualized SCFB (Bakla, 2020, pp. 117, 120; Mathisen, 2012, p. 109; Silva, 2012, p. 9). Students often felt that SCFB could even convey a personal connection that is comparable to individual face-to-face meetings (e.g. Grigoryan, 2017a, pp. 100-101; Moore & Filling, 2012, pp. 5, 10; cf. the review by Mahoney et al., 2019, p. 162). In that regard, it seems to strengthen the rapport between instructors and learners (Ali, 2016, p. 109; Anson et al., 2016, pp. 392, 397; Harding, 2018, pp. 38-39; Vincelette & Bostic, 2013, pp. 265, 267; West & Turner, 2016; Zhang, 2018, pp. 21, 25). In contrast to face-to-face conferences, however, the affective stress is reduced (Séror, 2012, p. 110; see also Anson et al., 2016, p. 397; Fernández-Toro & Furnborough, 2014; Jones et al., 2012, p. 604). The time lapse between feedback reception and a possible follow-up meeting can help the recipients “regulate their emotional response” as compared to synchronous methods (McCartan & Short, 2020, p. 22). Furthermore, the repeated playback possibility allows the learners to reflect on and engage with the feedback in deeper ways (McCartan & Short, 2020, p. 18) than in ephemeral face-to-face discussions (similar to audio and video feedback in sections 3.11 and 3.12). In any case, it is possible to follow up with an interpersonal discussion and reciprocal exchange of information after SCFB has been obtained (Comiskey, 2012; Stannard, 2019, p. 68). SCFB even seems to encourage such follow-up dialogues (e.g. Orlando, 2016, p. 160), with learners taking SCFB as a starting point for further learning (cf. e.g. Mathisen, 2012, p. 107; Turner & West, 2013, p. 292; Zhang, 2018, p. 26). In that respect, they appreciate the constructive nature of SCFB that includes advice beyond immediate corrections (e.g. Ali, 2016, pp. 106, 117; Henderson & Phillips, 2014, p. 6; McCarthy, 2020, p. 185). Correspondingly, SCFB is associated with a high learning gain (Harding, 2018, pp. 34-35; Kim, 2018, pp. 39-40; Mathisen, 2012, pp. 106, 108; Ross & Lancastle, 2019, pp. 3962, 3967), even though the empirical evidence is limited so far (cf. the reviews by Ali, 2016, p. 109, Mahoney et al., 2019, pp. 166, 172-173, and Schluer, 2021c). Apart from that, the multimodality of SCFB caters for different learning needs, styles and preferences (e.g. Ali, 2016, pp. 108, 115; Chronister, 2019; Grigoryan, 2017a; Schilling & Estell, 2014; Sprague, 2017). This includes students (and/or their caretakers) who have special needs, e.g. due to learning impairments (Gormely & McDermott, 2011, p. 18; Jones et al., 2012, p. 595) or reading difficulties (Ali, 2016, p. 108; Grigoryan, 2017a, p. 106; Marriott & Teoh, 2012, p. 592; McDowell, 2020b, p. 22). Partially sighted students, on the other hand, may profit from the additional audio channel (cf. Bissell, 2017, p. 8; Schilling & Estell, 2014, p. 38; see audio feedback in section 3.11).
Moreover, screen text can be enlarged and colors can be changed so as to accommodate the special needs of visually impaired learners (Hope, 2011, p. 12; cf. text editor feedback in section 3.1). The multiple modes of representation also seem to be beneficial for learners with ADHD, as they may help them focus their attention (Chronister, 2019, pp. 41-42). Furthermore, the asynchronicity can support learners with autism “to overcome inhibitions in vocal delivery” (McDowell, 2020b, p. 28) and might even assist them in engaging in face-to-face interactions later on (pp. 31, 34). Moreover, as with audio and video feedback, learners have the opportunity to gain additional language input (Vincelette & Bostic, 2013, p. 267), hear the correct pronunciations (Langton, 2019, pp. 42, 52; Sprague, 2017, p. 155) and practice their listening skills (Bakla, 2020, p. 109; Elola & Oskoz, 2016, p. 69; Stannard, 2007). This is not only important for low-proficiency or beginning L1 speakers, but for L2 learners of any language (e.g. Ali, 2016, p. 118; Martin Mota & Baró Vivancos, 2018, p. 34; Thompson & Lee, 2012). Apart from the benefits for students, assessors with barely legible handwriting as well as dyslexic tutors can profit from screencasting. For example, as a dyslexic tutor explained in Jones et al.’s (2012) research, screencasting helped him reduce social anxieties and pressure, since he did not need to write comments when doing his assessment work (p. 603). In addition, screencasting might be preferred by teachers with physical impairments, e.g. carpal tunnel syndrome, which is often caused by excessive writing (Whitehurst, 2014; cf. audio and video feedback in sections 3.11 and 3.12). Also, it can help to prevent hand fatigue and wrist pain for all assessors who have to correct lots of student work (Kay et al., 2019, p. 1909; Sprague, 2017, p. 142). Finally, as with other digitally recorded feedback, SCFB is convenient for learners because they may access their feedback at any time and from any place and device they wish to (e.g. Cranny, 2016; McCartan & Short, 2020; Séror, 2012, p. 110; Stannard & Sallı, 2019, p. 469). This is particularly useful for part-time, online or distance learners (Anson, 2015, pp. 376, 387; Grigoryan, 2017a, pp. 100, 105-106; 2017b, pp. 464, 466; Stannard, 2019, pp. 66, 70; Stannard & Mann, 2018, pp. 102, 110). Especially non-traditional learners may profit from it, such as working adults, who can access the asynchronous audiovisual feedback at their own convenience (Grigoryan, 2017a; 2017b). The students do not need to remember all details at once but can watch and re-watch, rewind and fast-forward the SCFB at their own pace (Anson et al., 2016, p. 393; Harper et al., 2018, pp. 278, 289; Mahoney et al., 2019, p. 165; O’Malley, 2011, pp. 29-30; Robinson, Loch, & Croft, 2015, pp. 363, 377-378; West & Turner, 2016, p. 402). In addition, screencasts can be shared with other learners, tutors, parents and friends to obtain further support (cf. Mathisen, 2012, p. 101; Moore & Filling, 2012, pp. 10-11; Sprague, 2017, p. 109). Thus, as “permanent record[s]” (McCarthy, 2020, p. 181; Wood, 2019, p. 109), screencasts go beyond the confines of time and space (Bakla, 2017, p. 321; Sugar et al., 2010, p. 2).

3.13.4 Limitations/disadvantages

Despite its many advantages, the provision of SCFB goes along with some challenges, at least at the beginning of using it.
Due to their unfamiliarity with the method, assessors may feel an initial anxiety, especially when it comes to recording their voice (Atfield-Cutts & Coles, 2016; Howard, 2018, pp. 172, 221; van der Zijden et al., 2021, p. 60; Vincelette & Bostic, 2013, pp. 270, 273). Likewise, learners may initially experience anxiety before watching a feedback video for the first time (Ali, 2016, pp. 109, 118; Bakla, 2017, p. 329; Bush, 2020, pp. 9-10). As with audio feedback, assessors must be mindful of their choice of words and of negative emotions, especially when giving critical comments (Kerr et al., 2016, p. 20). In the end, however, most students experienced the instructors’ voice as friendly (Ali, 2016, p. 116; see section 3.13.3) and even preferred SCFB to other feedback modes (Sprague, 2017, p. 105). Beyond that, L2 learners may find it hard to follow the contents if the feedback provider talks too fast (Soden, 2016, pp. 224, 230; Stannard, 2007; Zhang, 2018, p. 27). Therefore, a balance needs to be struck between sounding natural and engaging and remaining comprehensible to the recipients. On the other hand, many media players nowadays allow for re-adjustments of the speed at which a video is played (see section 3.13.7 below). In addition, hearing impairments could be an obstacle to the reception of SCFB (Anson, 2015, p. 380; Hope, 2011, p. 12). The use of transcripts or captions might be a solution then (Anson, 2015, pp. 380-381; cf. Deeley, 2018, p. 445; Fang, 2019, pp. 30-31; see section 3.13.5). Moreover, a follow-up session is important to check students’ understanding. Even though SCFB often feels conversational (see section 3.13.3), it actually only constitutes a one-way interaction (e.g. Comiskey, 2012; Mahoney et al., 2019, p. 166; Stannard, 2019, pp. 67-68; Thompson & Lee, 2012). Hence, students were unable to immediately reply to the feedback and ask questions in most prior research projects (cf. Bakla, 2017, p. 328; Mann, 2015, p. 171; O’Malley, 2011, p. 30; Özkul & Ortaçtepe, 2017, p. 873). Nowadays, however, several video platforms encourage interactive exchanges by allowing video commenting in various ways (see section 3.13.5). Examples include VoiceThread (https://voicethread.com/) as well as EduBreak (https://edubreak.de/) and, to a more limited extent, YouTube (https://www.youtube.com/) and Loom (https://www.loom.com/). This notwithstanding, students and lecturers might simply have different feedback preferences (McLaughlin et al., 2007, p. 338). Sometimes, however, they do not yet know what other feedback methods exist or could even show reluctance towards them because they do not want to change their established ways of working (Mann, 2015, p. 175; Özkul & Ortaçtepe, 2017, p. 874). For instance, instructors may fear that SCFB production is a complex process (Fang, 2019, p. 93) or that their videos could be redistributed via social media (cf. the review by Cranny, 2016, p. 29106; Fang, 2019, p. 43; Howard, 2018, p. 222). This can likewise happen with photos of written corrections, though. At any rate, an illegal distribution would be the fault of the distributors, not of the feedback providers. However, just as some instructors provide better or worse feedback in other modes (e.g. handwritten feedback), screencasts or videos do not automatically guarantee a higher feedback quality (cf. Cranny, 2016, p. 29117; Hope, 2011, p. 11; Soden, 2017, p. 13; see also the novelty bias mentioned by West & Turner, 2016, p. 405).
Instead, the provision requires practice as well as new strategies on the part of providers and receivers, which need to be trained (see e.g. Bakla, 2020, pp. 120-121; Stannard, 2007; see sections 3.13.6 and 3.13.7). Indeed, the “multiple modalities may overwhelm” some students (Fang, 2019, p. 36; cf. Henderson & Phillips, 2015, p. 63) because they require them to listen, think and write at the same time (Ali, 2016, pp. 108, 115; Elola & Oskoz, 2016, p. 69). Especially if SCFB videos are disseminated without the reviewed document, learners could experience difficulties in locating the errors the assessor is talking about (Elola & Oskoz, 2016, p. 67; cf. the reviews by Kay & Bahula, 2020, p. 1891, and by Stannard, 2019, p. 67). As this might easily become frustrating for the learners (cf. Moore & Filling, 2012, p. 10), explicit signposting during the digital recordings would be important to ease localization (Ryan et al., 2019, p. 1516; see section 3.13.6). On the other hand, the additional distribution of written feedback (see e.g. Boone & Carlson, 2011, p. 18; Brick & Holmes, 2010, pp. 340-341; Soden, 2016, p. 231) would increase the time investment, which is perceived as high by many teachers anyway. Overall, it is still unclear whether SCFB is a time saver as compared to other feedback methods (Bakla, 2017, p. 327; Fang, 2019, pp. 118, 147; Ghosn-Chelala & Al-Chibani, 2018, p. 150). Even if the time for recording the feedback is as short as or even shorter than that for noting down (hand-)written comments, the instructors nevertheless need to additionally convert or render the video (Hope, 2011, p. 8; Soden, 2016, p. 215; 2017, p. 10) as well as upload it to share it with the learners (Bakla, 2017, pp. 327-328; Cunningham, 2019a, p. 237; Schilling & Estell, 2014, p. 35). For this reason, staff may complain about the greater “workload to produce feedback files” and disseminate them to the students (McCarthy, 2015, p. 164). Schilling and Estell (2014) claim that producing SCFB in a time-efficient manner probably “does not allow for careful preparation and editing due to the number of videos that must be created” (p. 30). In fact, hesitations, minor mishaps (such as missing words or mispronunciations) and self-corrections “are a normal part of our speech” and make it sound natural (Orlando, 2016, p. 164; cf. McCartan & Short, 2020, p. 26). Also, interruptions or background noises, such as singing birds, give a natural or authentic feel to the recording (Walker, 2017, pp. 364, 366) that makes the assessor appear more human and real to the recipients’ ears (the same holds for audio and video feedback). The interruptions might even be briefly addressed or apologized for in the video, or else the pause function of the recording program could be utilized as soon as an interruption occurs (Ross & Lancastle, 2019, p. 3965; see section 3.13.6). Moreover, it has generally been acknowledged that SCFB production will become faster with experience (Atfield-Cutts & Coles, 2016; Bakla, 2017, pp. 328-329; Mathisen, 2012, p. 107; Nourinezhad et al., 2021, p. 15; Warnock, 2008, pp. 205, 210). Hence, it requires “an initial investment of time and effort” (Séror, 2012, p. 108), but in the end, it may even reduce the teachers’ workload (cf. Griesbaum, 2017, p. 695; Mathisen, 2012, p. 107). Importantly, when setting up a time cost-benefit analysis of different feedback methods (cf. Turner & West, 2013, pp. 293-294), assessors need to consider that “the finished product is richer and denser in information than traditional forms” (Brick & Holmes, 2010, p. 341).
Accordingly, Fang (2019) argued that “screencasting holds great potential not necessarily for the quantity of time-saving but for the quality of feedback” (p. 150). However, even experienced users of SCFB might not be “sure whether they were using the method in a way that was appropriate or efficient” (Fang, 2019, p. 102). Some, for instance, wanted further information and an introduction to new tools that had evolved in the meantime (Fang, 2019, p. 134). Therefore, ongoing training and sufficient opportunities for practice seem crucial (e.g. Turner & West, 2013, p. 294; Nourinezhad et al., 2021, p. 15; see section 4.2.3). Interestingly, a participant in Fang’s (2019) study made a distinction between “personal bandwidth”, i.e. the investment of one’s own time, effort and energy in SCFB production, and “technology bandwidth”, i.e. technological barriers (p. 118). In total, however, technological problems were comparatively rare in the surveyed studies. One of the most commonly reported limitations was the poor audio quality of SCFB (e.g. Ali, 2016, pp. 110, 116, 118; Comiskey, 2012; McCartan & Short, 2020, pp. 16-17). With the continuous advances in technology, these and other problems are likely to decrease in the future, e.g. regarding the storage space (Anson, 2015, p. 381; Fang, 2019, pp. 88-89; Soden, 2017, p. 10) and long download times of video files (as reported by e.g. Ali, 2016; Bakla, 2017, p. 329; Cruz Soto, 2016, pp. 49-50; McCarthy, 2015). However, in some regions of the world or for some students (and teachers) at least, slow computers (Cruz Soto, 2016, pp. 49-50), slow internet connections or interruptions as well as high download costs may still be an issue (cf. e.g. McCarthy, 2020, p. 188; Özkul & Ortaçtepe, 2017, p. 872; Rahman, Rahim Salam, & Yusof, 2014, p. 971; Swartz & Gachago, 2018). The required equipment will be considered in more detail below.

3.13.5 Required equipment

SCFB can be created on a PC or laptop or on a mobile device, such as a tablet or smartphone (see e.g. O’Malley, 2011, p. 28; Robinson et al., 2015, p. 371). Moreover, the screen of interactive whiteboards (Bakla, 2017, p. 325) or of videoconferences (see section 3.14) can be turned into a screencast video. Many tablets, iPads and smartphones have a built-in screen recorder in addition to a screenshot function, though. If not, a screen recorder can be downloaded from the app store (cf. Martin Mota & Baró Vivancos, 2018, p. 36). Especially for small devices, the use of a smart pen (e.g. the Apple Pencil or another digital pen) is highly recommended, as it eases the screen navigation process and allows assessors to draw highlights or symbols during the commenting process. This way, handwritten notes can be incorporated into the SCFB (O’Malley, 2011; cf. McLaughlin et al., 2007; McVey, 2008). To improve the audio quality, an external microphone or a headset could be utilized (cf. e.g. Cunningham, 2017b, p. 14; Kerr & McLaughlin, 2008, p. 3; Orlando, 2016, p. 163; Vincelette & Bostic, 2013, p. 272). With modern devices that do not produce any hissing noises, the internal microphone is often sufficient. If the webcam is recorded as well, then adequate lighting would be important (Martin Mota & Baró Vivancos, 2018, p. 35; see section 3.12.6). Regarding software, assessors can choose from a variety of products.
Some of them work online and thus require permanent internet access, whereas many other programs can be downloaded for offline use. For example, Loom (https://www.loom.com/) can be installed as a Chrome extension or desktop app, but requires a stable internet connection in both cases. In addition to the screen recording, users may display their webcam or a photo of themselves. The videos are stored on the Loom server so that sharing is fairly easy. In the basic version, recordings are limited to a length of 5 minutes and a total of 100 videos. The full version is free for verified teachers and offers a wider range of functions (see also Waltemeyer & Cranmore, 2018). For instance, feedback providers may insert an interactive hotspot (“call-to-action”) into their videos, and recipients can reply with a written comment or even with an audio or video recording to continue the feedback dialogue. Moreover, transcripts can be generated to ease the navigation in the videos (manual insertion of transcripts or automatic transcripts that can be edited further). However, the recipients will also need a Loom account in order to access the videos and respond to them. Therefore, educators should check whether Loom complies with the data protection regulations of their school or institution.

To be on the safe side, the best solution would be to create and disseminate screencast videos in the LMS of the educational institution (cf. Ross & Lancastle, 2019, p. 3965). Indeed, several integrated solutions already exist. To exemplify, Opencast (https://opencast.org/) is a screenrecorder and editor that can be integrated into Moodle, Stud.IP and other platforms. Further, the Interactive Video Suite (https://interactive-video-suite.de/de/startseite) can be used to engage in interactive video discussions. Apart from that, the LMS Canvas (https://www.instructure.com/) allows for different types of digital feedback, including SCFB (e.g. Fang, 2019, pp. 62, 118; Lee & Bailey, 2016, pp. 139-141; cf. Chronister, 2019, p. 54; Harding, 2018).

For all other users, a wide variety of software alternatives exists, from which they should choose the one that best caters to their needs. For example, Opencast also offers a simple screenrecorder on the web for free that does not require prior registration. Instead, the screen (and webcam) as well as the audio recording can be started directly on the Opencast Studio website (https://studio.opencast.org/). After the recording, users can download the video immediately (as a .webm file) or connect it with the Opencast platform. The editing features are limited to trimming, though.

Screencastify (https://www.screencastify.com/) is a Chrome extension for video recording and editing that works with a Google account (see e.g. Penna, 2019). The free version limits the recording and exporting time to 5 minutes per video, which is usually sufficient for feedback purposes. During the recording, several annotation tools are available to the assessors, including the insertion of smileys and stickers, rectangle shapes as well as drawing with a digital pen in different colors, all of which can be erased by clicking “clear screen”. The editing features comprise cropping, trimming, merging and splitting clips, zooming and blurring as well as adding text and interactive questions. The files are stored on Google Drive, but an .mp4 export is likewise possible.
With regard to offline or desktop applications, it still often makes a difference whether one works on a Mac or a Windows computer. Mac users are usually equipped with a built-in screenrecorder right from the start. Two common software programs are iMovie (https://www.apple.com/de/imovie/) and QuickTime (https://support.apple.com/de-de/guide/quicktime-player/welcome/mac).

Camtasia (https://www.techsmith.com/) is a commercial software that is available for both Mac and Windows. It grants users a free 30-day trial of all features, but places a watermark in the videos. The full version requires a one-time fee rather than a subscription; the rather high price is reduced for educational purposes, though. This software offers a wide range of functions for video editing and is popular among screencasters and other video creators. Likewise, it was utilized in many SCFB studies. Beyond screen (and camera) recording, various file types (pictures, videos, audio) can be uploaded and arranged freely on the multi-layered tracks in the video editor. Moreover, numerous annotations can be added and customized by the users. Users may even export these annotations as templates for recurring usage, which will save time if screencasts become a regular part of the feedback process (cf. Hope, 2011, p. 10; Silva, 2012, p. 6). Another important affordance is the incorporation of additional resources via “interactive hotspots”. These interactive buttons may contain hyperlinks to websites as well as questions (e.g. quiz questions or questions for reflection) that the viewers should answer before continuing with the video. To make use of these interactive possibilities, the video needs to be played in a browser. This can be done either by sharing the “Smart Player” (HTML5) files with the recipients or by uploading the clip onto a video platform, such as Screencast.com (https://www.screencast.com/). Most screencasting tools do not yet offer these interactive functionalities. However, assessors may use the H5P overlay application (https://h5p.org/) to add interactive buttons to (existing) .mp4 videos.

Alongside Camtasia, the freeware Jing was utilized in many studies. It is a simple screenrecorder from the same developer (TechSmith). By now, it has been replaced by the newer tool TechSmith Capture (https://www.techsmith.com/jing-tool.html), which no longer relies on Flash technology, but produces .mp4 files. Beyond that, there are several further developers of screenrecording software, some of which are exclusive to Windows, while others are compatible with Windows and Mac. Each one has different affordances and limitations, which is why assessors are invited to choose the tool that best suits their needs. Examples include ActivePresenter (https://atomisystems.com/activepresenter/) for Windows and Mac, FlashBack Recorder (https://www.flashbackrecorder.com/de/) for Windows, the freeware Screencast-o-matic (https://screencast-o-matic.com/), which is available as a browser extension or as a desktop version, as well as Open Broadcaster Software (OBS) Studio (https://obsproject.com/de/). Finally, if feedback is provided on PowerPoint presentations, the recording function built into Microsoft PowerPoint could be utilized. Advice for the didactic implementation will be given next.

3.13.6 Implementation (how to)
SCFB can be produced in many different ways, ranging from very simple to sophisticated.
The production usually entails several steps that can become automated once a certain workflow has been established.

Preparation:
First of all, it is recommended to do the recording in a rather quiet place and to have all equipment ready for use (Borup et al., 2015, p. 175; Thompson & Lee, 2012; van der Zijden et al., 2021, p. 60; Vincelette & Bostic, 2013, p. 269). As Haxton and McGarvey (2011, p. 20) caution, one should also turn off other programs that might produce pop-up notifications (e.g. email clients), as these would otherwise be recorded in the screencast. Moreover, an electronic version of the assignment is required (such as a Word, PowerPoint or Excel file, a website or video). Thus, learners will need to submit an electronic document to the assessor, e.g. via the LMS (Durham University, 2019; Jones et al., 2012, p. 596), a cloud server (e.g. Google Drive: Woodard, 2016, p. 31) or email (Stannard, 2006, p. 18). Otherwise, assessors may quickly take a photo of the (handwritten or handcrafted) assignment by using their smartphone or tablet (Woodard, 2016, p. 32; cf. the scanning in O’Malley, 2011, p. 29; Özkul & Ortaçtepe, 2017, p. 865; Séror, 2012, p. 108). For three-dimensional handcrafted artifacts, a video clip could be recorded and then used as a basis for the SCFB.

Next, the screencast content needs to be prepared, both the visual content and the audio comments that the assessor intends to convey. This entails making a plan for the contents to be recorded and talked about (Martin Mota & Baró Vivancos, 2018, p. 35; McCartan & Short, 2020, p. 26; see e.g. Spencer, 2012, pp. 160-163, on the writing of effective scripts and storyboards). Moreover, it presupposes a technological familiarization with all required tools (cf. Delaney, 2013, p. 300). Even though assessors may start the recording immediately, i.e. without looking at the assignment in advance (e.g. Martínez-Arboleda, 2018, p. 40; Schilling & Estell, 2014, p. 30), this usually leads to rather disorganized and unfocused screencasts (Stannard, 2007; see also Delaney, 2013, p. 300). A mere think-aloud in the screencast could be too confusing for the recipients (Mann, 2015, p. 173) or might otherwise increase the editing time (Swartz & Gachago, 2018). Therefore, at least a brief perusal of the submitted work is recommended, so as to identify major strengths and weaknesses and to present them in an appropriate and understandable manner to the feedback recipients (Stannard, 2007; 2008, p. 18). Alternatively or additionally, assessors might want to insert comments (e.g. Cheng & Li, 2020, pp. 14-15) or mark important sections in the submitted assignment, e.g. a text document (see section 3.1). These comments or highlights may then serve as “talking points” (Kay et al., 2019, p. 1913), i.e. as reminders for assessors to address these aspects in the video, but also as scaffolds that help the recipients locate specific passages more easily and work on them when watching the SCFB (cf. Cheng & Li, 2020, p. 5; Cunningham, 2017b, p. 86). Especially for SCFB novices and recordings in a foreign language, such a rough plan could help reduce assessors’ “performance anxiety” (Soden, 2017, p. 12). A script, however, should not be fully written out (cf. Durham University, 2019), but rather constitute an action plan or outline, so as to ensure a natural and authentic tone (cf. Henderson & Phillips, 2014, p. 6; McCartan & Short, 2020, p. 26; Orlando, 2016, p. 163; see also audio and video feedback in sections 3.11 and 3.12).

Depending on the assignment, the error types and personal preferences, assessors may use specific annotation strategies. For text documents, the various editing and review features of word processing software can be utilized, as described in section 3.1.6. For instance, assessors could enable the “Track changes” function (e.g. in Microsoft Word) to make the text corrections visible (cf. Schluer, 2020a, p. 9; Silva, 2012, p. 6). This might be especially useful for lower-level corrections. For higher-order remarks, the “Comment” function of the word processor can be used (Mathieson, 2012, p. 147; Silva, 2012, p. 6). In these comment bubbles, assessors may insert hyperlinks that direct the learner to further resources (cf. Soltanpour & Valizadeh, 2018, p. 128). They can also contain explanations, encouragement or questions for clarification (cf. Mathieson, 2012, p. 147). In addition, a summative comment could be inserted at the end of the electronic draft (Mathieson, 2012, p. 147; Silva, 2012, p. 6; see section 3.1.6). However, editing and commenting can also be kept to a minimum for screencasting purposes.

Moreover, the highlighter tools of the text processor or the screencasting app can be purposefully exploited for SCFB. In that regard, it is frequently recommended to set up “a color-coding system to differentiate between [different] types of comments” (Thompson & Lee, 2012, n.p.; cf. Harper et al., 2018, p. 277; Langton, 2019, p. 38; Schluer, 2020a, p. 9; Vincelette & Bostic, 2013, p. 268). This means that “[e]ach type of issue would be assigned a [particular] colour” (Martínez-Arboleda, 2018, p. 37) that would then serve as a “visual scaffolding” for the screencast recipients (Séror, 2012, p. 109). To illustrate, yellow might be used for recurring linguistic errors, pink for content issues or argumentation, blue for citation and green for praiseworthy aspects (cf. Delaney, 2013, p. 300; Thompson & Lee, 2012; a brief sketch of such a legend follows below). Certainly, this color-coding system needs to be clear to the learners (cf. Harding, 2018, p. 12; Martínez-Arboleda, 2018, p. 37), just as the meanings of any other error code and assessment criterion (see section 2.1.2).

Generally speaking, these different comments, highlights and other edits can be applied either prior to the recording or during it (cf. Bakla, 2017, p. 323; Harper et al., 2018, p. 283; Jones et al., 2012, p. 597; McGarrell & Alvira, 2013, p. 47; Schluer, 2021e). Some of them might also be added later in a video editor, notably text highlights and comments (Schluer, 2020a). At any rate, before starting a recording, the content to be captured should already have been opened on the screen to avoid long loading times at the beginning of the screencast (Jones et al., 2012, p. 597; Schluer, 2020a, p. 10; Whitehurst, 2014). This includes the submitted assignment as well as all other resources that assessors want to show during the recording, e.g. the assessment rubric, PowerPoint slides, handouts, images, graphics or websites (cf. Mathieson, 2012, p. 147; Turner & West, 2013, p. 291; West & Turner, 2016, p. 402). Alternatively, the additional resources could be inserted later on during video editing. At the same time, assessors should hide all irrelevant information and close all unnecessary browser tabs.
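The color legend mentioned above can also be kept in a small script so that the same wording is reused across assignments and can be pasted into the annotated draft, the LMS comment box or an accompanying email. The following minimal sketch (in Python) merely illustrates the principle; the category-color pairings follow the examples given above and are, of course, adjustable:

FEEDBACK_COLORS = {
    "yellow": "recurring linguistic errors",
    "pink": "content and argumentation",
    "blue": "citation and referencing",
    "green": "praiseworthy aspects",
}

def legend_for_students(colors=FEEDBACK_COLORS):
    # Build a plain-text legend that explains the meaning of each
    # highlight color used in the screencast feedback video.
    lines = ["Color legend for the highlights in your feedback video:"]
    for color, meaning in colors.items():
        lines.append(f"  {color:<7} = {meaning}")
    return "\n".join(lines)

print(legend_for_students())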
In addition, some assessors may even consider creating a separate “Screencast” user account that shows nothing but the essentials (Holland, 2020). However, most screencasting programs permit their users to select a customized screen region for the recording. This way, the feedback providers can hide all irrelevant window elements, such as the toolbar or the browser’s bookmarks bar (Holland, 2020). Hence, when starting the screencasting software, the assessors should decide whether they want to record the full screen or only a part of it (O’Malley, 2011, p. 28). Moreover, they should make sure to select the correct audio source, preferably an external microphone or headset.

Camera usage:
If assessors wish to record their webcam video in addition to their voice and screen, the webcam needs to be selected as a source as well. Furthermore, the camera might need to be repositioned to ensure that the assessor’s head and shoulders are visible on the screen, “with enough space in the frame to allow some movement and capturing of hand gestures” (Henderson & Phillips, 2014, p. 6; see video feedback in section 3.12). Some video software allows for flexible adjustments of the camera and screen sizes during the recording and editing process (e.g. Camtasia), while others merely display the webcam in a small, predetermined corner of the screen. If that is the case, a solution could be to utilize a videoconferencing software (see section 3.14.5) in which the sizes and positions of the different window parts can be flexibly arranged. In contrast to synchronous webmeetings, though, only the assessor but not the feedback recipient would attend that videoconference.

Overall, however, research on the usefulness of displaying the camera in addition to the screen is comparatively rare and has yielded mixed results. For one thing, the additional display of the assessor’s face could distract learners from concentrating on the contents or might even result in cognitive overload (cf. McDowell, 2020c, pp. 134-135). Presumably, the cognitive load depends on the individual learners and their prior knowledge in relation to the contents and the resultant intrinsic load of the feedback message. Besides questions about the size of the talking-head video, another important concern would be its timing and duration. For didactic reasons, it would make sense to show a large or even full-screen talking-head video whenever relational work is crucial, but to minimize or hide it when the content of the reviewed work should be in the center of attention. Thus, assessors might want to
■ begin a feedback video by displaying their webcam in full size in order to establish rapport,
■ then switch to a screenrecording of the assignment to annotate it visually and comment on it orally, and
■ return to webcam mode at the end of the video in order to motivate learners to utilize the feedback in their revisions or subsequent assignments (see structural suggestions in section 2.1.4).

Instead of recording their talking head, another possibility would be to film hand gestures only (or certain objects or processes). This would be beneficial for explaining handwriting to young learners (or e.g. Chinese characters to foreign language learners) as well as for giving feedback on handcrafted assignments in arts or other subjects. For this purpose, assessors would position their camera, e.g. on a tripod, in such a way that only their hands are filmed. To illustrate, a teacher in the field of textile design might display a photo of students’ stitching results in one part of the screen and then record their hands in another part of the screen while correcting some of the stitches or illustrating suggestions for further improvement. The learners would thus gain procedural support for stitching through demonstrations and explanations (cf. Kroeger, 2020).

Recording options:
If no talking-head video is involved, assessors can choose to record the audio and the screen simultaneously or successively (Schluer, 2020a, p. 10; 2021e). In other words, narration could either accompany the live screenrecording (Morris & Chikwa, 2014, p. 26; Sugar et al., 2010, p. 2) or be recorded before or after it (cf. Séror, 2012, p. 106). In fact, coordinating speaking and showing can be very demanding (Vincelette & Bostic, 2013, pp. 271-272), especially for novices, which is why they might first want to record their voice only and then listen to it while capturing the pertinent passages on the screen (Schluer, 2020a, p. 10), or vice versa (Ruffini, 2012). They could speak their comments into their smartphone or record them on their computer, e.g. by using the freeware Audacity (cf. McCarthy, 2015, p. 157; Merry & Orsmond, 2008, p. 3; see section 3.11.5). Afterwards, they would show and capture everything on the screen while listening to their recorded audio (cf. Schluer, 2020a, p. 10; Spencer, 2012, p. 160). The different audio and video tracks would then be joined with the help of a video editing program (Schluer, 2020a, p. 10); a scripted alternative is sketched at the end of this passage.

Voice and pace:
In any case, the assessors need “to be in the right frame of mind” to avoid a tired-sounding voice and ensure an encouraging tone (Jones et al., 2012, p. 603; cf. Kay et al., 2019, p. 1909). They should not rush through the feedback message, but deliver it at an adequate pace and with appropriate modulation (Merry & Orsmond, 2008, p. 9). The same is true of the speed of the screen actions. This will make it easier for the recipients to follow and understand the feedback contents (Schluer, 2020a, p. 10).

Screen actions:
In fact, screencasters can resort to a variety of screen manipulations during and after the recording (see e.g. Schluer, 2021d). During the screencasting, users can scroll through a document, zoom in on certain passages (Holland, 2020) as well as utilize the mouse pointer and highlighting functions to delineate areas that were solved successfully or need improvement (e.g. Thompson & Lee, 2012). “[E]xcessive scrolling” should be avoided, though, since this might have a negative impact on the viewers (Vincelette & Bostic, 2013, p. 268; cf. Walker, 2017, p. 390). Similarly, all other deictic strategies need to be employed purposefully during the recording (Martínez-Arboleda, 2018, p. 41; Rybakova, 2020, p. 506) or during post-production (transition effects, info boxes, animations etc.). Overall, assessors may apply various strategies, e.g. to direct learners’ attention and increase their active processing of the SCFB contents. For directing attention, signaling or cueing strategies can be utilized (Brame, 2016, p. 2). This means that information is highlighted, e.g. by the use of different colors (each with a specific meaning), underlining or pointing (De Koning et al., 2009, cited by Brame, 2016, p. 2).
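For the successive workflow described under “Recording options” above, the final joining of the narration and the silent screen capture can also be scripted instead of being done manually in a video editor. The following minimal sketch (in Python) assumes that the free command-line tool ffmpeg is installed; the file names are illustrative assumptions:

import subprocess

def mux_voice_over_screen(screen="screen.mp4", voice="voice.wav",
                          out="feedback.mp4"):
    # Take the video stream from the screen capture and the audio
    # stream from the narration, copy the video without re-encoding,
    # and stop at the shorter of the two inputs.
    subprocess.run([
        "ffmpeg", "-i", screen, "-i", voice,
        "-map", "0:v", "-map", "1:a",
        "-c:v", "copy",
        "-shortest", out,
    ], check=True)

mux_voice_over_screen()

If narration and screen capture differ noticeably in length, trimming one of the tracks in an editor beforehand remains the safer option.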
As localization can be tricky in SCFB (see section 3.13.4), care should be taken to orally refer to or show a particular page or line number whenever needed (Cunningham, 2017b, p. 13; cf. Ryan et al., 2019, p. 1516).

Involvement of learners:
To promote learners’ cognitive activity while watching the video, assessors may ask questions for reflection or include interactive elements, such as hyperlinks (Brame, 2016, pp. 4-5). To illustrate, interactive hyperlinks could direct the learners to relevant course materials or other internet resources, for example to explanations of grammar rules or other correction advice as well as to corpora that demonstrate word usage in context (cf. Mahboob & Devrim, 2011, p. 8; Schluer, 2020a). The feedback providers might also employ interactive hotspots in the following way: in the case of a recurring error, they could ask the learners whether they have really understood a certain grammar phenomenon. When watching the videos, the learners then have to choose between two clickable options. The first one might say, “Yes, I struggle with this”, and lead to an external website which provides further information (cf. Loch & McLoughlin, 2011, p. 819). The second hotspot says, “I have understood this phenomenon and do not need further information”, and the video would continue as usual. Optionally, it might also contain a quiz question that tests the learners’ knowledge before they are allowed to move on to the next part of the video. Alternatively, learners could be requested to pause the video at particular points and reflect on the issues that were raised before they continue the video (Loch & McLoughlin, 2011, p. 819; Martínez-Arboleda, 2018, p. 40). The interactive questions and pausing requests may also motivate the learners to take notes (Brame, 2016, pp. 4-5), to find a solution to a problem that was addressed in the video (Loch & McLoughlin, 2011, p. 819) or to implement the changes directly in their drafts. Similarly, brief pauses in the videos themselves can give learners room for reflection on the contents (Loch & McLoughlin, 2011, p. 819; cf. Cunningham & Link, 2021, p. 6). Finally, transfer tasks (or follow-up learning tasks) should be integrated, so as to give learners the chance to apply the newly gained information (cf. Martínez-Arboleda, 2018, pp. 32, 37, 40, 42; see section 2.2.4). This is true of any feedback follow-up, but in video format, these tasks can be incorporated interactively by using quiz questions (Brame, 2016, p. 5) or by asking for a video reply (cf. McDowell, 2020b).

Time management:
Given this variety of possibilities for SCFB production, assessors may sometimes need a pause during the recording to recollect their thoughts, to (re-)read a section and to plan the next steps. A strategic use of the pause button of the screenrecorder would therefore be helpful (Hynson, 2012, p. 56; Schluer, 2020a, p. 10; Séror, 2012, p. 109; Walker, 2017, pp. 366, 390). Moreover, it can help suppress unwanted background noises without excessive editing afterwards. All in all, the resultant video should not exceed 5 minutes, if possible (Comiskey, 2012; Durham University, 2019; Moore & Filling, 2012, p. 10; Soden, 2017, p. 9). Feedback providers might first have to learn to manage this length successfully (Hyde, 2013, p. 2; Séror, 2012, p. 109). At any rate, it is essential to preview the recording before passing it on to the recipients (Martin Mota & Baró Vivancos, 2018, p. 35; McCartan & Short, 2020, p. 26; Séror, 2012, p. 109). This way, superfluous details can also be cut out by using the trimming tool that most screenrecording software offers (e.g. Loom).

Distribution:
The screen video should ultimately be saved in an appropriate format (usually .mp4) to avoid incompatibilities with students’ media players (cf. Bakla, 2017, p. 328; McCarthy, 2020, p. 186; Whitehurst, 2014). Moreover, giving the file a straightforward title before distributing the video or video link to the recipient would be helpful (cf. Hynson, 2012, p. 56; see the sketch below). In that respect, it would be safest and most convenient for learners and teachers alike to utilize the LMS via which the assignment had been submitted (cf. Jones et al., 2012, p. 597; Stoneham & Prichard, 2013, pp. 1-2). Alternatively, the distribution may occur via a secure cloud (Bush, 2020, p. 4) or a video platform in which videos can be shared by means of a direct, non-searchable link (Atfield-Cutts & Coles, 2016; Stannard & Mann, 2018, pp. 103-104; Whitehurst, 2014). If the electronic draft is disseminated to the student together with the video link, the latter can be pasted into the draft, together with a comment such as “To see and hear your instructor’s comments, click here.” (Woodard, 2016, p. 31, original emphasis). This appears to be most convenient when a cloud-based document, such as a Google Doc, is used (see section 3.3), as no additional upload of the text document will become necessary (Woodard, 2016, p. 31). Especially if the SCFB was produced on a tablet or smartphone, the video could also be distributed via a messenger app (Martin Mota & Baró Vivancos, 2018, p. 36; see section 3.7), as long as this complies with the data protection regulations of the institution. However, the small screen size might be a hindrance for recipients who view the video on such a device.

Further advice:
Regarding the sequencing of the SCFB contents, several scholars seem to follow the general advice for feedback that was described in section 2.1.4 (see e.g. Henderson & Phillips, 2014, p. 7; Phillips et al., 2017, p. 365; Schluer, 2021d, p. 166). For SCFB (and other recorded feedback) in particular, it is recommended to delimit the range of aspects to be discussed. Hence, error patterns should be identified rather than going through the document in a sequential manner (Henderson & Phillips, 2014, p. 7). This is particularly true of long and complex works. For instance, van der Zijden et al. (2021) recommended a maximum of three different aspects for inclusion in the SCFB (p. 51); Whitehurst (2014) suggested two to three skills that have been mastered well plus two to three areas that need improvement. For this, assessors may sequence or partition their feedback contents based on the relevant assessment criteria (e.g. the major aspects of content, form and language). They might also display the assessment rubric throughout the SCFB by using a split-screen approach. This will help them to structure their feedback and explain the successes and required improvements (cf. Sabbaghan, 2017, pp. 96-97; Whitehurst, 2014). Alternatively, a sequencing of the video contents can be achieved through visual segmenting in other ways, e.g. by inserting text boxes or transition slides or even by creating an interactive table of contents (Brame, 2016, p. 5; Stannard & Sallı, 2019, p. 461; cf. Mayer & Moreno, 2003).
The segmenting procedure can be eased by the use of video editors (e.g. Camtasia), which offer a range of functions (Haxton & McGarvey, 2011, p. 19), also for re-using distinct video parts. To exemplify, assessors may choose to produce generic (or group) screencast feedback instead of individualized SCFB. This means that one screencast is produced for a group of students or even an entire class (Kay, 2020, p. 5; O’Malley, 2011, p. 30; Stannard, 2007). It is useful for recurring errors and saves time for the assessors (cf. Crook et al., 2012; Soden, 2016, p. 230). In the video editor, then, assessors could create a stock of helpful explanatory sequences and draw on them flexibly for personalized SCFB (cf. Atfield-Cutts & Coles, 2016; Harper et al., 2018, p. 288; cf. the feedback banks in Google Keep in section 3.1.6). To further personalize these videos, they would record individualized introductory and concluding messages (maybe even with a webcam) and add them to the generic contents (Hope, 2011, p. 11). Overall, however, there is not yet much research on the structures and strategies that prove effective for SCFB provision (cf. Bakla, 2017, p. 319; Kay et al., 2019, p. 1913; Kerr & McLaughlin, 2008, p. 10; Schluer, 2021c), but ongoing studies are trying to fill this gap (e.g. Schluer, 2021d; Schneider, Schluer, Rey, Kretzer, & Fröhlich, 2022).

3.13.7 Advice for students
As with other new methods, learners’ unfamiliarity partly accounts for the perceived challenges and disadvantages that are associated with SCFB (see section 3.13.4; cf. Bakla, 2020, p. 117; Howard, 2018, pp. viii, 140, 225-226; Lamey, 2015, p. 698). In other words, implementing a new feedback method is of little use if the learners are not taught strategies to utilize it (cf. Ali, 2016, pp. 111, 119; Cunningham, 2017b, pp. 23-24; see the review by Killingback et al., 2019, p. 37). Depending on their skills, learners might first need to be introduced to the functionalities of the video player (Sprague, 2016, p. 26), including its replaying and pausing (Harper et al., 2018, p. 290; Thompson & Lee, 2012) as well as fast-forwarding functions (Hope, 2011, p. 11; cf. section 3.12.7 about talking-head video feedback). Moreover, knowing how to navigate to particular video parts (Robinson et al., 2015, p. 378) and how to adjust the playback speed would allow learners to skim through a video and locate particular passages (Langton, 2019, p. 54; Wood, 2019, p. 108). By pausing the feedback, the learners gain time for taking notes or for implementing the suggestions directly (Bakla, 2017, p. 324; Kerr et al., 2016, p. 17; Rybakova, 2020, p. 508; Soden, 2016, p. 223). In fact, taking notes while watching the video appeared to be a frequently recommended strategy in previous works (Anson et al., 2016, p. 387; Atfield-Cutts & Coles, 2016; Hall et al., 2016; Howard, 2018, p. 145). To exemplify, Rybakova (2020) included the following advice at the beginning of her SCFB videos:

Hi there! When you click on this link (link provided), you’ll open up video feedback for your paper. It might seem a bit strange at first, but it will help guide you in revising your paper. Don’t forget that writing is a cyclical process, which means you’ll never had [sic] a perfect draft, but as you continue to redraft your writing will get better! What I suggest is that you watch the video with a notepad (either pen and paper or digital) so that you can jot down some of the revisions that you could focus on. (p. 508)

These notes could not only contain directions for revisions, but possibly also the time stamps of the pertinent passages in the video (Rahman et al., 2014, p. 971). Another option would be to utilize a multiple-windows view, i.e. opening two windows on the screen, one for the video and one for the draft that needs to be revised (Rahman et al., 2014, p. 971). All in all, however, further work on how learners actually watch and use the feedback is still necessary to derive more specific recommendations (Atfield-Cutts & Coles, 2016; Harper et al., 2018, p. 289). So far, there is only scant evidence and, where reported, wide variation. For example, some learners in Dunn’s (2015) study watched the video and revised their paper simultaneously, others did so sequentially, and still others combined both approaches (p. 13). Of course, this may also depend on the nature of the requested changes and the ease of their implementation. More research on students’ use of SCFB would therefore be essential (Cunningham, 2017b, pp. 33, 35; 2019a, pp. 223-224).

At any rate, the feedback recipients should be required to actually do the revision or to transfer the knowledge to a new learning situation. Such tasks can be embedded in the feedback videos themselves or in an accompanying comment or written paper (see section 3.13.6). Beyond immediate revisions, learners may use the SCFB as a “reference material” that they could consult before the next assignment in order to re-inspect the errors they tend to make (Stannard, 2008, pp. 18-19; see also Chronister, 2019, p. 47; Kerr & McLaughlin, 2008, pp. 11-12; Waltemeyer & Cranmore, 2018). Regarding the feedback implementation process, learners should be made aware of potential follow-up discussions with instructors and peers to clarify open questions (Ali, 2016, p. 118; see also Harper et al., 2018, p. 290; Howard, 2018, p. 249; Rahman et al., 2014, pp. 966, 970-972; Rybakova, 2020, p. 508). Hence, they might be encouraged to ask their questions in the next classroom session, to send an email (see section 3.8) or to arrange individual online or face-to-face consultations, e.g. as part of the teacher’s office hour. They might also produce a feedback response or request by using SCFB themselves (cf. McDowell, 2020b, pp. 21-22; 2020c, pp. 133, 135, 144). Likewise, SCFB can be employed for peer feedback purposes (Schluer, 2020b; 2021a; 2021d; Silva, 2017; Walker, 2017). For those applications, the recommendations from the foregoing section would be relevant. Finally, if SCFB is used in combination with text editor feedback, the advice given in section 3.1 would apply here as well. Indeed, this is a common combination of SCFB with other feedback methods, as the next section will elucidate.

3.13.8 Combinations with other feedback methods
Prior to or during the screencasting, assessors usually mark passages and make comments or corrections in the submitted assignment (see section 3.13.6). This annotated document can be shared together with the screencast, which would not cause any additional effort for the assessors, but would assist the students in the revision process. The video link could then be incorporated into the annotated file to ease the dissemination process (see section 3.13.6).
This is especially convenient when using cloud documents, e.g. Google Docs (section 3.3), as the video links would be readily accessible in the online space. For instance, Ferguson (2021) and Wood (2019; 2021; 2022) combined the use of Google Docs and SCFB. Furthermore, assessors may also choose to provide more detailed feedback in the submitted draft or to write a short summary or transcript for the learners in order to ease the video navigation for them (Kerr et al., 2016, pp. 15, 17). Alternatively, this text could be written into an accompanying email (see section 3.8) or into the comment section of the return box in the LMS. Quite often, assessors complete an assessment rubric, which could be handed out to the learners in addition to the screencast video (see section 2.1.2; cf. Harper et al., 2018, pp. 277-278; McCarthy, 2020, pp. 185, 187; Tokdemir Demirel & Güneş Aksu, 2019, pp. 196-197; Zhang, 2018, p. 24). This way, the assessment procedure and results will become more transparent for the learners (cf. Ryan et al., 2016, p. 2; Thompson & Lee, 2012; Turner & West, 2013, p. 291; Whitehurst, 2014). Moreover, the students could utilize the rubric as an orientation when receiving SCFB (e.g. Anson, 2015, pp. 379-380). Some LMS already offer a grading tool together with the screenrecording functionality, e.g. the SpeedGrader tool in Canvas (Fang, 2019, p. 62; Harding, 2018, pp. 6, 10-12).

In fact, a combination of SCFB with some type of written feedback (notably text editor feedback) was widely preferred in the literature, especially by students (Alharbi, 2017, p. 246; Elola & Oskoz, 2016, pp. 69-71; Grigoryan, 2017a; Howard, 2018; Silva, 2012, pp. 9, 12). In that regard, the written comments and corrections appeared to be particularly useful for local, micro-level aspects, such as typos, grammar, word choice and writing conventions (Campbell & Feldmann, 2017, p. 2; Silva, 2012, p. 3), whereas the SCFB allowed for macro-level comments about content, structure and argumentation (Ali, 2016, p. 117; Elola & Oskoz, 2016, p. 71). The video could highlight the most important themes, while the written feedback would contain the details (Cavaleri et al., 2019, p. 15). To reduce the time investment for the written feedback, automatic spelling and grammar checks could be run for lower-level errors (e.g. Kim, 2018, pp. 37, 47; see section 3.2).

Another frequent combination is SCFB with talking-head video feedback (see section 3.12), i.e. when the assessor’s webcam video is shown in a corner of the screenrecording (e.g. Cheng & Li, 2020, p. 5; Dunn, 2015, p. 5; Ferguson, 2021; Mayhew, 2017, p. 182). An alternative strategy would be to display the talking head only during the phases of relational work, whereas the remainder of the feedback video would exclusively consist of a screenrecording (section 3.13.6). Moreover, face-to-face conferences could be conducted after the provision of SCFB (cf. the reviews by Bakla, 2020, p. 108, and by Chang, 2016, p. 100). A lecturer in Vincelette and Bostic’s (2013) study, for example, called SCFB “a ‘bridge’ between traditional written comments on student papers and face-to-face conferences” (p. 264), not their replacement. Finally, live feedback sessions during online consultations could be recorded to make the contents asynchronously available for later re-consultation by the learners (Vo, 2017, p. 96). Feedback in web conferences will therefore be inspected in the next section.
3.13.9 Questions/tasks (for teachers and students)

Knowledge and reflection questions:
(1) What equipment do teachers need to create SCFB videos? What video editing programs and screencasting software do you know?
(2) As compared to written electronic feedback, text comments and edits cannot be readily accepted or implemented when feedback is received via screencasts only. Hence, students have to correct errors and mistakes on their own. Is this a strength or a weakness of screencast feedback? Discuss critically.
(3) Would you consider screencast feedback/video feedback a replacement for or a complement to teacher-parent conferences? To what extent could video-based feedback methods help parents support their children? Would you encourage your students to share your feedback video with their parents, siblings and private tutors? Give reasons.
(4) Search for a screencast feedback video on YouTube (e.g. https://youtu.be/l2vh7zXNAGU).
a. Watch the video and evaluate it in terms of its feedback quality.
b. If you were the recipient of this SCFB video, how would you feel about it? Is it informative? Does it address the learners’ needs?
c. Have important feedback principles been attended to?
d. What visual strategies were employed?
e. What worked well from your point of view?
f. What would you do differently/improve?

Application questions/tasks:
(5) At the end of the project week at your local school, you would like to show appreciation for the work that your students have produced. They created materials of many different kinds, including handcrafted artwork, video clips and digital portfolios. Your students have already taken pictures of the artwork and submitted their video clips and portfolios to your school cloud. Please try out the following steps: You open these documents on your computer. After just a brief while, you are impressed by the quality and creativity of your students’ work. Inspired by it, you launch the screenrecorder on your computer, check the sound of your microphone, jot down a few keywords and phrases to guide your feedback and then start the recording. While talking about the student work, you navigate through the submitted multimedia materials in a short five-minute clip. As you were very enthusiastic about your students’ creativity, you feel that your clip does not need any further editing. You watch it briefly and then upload it to the school cloud folder that is privately shared with the student group.
(6) The project week also showed you that your students seem to be well familiar with the basic recording functions of their smartphones and cameras. Some of them even use software for photo and video editing regularly in order to create clips for their social media channels. You are very curious about learning these tools yourself, but also want to raise your learners’ awareness of the dangers of social media.
a. As part of critical media education, you ask your students to list the apps they use for photo and video editing. In face-to-face teaching, your students might write the apps on paper slips (one app per slip) and pin them to the board in your classroom. In online (and hybrid) teaching, you ask your students to type the names or insert the hyperlinks of the apps onto a digital wall or board (e.g. Padlet; see section 3.3). To avoid multiple mentions, the students click the “thumbs up” button if they know a program as well, or rate its quality by assigning stars on a rating scale.
They should also list some of the most striking features of each app.
b. Some of the programs might only be known to a few students, so that others become curious about them. Ask the students to introduce the programs to each other in small groups. Afterwards, they should be prepared to present the affordances/functions to the entire class and thus also to you as the teacher.
c. After the end of the session, you inspect some of the apps more closely because you want to use them on an exemplary basis in your further teaching. In one of the subsequent sessions, you draw your students’ attention to the potential fake reality of the social media world, as they themselves are capable of modifying pictures and videos.
d. In another session, you want to use the editing and cutting functions productively by incorporating digital peer feedback in your classroom. For that purpose, you can use some media products that your students have created and ask them to deliver digital feedback to their classmates. Make them aware of the most important principles for constructive feedback and practice the procedure with your students. Start with an assignment that allows for multiple solution paths so that the feedback activity will not turn into a correction exercise for the students. Provide in-process support to your students by following the steps that were mentioned in this chapter.
e. Don’t forget to talk about the feedback production and reception experiences afterwards and to critically evaluate the method together with your students.

3.14 Feedback in web conferences (Videoconference feedback)

3.14.1 Definition and alternative terms/variants
Videoconferencing enables the synchronous provision of feedback via “interactive voice, video and data transfer between two or more groups/people” (Altıner, 2015, p. 628, with reference to Wiesemes & Wang, 2010; see also Fatani, 2020, p. 2, as well as Gough, 2006, cited in Karal, Çebi, & Turgut, 2011, p. 277). It has alternatively been called “web video conferencing (WVC)” by Fatani (2020, p. 2) and offers several functions:
■ “real-time, two-way video and audio communication”,
■ “content sharing”, including presentations, file sharing and screen sharing,
■ note-taking and content creation, e.g. in a text editor or on a digital whiteboard,
■ “messaging” via private and public chats,
■ “collaborative learning”, e.g. simultaneous writing in a text pad,
■ “immediate feedback”, including live polling/quizzing.
Seckman (2018) further elaborated that the feedback sessions can be directed at individuals or groups (p. 19; cf. Rottermond & Gabrion, 2021, p. 40). Moreover, they could be recorded so that learners can reconsult them at a later point in time (Rottermond & Gabrion, 2021, p. 41; Seckman, 2018, p. 19). Thus, live videoconference feedback could turn into SCFB, which would constitute a possible combination (see sections 3.13 and 3.14.8). Both of them are multimodal feedback methods. In contrast to recorded (i.e. asynchronous) feedback, however, videoconference feedback allows for a synchronous bidirectional exchange without time delay (cf. Seckman, 2018, p. 19). Accordingly, Seckman (2018) used the term “Interactive Video Communication (IVC)” to highlight the interactive nature of videoconference feedback (p. 19). Moreover, as opposed to text-based feedback of different kinds (see sections 3.1 to 3.10), it additionally “capture[s] verbal and nonverbal cues” (Seckman, 2018, p. 19).
Sometimes, it was called “video chat” feedback (Rassaei, 2017), mostly when Skype was used, even though it might have extended beyond the video function, e.g. through written chats and file sharing, but also through polling (see section 3.10). Based on a literature review, it appears that mainly two types of feedback in videoconferences can be distinguished. One is (corrective) feedback during live conferences (e.g. Monteiro, 2014; Rassaei, 2017). The other is conducting (scheduled) feedback sessions for individuals or small groups in which feedback on the learners’ work in progress or on a previously created learning product is provided (Chiappetta, 2020; Schluer, 2020b, pp. 53-54). The contexts of use will be surveyed in more detail below.

3.14.2 Contexts of use, purposes and examples
Due to advances in technology, notably faster broadband connections and more powerful processors, videoconferencing has become more prevalent in people’s lives for different kinds of distance communication (Wiesemes & Wang, 2010, p. 29). For example, it plays an increasingly important role at the workplace, in people’s free time and in education, especially since the Covid-19 pandemic (e.g. Fatani, 2020). The research literature, however, is still limited. Mostly, videoconferencing was utilized as a teaching tool (e.g. Ghazal, Samsudin, & Aldowah, 2015; Skinner & Austin, 1999) rather than as a method of feedback provision. Only in some studies was the focus placed on the use of oral corrective feedback strategies as part of online teaching sessions (Monteiro, 2014; Rassaei, 2017). Therein, the analytical interest was similar to research on oral corrective feedback in the face-to-face classroom (Lyster & Ranta, 1997; Sheen & Ellis, 2011, p. 594; Voerman et al., 2012). In others, feedback in videoconferences was compared to written feedback, for example regarding its impact on cognitive, social and teaching presence in online learning environments (Seckman, 2018, p. 19; see also Clark, Strudler, & Grove, 2015, p. 49). Frequently, this research was grounded in the Community of Inquiry (CoI) framework by Garrison, Anderson and Archer (2000).

Apart from instructor-to-student feedback, videoconferencing can be utilized for peer feedback (cf. Samuels, 2006, pp. 100-101). In that regard, it has mainly been researched as part of e-tandem exchanges (e.g. Arellano-Soto & Parks, 2021; O’Dowd, 2007; cf. the review by Guichon et al., 2012, pp. 182-183). In e-tandems (or teletandems), typically two learners of different native languages assist each other in learning the native language of the partner and in becoming more aware of cultural aspects (Brammerts, 1996, p. 121, cited in Arellano-Soto & Parks, 2021, p. 223). Clearly, when students mutually support each other’s learning efforts, feedback and correction might become important at several points. Research has therefore analyzed “focus-on-form episodes” in e-tandem exchanges (Arellano-Soto & Parks, 2021, p. 232), in which linguistic errors were addressed. Moreover, cultural differences in feedback provision and error correction were examined (Arellano-Soto & Parks, 2021, p. 227; Sotillo, 2005, p. 486). Videoconferencing feedback might not only be beneficial for student peers, but also for teachers as part of their professional development.
As Martin (2005) suggested, teachers “too can be put into contact with experts and with their peers, either elsewhere in their own country or anywhere else in the world” to enhance “their continuous learning as professionals” (p. 400).

Given the numerous affordances that videoconference systems offer, it is surprising that the most frequently studied feedback type was written in nature, e.g. in chats (as reviewed by Guichon et al., 2012, p. 182, and Monteiro, 2014, p. 58; see section 3.7). Oral and multimodal feedback exchanges were rarely the focus and, if so, were often only examined in an exploratory manner (cf. Monteiro, 2014, pp. 58-59). For example, Guichon et al. (2012) explored written and oral feedback provision practices by pre-service teachers of French in a videoconferencing environment. Similarly, research on individual feedback conferences for formative or summative feedback is almost non-existent (e.g. Chiappetta, 2020; Samuels, 2006). The next sections will summarize the prior literature regarding the advantages and limitations of feedback in videoconferences, which still needs to be substantiated by future studies.

3.14.3 Advantages of the feedback method
As compared to digitally recorded feedback (and other asynchronous feedback types), one major advantage of live videoconferencing is that it allows for “immediate feedback” (Fatani, 2020, p. 2; Seckman, 2018, p. 21). Teachers and learners can “communicate in real time” (Ghazal et al., 2015, p. 1) and thereby see each other even though they do not sit in the same physical space (Rassaei, 2017, p. 2). Videoconferencing thus appears to be particularly suitable for interactive feedback exchanges in online or distance education (Samuels, 2006, p. 98; cf. Martin, 2005, p. 398). During the feedback dialogues, learners “can ask questions and/or work through revisions on the spot” (Rottermond & Gabrion, 2021, p. 41). They can discuss the reasons behind certain mistakes and corrections and engage in critical thinking (Ahmed et al., 2021, pp. 294, 309). This may help to diminish ambiguities (Ahmed et al., 2021, p. 305), which often remain unresolved in written feedback or other types of asynchronous feedback.

Due to the video modality, not only verbal communication strategies are relevant, but also paraverbal (e.g. intonation, voice inflection) and nonverbal ones, such as facial expressions that signal understanding (smiling, nodding) or non-understanding (e.g. frowning, raising eyebrows) (Guichon et al., 2012, pp. 187-188; Samuels, 2006, p. 86). At the same time, cultural and individual differences in the realization of speech acts, gestures etc. need to be acknowledged (cf. e.g. Arellano-Soto & Parks, 2021, p. 227; Sotillo, 2005, p. 486; Wierzbicka, 1985). This richness of interactive communication cues is similar to the traditional face-to-face classroom (Ghazal et al., 2015, p. 1) but still different from it. On the one hand, it is reduced because the use of body language is restricted to facial and hand gestures. Furthermore, eye contact is hard to realize, as the interlocutors either look at the camera or at the speaker on the screen. Also, the physical space between interlocutors has become irrelevant, whereas in face-to-face meetings cross-cultural, situational and personal perceptions of a comfortable distance or closeness are critical. Moreover, the video image is usually smaller and more pixelated than in real life (cf. Rassaei, 2017, p. 2).
On the other hand, videoconferencing is more inclusive than on-site communication since the interaction in webmeetings can take place in multiple ways and directions (Guichon et al., 2012; Monteiro, 2014; Seckman, 2018, p. 19). Apart from verbal and visual body language, learners and teachers can use “a variety of other modalities” (Monteiro, 2014, p. 58), including text chats, writing pads, screen sharing, file sharing and further functions, depending on the videoconferencing software that is utilized (see section 3.14.5). To exemplify, instructors may write an important keyword or a corrected form into the text chat (or pad) as a reminder for the learners (Guichon et al., 2012, p. 189) or as a hook for further elaboration at a later point (see also combinations in section 3.14.8). For drawing learners’ attention to linguistic corrections, the formatting options of the text pad (bold print, underlining, italics etc.) and chat (e.g. capitalization) can be exploited. For example, a tutor in Guichon et al.’s (2012) study used “capital letters to indicate where the problem lies (e.g. tellemenT)” (p. 193). With the help of the note pad or text chat, teachers can thus provide correction hints for the learners “without interrupting the flow of the conversation” that is going on audio-visually (Guichon et al., 2012, p. 187).

Moreover, learners and teachers may utilize screensharing during feedback sessions to talk about work in progress (Schluer, 2020b, pp. 53-54). Hence, a student’s assignment could be displayed via screensharing in order to conduct feedback exchanges about it, e.g. to seek feedback, to explain previously annotated passages (cf. sections 3.1 and 3.3), to perform live annotations and corrections or to demonstrate the extent to which revisions have already been executed. In addition, assessors could show external resources or navigate to helpful websites in order to support the learners. These functionalities are highly similar to SCFB (see section 3.13). Furthermore, instead of displaying documents temporarily, the participants may also send permanent files, e.g. a relevant presentation or publication, via videoconferencing platforms such as Zoom, WebEx or Skype (Arellano-Soto & Parks, 2021, p. 230; Monteiro, 2014, pp. 60-61, 63). Some of them also permit drawings and annotations on a digital whiteboard, e.g. BigBlueButton and WebEx (Arellano-Soto & Parks, 2021, p. 230). The effective combination of these various tools and modalities still needs to be examined by future research in order to enhance feedback practices in videoconferences (Monteiro, 2014, p. 71). If employed sensibly, they may cater to the different preferences and needs of teachers and students across distinct age groups, disciplines and learning styles (Martin, 2005, p. 398; Swanson & Tucker, 2012). As compared to in-person meetings, the support could thus be better tailored to the individual learner (Chiappetta, 2020; Swanson & Tucker, 2012) while not being more time-intensive than traditional face-to-face consultations (Samuels, 2006, p. 99). Furthermore, videoconferencing has become relatively inexpensive nowadays (Martin, 2005, pp. 403-404), while its functionalities keep increasing. For instance, it allows for interaction among students and teachers in different places around the world (cf. Martin, 2005, p. 397) and may enhance (intercultural) communication and collaboration skills (Martin, 2005, pp. 398, 402). As a side effect, it could lead to a widening of horizons and a “greater tolerance of other perspectives” (Martin, 2005, pp. 401-402). As opposed to written feedback, it may also improve students’ listening skills (Martin, 2005, p. 402), especially in foreign language learning settings.

What is more, videoconferencing contributes to a significantly higher cognitive, social and teaching presence (Garrison et al., 2000) as compared to text-based feedback (Seckman, 2018, pp. 20-21). “Presence” commonly refers to “a subjective feeling of ‘being there’ irrespective of physical location” (McKerlich, Riis, Anderson, & Eastman, 2011, cited by Seckman, 2018, p. 19). In Garrison et al.’s (2000) Community of Inquiry (CoI) model, cognitive presence is fundamental to learning success, as it designates “the extent to which the participants […] are able to construct meaning through sustained communication” (p. 89). Social presence, in turn, is “the ability […] to project […] personal characteristics into the community” and thus to present oneself as “real people” (Garrison et al., 2000, p. 89). It is meant to support cognitive presence (p. 89), as is, for example, obvious in the use of video communication during a learning event. Finally, teaching presence mainly has to do with the teachers’ skills in organizing the content and facilitating learning (Garrison et al., 2000, pp. 89-90). In prior research, it was stressed that videoconferencing can help establish feelings of connectedness to the teacher and among the students, thereby fostering teaching and social presence (Clark et al., 2015, pp. 59-60; Seckman, 2018, p. 21). Moreover, it may increase students’ motivation and engagement in the learning process (Ahmed et al., 2021, p. 307; cf. the review by Rassaei, 2017, pp. 2-3) and thus their sustained cognitive presence (cf. Seckman, 2018, p. 21).

For these reasons, Rassaei (2017) concluded that oral corrective feedback in videoconferences can be at least as effective as in the traditional classroom (p. 1). At the same time, it is not too intrusive for the learners due to the modal affordances of the videoconference system. For example, a participant in Samuels’ (2006) small-scale exploratory study stated that “the Web conference provided her sufficient personal space”, so that she dared to take notes as it did not seem “necessary for her to maintain eye contact with [the tutor] throughout the conference” (p. 83). Finally, not only the chat and pad contents from videoconferences can be downloaded for later review, but the entire feedback “sessions can be recorded, which offers students an extra layer of support” by allowing them “to access the comments as needed” (Rottermond & Gabrion, 2021, p. 41; cf. Seckman, 2018, p. 19; see section 3.14.6). Synchronous meetings could consequently be made available for later, asynchronous use.

3.14.4 Limitations/disadvantages
While videoconferencing may help to increase cognitive, social and teaching presence as compared to asynchronous and synchronous text-based methods, “the risk of ‘losing’ [the] remote pupils” (Martin, 2005, p. 403) still persists. Adequate feedback designs and strategies therefore need to be set up to sustain the interest of the learners (cf. Martin, 2005, p. 402).
Especially if one-on-one feedback conferences are scheduled while the rest of the class is waiting for their appointment, appropriate tasks need to be assigned that can be completed during the waiting time (Chiappetta, 2020; Schluer, 2020b, pp. 53-54). On the other hand, if appointments are scheduled outside of the normal online sessions, it might be difficult to agree on meeting times that are convenient for everybody (Ahmed et al., 2021, p. 293; Ghazal et al., 2015, p. 3). What is more, the scheduling can be quite time-consuming (cf. Chiappetta, 2020), but might be eased by the use of calendar tools in the LMS.

Aside from these organizational concerns, teachers and learners may initially be too anxious to engage in feedback exchanges via videoconferencing (Samuels, 2006, p. 77) or might even be overwhelmed, confused or frustrated by its multimodal functionalities (Seckman, 2018, p. 21). In that regard, Seckman (2018) points out that especially older students (and teachers) might favor written exchanges via mail, whereas younger ones feel more comfortable with a variety of media (p. 21). If needed, learners should therefore be made familiar with the videoconferencing platform and its functions (cf. Ghazal et al., 2015, p. 5). Some might even lack the hardware equipment or not know how to use it. For instance, they might not know how to turn on the microphone and the camera or have difficulty sharing their screen (cf. Samuels, 2006, p. 92). Teachers would thus also need knowledge about potential problems and how to solve them, e.g. that a particular videoconferencing software might not be compatible with all existing internet browsers. They should be able to give recommendations to the learners or might even ask to use the remote-access function to discover the reason for a technological problem. Likewise, remote access would be useful if students are unable to use the screensharing function on their own (cf. Samuels, 2006, p. 92). On the other hand, though, the danger of remote access is that students could adopt a rather passive role in the feedback process (Samuels, 2006, pp. 83-84, 97) when the instructor “take[s] control” of it (p. 83).

Generally speaking, learners’ unfamiliarity with the technologies might lead to an increase in cognitive load, making it harder for them to concentrate on the contents and to process them (Develotte et al., 2010, cited by Monteiro, 2014, p. 58). For example, they need to manage shifting attention between oral, written and visual exchanges that are displayed in different window parts (Guichon et al., 2012, p. 189). A split screen can thus cause split attention (cf. Guichon et al., 2012, p. 193). To decrease the load, students might purposefully minimize the instructor’s webcam video to avoid eye contact and focus on the document that is discussed (Samuels, 2006, pp. 80-81). Of course, this could also happen accidentally if they do not know how to handle the platform or how to re-open the video screen (cf. Samuels, 2006, pp. 80-81). Consequently, they might miss important information from the interlocutors’ non-verbal language (see section 3.14.3 above). On the other hand, assessors could feel uncomfortable using additional modalities, such as noting down comments, because this makes them look at the screen instead of the camera (participant in Guichon et al., 2012, pp. 193, 195).
This way, they would be unable to maintain eye contact with the learners, who could interpret this as a sign of disengagement from the conversation (participant in Guichon et al., 2012, pp. 193, 195). However, these written notes usually constitute helpful feedback for the learners (cf. Guichon et al., 2012, p. 189). For this reason, a videoconferencing interface should be chosen that allows for a flexible arrangement of the different window parts. Accordingly, the text chat window could be placed near the position of the webcam so that eye contact can be maintained while writing into the chat (cf. Guichon et al., 2012, p. 195).

However, it also needs to be borne in mind that verbal, paraverbal and nonverbal information is often restricted or disrupted in video communication (Rassaei, 2017, p. 2). For example, a slow or unstable internet connection could lead to poor audio and video quality or even make the use of video communication impossible (Ghazal et al., 2015, p. 1; Monteiro, 2014, p. 70; Seckman, 2018, p. 21). Likewise, short disruptions could disturb the flow of communication (Monteiro, 2014, p. 58). There might be interruptions or delays in the transmission of audio and video (Ghazal et al., 2015, pp. 6-7), and the screen might freeze. There could also be an audio echo that disturbs the exchange (Ghazal et al., 2015, p. 7). Even if there are no hardware or connectivity problems, there might be external disturbances, such as computer notifications, ringing phones or other background noises, e.g. from ambulances (cf. Monteiro, 2014, p. 70), construction work, neighbors or flatmates.

Moreover, there can be affective burdens on the students’ part. Some might even refuse to turn on their camera or microphone to hide themselves or their environment. All this could lead to frustration, anxiety (Seckman, 2018, p. 21) or boredom among the participants (Ghazal et al., 2015, p. 1). The latter emotion might be exacerbated by an incompetent use of the software and hardware. There is thus a need for training and for “raising awareness of what is already possible with videoconferencing” among teachers (Martin, 2005, p. 403; see also Seckman, 2018, p. 22). On the one hand, this includes technological familiarization; on the other hand, it requires pedagogical competencies to exploit the multiple modalities and functionalities strategically for effective feedback provision. Some tips for the successful implementation of videoconferencing feedback will be sketched below.

3.14.5 Required equipment

In order to give feedback during a videoconference, a computer or mobile device (laptop, tablet, smartphone) with broadband internet access, webcam, microphone and loudspeakers is required (Seckman, 2018, p. 19). Furthermore, a videoconferencing tool or software is needed. Examples include Zoom, BigBlueButton, Skype (e.g. Fatani, 2020, p. 2; Monteiro, 2014, p. 60), Microsoft Teams and Google Meet as well as many others (Rottermond & Gabrion, 2021, pp. 40-41). Quite often, at least one of these software solutions is supported by the educational institution, e.g. as part of the LMS (Seckman, 2018, p. 19). Depending on the software that is utilized, different functions for visual and auditory interaction might be available, including or excluding screen sharing, file sharing, small group discussions, live polls, note writing, annotation tools and chats (see section 3.14.1 above).
A brief overview of software alternatives will be given below. However, it needs to be borne in mind that the available features might change over time and that additional software solutions are likely to be released in the future.

Zoom (https://zoom.us) is one of the most popular programs for video and audio conferences in small or large groups. Beyond video, audio and chat communication, it allows for file sharing and screen sharing. The meeting times can be directly synced with Google Calendar to ease the scheduling of the conferences. Invited participants can join the conference by clicking on a link and optionally by entering a password. While the invited participants do not have to possess a full license, they need to install the Zoom client on their device. However, in the free version, Zoom only permits up to 40 minutes of meeting time per session.

BigBlueButton (https://bigbluebutton.org) is open-source software that has often been used by educational institutions since the Covid-19 lockdown. Some institutions integrated it directly into their LMS (e.g. Canvas, Moodle, IServ, Sakai, StudIP). Alternatively, the meetings can be entered easily by copying the meeting link into the web browser or by joining the conference via phone. In contrast to Zoom, users do not need to install any software on their computer or mobile device. Apart from that, the scope of functions approximates that of Zoom. Both Zoom and BigBlueButton have a waiting room function so that a feedback session will not be disturbed by another classroom member. Moreover, they permit polling, (multi-user) annotations, small group discussions in separate rooms as well as recordings of the video meetings. However, BigBlueButton still seems to be less stable as soon as several participants have turned on their webcams.

Google Meet (https://meet.google.com) is another popular videoconferencing solution and offers several functions. Users need a Google account, which would also enable the utilization of the many other tools that Google affords, such as simultaneous and collaborative work in Google documents and presentations (see section 3.3).

Similarly, Microsoft Teams (https://www.microsoft.com/en-us/microsoft-teams/group-chat-software) is a very common and comprehensive software for videoconferences and is fully compatible with other tools of the Microsoft product portfolio. Beyond that, the integration of further apps is possible, e.g. Dropbox, Google Drive and Adobe Creative Cloud.

In addition, there are many further free or commercial videoconferencing apps that could be utilized for feedback purposes. It is advisable to use the software that has been purchased or recommended by one’s educational institution to allow for easy access by the students.

3.14.6 Implementation (how to)

As explained above, feedback in videoconferences can either occur during live interactions in a course or consist of separate feedback sessions (similar to office hours). In both cases, the meeting link needs to be shared with the participants, and explanations for accessing the conference might need to be provided in advance. Ideally, the videoconferencing should take place in a quiet and undisturbed location, with all other notification and communication systems being turned off to sharpen concentration.
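For institutions that host their own server, the distribution of meeting links can even be automated. The following is a minimal sketch in Python, assuming a self-hosted BigBlueButton instance; the server URL, shared secret, meeting ID and passwords are illustrative placeholders, and parameter conventions differ across server versions. BigBlueButton’s HTTP API signs each call with a SHA-1 checksum over the call name, the query string and the shared secret.

    # Minimal sketch: generating signed BigBlueButton meeting links.
    # The base URL and shared secret are placeholders that would be
    # issued by the server administrator.
    import hashlib
    from urllib.parse import urlencode

    BASE = "https://bbb.example-university.edu/bigbluebutton/api"
    SECRET = "replace-with-your-shared-secret"

    def signed_url(call: str, params: dict) -> str:
        # checksum = SHA-1(call name + query string + shared secret)
        query = urlencode(params)
        checksum = hashlib.sha1((call + query + SECRET).encode()).hexdigest()
        return f"{BASE}/{call}?{query}&checksum={checksum}"

    # The create URL must be requested once (e.g. with requests.get) before
    # the join link becomes valid; "attendeePW"/"moderatorPW" reflect the
    # older password-based API, while newer versions use roles instead.
    create_url = signed_url("create", {"meetingID": "feedback-room-01",
                                       "name": "Individual feedback session",
                                       "attendeePW": "ap",
                                       "moderatorPW": "mp"})
    join_url = signed_url("join", {"meetingID": "feedback-room-01",
                                   "fullName": "Student A",
                                   "password": "ap"})
    print(create_url)
    print(join_url)

In practice, such a script could produce one join link per student, which is then distributed via the LMS together with the scheduled time slot.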
If feedback is given during a normal course meeting, teachers might want to type questions, keywords or corrections into the chat or writing pad to avoid disrupting the flow of communication among the learners (Guichon et al., 2012, p. 187). Peers might proceed in a similar manner or raise their (digital) hand in order to signal their non-understanding, agreement or disagreement and their willingness to contribute a comment. These signals may help the instructor to organize the subsequent peer feedback discussions. Output-prompting techniques for eliciting learners’ self-corrections should be given precedence during these learning conversations, with direct corrections being reserved for those aspects that the students cannot (yet) self-correct (cf. section 2.1.3).

By contrast, individual feedback conferences with one or several learners might require more scheduling. They may either occur in an impromptu manner during a live meeting or in a separate appointment that has been planned well ahead. As Clark et al. (2015) explained, learners “had the option to schedule impromptu videoconferences with the instructor if they required extra help, or with fellow students for collaborative or social purposes” (p. 57). The feedback providers could then write time slots into the shared notes of the videoconferencing software so that the students can enter their names next to them (or vice versa, see Schluer, 2020b, p. 53). Alternatively, an event scheduler could be utilized, e.g. within the course section of the LMS (Clark et al., 2015, p. 57). Especially if the feedback sessions are held outside of the normal class time (e.g. in online office hours), such prior planning is essential. The students should select their preferred time slot well in advance in order to avoid long gaps between individual consultations. To conduct the feedback sessions, the teacher might then set up a separate webmeeting room that the students need to enter at the scheduled time.

If the individual feedback sessions are integrated into a normal course meeting, the waiting time of the remaining students should be seized purposefully. For instance, they could be asked to continue working on a task that is to be presented and discussed at the end of the session (Schluer, 2020b, p. 53; cf. Chiappetta, 2020). In other words, individual consultations would be embedded within an ordinary lesson, while the other learners continue solving a particular task before the whole class meets again. This structure was called a “webmeeting burger” by Schluer (2020b, p. 53) and is illustrated in Figure 13.

Figure 13: Webmeeting burger

In the one-on-one meetings, the learners would report on their progress and ask open questions, to which the teacher provides feedback and feed-forward advice. In a subsequent session, the implementation or adaptation of the feedback is then monitored, either through another round of individual consultations or through the submission of the work in progress. As reported by Hattie and Clarke (2019), about five minutes are usually enough to talk about the progress and further steps of each individual student (pp. 110-111). Albeit comparatively short, these individual meetings are perceived by the students as a “caring conversation” (Hattie & Clarke, 2019, p. 110) in which their efforts are appreciated. To make these meetings effective, the learners need to be informed about their purpose and prerequisites.
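On the organizational side, the sign-up list for such five-minute consultations can be prepared in advance and pasted into the shared notes or the event scheduler. The following minimal sketch illustrates the idea; the starting time, slot length and number of students are illustrative values:

    # Minimal sketch: pre-generating a sign-up list of consultation slots
    # that can be pasted into the shared notes of the webmeeting.
    from datetime import datetime, timedelta

    def consultation_slots(start: str = "10:15", n_students: int = 6,
                           minutes: int = 5) -> str:
        t = datetime.strptime(start, "%H:%M")
        lines = []
        for _ in range(n_students):
            end = t + timedelta(minutes=minutes)
            lines.append(f"{t:%H:%M}-{end:%H:%M}  name: ____________")
            t = end
        return "\n".join(lines)

    print(consultation_slots())  # e.g. "10:15-10:20  name: ____________" ...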
Technically, the learners might need to practice screensharing beforehand so that they can show their current work in the individual meetings without any time delay (Schluer, 2020b, p. 51). To avoid long loading times, learners and teachers should open all relevant files on their screen before they enter the webmeeting. Content-wise, the learners should have noted down some open questions that they want to clarify in the meetings. Also, they should know how to take notes during an online meeting in order to remember the comments. Additionally, the teacher could type important keywords and useful hyperlinks into the chat and ask the student to copy them before leaving the webmeeting.

Furthermore, the teacher could record the live meeting and make the video available to the learner for later use. Hence, combining individual meetings with screen recordings is perfectly possible. In other words, screencast feedback does not need to be pre-recorded (see section 3.13), but could likewise be the product of a live meeting (provided that everybody consents to this). The learners can then replay the recorded feedback conversations whenever they want to.

Optionally, a pre-structured feedback sheet could be utilized during the meetings to facilitate the learning dialogue (e.g. Tovell, quoted by Hattie & Clarke, 2019, p. 112). It will highlight some important assessment criteria that are to be addressed during the meeting (cf. Chiappetta, 2020). The exact contents will, however, vary with the learning objectives, the learner characteristics and the progress the students have made (see sections 2.1.1 and 2.1.2). Depending on the learners’ age or proficiency level (e.g. in a foreign language), the sheet could also contain some question and answer prompts that would facilitate the conversation for the learners. For example, when the learners display their work in progress during the meeting, the teacher might ask them to first identify those parts that they feel confident about or proud of (cf. section 2.2.2). The learner or teacher could then use a highlighter tool (e.g. in green) or a symbol to mark those areas (Tovell, cited in Hattie & Clarke, 2019, p. 112). The passages that learners are insecure about would be marked in a different color or with another symbol, for instance a question mark. This procedure not only works for written products, e.g. in a shared cloud document, but also for any other work displayed on the screen if the drawing tools of the web conference are utilized.

In a more detailed manner, the teachers might also ask the learners to relate concrete examples from their own work to the different criteria of the assessment rubric (Chiappetta, 2020). To locate the relevant criteria and examples, a variety of visual and verbal cues can be used in the videoconference. Verbal signposts, e.g. announcing a page or paragraph number orally, ease navigation and comprehension (Samuels, 2006, pp. 82, 86). Likewise, screensharing facilitates the visual pinpointing of relevant passages, either via mouse-over movements, drawings or highlighter tools. For this, a locally saved assignment or a collaboratively shared cloud document can be utilized. Many of the tips that have been collected for written feedback in a text editor (section 3.1), collaborative cloud documents (section 3.3) or screencast feedback (section 3.13) would thus become relevant in addition to managing the specific tools of the videoconferencing software.
For instance, the whiteboard feature of the conferencing tool could be used (Samuels, 2006, p. 82) as a supplement or alternative to written comments in the note pad or text chat (Guichon et al., 2012, p. 189). Finally, learners should be made aware of the possibility to download the shared files as well as to copy and save the notes from the chat or pad. Beyond that, it might be necessary to repeat the most important aspects at the end, especially if a longer feedback conference was conducted. Throughout the conference, assessors should speak slowly enough if they have the impression that learners have trouble following their suggestions, e.g. due to delays in signal transmission (Samuels, 2006, p. 85). Regular comprehension checks during the discussion might be an elegant solution for ensuring mutual understanding. In that regard, the integrated polling feature of some videoconference platforms is helpful (see section 3.10). Finally, instructors could invite the students to approach them via email or in another meeting in case further questions emerge after the videoconference. Overall, the feedback exchanges can thus stretch over multiple turns and integrate several tools and modalities (Guichon et al., 2012, p. 187).

Every videoconference session should be framed by relational work, e.g. a greeting and progress discussion at the beginning and an invitation for further dialogue at the end (see section 2.1.4). The possibility for interpersonal exchanges is a major affordance of synchronous conferences and can help to develop relationships (cf. Hattie & Clarke, 2019, p. 110). They can turn out to be very time-effective because a lot can be said within a short time (Hattie & Clarke, 2019, p. 108). This, however, requires sufficient preparation not only from the teachers but also from the learners. Some tips will be sketched next.

3.14.7 Advice for students

To seize the benefits of videoconferencing for feedback purposes, learners need to be familiar with the different functions of the software as well as with setting up the necessary hardware (microphone, speakers, webcam). Beyond that, they should know how to navigate documents during screensharing, possibly also how to use the annotation tools of the program (e.g. text editor) in which the submitted assignment was created. Moreover, they might need to be familiar with additional functions of the LMS, such as the calendar or event scheduler, in order to enroll in meeting slots.

The learners should make sure to enter the videoconferencing session at the pre-determined time (Samuels, 2006, p. 98). If they are unable to access it (e.g. due to temporary technological problems), they need to know how they can reach the instructor for trouble-shooting or for postponing the meeting. To prepare for the meeting, it is advisable for the learners to note down open questions that they would like to discuss, e.g. by formulating a feedback request (see section 2.2.2). If the session is devoted to feedback on a submitted assignment, the students should have it available before the meeting starts (either in print or digitally). Moreover, they should try to attend the conference in a quiet location and turn off all disturbing apps and notifications on their devices. At the beginning of the meeting, they may ask the instructor whether a recording of the conference will be available for later perusal. During the conference, they should address their open questions.
Depending on the default settings of the videoconference, they might need to ask the assessor to give them the right to share their screen in order to display their questions or work in progress. Apart from that, they can type questions into the chat or text pad for further discussion with the teacher. In case the instructor shares the screen, learners need to be able to manage the different program windows or change their sizes. For instance, they might want to leave the full-screen mode of the screensharing in Zoom in order to have more space on the screen for taking notes in their own documents. Also, the window of the webcam video could be minimized if non-verbal cues are unnecessary for the discussion. Throughout the meeting, learners should try to clarify all open questions and ensure their understanding of the feedback comments. Before leaving the conference, they should download all shared files and save the contents of the text chat or note pad. Finally, it would be beneficial to talk about preferred ways of contacting the instructor should further uncertainties arise regarding the feedback contents.

For peer feedback approaches, e.g. in the group rooms of the videoconference application, students would additionally need skills in feedback provision (see section 2.2.3). Furthermore, they might need to be competent in various other feedback methods (e.g. text editor or cloud editor), depending on the assignment that is discussed. At the same time, this already hints at some possible combinations that will be synthesized next.

3.14.8 Combinations with other feedback methods

Feedback sessions in webmeetings are typically multimodal in nature, and several different feedback methods can thus be combined (see e.g. Monteiro, 2014). To exemplify, a submitted document could be screenshared while being annotated by the assessor and/or learner in an offline editor or cloud application (cf. Ahmed et al., 2021, p. 296; Samuels, 2006, pp. 90-92). Depending on the tools that are used, videoconferencing may incorporate feedback in text editors (section 3.1), collaborative documents (section 3.3), chats (section 3.7) and audio or talking-head video feedback (sections 3.11 and 3.12), for instance. In addition, live polls are a frequent component of videoconferences so that this feedback method could be integrated as well (section 3.10). Moreover, videoconferences can be recorded to make them permanently available (Seckman, 2018, p. 19; cf. Rottermond & Gabrion, 2021, p. 41; cf. participant in Ahmed et al., 2021, p. 304). The result would thus be a type of digitally recorded screencast feedback (see section 3.13).

Several videoconferencing applications, LMS and other programs consequently allow for a variety of feedback methods that could be used synergistically by all participants. Hattie and Clarke (2019), for instance, report on a project that utilized the digital whiteboard app “Explain Everything” (p. 168). During the sessions, the learners were “encouraged to live-record both written and verbal feedback to each other” to ease reflection about their strengths and struggles (Hattie & Clarke, 2019, p. 168). These recordings would then be made available to everybody so that they could revisit and reflect on them at later points as well (Hattie & Clarke, 2019, p. 168). Similarly, teachers are advised to reflect on their feedback practices, especially when using the multiple tools of LMS and web-conference systems.
To keep track of these practices and encourage deeper reflections, portfolios can fulfill important functions. The next section will therefore shift the focus to self-assessment by means of e-portfolios.

3.14.9 Questions/tasks (for teachers and students)

Knowledge and reflection questions:
(1) What tools do you typically use in web conferences?
(2) What are the disadvantages for students and teachers when using web-conferencing tools?
(3) Can videoconference feedback be utilized as an asynchronous feedback method?
(4) Is videoconference feedback suitable for peer feedback? How would you implement it in your own classroom?

Application questions/tasks:
(5) During (or after) your next web conference, please note down all the feedback processes that are going on and keep track of the tools that are used for them. For this, a table might be helpful: in one column, note the feedback purpose that was pursued; in the other columns, the persons that were involved and the feedback direction(s), the tools that were utilized, and the success of the implementation or the challenges that occurred. Do you think that for some purposes other tools should have been utilized? Try them out next time.

3.15 Feedback in e-portfolios

3.15.1 Definition and alternative terms/variants

E-portfolios have become a popular part of e-learning environments in the past years, especially since the 2000s (Farrell, 2020, pp. 7-8). They are suitable for formative assessment by teachers and peers as well as for self-assessment (notably self-reflection) by the learners. Given the plethora of work that has been done on e-portfolios, only those aspects that are relevant for feedback will be reviewed here.

E-portfolios is the short form of electronic portfolios, which have alternatively been called digital portfolios, virtual portfolios or web-based portfolios (Alawdat, 2013, p. 340; Farrell, 2020, p. 2). They are “personalized, web-based collections” of coursework or other activities (DiBiase, 2002, p. 2, cited by Alawdat, 2013, p. 340) that document learners’ progress, showcase samples of their work, include reflections and encourage the exchange of ideas (Lorenzo & Ittleson, 2005, quoted in Farrell, 2020, p. 9). In short, their purpose is to “collect, select, reflect and connect” (Hughes, 2008, p. 439, cited in Ellis, 2017, p. 41). Their contents are not restricted to written material, but can include images, sound, video and other multimedia materials (Abrami & Barrett, 2005, p. 2, cited by Alawdat, 2013, p. 340; Chaudhuri, 2017, p. 11; cf. the review by Farrell, 2020, p. 9).

Hence, feedback in e-portfolios can be manifold: Learners can use their collected materials as well as the checklists or reflective questions in the portfolios for self-feedback; they can share their portfolios with peers to exchange peer feedback; and they can obtain feedback from teachers and employers regarding the materials they have compiled and the reflections they have written. For this, comments can be given in various ways, e.g. through discussions in a face-to-face or electronic manner. Notably, the perspective of self-reflection and self-assessment is relevant because feedback from others does not need to be bound to the portfolio system itself.
However, students may likewise compile feedback portfolios in which they collect and organize the feedback they receive in order to set up plans for their further learning (e.g. the FEATS system by Winstone, 2019, quoted in Winstone & Carless, 2020, pp. 64-65).

3.15.2 Contexts of use, purposes and examples

A dominant theme in the e-portfolio literature is the capacity of e-portfolios to encourage reflection about the learning process (as reviewed by Farrell, 2020, p. 9). E-portfolios offer opportunities for reflective practice and self-monitoring (JISC, 2019b, p. 4), but may also foster several other skills, such as digital literacy through the very creation of the e-portfolios themselves:

Creating an e-portfolio involves skills essential for 21st century learning - organising and planning material, giving and receiving feedback, reflecting, selecting and arranging content to communicate with a particular audience in the most effective way. (JISC, 2019a, n.p.)

The purposes of compiling e-portfolios can be numerous and may differ across contexts. Abrami and Barrett (2005) distinguished between three main purposes, namely “process, showcase and assessment” (p. 2, cited by Farrell, 2020, p. 9). In terms of process, an e-portfolio shows a person’s development over time (Farrell, 2020, p. 9). It records their learning journey in a particular subject field and the evolution of their professional identity, and it reveals their achievements, for instance (JISC, 2019b, p. 4). At the same time, it may be utilized to showcase one’s own competencies (Farrell, 2020, p. 9), for example as part of a job application. Private and professional domains often intersect in that regard. While e-portfolios are reflective and thus rather personal, they are frequently shared with others for some kind of assessment purpose. To illustrate, the creation of e-portfolios is a common alternative assessment strategy that is necessary for course completion (JISC, 2019b, pp. 4, 8) or for evaluation by potential employers (Pegrum & Oakley, 2017, p. 23). The three main purposes thus overlap to some degree.

Due to their versatility, e-portfolios have been used in many disciplines, including vocational education (cf. the review by Lu, 2021a, p. 97). For example, they have commonly been utilized to demonstrate the learning gain from courses or apprenticeships (JISC, 2019b, p. 4), e.g. as part of teacher education or health education (Farrell, 2020, p. 10). They can assist the transition from school or higher education to the job market, but they may also “document continuous professional development activities for those already in the workplace” (as reviewed by Farrell, 2020, p. 10).

For language learners and prospective language teachers, the portfolios by the Council of Europe deserve special mention. They had originally been developed as PDF documents ready for print, but some parts are also available as interactive PDFs. Furthermore, the contents could be modified or loaded into e-learning systems to transform them into true e-portfolios. The European Language Portfolio (ELP) is targeted at language learners and helps them record and reflect on their language learning and cultural experiences at school and beyond (Council of Europe, 2000; https://www.coe.int/en/web/portfolio/home/).
It consists of three main components: the language passport (an overview of language proficiency), the language biography (a tool for planning, reflecting on and assessing the language learning process and progress) as well as the dossier (in which selected materials are collected that are referred to in the passport or biography). With regard to the European plurilingual policy and the general importance of language competencies for professional life, the ELP can be shown to employers.

The European Portfolio for Student Teachers of Languages (EPOSTL) (Newby et al., 2007; Newby, Fenner, & Jones, 2011), in turn, shifts the perspective to language teachers. It can be utilized from early stages onwards, i.e. at the beginning of teacher education, and may accompany pre- and in-service teachers throughout their teaching journey. A key aim is to encourage reflection about their beliefs, knowledge and practices in order to further improve their teaching. Users can chart their progress and discuss their reflections with peers, mentors and teacher educators (Newby et al., 2007, p. 5). This would also be useful for tracing the development of their digital competences, e.g. their digital feedback skills.

Similarly, other portfolios are not only used for self-feedback, but frequently also for external feedback by teachers, employers or peers (for the latter see e.g. Andrade & Zeigner, 2021; Chang, C.-C., Tseng, Chou, & Chen, 2011; Chang, C.-C., Tseng, K.-H., & Lou, S.-J., 2012; Kusuma, Mahayanti, Gunawan, Rachman, & Pratiwi, 2021; Lu, 2021b; cf. Carless & Boud, 2018, p. 1322). Even though they were typically designed as individual portfolios, another option would be to create team e-portfolios and to reflect on teamwork skills, as Andrade and Zeigner (2021, pp. 41-42) have suggested. The purposes of e-portfolios are thus various. Nevertheless, it is possible to condense certain advantages and limitations, as will be done next.

3.15.3 Advantages of the feedback method

Advantages of e-portfolios have been found for the cognitive, affective and especially the metacognitive dimensions of learning. Cognitively, learning gains were observed, e.g. regarding the different skill areas of language learning (Alawdat, 2013, p. 344), but also for other professional competencies. In that respect, it was argued that e-portfolios can help develop multimodal skills and digital literacies (Pegrum & Oakley, 2017, p. 28; see also JISC, 2019b, pp. 13, 17), which are becoming increasingly important in today’s world. Indeed, during the compilation of e-portfolios, learners may draw on various modes and media as well as on different genres. Apart from different text types, learners could incorporate images, hyperlinks, videos and sound files (Chaudhuri, 2017, pp. 5, 11).

At the same time, the collected materials and accompanying reflective texts provide insight into students’ progress over time (Farrell, 2020, p. 9) and document their achievements up to a particular point in time (JISC, 2019b, p. 8). This is not only insightful for assessors and employers, but also for the learners themselves, as they gain “a more holistic sense of their learning journeys” (Martin, 2013, cited in Pegrum & Oakley, 2017, p. 22). They may even discover the connections between different elements of their learning process or between different courses in a module (cf. the review by Farrell, 2020, p. 10).
This process can be supported by learning analytics dashboards (LADs) that synthesize and chart students’ learning journey (Sedrakyan, Malmberg, Verbert, Järvelä, & Kirschner, 2020; Winstone & Carless, 2020, p. 63).

The processes of selecting, organizing and reflecting on their own work help build students’ metacognitive thinking (cf. Ciesielkiewicz, 2019, p. 653; Lu, 2021b, p. 169; Modise, 2021, p. 285; Oosterbaan et al., 2010, cited by Alhitty & Shatnawi, 2021, p. 3). They encourage self-reflection and self-awareness about their accomplishments and areas for improvement (Sharifi, Soleimani, & Jafarigohar, 2017, pp. 7-8). From this self-assessment, students may devise plans for their future learning, which fosters their self-directed learning skills further (cf. the review by Kiffer, Bertrand, Eneau, Gilliot, & Lameul, 2021, pp. 4-5). The utilization of e-portfolios can consequently facilitate learners’ active involvement in the learning process, their critical thinking and self-regulation (Ciesielkiewicz, 2019, p. 650; see also Chaudhuri, 2017, p. 5, and the reviews by Farrell, 2020, p. 10, and Lu, 2021b, p. 169).

Apart from the individual learning gain, there are social and affective benefits to the use of e-portfolios. Peers may collaborate while creating e-portfolios or by providing feedback to each other (Andrade & Zeigner, 2021; Farrell, 2020, p. 10; Lu, 2021b, pp. 169-174). The insights learners obtain from others’ e-portfolios, but also from their feedback comments, are considered valuable for confidence-building about their own abilities as well as for their further development (Lu, 2021b, pp. 170-174; cf. section 2.2.3). Especially on professional portfolio platforms, the feedback by peers may not only be helpful for personal development, but also for networking. Likewise, the regular implementation of peer feedback in educational environments qualifies students for the professional practice of peer feedback in their future job (Lu, 2021b, p. 171).

If students realize the value of e-portfolios, they are likely to be more motivated to use them and to improve their skills further (see the review by Ciesielkiewicz, 2019, p. 660). Seeing their accomplishments has a positive impact on their self-worth and could encourage additional investments in their learning success (cf. Kusuma et al., 2021, p. 360; Lu, 2021a, p. 99). E-portfolios may thus foster learners’ intrinsic motivation (Ciesielkiewicz, 2019, p. 660), but this might also be complemented by extrinsic rewards, e.g. digital badges (Farrell, 2020, p. 10). Simultaneously, these badges visualize learners’ capabilities to others (cf. JISC, 2019b, p. 8).

The accessibility of e-portfolios is another advantage, especially as compared to paper-based versions. Feedback can consequently be exchanged instantaneously (Chionidou-Moskofoglou, Doukakis, & Lappa, 2005, p. 230). Furthermore, a variety of hyper- and multimedia materials can be incorporated, allowing for easy access to external resources (cf. the review by Alawdat, 2013, p. 345). Certainly, this may also go along with some limitations, as will be reviewed below.

3.15.4 Limitations/disadvantages

Compared to the advantages, few limitations are cited in the e-portfolio literature. However, it is claimed that the actual learning gain from e-portfolios is still unclear (Alawdat, 2013, p. 341). Moreover, affective barriers are mentioned.
In particular, learners might be concerned about the confidentiality of their e-portfolios (Pegrum & Oakley, 2017, p. 31; Valdez, 2010, cited by Alawdat, 2013, p. 349). This concern stems from the dual role of e-portfolios as tools for personal reflection on the one hand and for educational assessment and professional evaluation on the other. In addition, students may worry about the privacy of the platform that is utilized, especially if it is external to their LMS. If the platform is publicly available, their reflections might not be as open as in a more private educational setting. On the other hand, if it is an institution-specific platform, the learners will probably be unable to access and continue their e-portfolios after graduation or course completion (cf. Farrell, 2020, p. 11).

Another concern is learners’ digital literacy, which is required for compiling e-portfolios (Alawdat, 2013, p. 349, based on Shephard & Bolliger, 2011). They might become frustrated about the time-consuming nature of e-portfolio creation, especially if they lack digital skills and if the e-portfolio only counts to a small extent towards their final grades (Alawdat, 2013, p. 342; cf. Chaudhuri, 2017, p. 12; Lu, 2021b, p. 174). E-portfolios should therefore be granted sufficient recognition in the calculation of the grades, e.g. by acknowledging learners’ self-assessment skills as well as their progress over time. This, however, might be time-intensive for the teachers. Accordingly, several challenges arise when e-portfolios are used. Some suggestions for alleviating them will be offered below.

3.15.5 Required equipment

To create and access e-portfolios, a desktop computer, laptop, tablet PC or even a mobile phone can be used (Pegrum & Oakley, 2017, pp. 30-31). Beyond that, portfolio tools, interactive websites or at least PDF forms (such as for the European Language Portfolio by the Council of Europe, 2000) are required.

One common e-portfolio platform in educational environments is Mahara (https://mahara.org/; e.g. Farrell, 2020, p. 8). It can be incorporated into the LMS Moodle. Likewise, other LMS often have an integrated portfolio tool, for example Sakai (https://sakai.screenstepslive.com/; Hodges, 2020). Several free but also commercial alternatives exist (e.g. Winchell, 2018), such as Adobe Portfolio (https://portfolio.adobe.com/) or Portfolium (https://portfolium.com/; Winchell, 2018). Moreover, website services, e.g. Weebly (https://www.weebly.com/) or Google Sites (https://sites.google.com/new), but also wikis can be used for that purpose (see section 3.5). Some even argue that professional networking sites resemble personalized e-portfolios, including LinkedIn (https://www.linkedin.com/; Winchell, 2018) and ResearchGate (https://www.researchgate.net/). A few platform-independent suggestions for the implementation of e-portfolios for self-feedback and external feedback will be given next.

3.15.6 Implementation (how to)

In order to utilize e-portfolios effectively, prior training and maybe even continuous support appear important (O’Loghlen, 2015, quoted by Lu, 2021b, p. 174; cf. Chionidou-Moskofoglou et al., 2005, p. 231). This may help to “reduce students’ frustrations” (Lu, 2021b, p. 175) and motivate them to use e-portfolios (Pegrum & Oakley, 2017, p. 26). Apart from the students, the staff also needs familiarization with this tool (Cheng & Chau, 2013, in Ciesielkiewicz, 2019, pp. 653-654, 661).
Next, the goals of and reasons for using an e-portfolio should be clearly defined (Riedinger, 2006, cited in Alhitty & Shatnawi, 2021, p. 3; see also JISC, 2019b, p. 17). Teachers may explain the many purposes e-portfolios can serve so that students might be more willing to continue using them beyond a course or module for their personal and professional development (Ciesielkiewicz, 2019, pp. 660-661). At any rate, the anticipated learning outcomes should be determined and standards for attaining them should be specified (Chaudhuri, 2017, p. 11). As Winchell (2018) put it, “it’s essential to have a vision for what belongs in the portfolio and quality standards for each artifact” (n.p.).

A general recommendation is to provide an overall structure or template for the e-portfolio (Chaudhuri, 2017, p. 13). Beyond that, teachers may offer additional resources (Lu, 2021b, p. 175), e.g. guidelines for the kinds of materials the e-portfolio might include (Chaudhuri, 2017, pp. 11, 13). As Modise (2021) explained, “templates with guidelines and rubrics can be designed to help students select evidence and reflect on how evidence is connected to the identified goals and objectives” (p. 293). Eventually, the students should present the collected pieces in a structured manner, with the connections between the individual parts becoming clear (Cambridge, 2010, p. 136, quoted by Alawdat, 2013, p. 340; Modise, 2021, p. 293). For this, teachers could offer a checklist or rubrics in order to facilitate learners’ self-feedback, i.e. feedback to themselves. In addition, rubrics are essential for assessment by the teacher (Chaudhuri, 2017, p. 15) or peers (for a sample rubric see Appendix B by Chaudhuri & Cabau, 2017, pp. 205-207; see also section 2.1.2). Moreover, it is commonly recommended to specify an approximate or a minimum number of materials for inclusion in the e-portfolio (Chaudhuri, 2017, pp. 11-13). Especially when learners use an e-portfolio for the first time, detailed explanations and maybe even explicit prompts are considered beneficial (Chaudhuri, 2017, p. 13). This scaffolding might be gradually removed over time (Chaudhuri, 2017, p. 13).

Depending on the purpose of the e-portfolio, an adequate platform should be chosen (Chaudhuri, 2017, p. 17; see section 3.15.5). Preferably, this platform is intuitive to use (“ease of use”), allows for content sharing and interaction with others (“shareability”) and can be consulted beyond the confines of a particular course or study program (“permanence”) (Winchell, 2018). Apart from these three central factors, the platform could offer “extra features” (Winchell, 2018, n.p.), such as an illustration of learning pathways and digital certification.

As soon as the learners are familiar with the purposes of the e-portfolio and the platform, they start to produce work or collect materials and select those that they want to incorporate. The resources can be varied, ranging from different types of text to multimedia materials and hyperlinks (Chaudhuri, 2017, pp. 5, 11; Sakai, 2022). To share the e-portfolio with others, they need to determine the audience, e.g. the teacher, a potential employer or fellow students (cf. Sakai, 2022). They should invite feedback from them since this is often perceived as motivating and because it can increase the quality of the portfolio (Klamper & Köhler, 2015, cited by Ciesielkiewicz, 2019, p. 654).
Generally, it is recommended to exchange peer feedback first before teachers or employers comment on the portfolio (cf. section 2.2.3). Indeed, peer review has been found helpful in prior studies (cf. Kusuma et al., 2021, p. 354; Lu, 2021b, pp. 170-171), and several platforms even have integrated tools for it, e.g. Moodle or Sakai (Hodges, 2020). Feedback communication may take place via written messages in forums, chats or mails, but also in audio- or videoconferences (see the foregoing chapters).

Likewise, teachers should provide scaffolding as needed to assist the students in their self-regulated learning process (Chionidou-Moskofoglou et al., 2005, pp. 229-230). However, “[f]ew publications provide details on the scaffolds or instructional prompts associated with the portfolio” (Panke, 2014, p. 1535). For example, teachers might encourage deeper reflections, but also clarify potential misconceptions or errors (Chionidou-Moskofoglou et al., 2005, pp. 229-230). Such “instructional scaffolding and peer feedback [are seen] as the two main factors to support learning and reflection in an e-portfolio approach” (Kiffer et al., 2021, p. 3, based on Panke, 2014). Certainly, the learners need to engage with the comments they receive (Sakai, 2022) and continue working on their e-portfolios. In addition, the provision of self-assessment quizzes can be useful to help learners reflect on their learning and decide on the next steps they would like to take (JISC, 2019b, p. 19, reporting on the e-portfolio toolkit at the University of Plymouth).

3.15.7 Advice for students

For students, self-reflection is an important prerequisite for the creation of e-portfolios. It starts with the selection of suitable materials and with giving reasons for these choices. In writing, they may back up their perceptions with relevant literature, which could prompt deeper reflections. Similarly, the rubrics, checklists or quizzes provided by the teacher do not only facilitate the compilation of the e-portfolio, but also serve as tools for self-assessment. Beyond that, the students should seek external feedback regularly (from peers, the teacher or potential employers) and implement the comments whenever appropriate.

To keep track of the numerous pieces of feedback they receive over time, learners should be encouraged to create feedback portfolios (cf. Winstone & Carless, 2020, p. 64). The portfolio system helps them to assemble, compare and organize external feedback that has been produced in various media formats, such as video, audio or text. They can utilize the portfolio tools to extract the key points from the feedback messages, to derive specific points for action and to search for additional resources which would be helpful for their further learning journey. Eventually, the collected feedback and materials assist them in tracing their progress over time and engage them more deeply in self-monitoring and self-regulated learning.

3.15.8 Combinations with other feedback methods

Since e-portfolios are compatible with a variety of file formats and enable feedback in different directions (peer, individual, instructor), they can be utilized together with various digital feedback methods to support the development of student learning. Likewise, e-portfolios can turn into repositories of the feedback that students have received for previous assignments. Students can actively engage with this feedback by exploiting the affordances of e-portfolios, i.e. by collecting, organizing, connecting and reflecting on their contents.
If e-portfolios are integrated into an LMS or operated in parallel to it, there are multiple possibilities for the feedback exchanges that have been outlined in this book. They serve to complement the self-assessment options of e-portfolios through peer and teacher feedback.

3.15.9 Questions/tasks (for teachers and students)

Knowledge and reflection questions:
(1) Have you ever created an e-portfolio in one of your classes?
a. If so, what was its purpose?
b. Did you enjoy this process?
c. What was challenging?
d. What would you do differently in the future?
(2) What types of feedback are possible when using e-portfolios?

Application questions/tasks:
(3) As a (pre-service) teacher, you have started to compile an e-portfolio about your teaching experiences. However, you notice that you have not yet written about the feedback processes that are going on in the classroom. During your next sessions (own teaching or classroom observation), you therefore pay particular attention to all feedback processes that are taking place. You might ask a colleague or student to help you take note of them.
a. What do you observe?
b. What digital feedback methods do you use?
c. What do you want to try out next?

3.16 Questions/tasks (for teachers and students) after having read this chapter

Knowledge and reflection questions:
(1) Explain why (or why not) you would use the below-mentioned methods for the following purposes and assignment types. Think of concrete learner groups and learning environments. Suggest alternative methods whenever suitable.
a. Screencast feedback on learners’ presentations (e.g. PowerPoint, Prezi, Genial.ly),
b. Email feedback on learners’ video-recorded oral discussions,
c. Audio feedback on learners’ handcrafted arts products,
d. Video feedback on learners’ argumentative essays,
e. Written electronic feedback on mathematical solution paths,
f. Screencast feedback on learners’ video-taped sports performances,
g. Live feedback in a videoconference on learners’ e-portfolios,
h. Chat feedback as in-process support during learners’ collaborative writing tasks,
i. Written electronic feedback in a collaborative cloud document on learners’ draft of a blog post,
j. Audio feedback on learners’ interview transcripts,
k. Screencast feedback on a learner’s solution of a picture-mapping task,
l. Live feedback in a videoconference on a learner’s performance in a class test,
m. …

Application questions/tasks:
(2) Get together in small groups of three to four persons. Each group receives an electronic file of a learner’s task performance. Possible assignments are written texts (e.g. a document in a text editor or in a collaborative cloud), a video-taped conversation among students, a student-created website, a PowerPoint presentation or another multimedia file.
a. Each member of your group focuses on the same file, but provides feedback in a different modality. One person uses written feedback, another audio feedback and the third talking-head video feedback. If there are four members in your group, the fourth person utilizes another digital feedback method of your choice, e.g. screencast feedback.
b. Grab your smartphone, tablet or laptop and start the required app for giving feedback in the written, audio or video modality. Support each other if you need help with the recording (e.g. by holding the device in a stable position while recording another person’s video).
You may also leave the room in order to find a quieter place. However, don’t worry too much about making a perfect recording - just give it a first try and then reflect on your experience.
c. As soon as every group member has finished their feedback, talk to each other about the steps you have taken.
d. Read, listen to and watch the feedback in the different modalities and compare the pros and cons of each method.
e. Be ready to present your feedback method and results to the others in class.
f. Discuss the affordances and limitations as well as possible modifications for their future use.

4 Discussion and outlook

In the foregoing chapter, different digital feedback methods were presented. This chapter will discuss several possible combinations of these methods. Furthermore, it will extend the theoretical discussion from chapter 2 by stressing the importance of dynamic digital feedback literacies and by deriving recommendations for future research and teaching practice.

Questions/Tasks before reading the chapter

Knowledge and reflection questions:
(1) In chapter 3, you were introduced to several digital feedback methods, and some possible combinations were suggested very briefly.
a. What feedback combinations do you still remember? Note them down.
b. What combinations have you already tried out yourself?
c. What other combinations come to your mind that have not yet been addressed?
(2) At your school/workplace/university, do you talk about (digital) feedback methods?
a. Are there any workshops, training events or support measures offered to you as part of your professional development?
b. Is feedback a topic that you discuss with your colleagues in informal discussions or team events?

4.1 Combinations of feedback methods

The review of digital feedback methods in chapter 3 has shown that each one has different advantages but also certain limitations. Teachers therefore need to balance the pros and cons in order to identify the best possible method for a particular learning goal, assignment and learner group (e.g. Hartung, 2017, p. 206; McCarthy, 2015, p. 166). In addition, combinations of methods are feasible. However, there are hardly any studies that specifically examined combinations of feedback methods, whether of several digital feedback methods or of digital and analog feedback methods. Mostly, the need for combinations was mentioned in the discussion sections of papers that investigated a single method or contrasted two methods with one another. Ideally, though, different feedback methods are combined in ways that produce “a complementary effect” (Ryan et al., 2019, p. 1508), i.e. in which the benefits of one mode help to reduce or overcome the limitations of another. Indeed, in Ryan et al.’s (2019) study, all “[s]tudents who received multiple forms of feedback had consistently higher mean ranks” than those who obtained feedback via a single mode only (p. 1517).

At the end of each methodological chapter, some potential combinations had therefore been suggested. The present chapter will look at the different possibilities more synergistically and discuss combinations of different digital feedback methods, also in conjunction with analog feedback. Generally speaking, two or more methods can often be combined easily with each other. Some straightforward combinations are AWE and feedback in offline text editors or online cloud documents.
The written feedback process may additionally be enriched by audio and video comments or might directly be recorded as a screencast. Moreover, live chats or videoconferences can be conducted simultaneously or subsequently. Certainly, the follow-up discussions may also take place on-site in a face-to-face meeting. These aspects will be dealt with in more detail below.

4.1.1 Combinations with written feedback to address higher- and lower-level aspects

Most often, the combinations include some kind of written (electronic) feedback because it enables easy localization and navigation as compared to, e.g., audio or video files. This is particularly relevant for written assignments. Hence, it may solve the problem of video or audio feedback’s “separation from the work being assessed” (Bond, 2009, p. 2). The assessor can insert corrective feedback in a text editor, such as Microsoft Word (section 3.1) or Google Docs (section 3.3), and then provide explanatory feedback through audio or video (e.g. Borup, 2021). To facilitate the localization for the students, audio (and video) comments can be connected to specific paragraphs of a text or PDF file (see section 3.11).

In that respect, the combination of text editor feedback and screencast feedback (section 3.13) is a common one. Even in studies that made use of SCFB only, the students often asked for an additional written summary for easier review (e.g. Kerr et al., 2016, p. 16). Usually, assessors type some notes and corrections into the submitted text draft anyway before starting an SCFB recording. Thus, it would be no extra burden to disseminate the annotated document back to the students, together with the SCFB. As Elola and Oskoz (2016) put it,

the ideal type of feedback would combine both tools. Learners would benefit from the immediacy of […] Word […] while also enjoying the benefits offered by more lengthy and deeper explanations on content, structure, and organization from hearing the instructor’s remarks. (p. 71)

Grigoryan (2017a) conducted an experiment to see whether the combination of text-based commentary (“using the Insert Comment and Track Changes functions in Word”) and audio-visual feedback (“using screen capture software to create feedback videos”) was more effective than using text editor feedback alone (p. 90). The participating students rated the overall effectiveness of the combined feedback higher than the text-only groups, but this difference was not significant (Grigoryan, 2017a, p. 99). However, there was a significant difference concerning personalization, with the combined feedback being perceived as more personal (Grigoryan, 2017a, p. 100). Moreover, the students found it clearer and more helpful (p. 104) and would therefore favor such a combination in the future as well (p. 102). The greatest benefit from SCFB could thus be reached if not only the screencast itself is sent to the learners, but also the reviewed electronic document the screencast is referring to.

Since the additional provision of written feedback might cost time, automatic spelling and grammar checking could be run for lower-level errors. For example, Kim (2018) combined the screencasting process with the use of an online grammar checker (Grammarly). This way, students did not only receive global feedback on organization and content, but also local feedback on mechanics and grammar (Kim, 2018, p. 47). The procedure turned out to be a significant time-saver for the instructor (Kim, 2018, p. 47).
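To give an impression of how such automated local feedback can be generated programmatically, the following minimal sketch uses the open-source language_tool_python package, which wraps the LanguageTool checker. This is an illustrative choice on our part, not the tool used by Kim (2018); note that the package downloads the LanguageTool engine on first run and requires Java.

    # Minimal sketch: flagging local (grammar/spelling) issues in a draft,
    # assuming the language_tool_python package
    # (pip install language-tool-python).
    import language_tool_python

    tool = language_tool_python.LanguageTool("en-US")
    draft = "This are a first draft of my essay about feedback method."

    for match in tool.check(draft):
        # Each match carries the rule that fired, an explanation
        # and replacement suggestions.
        print(f"{match.ruleId}: {match.message}")
        print(f"  context: {match.context}")
        print(f"  suggestions: {', '.join(match.replacements[:3])}")

The assessor could paste such a list of local issues into the accompanying email or written summary, reserving the screencast itself for global comments on content and organization.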
Many text editors are already equipped with simple AWE functions (cf. AbuSeileek, 2013b) so that no additional plug-in or program needs to be installed. However, external applications might work more reliably, as they usually analyze more error types than the pre-installed spelling and grammar checkers of online or offline text editors (see section 3.2). While the direct utilization of AWE in text editors appears to be very common, there are also many tools that work as a browser extension (e.g. Grammarly for Chrome). Consequently, AWE could be applied to all other kinds of written feedback as well. Examples include feedback in blogs, wikis, forums, emails and cloud documents. What is more, AWE can be used in chat feedback, since the majority of smartphones and tablet PCs have built-in grammar and spelling checkers. AWE can help focus on local issues and thus compensate for the limitations of a few other methods. For instance, it has often been criticized that blog feedback only allows for general or global feedback, while making specific local feedback very difficult (Dippold, 2009; see section 3.6). Assessors may therefore consider combining blog comments with AWE applied to the running text of the blog. Another possibility would be to compose the blog contents in a cloud document (section 3.3) or wiki (section 3.5) first and to provide feedback on that draft before uploading the revised version onto the blog (Dippold, 2009, p. 32). Alternatively, browser extensions (such as Hypothes.is; see section 3.3.5) can be installed that facilitate website annotations, e.g. of blogs. Other ways to enrich the contents of blog comments are as follows: The feedback providers may record a video or audio file, upload it to a cloud server or video-/podcasting platform and then share the link in the blog comment. Likewise, hyperlinks to websites with further explanations and advice can be included in the comment. Similarly, forum posts enable file uploads so that feedback would not be restricted to text only (section 3.4). Here, for example, files with feedback in a text editor could be uploaded easily, but also other multimedia attachments (e.g. video or audio). Another option would be to write a blog (or forum) post first and to conduct a chat or videoconference afterwards to discuss the feedback contents in more elaborate ways (see section 4.1.3 below). With wikis, a direct integration of multimedia files as feedback sources would be possible. Users may upload photos, audio, video and other files (Kemp et al., 2019, p. 149) and thus provide feedback in a variety of ways. The same is true of e-portfolios (see section 3.15) and several cloud applications. To exemplify, Padlet allows users to create an audio or screencast recording on the fly or to upload multimedia files from the hard drive (section 3.3). Likewise, hyperlinks to external resources can be incorporated or simple written comments can be inserted. Due to the manifold possibilities, however, learners might easily lose track of the various comments and their connections to the originally submitted tasks. To establish direct connections to specific parts of written assignments, it could therefore be helpful to integrate external resources by copying the hyperlinks to e.g. recorded feedback into a comment bubble in the text document itself (see section 3.1.6).
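To keep learners oriented despite such scattered multimodal comments, very little tooling is needed. The following sketch assembles the links to all feedback artifacts for one submission into a single overview text that could be pasted into a comment bubble, a blog comment or an email; the labels and URLs are invented purely for illustration and do not refer to any of the platforms named above.

```python
def feedback_overview(artifacts: list) -> str:
    """Render a numbered, linkable overview of all feedback artifacts
    (each artifact is a (label, url) pair) for one submission."""
    lines = ["Feedback overview:"]
    for i, (label, url) in enumerate(artifacts, start=1):
        lines.append(f"{i}. {label}: {url}")
    return "\n".join(lines)

print(feedback_overview([
    ("Annotated draft with tracked changes", "https://example.com/doc/123"),
    ("Screencast on structure (refers to paragraphs 2-4)", "https://example.com/video/456"),
    ("Audio comment on the conclusion", "https://example.com/audio/789"),
]))
```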
Embedding such links would be another possibility to complement written feedback with talking-head video feedback or screencast feedback, for instance. Alternatively, the links could simply be disseminated via the LMS or via email (section 3.8). Instead of annotating the submitted assignment, though, written feedback in an accompanying email might sometimes be enough to complement recorded feedback (audio, video or screencast feedback). As Heimbürger (2018) suggested, an email with bullet points highlighting the most important points from the recorded (audio) message could be fully sufficient (p. 114). In an LMS, it is frequently possible to insert written remarks into a submitted assignment or to upload a file, which could be an audio or video recording, for instance. However, email feedback would also be useful to complement text editor feedback in order to provide an orientation for the learners. Quite often, the number of comments and their sequential appearance in word processing programs are disadvantageous (see section 3.1.4), so that a summary of the most crucial aspects in an email would be beneficial. Email can thus constitute a complement to many different types of electronic feedback, e.g. audio feedback, video feedback, screencast feedback, text editor feedback etc. It can also be combined with synchronous feedback in cloud documents, wikis and chats. Synchronous feedback in this case could be utilized to communicate immediate and short responses, while email feedback might provide asynchronous and more detailed feedback for later perusal. Email thus appears to be a versatile tool for combining several feedback methods, not just text file attachments.

4.1.2 Combinations with synchronous feedback to foster feedback dialogues

Collaborative tools (e.g. Google Docs or electronic mindmaps; see section 3.3) as well as other synchronous applications enable interactive in-process support and promote follow-up exchanges. For example, chatting (section 3.7) or web meetings (section 3.14) could occur in parallel to collaborative or individual work. Furthermore, they could be conducted as a follow-up to any other feedback method, especially asynchronous feedback (e.g. written feedback or recorded feedback). The present section will discuss these possibilities in more detail. While learners work on a task, e.g. in text editors (cf. Ene & Upton, 2018, p. 4), cloud documents or wikis, chats can be a useful supplement to discuss procedural aspects and clarify open questions (cf. section 3.7.8). To exemplify, Wigham and Chanier (2013) studied the combination of audio and text chat feedback in the virtual world of “Second Life”. The architecture students were enrolled in a content and language integrated learning (CLIL) course and received corrective feedback in the text chats regarding their oral contributions. The students seemed to be able to manage both modalities, i.e. monitoring the comments in the chat feedback and responding to them in the oral modality (Wigham & Chanier, 2013, p. 277). The tools of videoconference systems can be utilized in a similar manner, i.e. teachers or peers typing corrective feedback (notably focus on form) in the text chat while the students are talking (see Guichon et al., 2012, in sections 3.7 and 3.14). Likewise, many cloud applications, such as Google Docs, offer a chat feature in addition to simultaneous typing in a document (section 3.3).
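Such in-process chat feedback can also be lightly automated. Many course chat tools accept “incoming webhook” URLs to which simple JSON messages can be posted; the sketch below assumes a Slack-style webhook (which expects a payload of the form {"text": ...}) has been configured for the course channel, and the URL shown is a placeholder.

```python
import requests

# Placeholder: a webhook URL created in the course chat tool (Slack-style
# incoming webhooks accept a JSON payload of the form {"text": ...}).
WEBHOOK_URL = "https://hooks.example.com/services/COURSE/CHANNEL/TOKEN"

def post_chat_feedback(student: str, comment: str) -> None:
    """Post a short synchronous feedback note into the course chat."""
    payload = {"text": f"@{student} Feedback on your current draft: {comment}"}
    requests.post(WEBHOOK_URL, json=payload, timeout=10).raise_for_status()

post_chat_feedback("jana", "Watch subject-verb agreement in paragraph 2.")
```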
To enhance feedback dialogues, Wood (2019; 2021; 2022) stressed the particular affordances of cloud documents as compared to offline text editors. In that respect, he focused on enhancing the dialogic potential of SCFB (section 3.13), which was used as a supplement. He explained that

students can respond to and question screencast comments through cloud application mediated dialogues. This combination may help preserve the conversational feeling of screencast feedback while providing a convenient mechanism for further discussion of the feedback. (Wood, 2021, p. 10)

The idea of feedback dialogues is thus central to Wood’s suggestion, but also to feedback in general (see sections 2.1.7 and 2.2). In that regard, peer exchanges as well as teacher-student conversations fulfill important functions (Wood, 2019, pp. 41, 114-116, 125; 2021, pp. 10-12). Indeed, “dialogism” turned out to be most important in his study, both regarding peer and teacher feedback practices (Wood, 2019, p. 77). After some initial concerns (Wood, 2019, pp. 116-117), the students realized the benefits of peer feedback and engaged in that practice in manifold ways. They clarified their understanding through Google comments, gave suggestions, embedded external sources and resolved problems (Wood, 2019, pp. 79, 82). Since in this open space specific persons can be directly addressed in the comments through tagging, the students often tagged their teacher, who was able to reply instantaneously (Wood, 2019, p. 85). Wood (2019) presumed that this affordance of the Google comments lowered the threshold or burden that might have otherwise prevented students from asking questions and engaging in further dialogues (p. 89). Even the SCFB contents were integrated into the comments for clarification (Wood, 2019, pp. 113-114). Wood’s cloud-based feedback approach hence encouraged feedback dialogues and negotiations of various kinds. Likewise, McDowell suggested implementing an asynchronous dialogic “Video-Feedback Loop (VFL)” that consists of learner-to-instructor, instructor-to-learner and learner-to-instructor screencasts (2020b, p. 21; 2020c, p. 131). To start the video feedback loop, the students showed their work in progress, highlighted their problem areas and hence proactively formulated a request for feedback (McDowell, 2020b, p. 21; 2020c, pp. 135, 144; cf. section 2.2.2). Afterwards, the instructor provided SCFB by stopping the students’ video at crucial points and pointing out potential solutions (McDowell, 2020c, p. 133). The students were then requested to continue their work and to provide “video evidence of how the feedback had been used, and to confirm whether these actions had solved the problem” (McDowell, 2020b, pp. 21-22; cf. 2020c, p. 133; cf. section 2.2.4). McDowell’s study is thus a rare example of how SCFB can be implemented systematically and profitably, especially for learners with special needs. More generally, such a dialogic approach fosters learners’ engagement, reflexivity and autonomy to a much greater extent than one-directional feedback alone (cf. McDowell, 2020b; 2020c, p. 152). In contrast to Wood (2019; 2021; 2022) and McDowell (2020b; 2020c), many studies on digital feedback did not foreground the dialogic and cyclical nature of the feedback process. Rather, the process seemed to end with the provision of feedback from the teacher to the students.
In fact, however, feedback can and should occur in multiple directions, with the learners’ perspective at the center of attention (see section 2.2). Any method can thus be used by everyone who is part of the learning environment. Hence, students might create audio, video and screencast recordings for their teachers or fellow students; they could create live polls (ARS) and seek or exchange feedback in many other ways. The stereotypical feedback directions are sometimes even reflected in the terminology that is used, such as “student response systems” for live polls (ARS) that elicit learners’ knowledge or views on a topic (section 3.10). Also, there does not seem to be much literature that suggests combinations of ARS with other feedback methods. The same is true of survey feedback (section 3.9), which appears to end with the students submitting their answers. Follow-up feedback exchanges would, however, be crucial, either during open discussions in the classroom or in more anonymous ways. For example, teachers might set up a text pad with feedback questions to which every student can respond anonymously at any time as the course evolves (see section 3.3 on collaborative documents). Moreover, live exchanges, e.g. via videoconferences (section 3.14), can be conducted at any point. In those meetings, students could ask for feedback and raise open questions, engage in peer feedback as well as obtain teacher feedback. Likewise, videoconferences or face-to-face meetings have been proposed as a follow-up to feedback that was provided in other, notably asynchronous ways. This combination will therefore be discussed in the next section.

4.1.3 Combinations of digital and analog feedback methods for follow-up exchanges

The most common types of follow-up exchanges are individual consultations or classroom discussions that can take place online or on-site. For example, after the reception of recorded audio feedback, students may request a “follow up meeting with [the] lecturer to discuss” their understanding (Carruthers et al., 2014, p. 9; cf. Lee & Bailey, 2016, p. 147, for video feedback). Similarly, the participants in McCartan and Short’s (2020) project considered SCFB a “catalyst to more beneficial face-to-face discussions” (p. 22), and Kerr et al. (2016) argued that a face-to-face meeting could help clarify open questions from the recorded feedback (p. 16). Moreover, the learners in Rahman et al.’s (2014) research reported that receiving SCFB readily encouraged discussion with their peers (pp. 966, 971-972; see also Orlando, 2016, p. 160). The delivery of SCFB as a “one-sided conference” (Vincelette & Bostic, 2013, p. 264; section 3.13.4) should therefore rather be seen as part of a wider dialogic exchange. The same is true of any other method: each should be regarded as part of a wider learning journey. Some learners could, however, be unaware of follow-up options or hesitant about engaging in further discussions and seeking clarifications (Vo, 2017, pp. 95-96). To stimulate engagement and facilitate follow-up discussions, learners might be requested to fill in a response sheet in which they note down the insights they have gained from the feedback as well as the concrete steps they plan to take to improve their performance (see section 2.2.4 for details; cf. Martínez-Arboleda, 2018, p. 37; Stannard, 2019, p. 68).
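What such a digital response sheet might record can be sketched in a few lines of code; the field names below are merely one possible design, not an established template from the literature.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FeedbackResponseSheet:
    """One learner's structured reaction to a piece of received feedback."""
    assignment: str
    feedback_source: str                   # e.g. "teacher screencast", "peer comment"
    key_insights: list = field(default_factory=list)
    planned_actions: list = field(default_factory=list)
    open_questions: list = field(default_factory=list)
    created: date = field(default_factory=date.today)

sheet = FeedbackResponseSheet(
    assignment="Essay draft 2",
    feedback_source="teacher screencast",
    key_insights=["The thesis statement is still too broad"],
    planned_actions=["Narrow the thesis", "Revise the topic sentences"],
    open_questions=["Is the new outline coherent?"],
)
print(sheet)
```

Collected over a course, such records could also feed into a feedback portfolio (see below).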
To fill in such a sheet, they could utilize a paper printout or an electronic text document, but they might likewise resort to other digital applications. To exemplify, they could organize helpful resources and set up an action plan or checklist on Google Keep (cf. Rottermond & Gabrion, 2021, p. 41) or Padlet. Similarly, learners can exploit e-portfolio functionalities to create a feedback portfolio that helps them keep track of all the feedback they receive as well as to reflect on it in a thorough and structured manner (see section 3.15). Furthermore, portfolios not only foster self-assessment skills, but may also serve as a foundation for follow-up feedback dialogues. In cases where offline face-to-face feedback is not possible, videoconferencing can be utilized to complement any other digital feedback method. For example, Fuccio (2014) as well as Saeed and Al Qunayeer (2020) suggested combining Google Docs with student conferencing to discuss the feedback. In fact, modern videoconference platforms directly enable various combinations of feedback methods via text chats, talking-head video, audio, text pads, ARS, file uploads and screen sharing, as explained in section 3.14. Moreover, the assignment that was annotated during a videoconference can be sent to the students after the meeting if it is not yet available as a cloud document during the meeting (see section 3.3). Furthermore, assessors may record their live feedback in the videoconference via screen recording and subsequently share this SCFB file with the students. Not only recorded videoconferences, but also asynchronous SCFB can contain the assessor’s webcam (talking head) in addition to the screen recording (e.g. Mayhew, 2017; see section 3.13). However, research is still needed on the impact of including a webcam in SCFB. On the one hand, the webcam video might be too small to have an effect, e.g. regarding relationship-building (Borup, 2021). On the other hand, care must be taken that the talking head does not cover important parts of the screen recording (Borup, 2021). A pilot study by Schneider et al. (2022) is currently exploring the effects of different webcam sizes on the feedback recipients. Further research is, however, required to derive recommendations for teaching practice. Beyond videoconferencing, other digital tools can be exploited for follow-up feedback discussions. For example, in cloud editors, such as Google Docs, learners and teachers can engage in further conversations through the commenting and chat functions. However, other chat applications (e.g. in the LMS) or messenger apps could be utilized as well (section 3.7). Many messengers nowadays enable the exchange of various media so that text-based, audio and video feedback can be combined quite easily. Moreover, course chats or Q&A forums on the LMS may help to collect questions for follow-up feedback conversations (see section 3.4). Finally, it needs to be stressed again that digital feedback methods are not exclusive to online teaching, but are also suitable for hybrid and blended approaches that supplement on-site teaching. Depending on the equipment of the classroom, some digital tools could be directly incorporated into face-to-face teaching. Others help to continue feedback conversations beyond the confines of the face-to-face classroom. The next section will take a look ahead and outline some important prerequisites for the use of digital feedback in the future.
4.2 Towards dynamic digital feedback literacies

As outlined at the beginning (section 2.2), feedback methods should nurture ongoing dialogues about learning, with the aim of helping learners improve. In that regard, digital technologies widen the spectrum of possible feedback options (Narciss, 2008, p. 126). For example, multimodal feedback combines auditory and visual channels and permits the integration of additional materials and resources, e.g. through hyperlinks (cf. Mory, 2004, p. 775; see e.g. sections 3.13 and 3.14). These enhanced possibilities call for a reconsideration of established notions of feedback provision (cf. Mory, 2004, p. 776) and digital literacy development. In fact, they are to be seen as an integral part of contemporary conceptualizations of learners’ and teachers’ feedback literacy (Carless & Winstone, 2020, p. 4). First of all, some general preconditions need to be fulfilled to use digital feedback methods competently. According to the European reference frameworks DigComp and DigCompEdu (see section 2.3.2), digital competence is defined as the ability “to use digital technologies in a [confident,] critical, collaborative and creative way” (Kluzer & Pujol Priego, 2018, p. 7). In that regard, DigCompEdu outlines a progression model (A1 newcomer, A2 explorer, B1 integrator, B2 expert, C1 leader, C2 pioneer) to help educators assess and improve their digital competence. For this, it emphasizes the importance of continuous professional development (CPD) (Redecker & Punie, 2017, p. 19) and delineates six main areas, i.e. (1) professional engagement, (2) digital resources, (3) teaching and learning, (4) assessment, (5) empowering learners and (6) facilitating learners’ digital competence (see section 2.3.2). Not only area 4, but all of them are relevant for the development of digital feedback literacies. Albeit in a different order and with a shift in emphasis, the next sections will try to carve out crucial prerequisites for a continuous development of digital feedback literacies. They address “the knowledge, skills and mindset” (Winstone & Carless, 2020, p. 24) that educators and learners need in order to engage in successful feedback interactions, as shown in Figure 14.

Figure 14: Digital feedback literacies: attitudes, knowledge and skills

4.2.1 Knowledge of feedback methods and of their use

This book has presented several possible digital feedback methods in detail. Certainly, this overview is not exhaustive, as technologies keep on evolving. Accordingly, educators need to show openness towards innovations and demonstrate awareness of the possibilities that already exist (sections 4.2.2 and 4.2.3). Every method has distinct affordances and limitations (cf. Killingback, Drury, Mahato, & Williams, 2020; Ryan et al., 2019, p. 1508), so teachers should “have a clear and explicit account of the options available to them, an understanding of the rationale for each option, and some knowledge of the research findings” (Ellis, 2009b, p. 106). Likewise, they need to develop this understanding, open-minded attitudes and the required skills among their students (cf. Carless, 2020; Carless & Winstone, 2020; Nash & Winstone, 2017; see feedback literacies in section 2.2).
Based on this repertoire of methods, all participants in feedback exchanges “need to make appropriate judgments about when, how, and at what level to provide appropriate feedback […]” (Hattie & Timperley, 2007, p. 100). The feedback choices should not depend on which technologies are available to or known by the teachers and learners, but should primarily be driven by didactic considerations (cf. Fang, 2019, p. 163; Harris & Hofer, 2011, p. 214; KMK, 2016, p. 51). Thus, informed decisions need to be made to tailor the feedback appropriately to a specific context, learner group, task and purpose (cf. e.g. Elola & Oskoz, 2016, p. 59; Mathisen, 2012, p. 111; McCarthy, 2015, p. 166; Thompson & Mishra, 2007, p. 38; Vincelette & Bostic, 2013, pp. 264, 271). For example, in the case of a visual-based assignment in the field of media production, screencast feedback is highly useful, whereas for some types of texts written feedback might be sufficient (McCarthy, 2015, p. 166; Vincelette & Bostic, 2013, p. 264), or oral feedback for oral interactions in the classroom. Moreover, the effectiveness of any feedback method is conditioned by adherence to the general feedback principles that were outlined at the beginning (section 2.1; cf. Cranny, 2016, p. 29117; Morrison, 2013, p. 17). This conforms to “a multidimensional view of feedback where situational and individual characteristics of the instructional context and learner are considered along with the nature and quality of a feedback message” (Shute, 2008, p. 176, based on Schwartz & White, 2000, original emphasis). There is thus no “one-size-fits-all” method when it comes to feedback (Ferris et al., 1997, p. 178; McCarthy, 2015, p. 166; Soden, 2016, p. 232). Rather, all feedback methods need to be carefully orchestrated in the particular learning settings (cf. Bakla, 2017, p. 329). In that regard, digital methods are an important enrichment of the feedback repertoire, but not a substitute for all others (Mathisen, 2012, p. 111). Indeed, different digital and non-digital feedback methods can be combined with each other (cf. Hatziapostolou & Paraskakis, 2010, p. 113), as outlined above (section 4.1). Accordingly, the question is not whether digital feedback methods should be used, but “where, when, and how to use” them (Chang et al., 2018, p. 420, original emphasis). However, hardly any research exists on combinations of digital feedback methods, neither with each other nor in conjunction with non-digital methods (see section 4.1). Already in 2013, Narciss argued that “special interest should be devoted to teacher education and instructional design issues such as how to combine human and technical sources of feedback in order to design and implement formative multi-source feedback strategies” (p. 23). Similarly, Ene and Upton (2018) emphasized that future research should “study synchronous and asynchronous TEF [teacher electronic feedback] as well as various combinations of face-to-face and multimodal feedback” (p. 11). As the review in the present book has shown, there still has not been much work in this area. Instead, different feedback methods were frequently compared and contrasted with each other, while their fruitful combinations have often been left unexplored. To instantiate, written electronic feedback in text editors has commonly been contrasted with SCFB. However, as Letón et al.
(2018) pointed out, it is difficult to compare such different modes, as their content, notably the amount of detail, is not equivalent (pp. 188-189). Another advantage of using multiple feedback methods is that they may provide a richer and more holistic insight into academic performance than a single method could achieve (McCarthy, 2015, p. 165). In addition, experiencing “a variety of feedback methods” (McCarthy, 2015, p. 164) can be very motivating for students and contribute to their overall satisfaction with a course (p. 165). Moreover, multiple methods cater to a wider range of learning preferences and needs (Bakla, 2020, p. 121). At the same time, learners would develop a sense for the different methods and tools as well as their usefulness. This is a crucial prerequisite for engaging students actively in the feedback process (see sections 2.2 and 4.2.4). So far, however, research on conceptualizing and fostering student feedback literacy is still in its infancy (Chong, 2021), especially with regard to digital methods. As the primary aim of feedback is to support learners and promote their self-regulatory competencies, it is necessary for teachers to foster students’ digital feedback literacies so that they can actively participate in the entire feedback process. Here, we observe commonalities with DigCompEdu’s area 5, i.e. empowering learners and encouraging their active engagement (Redecker & Punie, 2017, p. 16). Consequently, students need to develop the “skill, will and thrill” (Hattie & Clarke, 2019, pp. 47, 170) to proactively seek, produce, exchange and utilize feedback as self-regulated learners (Carless, 2020; Carless & Winstone, 2020; Winstone & Carless, 2020). This can be achieved by creating feedback requests and feedback responses as well as by conducting digital self- and peer-assessments (see section 2.2). For all this, educators need differentiated knowledge of digital feedback methods as well as strategies for using them and for developing the required skills among the learners. On that basis, they need to design learning environments that encourage multidirectional, technology-enabled feedback interactions (cf. Winstone & Carless, 2020, p. 68). Adopting the established distinction between declarative, procedural and conditional knowledge (Garner, 1990; Paris, Lipson, & Wixson, 1983), we might derive the following dimensions:
(1) Declarative knowledge (knowing that): Knowledge of a broad repertoire of digital feedback methods as well as knowledge of their particular characteristics, advantages and limitations,
(2) Procedural knowledge (knowing how): Knowledge about how to implement digital feedback,
(3) Conditional knowledge (knowing when): Knowledge about when to use which digital feedback method or combination of feedback methods, e.g. depending on the learner group and situational circumstances.
The present book has attempted to prepare teachers and students for all three of them; however, knowledge of the actual implementation will only develop and refine if it is put into practice (see section 4.2.3). The questions and tasks in the different chapters encouraged the readers to do so, and it is hoped that they will inspire them to try out and innovate digital feedback in the future. Regarding the knowledge types, we note intersections with areas 2 and 3 of DigCompEdu (section 2.3.2).
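The third dimension, conditional knowledge, can be made somewhat more concrete with a toy sketch. The decision table below merely echoes examples given earlier (e.g. screencast feedback for visual-based assignments, chat comments for oral interactions); it is an illustration of the idea, not a validated mapping of methods to situations.

```python
# Illustrative only: a toy decision table in the spirit of "conditional
# knowledge" - knowing when to use which feedback method. Real choices
# would also weigh learner group, purpose and setting, as discussed above.
def suggest_method(assignment_type: str, feedback_focus: str) -> str:
    table = {
        ("written", "local"): "AWE plus comments in a text editor",
        ("written", "global"): "screencast feedback with an annotated draft",
        ("oral", "local"): "corrective comments in the text chat",
        ("visual", "global"): "screencast feedback",
    }
    return table.get((assignment_type, feedback_focus),
                     "no default - weigh the options with the learner group")

print(suggest_method("written", "global"))
```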
Returning to DigCompEdu: the second area includes skills to select appropriate resources that fit a particular learning objective, learner group etc. (Redecker & Punie, 2017, p. 20). Another point is that users need to show awareness of copyright aspects and data protection regulations (KMK, 2016, pp. 8, 22, 33; Redecker & Punie, 2017, p. 20), which are crucial in the digital age. Moreover, the third area addresses the skills of “managing and orchestrating the use of digital technologies in teaching and learning” (Redecker & Punie, 2017, p. 9). In our context, it refers to the purposeful orchestration of different feedback methods, including considerations of their design and use (cf. Redecker & Punie, 2017, p. 20). The teachers’ role in that regard is that of a guide, mentor or facilitator for the learners (Redecker & Punie, 2017, p. 20; cf. King, 1993, p. 30, quoted by Rau, 2017, p. 143). This means providing individual support and fostering self-regulatory competencies among the learners, but also developing their collaboration skills in digital environments (Redecker & Punie, 2017, p. 20; see section 4.2.4 below). In that respect, it might be useful to conduct peer feedback activities as well as to encourage feedback loops that are initiated by the learners rather than the teacher. Consequently, educators need to develop feedback skills among the students, which requires them to be digitally competent themselves (cf. Winstone & Carless, 2020, pp. 12, 174). For this, openness towards new feedback methods appears to be an important prerequisite, as will be discussed next.

4.2.2 Attitudes and flexible adaptation skills (digital agility)

Beyond knowledge of digital methods, attitudes and skills on the part of the teachers and learners are crucial. In that regard, we may resort to the notion of agile literacies, which are central to a dynamically changing world. For feedback, this means that teachers and learners engage in flexible and adaptive feedback practices. They show openness towards innovations, are willing to implement them, but are also able to critically reflect on them. Clearly, this necessitates awareness and knowledge of a broad spectrum of feedback methods (see section 4.2.1). The idea of “agility” has been widely used in the field of business in recent years. It is stated that “[a]gile organizations embrace a mindset of continuous improvement” (Betterworks.com, 2021) and are more responsive to change than others. One of the most notable changes has occurred in the area of digitalization. Accordingly, digital agility was defined as “the ability of an organization [or educator] to quickly and easily change their processes by applying and leveraging digital technology tools, processes, and software to perform basic business [or educational] functions” (Attaché Docs, 2020, information in brackets added). Transferred to educational contexts, this means that all stakeholders (teachers and learners alike) should be able to adapt their teaching, learning and feedback processes swiftly and meaningfully by making use of digital technologies. Since “technologies are constantly changing”, the pedagogies also need to undergo continuous change (Webb, 2010, p. 598). However, educators may wonder whether “it is worthwhile to try out or invest in new technologies” in view of the dynamically changing digital landscape (Alqurashi & Siegelmann, 2020, p. 229).
Indeed, there are often affective barriers towards technologies, which stem from ingrained beliefs and traditions (cf. Bearman, Boud, & Ajjawi, 2020, pp. 7, 10; Blume, 2020). “Traditions cast long shadows” (Bearman et al., 2020, p. 7), so it might be difficult to “disrupt established teacher and student routines” (Gruba, Hinkelman, & Cárdenas-Claros, 2016, p. 135) through the use of new technologies. In fact, resistance on the part of the teachers (and learners) is still a common phenomenon (Cunningham, 2017b, p. 5; Howard, 2018, p. viii; Soden, 2017). For instance, teachers may fear a loss of control (Gruba et al., 2016, p. 142) and they might be concerned about the time needed to learn a new method (Soden, 2017, p. 1). From an agility framework, the perspective is a different one and requires a change in mindset at the micro- (individuals, local classrooms), meso- (colleagues, schools) and macro-levels (policymakers, institutions) (cf. Winstone & Carless, 2020, pp. 10-11). For example, Lorenzo Galés and Gallon (2019) elaborated on the idea of “educational agility”. They emphasized the necessary change in mindsets as an important precondition for agile and student-centered learning environments. As shown above, it is crucial to flexibly adjust the feedback methods by considering various factors that arise from the particular purpose, learner, task and setting (cf. e.g. Ferris, 2014, p. 21; Hartung, 2017, p. 215). For some tasks, contexts and contents, brief written or oral feedback might be sufficient; for others, more complex or sophisticated feedback methods would be beneficial. Flexible adaptation skills are therefore necessary. This means selecting and enacting (analog and/or digital) feedback methods that are adequate for a specific learning objective and student or student group. It also demands openness “to explore innovative methods […] within more traditional forms of […] instruction” in order to seize all benefits to their fullest (Borup et al., 2012, p. 202). Moreover, once users are familiar with a particular tool, they may exploit it for further purposes. In that regard, Fang (2019) writes about refocusing, which means “us[ing] an innovation in additional or alternative ways” than those already known (p. 123). For example, teachers who are able to produce screencasts for feedback purposes could employ them for tutorial videos or lecture announcements as well (Fang, 2019, pp. 123, 143) - and vice versa: communication tools or instructional tools can be transformed into feedback tools, for which new pedagogies might need to be developed based on research and actual usage. This is true of any method that has been presented in this book and will probably apply to many further innovations in the future. For instance, it is highly likely that feedback processes on social media, such as Instagram, TikTok and YouTube, will attract interest from researchers and educators in the coming years, since these platforms have become increasingly popular among many learners. Agile adoption models therefore foreground “the need for dynamically responding to the ever changing technological innovations” (Mesfin, Ghinea, Grønli, & Hwang, 2018, p. 165). They “embrac[e] the dynamically changing technological innovations and also recogniz[e] the need for gradual progression (as opposed to big bang) in the process of adoption” (Mesfin et al., 2018, p. 164).
This aligns with the idea of a stepwise familiarization, which is sketched next.

4.2.3 Stepwise familiarization with digital feedback methods and continuous training

In prior studies, a research-based, stepwise and reflective familiarization with new methods has proven valuable. These three points will be further explained in this section, starting with the notion “research-based” and continuing with “stepwise and reflective”. A research-informed pedagogy is important to seize the potential of digital feedback for enhancing learning processes. Otherwise, people would be inclined to use new media in the same way as they did existing media (Guo et al., 2014, p. 50). To exemplify, the first versions of digital textbooks resembled “scanned versions of paper books” (Guo et al., 2014, p. 50) that did not take advantage of their additional interactive affordances. The particularities of the different tools, modes and media should, however, be considered, since they have an impact on the way we communicate (Middleton, 2011, p. 27). It is therefore critical to develop an “understanding of feedback design” in various communication modes (Ryan et al., 2019, p. 1507). However, studies that investigate the effects of different feedback methods are still relatively rare (Elola & Oskoz, 2016, p. 61), even though the technological advances of the past decades have given rise to experimentation with novel ways of feedback provision (e.g. Ali, 2016; Mathisen, 2012, p. 100; Silva, 2012). Especially for multimodal and synchronous methods, such as videoconference feedback, research is scarce, but it would bear great future potential due to the heightened relevance of these methods. In that respect, empirical research could examine the impact of specific feedback strategies in different modes. For example, it needs to be tested whether the general multimedia principles suggested by Mayer and colleagues (e.g. Mayer, 2006) are likewise relevant for feedback designs or whether they need to be refined and modified (see section 2.3.1). To instantiate, it would be crucial to disentangle the effects of the medium that is employed, the teacher’s proficiency in feedback provision and the learner’s competence in feedback use, teachers’ and learners’ access to and familiarity with the tool and feedback mode that is chosen, the task or assignment that has been performed, as well as the assessor’s communicative abilities and scaffolding skills (cf. Borup et al., 2015, pp. 165-166). Future work could concentrate in greater detail on learners’ understanding and uptake of the feedback that has been received through different modes (West & Turner, 2016, p. 408). As Chang et al. (2018) suggested, learner engagement might not only be measured by the changes learners make in their revised drafts, but also through eye-tracking, keystroke-logging and screencasting of their revision process (pp. 405-406, 418). Moreover, research needs to be done in numerous contexts, with different learner groups, language(s) and cultural backgrounds, topics and learning goals (cf. Elola & Oskoz, 2016, p. 71), but also with “instructors from different cultural, linguistic, and educational backgrounds with different comfort levels and proficiency with technology” (Cunningham, 2017a, p. 480; 2017b, p. 105). Users’ (learners’ and teachers’) perceived time effort and their actual time investment when implementing different feedback methods are also worth closer examination.
What impact do all these background variables have on using and learning from feedback that is exchanged via different methods (cf. Chang et al., 2018, pp. 415-416)? An integrated system, e.g. an LMS, that allows for a variety of digital feedback methods would considerably ease the work of researchers and practitioners. For this reason, Chang et al. (2018) advocated the use of “multilayered multimodal cross-platform integrated feedback systems” which “include layers of screencast, synchronous and asynchronous text, audio, video, human-generated and computer-generated feedback that is personal, trackable, exportable, and accessible in a single integrated system” (pp. 419-420). However, confronting learners and teachers with such a sophisticated system might overwhelm them. Certainly, “[t]eachers need time to learn and implement these tools in reflection and in line with their own beliefs” (Lindqvist, 2020, p. 505). A stepwise and reflective familiarization is therefore considered important, also to reduce affective barriers (Deeley, 2018, p. 446). As Howard (2018) remarked by quoting Sharma Burke, “people don’t resist change, they resist the unknown” (p. 253). Not only are new technological skills and equipment needed, but also new strategies as well as training in the use of these tools to give the best possible feedback in different modes (cf. Anson, 2015, p. 381; Honeyman, 2015, p. 4; Turner & West, 2013, p. 294). To foster these skills, Howard (2018) proposed the following steps:
(1) “finding time to explore or try out the new method,
(2) finding an affordable tool to record [or create] the feedback,
(3) finding or securing a system that allows you to share [digital feedback] in a timely, intuitive way,
(4) training yourself on the tool and/or system,
(5) training your students on how to use the feedback, and
(6) developing comfort with the new process and efficiency in completing that process” (p. 41).
Such a gradual introduction is crucial for teacher education and teacher training alike. It appears to be particularly important in light of the multi-layered options of digital, especially multimodal, feedback (e.g. screencasting and videoconferences). As soon as users have become more familiar with a technology, they are likely to build up more confidence and might even dare to use it in more creative ways (Stannard & Sallı, 2019, p. 470). Important foundations for this can be laid in the protected space of teacher education programs. For example, Schluer (2020b; 2021e; to appear 2022) utilized a peer approach in order to familiarize pre-service teachers with SCFB in a stepwise manner. The overall results were very positive, especially since the participants experienced the perspectives of feedback providers and feedback recipients simultaneously. Through scaffolding (e.g. in individual consultation sessions and group meetings) and sufficient opportunities for self-reflection and discussion, the prospective teachers were able to voice their concerns and questions at any time and thus to overcome their insecurities. Since this approach worked for a complex digital method, i.e. SCFB, it is likely that it will also be successful with other feedback methods. Moreover, teacher education programs usually offer opportunities for the direct application of knowledge and skills during school placements.
For example, Rybakova (2020) reported that her pre-service teachers provided video feedback to middle school students after they had learned about digital assessment methods (p. 510). The teacher education courses thus served as a model for the pre-service teachers, so that they tried to implement the methods themselves (Rybakova, 2020, p. 513). Apart from modelling, reflection was crucial, since the pre-service teachers started “to reflect on the pedagogical strategies that their instructors use and how they might use them in their own classrooms” (Rybakova, 2020, p. 512). Modelling, practice and reflection can thus be identified as three key elements for bringing about change. This resonates with the idea of teachers as “reflective practitioners” (Schön, 1983). Regular journaling in personal blogs (section 3.6) or e-portfolios (section 3.15) can support this process of guided reflection (see e.g. the EPOSTL by Newby et al., 2007). Ideally, it continues throughout the teaching career, for example through action research projects in teachers’ own classrooms. To assist this process, professional teacher training plays a pivotal role, also because new technologies keep on evolving (Gruba et al., 2016, p. 142). The first area of the DigCompEdu framework therefore stresses the relevance of professional engagement (Redecker & Punie, 2017, pp. 9, 16, 19). It comprises reflective practice, professional collaboration with colleagues, parents, learners and other parties, organizational communication as well as digital continuous professional development (CPD) (Redecker & Punie, 2017, p. 19):

CPD is the means by which members of professions maintain, improve and broaden their knowledge and skills and develop the personal qualities required in their professional lives, usually through a range of short and long training programmes, some of which offer accreditation. This job-related continuing education and training refers to all organised, systematic education and training activities in which people take part in order to obtain knowledge and/or learn new skills for a current or a future job. (Redecker & Punie, 2017, p. 89)

In that respect, the development of an open and cooperative mindset seems beneficial (see section 4.2.2). Teachers may try out different tools and “gradually build a ‘technology tool library’ that can serve as a resource for them or their colleagues” (Alqurashi & Siegelmann, 2020, p. 239). This may happen through informal collegial exchanges and through formal staff development programs (Wood, 2021, p. 13; cf. Carless & Winstone, 2020, p. 10; Fang, 2019, pp. 132, 150-151; Winstone & Carless, 2020, p. 175). Some might not even be aware of the possibility of using digital methods, such as screencasting, for feedback purposes, or else they may feel “at a loss how to make the appropriate choices” in finding an optimal tool (Fang, 2019, p. 152). Apart from professional training events or IT support, “collaboration, sharing, and collegial learning” would therefore be helpful (Lindqvist, 2020, p. 518; italics omitted; see also Gebril et al., 2018, p. 5; Popham, 2009, p. 10). Accordingly, several scholars stressed the importance of “peer-to-peer coaching among faculty” (Fang, 2019, p. 162) and “cooperation across programmes to share resources and expertise” (Gruba et al., 2016, p. 145). Learning from colleagues could even be more motivating and inspiring than information provided by external trainers.
On the other hand, though, professional workshops may set off and encourage dialogues among staff members that might not have started without the training event. In the end, catalytic effects could be achieved in many ways. Teachers, teacher educators and teacher trainers can spread their ideas and knowledge among their own colleagues. Teacher educators and mentors can inspire pre-service teachers (Fang, 2019, p. 114), but pre-service teachers may likewise inspire their peers or in-service teachers during their school placements. Moreover, pre- and in-service teachers can develop the required skills among their students, who could experiment with the methods further, which may in turn lead to further adaptations or innovations. Stannard and Sallı (2019) therefore argued that

some of the most interesting and perhaps rewarding ways of working with SCT [screencasting technology] occur when we put the technology in the hands of our students or trainee teachers. It is ideal for reflection, discussions, presentations and sharing. (p. 470)

These ideas are in line with a “diffusion model of innovation” (Rogers, 1962, cited by Hattie, 2009, p. 257) and the domino effects that can be triggered by it. The model states that “initially only a few teachers […] begin trying an innovation” and that under favorable conditions, “many more begin to innovate” (Hattie, 2009, p. 257, based on Rogers, 1962). In that regard, several factors are critical, including “awareness, knowledge, persuasion, decision, implementation, and confirmation” (Rogers, 2003, cited by Hattie, 2009, p. 257). Teachers may adopt an innovation if they recognize the advantages of a new method over others and feel that it is compatible with their own practices and beliefs, if they do not consider it too complex or difficult and have the opportunity to try it out with ease, and if they see that others are already using it effectively (Rogers, 2003, quoted by Soden, 2017, p. 4). It thus becomes evident that “innovation is an ongoing process rather than a one-off experiment” (Mann, 2015, p. 174), for which several factors can be decisive.

4.2.4 Learner-orientation and development of digital learner feedback literacy

Teachers are often seen as role models for their students. Consequently, their utilization of digital feedback methods could be inspiring for learners. More importantly, though, digital feedback methods can open up new avenues for the learners to seek, exchange and discuss feedback in order to enhance their learning (see section 2.2). As stated in the sixth area of DigCompEdu, educators are responsible for developing their learners’ digital competencies, such as using technologies “creatively and responsibly […] for information, communication, content creation, wellbeing and problem-solving” (Redecker & Punie, 2017, p. 16). This final area therefore tackles “the specific pedagogic competences required to facilitate students’ digital competence” (Redecker & Punie, 2017, pp. 9, 17). However, while teacher assessment literacy has been researched to quite some extent, learners’ assessment literacy has been less of a concern so far, but it is gaining in importance (Carless, 2020; Carless & Boud, 2018; Chong, 2021; see section 2.2).
This is particularly true of technology-enabled feedback processes: Most of the prior research has concentrated on teacher-to-student feedback, whereas didactic design recommendations for (pro-)active learner engagement and feedback dialogues in digital environments are still rare (Winstone & Carless, 2020, pp. 61, 68). In fact, learners should not only be able to utilize feedback, but also to initiate and continue feedback conversations in multiple directions, as was stressed at the beginning of this book (see section 2.2). For this reason, Wood (2021) suggested “a technology-mediated dialogic model of feedback uptake and literacy” (p. 1; see also the dissertation by Wood, 2019). He emphasized the need for ongoing dialogue and “uptake-oriented activities” in the development of feedback literacy (Wood, 2021, p. 1). Only if learners actively engage in feedback processes can uptake be successful (Winstone & Carless, 2020, p. 166). From Wood’s (2021) point of view, digital environments offer particular affordances for this purpose (pp. 1, 3). For instance, SCFB might be created and used in conjunction with a cloud document (e.g. Google Docs; Wood, 2019, pp. 43, 130) so that not only the instructor, but also the peers can engage in feedback dialogues (Wood, 2019, p. 125; 2021, pp. 10, 12). To achieve this, the documents need to be shared in an open feedback environment to which all course members have access (Schluer, in prep.; Wood, 2021, pp. 8, 12; see combinations in section 4.1.2). Through successive stages of writing and feedback cycles, the learners could offer mutual assistance and enhance their feedback literacy (Wood, 2021, p. 8). In that process, the teacher guides the students through adequate “training, modelling and support” (Wood, 2021, p. 11; see section 2.2). Learners would be asked to formulate “feedback requests” (Wood, 2021, p. 11, citing Jönsson & Panadero, 2017, as well as Winstone & Carless, 2019) to which the peers and the teachers would respond. The possibility to reply to feedback comments and learn from each other’s feedback can be of high value for students seeking to improve their assessment literacy and subject-specific learning (Wood, 2019, pp. 93, 100-101, 128), but also for building a learning community (pp. 79, 92). Based on the limited evidence, such an approach seems to work well in higher education, for instance to foster writing skills or feedback competencies among pre-service teachers (Schluer, 2020b; in prep.; to appear 2022). However, empirical evidence is still very scarce, notably concerning the implementation in schools and regarding other subject fields, skill areas and sociocultural settings. Likewise, an open research question relates to effective combinations of different feedback methods to nourish ongoing dialogues about learning, with the aim of supporting students in their learning journey (see section 4.1). The readers are therefore encouraged to experiment with the methods suggested in this book and to engage in dialogues about learning with their students as well as with colleagues, researchers and trainers. There is still much uncharted territory to explore in this dynamically evolving field of digital feedback and digital learning. The summaries of research findings and the inspirations for practical implementation presented in this book may serve as a foundation for further explorations that resonate with the idea of dynamic digital feedback literacies.
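To make the learner-initiated “feedback requests” discussed above more tangible, the following sketch models one possible structure for such a request; the fields are illustrative assumptions, not a format proposed by Wood (2021) or the other authors cited.

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackRequest:
    """A learner-initiated request to which peers or the teacher respond."""
    author: str
    draft_url: str                                   # link to the shared cloud document
    focus_areas: list = field(default_factory=list)  # aspects feedback is wanted on
    questions: list = field(default_factory=list)    # concrete questions to responders
    preferred_mode: str = "written comment"          # e.g. "screencast", "audio", "chat"

request = FeedbackRequest(
    author="Sam",
    draft_url="https://docs.example.com/d/abc123",
    focus_areas=["argument structure", "paragraph transitions"],
    questions=["Does the conclusion follow from section 3?"],
    preferred_mode="screencast",
)
print(request)
```

Posting such requests in the shared course environment would operationalize the proactive, uptake-oriented role that the models above envisage for learners.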
4.2.5 Questions/Tasks (for teachers and students) after having read this chapter

Knowledge and reflection questions:
(1) Revisit one of the tools that you chose in section 2.3 in order to self-assess your digital competence.
a. What has changed?
b. What do you want to work on in the future?
Application questions/tasks:
(2) Team up with a fellow student or colleague and discuss potential challenges that you might encounter when designing digital feedback (e.g. via screencasts). How could you solve them? Are there any resources or facilities at your school or university that you could consult for support? What other resources could you draw on (e.g. websites, online tutorials)? How could you support each other? Are there any tech-savvy colleagues or friends that you might ask for help?
(3) Your school has the chance to successfully apply for funding in order to foster teachers’ digital literacy. As a proponent of digital feedback methods, you talk to your colleagues and to your headmaster in order to write a funding proposal. Your aim is to foster the digital feedback skills among the teachers at your school and a cooperating partner school. What are the contents of your proposal? Why do you consider digital feedback literacy beneficial? What resources would you need (hardware, software, infrastructure, staff)? Where do you see the greatest need for support?

4.3 Questions/Tasks (for teachers and students) after having read the book

Knowledge and reflection questions:
(1) Having read this book, what have you learned from it? Describe your key learning gain in two to three sentences. You may distinguish between theoretical, methodological and practical perspectives.
(2) What digital feedback methods do you want to try out in your teaching? Which one will you try out first? Why?
(3) Do you want to pass on your newly gained knowledge to your colleagues/students/friends? In what other contexts could it be helpful (other feedback designs or media skills more generally)?
(4) Would you like to learn more about a specific method presented in this book? Which one?
Application questions/tasks:
(5) As a creative and competent producer and user of feedback and digital media, what other ways of giving and receiving digital feedback come to your mind?
a. Note down your ideas.
b. Search the web for more information (whether others have used them already or what software and skills you might need in order to produce them).
c. Try them out. :-)
d. If you like, share them here: https://padlet.com/JSchluer/DigiFeedBookSchluer22
(6) What feedback would you like to provide to the book author? Feel free to contact her.

5 References

AbuSeileek, A. F. (2013a). Using comments and track changes in developing the writing skill: Learners’ attitude toward corrective feedback. International Journal of Learning Technology, 8(3), 204-223. doi: 10.1504/IJLT.2013.057060
AbuSeileek, A. F. (2013b). Using track changes and word processor to provide corrective feedback to learners in writing. Journal of Computer Assisted Learning, 29(4), 319-333. doi: 10.1111/jcal.12004
AbuSeileek, A. F., & Rabab’ah, G. (2013). Discourse functions and vocabulary use in English language learners’ synchronous computer-mediated communication. Teaching English with Technology, 13(1), 42-61.
Adiguzel, T., Varank, İ., Erkoç, M. F., & Buyukimdat, M. K. (2017).
Examining a web-based peer feedback system in an introductory computer literacy course. EURASIA Journal of Mathematics, Science and Technology Education, 13(1), 237-251. doi: 10.12973/eurasia.2017.00614a
Ahmed, M. M. H., McGahan, P. S., Indurkhya, B., Kaneko, K., & Nakagawa, M. (2021). Effects of synchronized and asynchronized e-feedback interactions on academic writing, achievement motivation and critical thinking. Knowledge Management & E-Learning: An International Journal, 13(3), 290-315. doi: 10.34105/j.kmel.2021.13.016
Ajjawi, R., & Boud, D. (2015). Researching feedback dialogue: An interactional analysis approach. Assessment & Evaluation in Higher Education, 42(2), 252-265. doi: 10.1080/02602938.2015.1102863
Ajjawi, R., & Boud, D. (2018). Examining the nature and effects of feedback dialogue. Assessment & Evaluation in Higher Education, 43(7), 1106-1119. doi: 10.1080/02602938.2018.1434128
Akbar, F. S. (2017). Corrective feedback in written synchronous and asynchronous computer-mediated communication. Teachers College, Columbia University Working Papers in Applied Linguistics & TESOL, 17(2), 9-27.
Alawdat, M. (2013). Using e-portfolios and ESL learners. US-China Education Review A, 3(5), 339-351.
Alharbi, M. A. (2020). Exploring the potential of Google Doc in facilitating innovative teaching and learning practices in an EFL writing course. Innovation in Language Learning and Teaching, 14(3), 227-242. doi: 10.1080/17501229.2019.1572157
Alharbi, W. (2017). E-feedback as a scaffolding teaching strategy in the online language classroom. Journal of Educational Technology Systems, 46(2), 239-251. doi: 10.1177/0047239517697966
Alhitty, A. A., & Shatnawi, S. (2021). Using e-portfolios for writing to promote students’ self-regulation. In AUBH E-Learning Conference Innovative Learning and Teaching: Lessons from COVID-19 (pp. 1-8).
Ali, A. D. (2016). Effectiveness of using screencast feedback on EFL students’ writing and perception. English Language Teaching, 9(8), 106-121. doi: 10.5539/elt.v9n8p106
Al-Olimat, S. I., & AbuSeileek, A. F. (2015). Using computer-mediated corrective feedback mode in developing students’ writing performance. Teaching English with Technology, 15(3), 3-30.
Alqurashi, E., & Siegelmann, A. R. (2020). Designing and evaluating technology-based formative assessments. In A. Elçi, L. L. Beith, & A. Elçi (Eds.), Handbook of research on faculty development for digital teaching and learning (pp. 222-244). Advances in educational technologies and instructional design (AETID) book series. Hershey, PA: IGI Global.
Altıner, C. (2015). Perceptions of undergraduate students about synchronous video conference-based English courses. Procedia - Social and Behavioral Sciences, 199(4), 627-633. doi: 10.1016/j.sbspro.2015.07.589
Alvira, R. (2016). The impact of oral and written feedback on EFL writers with the use of screencasts. PROFILE Issues in Teachers’ Professional Development, 18(2), 79-92. doi: 10.15446/profile.v18n2.53397
Anderson, L. W., Krathwohl, D. R., Airasian, P. W., Cruikshank, K. A., Mayer, R. E., Pintrich, P. R., Raths, J., & Wittrock, M. C. (Eds.). (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives (Abridged ed.). New York: Longman.
Andrade, M., & Zeigner, S. (2021). Team ePortfolios in Management Education: Insights into students’ skill development. e-Journal of Business Education & Scholarship of Teaching, 15(1), 40-54.
Anson, C. M. (2018).
“She really took the time”: Students’ opinions of screen-capture response to their writing in online courses. In P. Jackson & C. C. Weaver (Eds.), Writing in online courses. How the online environment shapes writing practices (pp. 21-45). Gorham, Maine: Myers Education Press. Anson, C. M., Dannels, D. P., Laboy, J. I., & Carneiro, L. (2016). Students’ perceptions of oral screencast responses to their writing: Exploring digitally mediated identities. Journal of Business and Technical Communication, 30(3), 378-411. doi: 10.1177/ 1050651916636424 Anson, I. G. (2015). Assessment feedback using screencapture technology in Political Science. Journal of Political Science Education, 11(4), 375-390. doi: 10.1080/ 15512169.2015.1063433 Arellano-Soto, G., & Parks, S. (2021). A video-conferencing English-Spanish eTandem exchange: Negotiated interaction and acquisition. CALICO Journal, 38(2), 222-244. doi: 10.1558/ cj.38927 Ariyanto, M. S. A., Mukminatien, N., & Tresnadewi, S. (2021). College students’ perceptions of an automated writing evaluation as a supplementary feedback tool in a writing class. Jurnal Ilmu Pendidikan ( JIP), 27(1), 41-51. doi: 10.17977/ um048v27i1p41-51 Arnold, P., Kilian, L., Thillosen, A., & Zimmer, G. (2018). Handbuch E-Learning: Lehren und Lernen mit digitalen Medien (5th ed.). UTB Pädagogik: Vol. 4965. Bielefeld: W. Bertelsmann Verlag. Arroyo, D. C., & Yilmaz, Y. (2018). An open for replication study: The role of feedback timing in synchronous computer-mediated communication. Language Learning, 68(4), 942-972. doi: 10.1111/ lang.12300 Association of College and University Educators [ACUE]. (2020). Provide feedback strategically in online discussions. https: / / acue.org/ wp-content/ uploads/ 2020/ 03/ Section-4_PG3_Strategic -Feedback_CFIN.pdf Atfield-Cutts, S. & Coles, M. (2016). Blended feedback II: Video screen capture assessment feedback for individual students, as a matter of course, on an undergraduate computer 252 5 References <?page no="254"?> programming unit. Paper presented at the 27th Annual Workshop of the Psychology of Program‐ ming Interest Group (PPIG) 2016, 7-10 September 2016, St. Catharine’s College, University of Cambridge, UK. http: / / www.ppig.org/ sites/ ppig.org/ files/ 2016-PPIG-27th-Atfield-Cutts.pdf Attaché Docs. (2020). What is digital agility and how it accelerates digital transformation? https: / / www.blog.attachedocs.com/ what-is-digital-agility-and-how-it-accelerates-digitaltransformation/ Atwood, G. S. (2014). Padlet: Closing the student feedback loop. NAHRS Newsletter, 34(2), 11-13. Aubrey, S. (2014). Students’ attitudes towards the use of an online editing program in an EAP course. Kwansei Gakuin University Repository, 17, 45‐55. Avval, S. F., Asadollahfam, H., & Behin, B. (2021). Effects of receiving corrective feedback through online chats and class discussions on Iranian EFL learners’ writing quality. International Journal of Foreign Language Teaching & Research, 9(34), 203-214. Aydawati, E. N. (2019). An analysis of the effects of the synchronous online peer review using Google Doc on student’s writing performance. In C. T. Murniati, H. Hartono, & A. D. Widiantoro (Eds.), Technology-enhanced language teaching. Current research and best practices (pp. 64-76). Semarang: Universitas Katolik Soegijapranata. Azmi, A. M., Al-Jouie, M. F., & Hussain, M. (2019). AAEE - Automated evaluation of students’ es‐ says in Arabic language. Information Processing & Management, 56(5), 1736-1752. doi: 10.1016/ j.ipm.2019.05.008 Bahula, T. & Kay, R. 
(2020). Exploring student perceptions of video feedback: A review of the literature. Proceedings of the 13th Annual International Conference of Education, Research and Innovation (ICERI2020 Proceedings), 9-10 November 2020. https: / / library.iated.org/ view/ BAH ULA2020EXP Bai, L., & Hu, G. (2017). In the face of fallible AWE feedback: How do students respond? Educational Psychology, 37(1), 67-81. doi: 10.1080/ 01443410.2016.1223275 Bakla, A. (2017). An overview of screencast feedback in EFL writing: Fad or the future? Conference Proceedings of the International Foreign Language Teaching and Teaching Turk‐ ish as a Foreign Language (27-28 April, 2017), Bursa, Turkey (International Foreign Lan‐ guage Teaching and Teaching Turkish as a Foreign Language). Bursa, Turkey, pp. 319- 331. https: / / www.researchgate.net/ publication/ 322804886_An_Overview_of_Screencast_Fe edback_in_EFL_Writing_Fad_or_the_Future Bakla, A. (2020). A mixed-methods study of feedback modes in EFL writing. Language Learning & Technology, 24(1), 107-128. Balfour, S. P. (2013). Assessing writing in MOOCs: Automated Essay Scoring and Calibrated Peer Review TM . Research & Practice in Assessment, 8, 40-48. Barrot, J. S. (2021). Using automated written corrective feedback in the writing class‐ rooms: Effects on L2 writing accuracy. Computer Assisted Language Learning, 1-24. doi: 10.1080/ 09588221.2021.1936071 Barrot, J. S. (2020). Integrating technology into ESL/ EFL writing through Grammarly. RELC Journal, 17(2), 1-5. doi: 10.1177/ 0033688220966632 Barton, E. E., & Wolery, M. (2007). Evaluation of e-mail feedback on the verbal behaviors of preservice teachers. Journal of Early Intervention, 30(1), 55-72. doi: 10.1177/ 105381510703000105 253 5 References <?page no="255"?> Barton, K. L., Schofield, S. J., McAleer, S., & Ajjawi, R. (2016). Translating evidence-based guidelines to improve feedback practices: The interACT case study. BMC Medical Education, 16, 1-12. doi: 10.1186/ s12909-016-0562-z Bauer, E. (2017). Zur Relevanz literaler Kompetenzen beim online Studieren. In H. R. Griesehop & E. Bauer (Eds.), Lehren und Lernen online. Lehr- und Lernerfahrungen im Kontext akademischer Online-Lehre (pp. 149-166). Wiesbaden: Springer. Baumann-Wehner, M., Gurjanow, I., Milicic, G., & Ludwig, M. (2020). Analysis of studentteacher chat communication during outdoor learning within the MCM digital classroom. In M. Ludwig, S. Jablonski, A. Caldeira, & A. Moura (Eds.), Research on outdoor STEM education in the digital age. Proceedings of the ROSETA online conference in June 2020 (pp. 63-70). Münster: WTM-Verlag. doi: 10.37626/ GA9783959871440.0 Bearman, M., Boud, D., & Ajjawi, R. (2020). New directions for assessment in a digital world. In M. Bearman, P. Dawson, R. Ajjawi, J. Tai, & D. Boud (Eds.). Re-imagining university assessment in a digital world (pp. 7-18). The Enabling Power of Assessment: Vol. 7. Cham, Switzerland: Springer. Beaumont, C., O’Doherty, M., & Shannon, L. (2011). Reconceptualising assessment feed‐ back: A key to improving student learning? Studies in Higher Education, 36(6), 671-687. doi: 10.1080/ 03075071003731135 Beauvois, M. (1998). Conversations in slow motion: Computer-mediated communication in the foreign language classroom. The Canadian Modern Language Review, 54(2), 198-217. doi: 10.3138/ cmlr.54.2.198 Bell, K. (2018). 4 ways to use Google Keep for feedback and assessment. https: / / shakeuplearning.c om/ blog/ 4-ways-to-use-google-keep-for-feedback-and-assessment/ Berg, E. C. (1999). 
The effects of trained peer response on ESL students’ revision types and writing quality. Journal of Second Language Writing, 8(3), 215-241. Betterworks.com. (2021). What is agility in business? https: / / www.betterworks.com/ magazine/ what-is-agility-in-business/ Biber, D., Nekrasova, T., & Horn, B. (2011). The effectiveness of feedback for L1-English and L2-writing development: A meta-analysis. TOEFL iBT Research Report No. RR-11-05. Arizona: Northern Arizona University. Bilki, Z., & Irgin, P. (2021). Using blog-based peer comments to promote L2 writing performance. ELT Research Journal, 10(2), 140-161. Bir, S. (2017). Strategies for conducting student feedback surveys. Center for Teaching and Learning. https: / / ctl.wiley.com/ strategies-conducting-student-feedback-surveys/ Bissell, L. (2017). Screen-casting as a technology-enhanced feedback mode. Journal of Perspec‐ tives in Applied Academic Practice, 5(1), 4-12. Bitchener, J., Young, S., & Cameron, D. (2005). The effect of different types of corrective feedback on ESL student writing. Journal of Second Language Writing, 14(3), 191-205. doi: 10.1016/ j.jslw.2005.08.001 Blackboard Help. (2020). Discussions. https: / / help.blackboard.com/ Learn/ Student/ Ultra/ Interact / Discussions 254 5 References <?page no="256"?> Bless, M. M. (2017). Impact of audio feedback technology on writing instruction (Doctoral Dissertation). Walden University, Minneapolis, Minnesota, U.S. Bloch, J. (2002). Student/ teacher interaction via email: The social context of Internet discourse. Journal of Second Language Writing, 11(2), 117-134. doi: 10.1016/ S1060-3743(02)00064-4 Blume, C. (2020). German teachers’ digital habitus and their pandemic pedagogy. Postdigital Science and Education, 2(3), 879-905. doi: 10.1007/ s42438-020-00174-9 Bokhove, C. (2010). Implementing feedback in a digital tool for symbol sense. International Journal for Technology in Mathematics Education, 17(3), 121-126. Bond, S. (2009). Audio feedback. Centre for Learning Technology, London School of Economics and Political Science. http: / / eprints.lse.ac.uk/ id/ eprint/ 30693 Boone, J., & Carlson, S. (2011). Paper review revolution: Screencasting feedback for develop‐ mental writers. NADE Digest, 5(3), 15-23. Boraie, D. (2018). Types of assessment. In J. I. Liontas (Ed.), The TESOL encyclopedia of English language teaching (pp. 1-7). Hoboken, NJ: Wiley-Blackwell. Borup, J. (2021). Back to feedback basics using video recordings. Educause Review Online. https: / / er.educause.edu/ blogs/ 2021/ 2/ back-to-feedback-basics-using-video-recordings Borup, J., West, R. E., & Graham, C. R. (2012). Improving online social presence through asynchronous video. The Internet and Higher Education, 15(3), 195-203. doi: 10.1016/ j.ihe‐ duc.2011.11.001 Borup, J., West, R. E., & Thomas, R. (2015). The impact of text versus video communication on instructor feedback in blended courses. Educational Technology Research and Development, 63(2), 161-184. doi: 10.1007/ s11423-015-9367-8 Boucher, M., Creech, A., & Dubé, F. (2020). Video feedback and the choice of strategies of college-level guitarists during individual practice. Musicae Scientiae, 24(4), 430-448. doi: 10.1177/ 1029864918817577 Boud, D., & Molloy, E. (2013). Rethinking models of feedback for learning: The challenge of design. Assessment & Evaluation in Higher Education, 38(6), 698-712. doi: 10.1080/ 02602938.2012.691462 Bowen, T., & Ellis, L. (2015). Assessment for learning (AfL). In S. Wallace (Ed.). A dictionary of education (2nd ed.). 
Oxford Reference. Oxford: Oxford University Press. Bower, J., & Kawaguchi, S. (2011). Negotiation of meaning and corrective feedback in Japa‐ nese/ English eTandem. Language Learning & Technology, 15(1), 41-71. Brame, C. J. (2016). Effective educational videos: Principles and guidelines for maximizing student learning from video content. CBE - Life Sciences Education, 15(6), 1-6. Bramley, G., Campbell-Pilling, K., & Simmons, C. (2020). Don’t feedback in anger: Enhancing student experience of feedback. In 6th International Conference on Higher Education Advances (HEAd’20). València: Universitat Politècnica de València. doi: 10.4995/ HEAd20.2020.11216 Brearley, F. Q., & Cullen, W. R. (2012). Providing students with formative audio feedback. Bioscience Education, 20(1), 22-36. doi: 10.11120/ beej.2012.20000022 Brereton, B., & Dunne, K. (2016). An analysis of the impact of formative peer assessment and screencast tutor feedback on veterinary nursing students’ learning. All Ireland Journal of Teaching and Learning in Higher Education (AISHE-J), 8(3), 29401-29424. 255 5 References <?page no="257"?> Brick, B., & Holmes, J. (2010). Using screen capture software for student feedback: Towards a methodology. In D. G. Kinshuk, J. M. Sampson, Spector, Isaías, Pedro, & D. Ifenthaler (Eds.), Proceedings of the IADIS International Conference on Cognition and Exploratory Learning in Digital Age (IADIS CELDA 2008). Freiburg, Germany, 13-15 October 2008 (pp. 339-342). IADIS (International Association for Development of the Information Society). Bridgeman, B., & Ramineni, C. (2017). Design and evaluation of automated writing evaluation models: Relationships with writing in naturalistic settings. Assessing Writing, 34(3), 62-71. doi: 10.1016/ j.asw.2017.10.001 Brodie, L., & Loch, B. (2009). Annotations with a tablet PC or typed feedback: Does it make a difference? In 20th Australasian Association for Engineering Education Conference (AAEE 2009), University of Adelaide, 6-9 December 2009 (pp. 765-770). Brookhart, S. M. (2017). How to give effective feedback to your students (2nd ed.). Alexandria, VA: ASCD. Buckley, E., & Cowap, L. (2013). An evaluation of the use of Turnitin for electronic submission and marking and as a formative feedback tool from an educator’s perspective: Evaluation of Turnitin - an educator’s perspective. British Journal of Educational Technology, 44(4), 562-570. doi: 10.1111/ bjet.12054 Budge, K. (2011). A desire for the personal: Student perceptions of electronic feedback. Inter‐ national Journal of Teaching and Learning in Higher Education, 23(3), 342-349. Burns, M. (2019). A tech-friendly peer feedback strategy. Class Tech Tips. https: / / classtechtips. com/ 2019/ 10/ 30/ peer-feedback-strategy-spark/ Burstein, J., Chodorow, M., & Leacock, C. (2003). Criterion SM online essay evaluation: An application for automated evaluation of student essays. In Proceedings of the 15th Annual Conference on Innovative Applications of Artificial Intelligence. Acapulco, Mexico. Burstein, J., Chodorow, M., & Leacock, C. (2004). Automated essay evaluation: The Criterion online writing service. AI Magazine, 25(3), 27-36. Burstein, J., Elliot, N., Klebanov, B. B., Madnani, N., Napolitano, D., Schwartz, M., Houghton, P., & Molloy, H. (2018). Writing MentorTM: Writing progress using self-regulated writing support. The Journal of Writing Analytics, 2(1), 285-313. doi: 10.37514/ JWA-J.2018.2.1.12 Burstein, J., Riordan, B., & McCaffrey, D. (2021). Expanding automated writing evaluation. In D. Yan, A. A. 
Rupp, & P. W. Foltz (Eds.). Handbook of automated scoring: Theory into practice (pp. 329-346). Statistics in the Social and Behavioral Sciences. New York: Chapman and Hall/ CRC. doi: 10.1201/ 9781351264808-18 Bush, J. C. (2020). Using screencasting to give feedback for academic writing. Innovation in Language Learning and Teaching, 8(3), 1-14. doi: 10.1080/ 17501229.2020.1840571 Butler, D. L., & Winne, P. H. (1995). Feedback and self-regulated learning: A theoretical synthesis. Review of Educational Research, 65(3), 245-281. doi: 10.3102/ 00346543065003245 Butler, D. (2011). Closing the loop 21st century style: Providing feedback on written assessment via mp3 recordings. Journal of the Australasian Law Teachers Association, 4(1-2), 99-107. Cahill, A., Bruno, J., Ramey, J., Ayala Meneses, G., Blood, I., Tolentino, F., Lavee, T., & Andreyev, S. (2021). Supporting Spanish writers using automated feedback. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: 256 5 References <?page no="258"?> Human Language Technologies: Demonstrations (pp. 116-124). Association for Computational Linguistics. doi: 10.18653/ v1/ 2021.naacl-demos.14 Calvo, R., Arbiol, A., & Iglesias, A. (2014). Are all chats suitable for learning purposes? A study of the required characteristics. 5th International Conference on Software Development and Technologies for Enhancing Accessibility and Fighting Info-exclusion, DSAI 2013. Procedia Computer Science, 27, 251-260. doi: 10.1016/ j.procs.2014.02.028 Campbell, B. S., & Feldmann, A. (2017). The power of multimodal feedback. Journal of Curricu‐ lum, Teaching, Learning and Leadership in Education, 2(2), 1-6. Campbell, N., & Schumm Fauster, J. (2013). Learner-centred feedback on writing: Feedback as dialogue. In M. Reitbauer, J. Schumm-Fauster, N. Campbell, & S. Mercer (Eds.), Feedback matters. Current feedback practices in the EFL classroom (pp. 55-68). Frankfurt a. M.: Peter Lang. Canale, M. (2013). From communicative competence to communicative language pedagogy. In J. C. Richards & R. W. Schmidt (Eds.), Language and communication (7th ed., pp. 2-27). London: Routledge. Canals, L., Granena, G., Yilmaz, Y., & Malicka, A. (2020). Second language learners’ and teachers’ perceptions of delayed immediate corrective feedback in an asynchronous online setting: An exploratory study. TESL Canada Journal/ Revue TESL du Canada, 37(2), 181-209. doi: 10.18806/ tesl.v37i2.1336 Cann, A. J. (2007). Podcasting is dead. Long live video! Bioscience Education, 10(1), 1-4. doi: 10.3108/ beej.10.c1 Cann, A. J. (2014). Engaging students with audio feedback. Bioscience Education, 22(1), 31-41. doi: 10.11120/ beej.2014.00027 Carless, D. (2019). Feedback loops and the longer-term: Towards feedback spirals. Assessment & Evaluation in Higher Education, 44(5), 705-714. doi: 10.1080/ 02602938.2018.1531108 Carless, D. (2020). From teacher transmission of information to student feedback literacy: Activating the learner role in feedback processes. Active Learning in Higher Education, 43(1), 1-11. doi: 10.1177/ 1469787420945845 Carless, D., & Boud, D. (2018). The development of student feedback literacy: Enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315-1325. doi: 10.1080/ 02602938.2018.1463354 Carless, D., Salter, D., Yang, M., & Lam, J. (2011). Developing sustainable feedback practices. Studies in Higher Education, 36(4), 395-407. doi: 10.1080/ 03075071003642449 Carless, D., & Winstone, N. (2020). 
Teacher feedback literacy and its interplay with student feed‐ back literacy. Teaching in Higher Education, 3(69), 1-14. doi: 10.1080/ 13562517.2020.1782372 Carruthers, C., McCarron, B., Bolan, P., Devine, A., & McMahon-Beattie, U. (2014). Listening and learning: Reflections on the use of audio feedback. An excellence in teaching and learning note. Business and Management Education in HE, 1(1), 4-11. doi: 10.11120/ bmhe.2013.00001 Carswell, L., Thomas, P., Petre, M., Price, B., & Richards, M. (2000). Distance education via the Internet: The student experience. British Journal of Educational Technology, 31(1), 29-46. doi: 10.1111/ 1467-8535.00133 257 5 References <?page no="259"?> Cartney, P. (2010). Exploring the use of peer assessment as a vehicle for closing the gap between feedback given and feedback used. Assessment & Evaluation in Higher Education, 35(5), 551-564. doi: 10.1080/ 02602931003632381 Carvalho, B. (2020). How to use video to give engaging student feedback: Tips for giving effective feedback using personalized videos. https: / / spacesedu.com/ en/ how-to-use-video-for-studentfeedback/ Cavaleri, M., Di Biase, B., & Kawaguchi, S. (2013). The effect of video commentary feedback on the development of academic literacy. Learning conference, Rhodes. https: / / www.research gate.net/ publication/ 289642533_Academic_literacy_development_Does_video_commentary _feedback_lead_to_greater_engagement_and_response_than_conventional_written_feed back Cavaleri, M., Kawaguchi, S., Di Biase, B., & Power, C. (2019). How recorded audio-visual feedback can improve academic language support. Journal of University Teaching & Learning Practice, 16(4), 1-19. Cavanaugh, A. J., & Song, L. (2014). Audio feedback versus written feedback: Instructors’ and students’ perspectives. MERLOT Journal of Online Learning and Teaching, 10(1), 122-138. Centre for the Enhancement of Teaching and Learning [CETL]. (2020). How to provide highquality audio feedback? The University of Hong Kong. https: / / www.cetl.hku.hk/ tel/ guides/ audio-feedback/ Chalmers, C., MacCallum, J., Mowat, E., & Fulton, N. (2014). Audio feedback: Richer language but no measurable impact on student performance. Practitioner Research in Higher Education, 8(1), 64-73. Chang, C., Cunningham, K. J., Satar, H. M., & Strobl, C. (2018). Electronic feedback on second language writing: A retrospective and prospective essay on multimodality. Writing & Pedagogy, 9(3), 405-428. doi: 10.1558/ wap.32515 Chang, C. Y.-h. (2016). Two decades of research in L2 peer review. Journal of Writing Research, 8(1), 81-117. Chang, C.-C., Tseng, K.-H., Chou, P.-N., & Chen, Y.-H. (2011). Reliability and validity of webbased portfolio peer assessment: A case study for a senior high school’s students taking com‐ puter course. Computers & Education, 57(1), 1306-1316. doi: 10.1016/ j.compedu.2011.01.014 Chang, C.-C., Tseng, K.-H., & Lou, S.-J. (2012). A comparative analysis of the consistency and difference among teacher-assessment, student self-assessment and peer-assessment in a webbased portfolio assessment environment for high school students. Computers & Education, 58(1), 303-320. doi: 10.1016/ j.compedu.2011.08.005 Chang, C.-F. (2009). Peer review through synchronous and asynchronous CMC modes: A case study in a Taiwanese college English writing course. The JALT CALL Journal, 5(1), 45-64. Chang, C.-F. (2012). Peer review via three modes in an EFL writing course. Computers and Composition, 29(1), 63-78. doi: 10.1016/ j.compcom.2012.01.001 Chang, N., Watson, A. 
B., Bakerson, M. A., Williams, E. E., McGoron, F. X., & Spitzer, B. (2012). Electronic feedback or handwritten feedback: What do undergraduate students prefer and why? Journal of Teaching and Learning with Technology, 1(1), 1-23. 258 5 References <?page no="260"?> Chaudhuri, T. (2017). (De)constructing student e-portfolios in five questions: Experiences from a community of practice. In T. Chaudhuri & B. Cabau (Eds.), E-portfolios in higher education (pp. 3-19). Singapore: Springer. Chaudhuri, T., & Cabau, B. (Eds.). (2017). E-portfolios in higher education. Singapore: Springer. Chavan, P., Gupta, S., & Mitra, R. (2018). A novel feedback system for pedagogy refinement in large lecture classrooms. In J. C. Yang, M. Chang, L.-H. Wong, & M. M. T. Rodrigo (Eds.), 26th International Conference on Computers in Education. Main Conference Proceedings (pp. 464-469). Philippines: Asia-Pacific Society for Computers in Education. Chen, C.-F. E., & Cheng, W.-Y. E. (2008). Beyond the design of automated writing evaluation: Pedagogical practices and perceived learning effectiveness in EFL writing classes. Language Learning & Technology, 12(2), 94-112. Chen, H.-J. H., Cheng, H.-W. S., & Yang, T.-Y. C. (2017). Comparing grammar feedback provided by teachers with an automated writing evaluation system. English Teaching & Learning, 41(4), 99-131. doi: 10.6330/ ETL.2017.41.4.04 Chen, S. (2021). The impact of using social media platform WeChat for formative feedback of teaching and learning on student satisfaction (Unpublished Doctoral Dissertation). George Fox University, Newberg, Oregon, U.S. Cheng, D., & Li, M. (2020). Screencast video feedback in online TESOL classes. Computers and Composition, 58(2), 1-17. doi: 10.1016/ j.compcom.2020.102612 Cheng, G. (2017). The impact of online automated feedback on students’ reflective journal writing in an EFL course. The Internet and Higher Education, 34(7), 18-27. doi: 10.1016/ j.ihe‐ duc.2017.04.002 Cheng, G. (2022). Exploring the effects of automated tracking of student responses to teacher feedback in draft revision: Evidence from an undergraduate EFL writing course. Interactive Learning Environments, 30(2), 1-23. doi: 10.1080/ 10494820.2019.1655769 Chew, E. (2014). “To listen or to read? ” Audio or written assessment feedback for international students in the UK. On the Horizon, 22(2), 127-135. doi: 10.1108/ OTH-07-2013-0026 Chiang, I.-C. A. (2009). Which audio feedback is best? Optimising audio feedback to maximise student and staff experience. Paper presented at the A Word in Your Ear conference, Sheffield, UK. https: / / research.shu.ac.uk/ lti/ awordinyourear2009/ docs/ Chiang-full-paper.pdf Chiappetta, E. (2020). How conferencing for assessment benefits students during hybrid learning. Edutopia. https: / / www.edutopia.org/ article/ how-conferencing-assessment-benefits-student s-during-hybrid-learning Chionidou-Moskofoglou, M., Doukakis, S., & Lappa, A. (2005). The use of e-portfolios in teaching and assessment. In Proceedings of the 7th International Conference on Technology in Mathematics Teaching (pp. 224-232). Cho, K., & MacArthur, C. (2010). Student revision with peer and expert reviewing. Learning and Instruction, 20(4), 328-338. doi: 10.1016/ j.learninstruc.2009.08.006 Cho, Y. H., & Cho, K. (2011). Peer reviewers learn from giving comments. Instructional Science, 39(5), 629-643. doi: 10.1007/ s11251-010-9146-1 Chong, S. W. (2021). Reconsidering student feedback literacy from an ecological perspective. 
As‐ sessment & Evaluation in Higher Education, 46(1), 92-104. doi: 10.1080/ 02602938.2020.1730765 259 5 References <?page no="261"?> Chronister, M. A. (2019). The effects of media-enhanced feedback on the writing processes of students who self-identify as having ADHD: A qualitative case study. Thesis submitted in partial satisfaction of the requirements for the degree of Master of Arts in English (Composition) at the Department of English, California State University, Sacramento. Ciesielkiewicz, M. (2019). The use of e-portfolios in higher education: From the students’ perspective. Issues in Educational Research, 29(3), 649-667. Çiftçi, H. (2009). The effect of blog peer feedback on Turkish EFL students’ writing performance and their perceptions. Thesis submitted to the Institute of Educational Sciences in partial fulfillment of the requirements for the degree of Master of Arts in English Language Teaching. Yeditepe University, Istanbul, Turkey. https: / / acikbilim.yok.gov.tr/ handle/ 20.500.12812/ 3398 63. Çiftçi, H., & Kocoglu, Z. (2012). Effects of peer e-feedback on Turkish EFL students’ writing performance. Journal of Educational Computing Research, 46(1), 61-84. doi: 10.2190/ EC.46.1.c Clark, C., Strudler, N., & Grove, K. (2015). Comparing asynchronous and synchronous video vs. text based discussions in an online teacher education course. Online Learning, 19(3), 48-69. Clark-Gordon, C. V., Bowman, N. D., Hadden, A. A., & Frisby, B. N. (2019). College instructors and the digital red pen: An exploratory study of factors influencing the adoption and nonadoption of digital written feedback technologies. Computers & Education, 128(2), 414-426. doi: 10.1016/ j.compedu.2018.10.002 Clayton, C. (2018a). How to write feedback in emails. Blair English. http: / / www.blairenglish.com / articles/ general_articles/ writing-articles/ how-to-write-feedback-emails.html Clayton, C. (2018b). Writing an email of feedback: Giving feedback exercise. Blair English. http: / / www.blairenglish.com/ exercises/ emails/ exercises/ email-feedback-giving/ email-feedb ack-giving.html Coffey, M., & Gibbs, G. (2001). The evaluation of the student evaluation of educational quality questionnaire (SEEQ) in UK higher education. Assessment & Evaluation in Higher Education, 26(1), 89-93. doi: 10.1080/ 02602930020022318 Comiskey, D. (2012). Using screencasting to enhance student feedback. The Higher Education Academy. https: / / pure.ulster.ac.uk/ ws/ files/ 11445497/ Using_Screencasting_to_Enhance_Stu dent_Feedback.pdf Corder, S. P. (1967). The significance of learner’s errors. International Review of Applied Linguistics in Language Teaching (IRAL), 5(4), 161-170. doi: 10.1515/ iral.1967.5.1-4.161 Cotos, E. (2015). Automated writing analysis for writing pedagogy: From healthy tension to tangible prospects. Writing & Pedagogy, 7(2-3), 197-231. doi: 10.1558/ wap.v7i2-3.26381 Cotos, E. (2018). Automated writing evaluation. In J. I. Liontas & M. DelliCarpini (Eds.), The TESOL Encyclopedia of English Language Teaching (pp. 1-7). Hoboken, NJ: John Wiley & Sons, Inc. doi: 10.1002/ 9781118784235.eelt0391 Council of Europe. (2000). European Language Portfolio (ELP). https: / / www.coe.int/ en/ web/ portfolio/ home Cranny, D. (2016). Screencasting, a tool to facilitate engagement with formative feedback? All Ireland Journal of Teaching and Learning in Higher Education (AISHE-J), 8(3), 29101-29127. 
260 5 References <?page no="262"?> Crook, A., Mauchline, A., Maw, S., Lawson, C., Drinkwater, R., Lundqvist, K., Orsmond, P., Gomez, S., & Park, J. (2012). The use of video technology for providing feedback to students: Can it enhance the feedback experience for staff and students? Computers & Education, 58(1), 386-396. doi: 10.1016/ j.compedu.2011.08.025 Cruz Soto, E. (2016). Using screen capture technology in ELT graduate level writing feedback: A qualitative study of student attitudes and actions. Thesis submitted to the Faculty of Languages for the degree of Maestría en la Enseñanza del Inglés. Benemérita Universidad Autónoma de Puebla. Cryer, P., & Kaikumba, N. (1987). Audio-cassette tape as a means of giving feedback on written work. Assessment & Evaluation in Higher Education, 12(2), 148-153. doi: 10.1080/ 0260293870120207 Cunningham, K. J. (2017a). APPRAISAL as a framework for understanding multimodal electronic feedback: Positioning and purpose in screencast video and text feedback in ESL writing. Writing & Pedagogy, 9(3), 457-485. doi: 10.1558/ wap.31736 Cunningham, K. J. (2017b). Modes of feedback in ESL writing: Implications of shifting from text to screencast (Graduate Theses and Dissertations No. 16336). Iowa State University. https: / / lib.dr.iastate.edu/ etd/ 16336 Cunningham, K. J. (2019a). Student perceptions and use of technology-mediated text and screencast feedback in ESL writing. Computers and Composition, 52(8), 222-241. doi: 10.1016/ j.compcom.2019.02.003 Cunningham, K. J. (2019b). How language choices in feedback change with technology: Engagement in text and screencast feedback on ESL writing. Computers & Education, 135(5), 91-99. doi: 10.1016/ j.compedu.2019.03.002 Cunningham, K. J., & Link, S. (2021). Video and text feedback on ESL writing: Understanding attitude and negotiating relationships. Journal of Second Language Writing, 52(5), 1-17. doi: 10.1016/ j.jslw.2021.100797 Damayanti, I. L., Abdurahman, N. H., & Wulandari, L. (2021). Collaborative writing and peer feedback practices using Google Docs. In Proceedings of the Thirteenth Conference on Applied Linguistics (CONAPLIN 13) (pp. 225-232). Bandung, Indonesia. doi: 10.2991/ as‐ sehr.k.210427.034 Dao, P., Duong, P.-T., & Nguyen, M. X. N. C. (2021). Effects of SCMC mode and learner familiarity on peer feedback in L2 interaction. Computer Assisted Language Learning, 15(1), 1-29. doi: 10.1080/ 09588221.2021.1976212 De Coursey, C., & Dandashly, N. (2015). Digital literacies and generational micro-cultures: Email feedback in Lebanon. English Language Teaching, 8(11), 216-230. doi: 10.5539/ elt.v8n11p216 De Koning, B. B., Tabbers, H. K., Rikers, R. M. J. P., & Paas, F. (2009). Towards a framework for attention cueing in instructional animations: Guidelines for research and design. Educational Psychology Review, 21(2), 113-140. doi: 10.1007/ s10648-009-9098-7 DeBard, R., & Guidera, S. (2000). Adapting asynchronous communication to meet the seven principles of effective teaching. Journal of Educational Technology Systems, 28(3), 219-230. doi: 10.2190/ W1U9-CB67-59W0-74LH 261 5 References <?page no="263"?> Deeley, S. J. (2018). Using technology to facilitate effective assessment for learning and feedback in higher education. Assessment & Evaluation in Higher Education, 43(3), 439-448. doi: 10.1080/ 02602938.2017.1356906 Delaney, T. (2013). Using screencasts to give feedback. In D. C. Mussman (Ed.). New ways in teaching writing (2nd ed., pp. 299-301). New Ways in TESOL Series. Alexandria, VA: TESOL Press. Dembsey, J. 
M. (2017). Closing the Grammarly® gaps: A study of claims and feedback from an online grammar program. The Writing Center Journal, 36(1), 63‐96, 98‐100. Demirbilek, M. (2015). Social media and peer feedback: What do students really think about using Wiki and Facebook as platforms for peer feedback? Active Learning in Higher Education, 16(3), 211-224. doi: 10.1177/ 1469787415589530 Dempsey, J. V., Driscoll, M. P., & Swindell, L. K. (1993). Text-based feedback. In J. V. Dempsey & G. C. Sales (Eds.), Interactive instruction and feedback (pp. 21-54). Englewood Cliffs, NJ: Educational Technology Publications. Diaz Maggioli, G. H. (2018). Correcting errors. In J. I. Liontas (Ed.), The TESOL encyclopedia of English language teaching (pp. 1-6). Hoboken, NJ: Wiley-Blackwell. Dictionary.com. (2021). Method. https: / / www.dictionary.com/ browse/ method Dikli, S., & Bleyle, S. (2014). Automated essay scoring feedback for second language writers: How does it compare to instructor feedback? Assessing Writing, 22(3), 1-17. doi: 10.1016/ j.asw.2014.03.006 Dippold, D. (2009). Peer feedback through blogs: Student and teacher perceptions in an advanced German class. ReCALL, 21(1), 18-36. doi: 10.1017/ S095834400900010X Dirkx, K., Joosten-ten Brinke, D., Arts, J., & van Diggelen, M. (2021). In-text and rubric-referenced feedback: Differences in focus, level, and function. Active Learning in Higher Education, 22(3), 189-201. doi: 10.1177/ 1469787419855208 Docs Editors Help. (2022). How to use Google Forms. https: / / support.google.com/ docs/ answer/ 6 281888? hl=en&co=GENIE.Platform%3DDesktop Dodigovic, M., & Tovmasyan, A. (2021). Automated writing evaluation: The accuracy of Gram‐ marly’s feedback on form. International Journal of TESOL Studies, 3(2), 71-87. doi: 10.46451/ ijts.2021.06.06 Dooley, S. (2021). Is Google Forms GDPR compliant? https: / / measuredcollective.com/ is-google-f orms-gdpr-compliant/ Dotse, D. K. (2017). Teaching methods, teaching strategies, teaching techniques, and teaching approach: What are they? University of Education, Winneba. https: / / www.academia. edu/ 35752906/ TEACHING_METHODS_TEACHING_STRATEGIES_TEACHING_TECHNIQ UES_AND_TEACHING_APPROACH_WHAT_ARE_THEY Duncan, N. (2007). ‘Feed‐forward’: Improving students’ use of tutors’ comments. Assessment & Evaluation in Higher Education, 32(3), 271-283. doi: 10.1080/ 02602930600896498 Dunn, C. K. (2015). Preparing professional writers via technologically enhanced feedback: Results of a one-year study. The Journal of Technology, Management, and Applied Engineering, 31(2), 1-19. 262 5 References <?page no="264"?> Durham University. (2019). Giving feedback using screencasts. Centre for Academic Development. https: / / sites.durham.ac.uk/ dcad-resourcebank/ ? tag=feedback E, M. K. L. (2021). Using an online social annotation tool in a content-based instruction (CBI) classroom. International Journal of TESOL Studies, 3(2), 5-22. doi: 10.46451/ ijts.2021.06.02 Ebadi, S., & Rahimi, M. (2017). Exploring the impact of online peer-editing using Google Docs on EFL learners’ academic writing skills: A mixed methods study. Computer Assisted Language Learning, 30(8), 787-815. doi: 10.1080/ 09588221.2017.1363056 Edwards, K., Dujardin, A.-F., & Williams, N. (2012). Screencast feedback for essays on a distance learning MA in professional communication: An action research project. Journal of Academic Writing, 2(1), 95-126. doi: 10.18552/ joaw.v2i1.62 Ekahitanond, V. (2013). 
Promoting university students’ critical thinking skills through peer feedback activity in an online discussion forum. Alberta Journal of Educational Research, 59(2), 247-265. Ekinsmyth, C. (2010). Reflections on using digital audio to give assessment feedback. Planet, 23(1), 74-77. doi: 10.11120/ plan.2010.00230074 Ellis, C. (2017). The importance of e-portfolios for effective student-facing learning analytics. In T. Chaudhuri & B. Cabau (Eds.), E-portfolios in higher education (pp. 35-49). Singapore: Springer. Ellis, R. (2009a). Corrective feedback and teacher development. L2 Journal, 1(1), 3-18. doi: 10.5070/ l2.v1i1.9054 Ellis, R. (2009b). A typology of written corrective feedback types. ELT Journal, 63(2), 97-107. doi: 10.1093/ elt/ ccn023 Elola, I., & Oskoz, A. (2016). Supporting second language writing using multimodal feedback. Foreign Language Annals, 49(1), 58-74. doi: 10.1111/ flan.12183 Ene, E., & Upton, T. A. (2014). Learner uptake of teacher electronic feedback in ESL composition. System, 46(3), 80-95. doi: 10.1016/ j.system.2014.07.011 Ene, E., & Upton, T. A. (2018). Synchronous and asynchronous teacher electronic feedback and learner uptake in ESL composition. Journal of Second Language Writing, 41(3), 1-13. doi: 10.1016/ j.jslw.2018.05.005 Espasa, A., Mayordomo, R. M., Guasch, T., & Martinez-Melo, M. (2019). Does the type of feedback channel used in online learning environments matter? Students’ perceptions and impact on learning. Active Learning in Higher Education, 2(2). doi: 10.1177/ 1469787419891307 Evans, C. (2013). Making sense of assessment feedback in higher education. Review of Educational Research, 83(1), 70-120. doi: 10.3102/ 0034654312474350 Evans, K. P. (2018). An overview of the Pingo audience response system in undergraduate Math‐ ematics and Statistics teaching. MSOR Connections, 17(1), 25-31. doi: 10.21100/ msor.v17i1.828 Fang, B. (2019). Factors influencing faculty use of screencasting for feedback. Thesis submitted to the College of Graduate and Professional Studies of Abilene Christian University in partial fulfillment of the requirements for the degree Doctor of Education in Organizational Leadership. https: / / digitalcommons.acu.edu/ cgi/ viewcontent.cgi? article=1172&context=etd Farmer, R., Oakman, P., & Rice, P. (2016). A review of free online survey tools for undergraduate students. MSOR Connections, 15(1), 71-78. doi: 10.21100/ msor.v15i1.311 263 5 References <?page no="265"?> Farrell, O. (2020). From portafoglio to ePortfolio: The evolution of portfolio in higher education. Journal of Interactive Media in Education, 19(1), 1-14. doi: 10.5334/ jime.574 Farshi, S. S. (2015). The effect of two types of corrective feedback on EFL learners’ writing skill. Advances in Language and Literary Studies, 6(1), 26-30. doi: 10.7575/ aiac.alls.v.6n.1p.26 Fatani, T. H. (2020). Student satisfaction with videoconferencing teaching quality during the COVID-19 pandemic. BMC Medical Education, 20(1), 1-8. doi: 10.1186/ s12909-020-02310-2 FeedbackSchule. (2022). FeedbackSchule. https: / / wp.feedbackschule.de/ Ferguson, L. (2021). Using videos to give students personalized feedback. Edutopia. https: / / www. edutopia.org/ article/ using-videos-give-students-personalized-feedback Fernández-Toro, M., & Furnborough, C. (2014). Feedback on feedback: Eliciting learners’ responses to written feedback through student-generated screencasts. Educational Media International, 51(1), 35-48. doi: 10.1080/ 09523987.2014.889401 Ferrari, A. (2012). 
Digital competence in practice: An analysis of frameworks. JRC ( Joint Research Centre) Technical Reports. Luxembourg: Publications Office of the European Union. Ferris, D. R. (1997). The influence of teacher commentary on student revision. TESOL Quarterly, 31(2), 315-339. Ferris, D. R. (2014). Responding to student writing: Teachers’ philosophies and practices. Assessing Writing, 19(1), 6-23. doi: 10.1016/ j.asw.2013.09.004 Ferris, D. R., Pezone, S., Tade, C. R., & Tinti, S. (1997). Teacher commentary on student writing: Descriptions & implications. Journal of Second Language Writing, 6(2), 155-182. Ferris, D. R., & Roberts, B. (2001). Error feedback in L2 writing classes: How explicit does it need to be? Journal of Second Language Writing, 10(3), 161-184. doi: 10.1016/ S1060-3743(01)00039-X Firth, M., & Mesureur, G. (2010). Innovative uses for Google Docs in a university language program. The JALT CALL Journal, 6(1), 3-16. doi: 10.29140/ jaltcall.v6n1.88 Fish, W., & Lumadue, R. (2010). A technologically based approach to providing quality feedback to students: A paradigm shift for the 21st century. Academic Leadership: The Online Journal, 8(1, Article 5). Fisher, W., Cetl, C., & Keynes, M. (2008). Digital ink technology for e-assessment. In Proceedings of the 6th International Conference on Education and Information Systems, Technologies and Applications (EISTA 2008), 29 Jun - 2 Jul 2008. Orlando, Florida, USA. Fitch, J. L. (2004). Student feedback in the college classroom: A technology solution. Educational Technology Research and Development, 52(1), 71-77. doi: 10.1007/ BF02504773 Flood, M., Hayden, J. C., Bourke, B., Gallagher, P. J., & Maher, S. (2017). Design and evaluation of video podcasts for providing online feedback on formative pharmaceutical calculations assessments. American Journal of Pharmaceutical Education, 81(10, Article 6400), 100-103. Fonseca, J., Carvalho, C., Conboy, J., Valente, M., Gama, A., Salema, M., & Fiúza, E. (2015). Changing teachers’ feedback practices: A workshop challenge. Australian Journal of Teacher Education (AJTE), 40(8), 59-82. doi: 10.14221/ ajte.2015v40n8.4 Forsa. (2020a). Das Deutsche Schulbarometer Spezial: Corona-Krise. Ergebnisse einer Befragung von Lehrerinnen und Lehrern an allgemeinbildenden Schulen im Auftrag der Robert Bosch Stiftung in Kooperation mit der ZEIT. https: / / deutsches-schulportal.de/ unterricht/ das-deuts che-schulbarometer-spezial-corona-krise/ 264 5 References <?page no="266"?> Forsa. (2020b). Das Deutsche Schulbarometer Spezial: Corona-Krise: Folgebefragung. Ergebnisse einer Befragung von Lehrerinnen und Lehrern an allgemeinbildenden Schulen im Auftrag der Robert Bosch Stiftung in Kooperation mit der ZEIT. https: / / deutsches-schulportal.de/ un terricht/ lehrer-umfrage-deutsches-schulbarometer-spezial-corona-krise-folgebefragung/ France, D., & Wheeler, A. (2007). Reflections on using podcasting for student feedback. Planet, 18(1), 9-11. doi: 10.11120/ plan.2007.00180009 Franke, P., & Handke, J. (2012). E-Assessment. In J. Handke & A. M. Schäfer (Eds.), E-Learning, E-Teaching und E-Assessment in der Hochschullehre. Eine Anleitung (pp. 147-208). München: Oldenburg Verlag. Fuccio, D. S. (2014). Cloud power: Shifting L2 writing feedback paradigms via Google Docs. Journal of Global Literacies, Technologies, and Emerging Pedagogies, 2(4), 202-233. Fukkink, R. G., Trienekens, N., & Kramer, L. J. C. (2011). Video feedback in education and training: Putting learning in the picture. Educational Psychology Review, 23(1), 45-63. 
doi: 10.1007/ s10648-010-9144-5 Fukuta, J., Tamura, Y., & Kawaguchi, Y. (2019). Written languaging with indirect feedback in writing revision: Is feedback always effective? Language Awareness, 28(1), 1-23. doi: 10.1080/ 09658416.2019.1567742 Fulcher, G. (2012). Assessment literacy for the language classroom. Language Assessment Quarterly, 9(2), 113-132. doi: 10.1080/ 15434303.2011.642041 Fulcher, G., & Owen, N. (2016). Dealing with the demands of language testing and assessment. In G. Hall (Ed.). The Routledge Handbook of English Language Teaching (pp. 109-120). Routledge Handbooks in Applied Linguistics. London, New York: Routledge. Gao, J. (2021). Exploring the feedback quality of an automated writing evaluation system Pigai. International Journal of Emerging Technologies in Learning (iJET), 16(11), 322-330. doi: 10.3991/ ijet.v16i11.19657 Garner, R. (1990). When children and adults do not use learning strategies: Toward a theory of settings. Review of Educational Research, 60(4), 517-529. doi: 10.2307/ 1170504 Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking, cognitive presence, and computer conferencing in distance education. American Journal of Distance Education, 15(1), 7-23. doi: 10.1080/ 08923640109527071 Garrison, D.R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2(2-3), 87-105. doi: 10.1016/ S1096-7516(00)00016-6 Gebril, A., Boraie, D., & Arrigoni, E. (2018). Assessment literacy. In J. I. Liontas (Ed.), The TESOL encyclopedia of English language teaching (pp. 1-7). Hoboken, NJ: Wiley-Blackwell. Gedera, D. S. P. (2012). The dynamics of blog peer feedback in ESL classroom. Teaching English with Technology, 12(4), 16-30. Ghazal, S., Samsudin, Z., & Aldowah, H. (2015). Students’ perception of synchronous courses using Skype-based video conferencing. Indian Journal of Science and Technology, 8(30), 1-9. doi: 10.17485/ ijst/ 2015/ v8i1/ 84021 265 5 References <?page no="267"?> Ghosn-Chelala, M., & Al-Chibani, W. (2018). Screencasting: Supportive feedback for EFL remedial writing students. International Journal of Information and Learning Technology, 35(3), 146-159. doi: 10.1108/ IJILT-08-2017-0075 Gibbs, G., & Simpson, C. (2005). Conditions under which assessment supports students’ learning. Learning and Teaching in Higher Education (1), 3-31. Gielen, M., & De Wever, B. (2012). Peer assessment in a wiki: Product improvement, students’ learning and perception regarding peer feedback. Procedia - Social and Behavioral Sciences, 69, 585-594. doi: 10.1016/ j.sbspro.2012.11.450 Gigante, J., Dell, M., & Sharkey, A. (2011). Getting beyond “Good job”: How to give effective feedback. Pediatrics, 127(2), 205-207. doi: 10.1542/ peds.2010-3351 Giguère, C., & Parks, S. (2018). Child-to-child interaction and corrective feedback during eTandem ESL-FSL chat exchanges. Language Learning & Technology, 22(3), 176-192. Gleaves, A., & Walker, C. (2013). Richness, redundancy or relational salience? A comparison of the effect of textual and aural feedback modes on knowledge elaboration in higher education students’ work. Computers & Education, 62(1), 249-261. doi: 10.1016/ j.compedu.2012.11.004 Glei, J. K. (2016, October 7). How to give negative feedback over email. Harvard Business Review. https: / / hbr.org/ 2016/ 10/ how-to-give-negative-feedback-over-email Gonzalez, M., & Moore, N. S. (2018). Supporting graduate student writers with VoiceThread. 
Journal of Educational Technology Systems, 46(4), 485-504. doi: 10.1177/ 0047239517749245 Google Forms. (2022). About Google Forms. https: / / www.google.com/ forms/ about/ Gormely, K., & McDermott, P. (2011). Do you jing? How screencasting can enrich classroom teaching and learning. Language and Literacy Spectrum, 21, 12-20. Gould, J., & Day, P. (2013). Hearing you loud and clear: Student perspectives of audio feed‐ back in higher education. Assessment & Evaluation in Higher Education, 38(5), 554-566. doi: 10.1080/ 02602938.2012.660131 Green, A. (2018). Assessment of learning and assessment for learning. In J. I. Liontas (Ed.), The TESOL encyclopedia of English language teaching (pp. 1-6). Hoboken, NJ: Wiley-Blackwell. Grenny, J. (2013, January 30). The dos and donts of giving performance feedback via email. Forbes. https: / / www.forbes.com/ sites/ josephgrenny/ 2013/ 01/ 30/ the-dos-and-donts-of-giving -performance-feedback-via-email/ ? sh=13d7e0c812ab Griesbaum, J. (2017). Feedback in learning: Screencasts as tools to support instructor feedback to students and the issue of learning from feedback given to other learners. International Journal of Information and Education Technology, 7(9), 694-699. doi: 10.18178/ ijiet.2017.7.9.956 Griese, B., & Kirf, S. (2017). Fehlerkultur und Humor in der Online-Lehre: Ein Erfahrungsbericht über den Einsatz kommentierter PowerPoint- Präsentationen und Videos. In H. R. Griesehop & E. Bauer (Eds.), Lehren und Lernen online. Lehr- und Lernerfahrungen im Kontext akade‐ mischer Online-Lehre (pp. 235-262). Wiesbaden: Springer. Grigoryan, A. (2017a). Audiovisual commentary as a way to reduce transactional distance and increase teaching presence in online writing instruction: Student perceptions and preferences. Journal of Response to Writing, 3(1), 83-128. 266 5 References <?page no="268"?> Grigoryan, A. (2017b). Feedback 2.0 in online writing instruction: Combining audio-visual and text-based commentary to enhance student revision and writing competency. Journal of Computing in Higher Education, 29(3), 451-476. doi: 10.1007/ s12528-017-9152-2 Grimes, D., & Warschauer, M. (2010). Utility in a fallible tool: A multi-site case study of automated writing evaluation. The Journal of Technology, Learning, and Assessment, 8(6), 4-43. Grotjahn, R., & Kleppin, K. (2017a). Feedback zu schriftlichen Lernerproduktionen. In B. Akukwe, R. Grotjahn, & S. Schipolowski (Eds.). Schreibkompetenzen in der Fremdsprache. Aufgaben‐ gestaltung, kriterienorientierte Bewertung und Feedback (pp. 255-291). Narr Studienbücher. Tübingen: Narr Francke Attempto. Grotjahn, R., & Kleppin, K. (2017b). Kriteriale Evaluation von Schreibkompetenzen. In B. Akukwe, R. Grotjahn, & S. Schipolowski (Eds.). Schreibkompetenzen in der Fremdsprache. Aufgabengestaltung, kriterienorientierte Bewertung und Feedback (pp. 117-157). Narr Studien‐ bücher. Tübingen: Narr Francke Attempto. Gruba, P., Hinkelman, D., & Cárdenas-Claros, M. S. (2016). New technologies, blended learning and the ‘flipped classroom’ in ELT. In G. Hall (Ed.). The Routledge Handbook of English Language Teaching (pp. 135-149). Routledge Handbooks in Applied Linguistics. London, New York: Routledge. Guichon, N., Bétrancourt, M., & Prié, Y. (2012). Managing written and oral negative feedback in a synchronous online teaching situation. Computer Assisted Language Learning, 25(2), 181-197. doi: 10.1080/ 09588221.2011.636054 Guo, P. J., Kim, J., & Rubin, R. (2014). 
How video production affects student engagement: An empirical study of MOOC videos. In L@S 2014 Proceedings of the First ACM Conference on Learning at Scale, Atlanta (pp. 41-50). New York, NY: Association for Computing Machinery. Guo, Q., Feng, R., & Hua, Y. (2021). How effectively can EFL students use automated written corrective feedback (AWCF) in research writing? Computer Assisted Language Learning, 10(1), 1-20. doi: 10.1080/ 09588221.2021.1879161 Haddad, R. J., & Kalaani, Y. (2014). Google Forms: A real-time formative feedback process for adaptive Learning. In 2014 ASEE Annual Conference & Exposition Proceedings (pp. 24.649.1- 24.649.14). Indianapolis, Indiana: ASEE Conferences. http: / / peer.asee.org/ 20540 Hall, T., Tracy, D., & Lamey, A. (2016). Exploring video feedback in philosophy: Benefits for instructors and students. Teaching Philosophy, 39(2), 137-162. doi: 10.5840/ teachphil201651347 Handke, J. (2020). Humanoide Roboter: Showcase, Partner und Werkzeug. Marburg: Tectum. Hara, N., Bonk, C. J., & Angeli, C. (2000). Content analysis of online discussion in an applied educational psychology course. Instructional Science, 28(2), 115-152. doi: 10.1023/ A: 1003764722829 Harding, J. M. (2018). Student perceptions of screencasted feedback in first-year composition. Thesis submitted to the faculty of the Virginia Polytechnic Institute and State University in partial fulfillment of the requirements for the degree of Master of Arts in English. https: / / vtechwor ks.lib.vt.edu/ bitstream/ handle/ 10919/ 83843/ Harding_JM_T_2018.pdf Harding, L., & Kremmel, B. (2016). Teacher assessment literacy and professional development. In D. Tsagari & J. Banerjee (Eds.). Handbook of Second Language Assessment (pp. 413-427). Handbooks of Applied Linguistics: Vol. 12. Berlin, Boston: De Gruyter Mouton. 267 5 References <?page no="269"?> Harper, F., Green, H., & Fernández-Toro, M. (2018). Using screencasts in the teaching of modern languages: Investigating the use of Jing® in feedback on written assignments. The Language Learning Journal, 46(3), 277-292. doi: 10.1080/ 09571736.2015.1061586 Harris, J. B., & Hofer, M. J. (2011). Technological Pedagogical Content Knowledge (TPACK) in Action: A descriptive study of secondary teachers’ curriculum-based, technology-related instructional planning. Journal of Research on Technology in Education, 43(3), 211-229. Hartung, S. (2017). Lernförderliches Feedback in der Online-Lehre gestalten. In H. R. Griesehop & E. Bauer (Eds.), Lehren und Lernen online. Lehr- und Lernerfahrungen im Kontext akademischer Online-Lehre (pp. 199-217). Wiesbaden: Springer. Hase, S., & Saenger, H. (1997). Videomail - a personalised approach to providing feedback on assessment to distance learners. Distance Education, 18(2), 361-368. doi: 10.1080/ 0158791970180211 Hassini, E. (2006). Student-instructor communication: The role of email. Computers & Education, 47(1), 29-40. doi: 10.1016/ j.compedu.2004.08.014 Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. New York: Routledge. Hattie, J. (2012). Visible learning for teachers: Maximizing impact on learning. London: Routledge. Hattie, J., & Clarke, S. (2019). Visible Learning: Feedback. London: Routledge. Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112. doi: 10.3102/ 003465430298487 Hatziapostolou, T., & Paraskakis, I. (2010). Enhancing the impact of formative feedback on student learning through an online feedback system. 
Electronic Journal of E-Learning, 8(2), 111-122. Hatzipanagos, S., & Rochon, R. (2010). Editorial: Introduction to the special issue: Approaches to assessment that enhance learning. Assessment & Evaluation in Higher Education, 35(5), 491-492. doi: 10.1080/ 02602938.2010.493700 Hawkins, S. C., Osborne, A., Schofield, S. J., Pournaras, D. J., & Chester, J. F. (2012). Improving the accuracy of self-assessment of practical clinical skills using video feedback - the importance of including benchmarks. Medical Teacher, 34(4), 279-284. doi: 10.3109/ 0142159X.2012.658897 Haxton, K. J., & McGarvey, D. J. (2011). Screencasting as a means of providing timely, general feedback on assessment. New Directions, 34(7), 18-21. doi: 10.11120/ ndir.2011.00070018 Hazelton, L., Nastal, J., Elliot, N., Burstein, J., & McCaffrey, D. F. (2021). Formative automated writing evaluation: A standpoint theory of action. Journal of Response to Writing, 7(1), 37-91. Heath, M. (2017). Using Google Docs to improve accuracy in dialogue writings. In Proceedings of the International Conference on English Language Teaching (ICONELT 2017) (pp. 159-163). Surabaya, Indonesia: Atlantis Press. doi: 10.2991/ iconelt-17.2018.36 Hedin, B. (2012). Peer feedback in academic writing using Google Docs. LTHs 7: e Pedagogiska Inspirationskonferens, 30 augusti 2012. https: / / www.researchgate.net/ publication/ 279424603 Heimbürger, A. (2018). Using recorded audio feedback in cross-cultural e-education environ‐ ments to enhance assessment practices in a higher education. Advances in Applied Sociology, 8(2), 106-124. doi: 10.4236/ aasoci.2018.82007 268 5 References <?page no="270"?> Henderson, M. & Phillips, M. (2014). Technology enhanced feedback on assessment. Paper presented at the Australian Computers in Education Conference 2014, Adelaide, SA. http: / / acec 2014.acce.edu.au/ session/ technology-enhanced-feedback-assessment Henderson, M., & Phillips, M. (2015). Video-based feedback on student assessment: Scarily personal. Australasian Journal of Educational Technology, 31(1), 51-66. doi: 10.14742/ ajet.1878 Hennessy, C., & Forrester, G. (2014). Developing a framework for effective audio feed‐ back: A case study. Assessment & Evaluation in Higher Education, 39(7), 777-789. doi: 10.1080/ 02602938.2013.870530 Hepplestone, S., Holden, G., Irwin, B., Parkin, H. J., & Thorpe, L. (2011). Using technology to encourage student engagement with feedback: A literature review. Research in Learning Technology, 19(2), 117-127. doi: 10.1080/ 21567069.2011.586677 Hernandez, H. P., Amarles, A. M., & Raymundo, M. C. Y. (2017). Blog-assisted feedback: Its affordances in improving college ESL students’ academic writing skills. The Asian ESP Journal, 13(2), 100-143. Heron-Hruby, A., Chisholm, J. S., & Olinger, A. R. (2020). “It doesn’t feel like a conversation”: Digital field experiences and preservice teachers’ conceptions of writing response. English Education, 53(1), 72-93. Hewson, K., & Poulsen, J. (2014). A picture is worth a thousand words: Using video for powerful feedback. A Light on Teaching, 9-12. https: / / www.uleth.ca/ teachingcentre/ picture-worth-th ousand-words-using-video-powerful-feedback-2014-2015 Higgins, R., Hartley, P., & Skelton, A. (2001). Getting the message across: The problem of communicating assessment feedback. Teaching in Higher Education, 6(2), 269-274. doi: 10.1080/ 13562510120045230 Ho, M.-c., & Savignon, S. J. (2013). Face-to-face and computer-mediated peer review in EFL writing. CALICO Journal, 24(2), 269-290. 
doi: 10.1558/ cj.v24i2.269-290 Hodges, W. (2020). Alternative assessment in Sakai: Part 3 - Portfolios and peer review. https: / / w ww.sakailms.org/ post/ alternative-assessment-in-sakai-part-3-portfolios-and-peer-review Holland, S. (2020). Screencasting for instruction and feedback: Using video and audio to show, not tell. https: / / iteachu.uaf.edu/ screencasting/ Honeycutt, L. (2001). Comparing e-mail and synchronous conferencing in online peer response. Written Communication, 18(1), 26-60. Honeyman, M. (2015). An evaluation of the use of screencasting and video for assessment feedback in online higher education. http: / / blog.edtechs.info/ wp-content/ uploads/ 2015/ 07/ 1stDraftVid eoFeedbackPaper.pdf Hope, S. A. (2011). Making movies: The next big thing in feedback? Bioscience Education, 18(1), 1-14. doi: 10.3108/ beej.18.2SE Hoska, D. M. (1993). Motivating learners through CBI feedback: Developing a positive learner perspective. In J. V. Dempsey & G. C. Sales (Eds.), Interactive instruction and feedback (pp. 105- 132). Englewood Cliffs, NJ: Educational Technology Publications. Hosseini, S. B. (2012). Asynchronous computer-mediated corrective feedback and the correct use of prepositions: Is it really effective? Turkish Online Journal of Distance Education, 13(4), 95-111. 269 5 References <?page no="271"?> Hosseini, S. B. (2013). The impact of asynchronous computer-mediated corrective feedback on increasing Iranian EFL learners’ correct use of present tenses. International Journal on New Trends in Education and Their Implications, 4(1), 138-153. Howard, S. B. (2018). The comparative usability of instructor feedback given in three modes: Screencast, audio, and written. Dissertation, Texas Tech University, Texas. https: / / ttu-ir.tdl.or g/ bitstream/ handle/ 2346/ 73819/ HOWARD-DISSERTATION-2018.pdf Hu, Q., & Johnston, E. (2012). Using a wiki-based course design to create a student-centered learning environment: Strategies and lessons. Journal of Public Affairs Education, 18(3), 493-512. doi: 10.1080/ 15236803.2012.12001696 Huang, H.-Y. C. (2016). Students and the teacher’s perceptions on incorporating the blog task and peer feedback into EFL writing classes through blogs. English Language Teaching, 9(11), 38-47. doi: 10.5539/ elt.v9n11p38 Huang, S., & Renandya, W. A. (2020). Exploring the integration of automated feedback among lower-proficiency EFL learners. Innovation in Language Learning and Teaching, 14(1), 15-26. doi: 10.1080/ 17501229.2018.1471083 Huba, M. E., & Freed, J. E. (2000). Learner-centered assessment on college campuses: Shifting the focus from teaching to learning. Boston: Allyn and Bacon. Huett, J. (2004). Email as an educational feedback tool: Relative advantages and implementation guidelines. International Journal of Instructional Technology and Distance Learning, 1(6), 35-44. Hung, S.-T. A. (2016). Enhancing feedback provision through multimodal video technology. Computers & Education, 98(2), 90-101. doi: 10.1016/ j.compedu.2016.03.009 Hunukumbure, A. D., Smith, S. F., & Das, S. (2017). Holistic feedback approach with video and peer discussion under teacher supervision. BMC Medical Education, 17(1), 1-10. doi: 10.1186/ s12909-017-1017-x Hussein, M. A., Hassan, H., & Nassef, M. (2019). Automated language essay scoring systems: A literature review. PeerJ Computer Science, 5, 1-16. doi: 10.7717/ peerj-cs.208 Huxham, M., Laybourn, P., Cairncross, S., Gray, M., Brown, N., Goldfinch, J., & Earl, S. (2008). 
Collecting student feedback: A comparison of questionnaire and other methods. Assessment & Evaluation in Higher Education, 33(6), 675-686. doi: 10.1080/ 02602930701773000 Hyde, E. (2013). Talking Results - trialing an audio-visual feedback method for e-submissions. Innovative Practice in Higher Education, 1(3), 1-4. Hyland, F., & Hyland, K. (2001). Sugaring the pill: Praise and criticism in written feedback. Journal of Second Language Writing, 10(3), 185-212. Hyland, K., & Hyland, F. (2006). Feedback on second language students’ writing. Language Teaching, 39(2), 83-101. doi: 10.1017/ S0261444806003399 Hyland, K., & Hyland, F. (2019). Interpersonality and teacher-written feedback. In K. Hyland & F. Hyland (Eds.). Feedback in second language writing. Contexts and issues (2nd ed., pp. 165-183). The Cambridge Applied Linguistics Series. Cambridge: Cambridge University Press. Hynson, Y. (2012). An innovative alternative to providing writing feedback on student’s essays. Teaching English with Technology, 12(1), 53-57. Ice, P., Swan, K., Diaz, S., Kupczynski, L., & Swan-Dagen, A. (2010). An analysis of students’ perceptions of the value and efficacy of instructors’ auditory and text-based feedback 270 5 References <?page no="272"?> modalities across multiple conceptual levels. Journal of Educational Computing Research, 43(1), 113-134. doi: 10.2190/ EC.43.1.g Ice, P., Curtis, R., Phillips, P., & Wells, J. (2007). Using asynchronous audio feedback to enhance teaching presence and students’ sense of community. Journal of Asynchronous Learning Networks, 11(2), 3-25. Imperial College London. (2021). A guide to Padlet for online learning. https: / / www.imperial.ac. uk/ media/ imperial-college/ staff/ education-development-unit/ public/ Padlet-for-online-teac hing-and-learning.pdf Inbar-Lourie, O. (2013). Guest editorial to the special issue on language assessment literacy. Language Testing, 30(3), 301-307. doi: 10.1177/ 0265532213480126 Inbar-Lourie, O. (2017). Language assessment literacy. In I. Shohami, I. G. Or, & S. May (Eds.). Language testing and assessment (pp. 257-270). Encyclopedia of language and education. Cham, Switzerland: Springer. Jenkins, J. O. (2010). A multi‐faceted formative assessment approach: Better recognising the learning needs of students. Assessment & Evaluation in Higher Education, 35(5), 565-576. doi: 10.1080/ 02602930903243059 Jiang, L., & Yu, S. (2020). Appropriating automated feedback in L2 writing: Experiences of Chinese EFL student writers. Computer Assisted Language Learning, 12(2), 1-25. doi: 10.1080/ 09588221.2020.1799824 Jingxin, G., & Razali, A. B. (2020). Tapping the potential of Pigai automated writing evaluation (AWE) program to give feedback on EFL writing. Universal Journal of Educational Research, 8(12B), 8334-8343. doi: 10.13189/ ujer.2020.082638 JISC. (2019a). Getting started with e-portfolios. https: / / www.jisc.ac.uk/ guides/ getting-started-wi th-e-portfolios JISC. (2019b). How to enhance student learning, progression and employability with e-portfolios: Case studies and guidance on e-portfolios for UK further and higher education. https: / / www.ji sc.ac.uk/ guides/ e-portfolios John, P., & Woll, N. (2020). Using grammar checkers in an ESL context: An investigation of automatic corrective feedback. CALICO Journal, 37(2), 169-192. doi: 10.1558/ cj.36523 Johnson, D. W., & Johnson, R. T. (1993). Cooperative learning and feedback in technologybased instruction. In J. V. Dempsey & G. C. Sales (Eds.), Interactive instruction and feedback (pp. 133-157). 
Englewood Cliffs, NJ: Educational Technology Publications. Jones, N., Georghiades, P., & Gunson, J. (2012). Student feedback via screen capture digital video: Stimulating student’s modified action. Higher Education, 64(5), 593-607. doi: 10.1007/ s10734-012-9514-7 Jug, R., Jiang, X., & Bean, S. M. (2019). Giving and receiving effective feedback: A review article and how-to guide. Archives of Pathology & Laboratory Medicine, 143(2), 244-250. Karal, H., Çebi, A., & Turgut, Y. E. (2011). Perceptions of students who take synchronous courses through video conferencing about distance education. The Turkish Online Journal of Educational Technology (TOJET), 10(4), 276-293. Kay, R. & Bahula, T. (2020). A systematic review of the literature on video feedback used in higher education. EDULearn 2020 - International Conference on Education and New Learning Technolo‐ 271 5 References <?page no="273"?> gies, Seville, Spain. https: / / www.researchgate.net/ profile/ Robin_Kay/ publication/ 342883343_ A_Systematic_Review_of_The_Literature_on_Video_Feedback_Used_in_Higher_Education / links/ 5f0b872c4585155050a2c13c/ A-Systematic-Review-of-The-Literature-on-Video-Feedba ck-Used-in-Higher-Education.pdf Kay, R., Petrarca, D., & Bahula, T. (2019). Comparing the use of written and video feedback in pre-service teacher education: A case study. Proceedings of the 11th International Conference on Education and New Learning Technologies (EduLearn19) Conference, 1-3 July 2019, Palma, Spain. https: / / library.iated.org/ view/ KAY2019COM Kay, R. H. (2020). Thriving in an online world: A guide for higher education instructors. https: / / www.researchgate.net/ publication/ 345984668_Thriving_in_an_Online_World_A_ Guide_for_Higher_Education_Instructors Keefer, K. (2020). Sample student feedback. https: / / www.trentu.ca/ teaching/ sample-student-fee dback-email Kember, D., Leung, D. Y. P., & Kwan, K. P. (2002). Does the use of student feedback questionnaires improve the overall quality of teaching? Assessment & Evaluation in Higher Education, 27(5), 411-425. doi: 10.1080/ 0260293022000009294 Kemp, C., Li, P., Li, Y., Ma, D., Ren, S., Tian, A., Di Wang, Xie, L., You, J., Zhang, J., Zhu, L., & Zhuang, H. (2019). Collaborative wiki writing gives language learners opportunities for personalised participatory peer-feedback. In S. Yu, H. Niemi, & J. Mason (Eds.). Perspectives on rethinking and reforming education. Shaping future schools with digital technology (pp. 147- 163). Singapore: Springer. doi: 10.1007/ 978-981-13-9439-3_9 Kerr, J., Dudau, A., Deeley, S., Kominis, G., & Song, Y. (2016). Audio-visual feedback: Student attainment and student and staff perceptions. College of Social Sciences, University of Glasgow. http: / / eprints.gla.ac.uk/ 119844/ 7/ 119844.pdf Kerr, W. & McLaughlin, P. (2008). The benefit of screen recorded summaries in feedback for work submitted electronically. Proceedings of the 12th CAA International Computer Assisted Assessment Conference, 8-9 July 2008, at Loughborough University. https: / / hdl.handle.net/ 213 4/ 5363 Ketchum, C., LaFave, D. S., Yeats, C., Phompheng, E., & Hardy, J. H. (2020). Video-based feedback on student work: An investigation into the instructor experience, workload, and student evaluations. Online Learning, 24(3), 85-105. doi: 10.24059/ olj.v24i3.2194 Kettle, T. (2012). Using audio podcasts to provide student feedback: Exploring the issues. Working Papers in Health Sciences, 1(1), 1-5. Khan, R. (2018). What is assessment? Purposes of assessment and evaluation. In J. I. 
Liontas (Ed.), The TESOL encyclopedia of English language teaching (pp. 1-7). Hoboken, NJ: Wiley- Blackwell. Kiffer, S., Bertrand, É., Eneau, J., Gilliot, J.-M., & Lameul, G. (2021). Enhancing learners’ autonomy with e-portfolios and open learner models: A literature review. Education Thinking, 1(1), 1-9. Kılıçkaya, F. (2016). Use of screencasting for delivering lectures and providing feedback in educational contexts: Issues and implications. In M. Marczak & J. Krajka (Eds.). CALL for openness (pp. 73-90). Studies in computer assisted language learning. Frankfurt am Main, New York: Peter Lang. 272 5 References <?page no="274"?> Killingback, C., Ahmed, O., & Williams, J. (2019). ‘It was all in your voice’ - Tertiary student perceptions of alternative feedback modes (audio, video, podcast, and screencast): A qualita‐ tive literature review. Nurse Education Today, 72, 32-39. doi: 10.1016/ j.nedt.2018.10.012 Killingback, C., Drury, D., Mahato, P., & Williams, J. (2020). Student feedback delivery modes: A qualitative study of student and lecturer views. Nurse Education Today, 84, 104237. doi: 10.1016/ j.nedt.2019.104237 Killoran, J. B. (2013). Reel-to-reel tapes, cassettes, and digital audio media: Reverberations from a half-century of recorded-audio response to student writing. Computers and Composition, 30(1), 37-49. doi: 10.1016/ j.compcom.2013.01.001 Kim, L. (2004). Online technologies for teaching writing: Students react to teacher response in voice and written modalities. Research in the Teaching of English, 38(3), 304-337. Kim, V. (2018). Technology-enhanced feedback on student writing in the English-medium instruction classroom. English Teaching, 73(4), 29-53. doi: 10.15858/ engtea.73.4.201812.29 Kim, Y., & Emeliyanova, L. (2021). The effects of written corrective feedback on the accuracy of L2 writing: Comparing collaborative and individual revision behavior. Language Teaching Research, 25(2), 234-255. doi: 10.1177/ 1362168819831406 King, D., McGugan, S., & Bunyan, N. (2008). Does it make a difference? Replacing text with audio feedback. Practice and Evidence of Scholarship of Teaching and Learning in Higher Education, 3(2), 145-163. Kirschner, P. A., van den Brink, H., & Meester, M. (1991). Audiotape feedback for essays in distance education. Innovative Higher Education, 15(2), 185-195. doi: 10.1007/ BF00898030 Kitchakarn, O. (2013). Peer feedback through blogs: An effective tool for improving students’ writing abilities. Turkish Online Journal of Distance Education, 14(3), 152-164. Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A his‐ torical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254-284. doi: 10.1037/ 0033-2909.119.2.254 Kluzer, S., & Pujol Priego, L. (2018). DigComp into action: Get inspired - make it happen. A user guide to the European Digital Competence Framework. EUR - Scientific and Technical Research Reports: Vol. 29115. Luxembourg: Publications Office of the European Union. Koehler, M. J., Mishra, P., & Cain, W. (2013). What is Technological Pedagogical Content Knowledge (TPACK)? Journal of Education, 193(3), 13-19. König, J., Jäger-Biela, D. J., & Glutsch, N. (2020). Adapting to online teaching during COVID-19 school closure: Teacher education and teacher competence effects among early career teachers in Germany. European Journal of Teacher Education, 43(4), 608-622. doi: 10.1080/ 02619768.2020.1809650 Kouakou, E. A. (2018). 
Comment bubbles, color coding, and track changes. Exploring computermediated feedback on English learners’ writing. Northcentral University, San Diego, California, USA. https: / / www.proquest.com/ openview/ 225ac5526c655d10dd94bf4bb90f52eb/ 1? pq-origsi te=gscholar&cbl=18750 Kress, G. (2004). Literacy in the new media age (Reprinted). Literacies. London: Routledge. Kroeger, M. (2020). Exploring the potentials of CLIL for the teaching of textile design (Unpublished MA thesis). University of Osnabrück, Osnabrück, Germany. 273 5 References <?page no="275"?> Kulgemeyer, C. (2018). Towards a framework for effective instructional explanations in science teaching. Studies in Science Education, 54(2), 109-139. doi: 10.1080/ 03057267.2018.1598054 Kulhavy, R. W. (1977). Feedback in written instruction. Review of Educational Research, 47(1), 211-232. Kulhavy, R. W., & Stock, W. A. (1989). Feedback in written instruction: The place of response certitude. Educational Psychology Review, 1(4), 279-308. Kulhavy, R. W., & Wager, W. (1993). Feedback in programmed instruction: Historical context and implications for practice. In J. V. Dempsey & G. C. Sales (Eds.), Interactive instruction and feedback (pp. 3-20). Englewood Cliffs, NJ: Educational Technology Publications. Kultusministerkonferenz [KMK]. (2016). Bildung in der digitalen Welt: Strategie der Kultusministerkonferenz. Beschluss der Kultusministerkonferenz vom 08.12.2016. https: / / www.kmk.org/ fileadmin/ Dateien/ veroeffentlichungen_beschluesse/ 2016/ 2016_12_0 8-Bildung-in-der-digitalen-Welt.pdf Kurtzberg, T. R., Belkin, L. Y., & Naquin, C. E. (2006). The effect of e‐mail on attitudes towards performance feedback. International Journal of Organizational Analysis, 14(1), 4-21. doi: 10.1108/ 10553180610739722 Kusuma, I. P. I., Mahayanti, N. W. S., Gunawan, M. H., Rachman, D., & Pratiwi, N. P. A. (2021). How well do e-portfolios facilitate students’ learning engagement in speaking courses during the COVID-19 pandemic? Indonesian Journal of Applied Linguistics, 11(2), 351-363. doi: 10.17509/ ijal.v11i2.30583 Lai, C., & Zhao, Y. (2006). Noticing and text-based chat. Language Learning & Technology, 10(3), 102-120. Lai, Y.-h. (2010). Which do students prefer to evaluate their essays: Peers or computer program. British Journal of Educational Technology, 41(3), 432-454. doi: 10.1111/ j.1467-8535.2009.00959.x Laing, J., El Ebyary, K., & Windeatt, S. (2012). How learners use automated computer-based feedback to produce revised drafts of essays. In L. Bradley & S. Thouësny (Eds.), CALL: Using, learning, knowing. EUROCALL Conference: Gothenburg, Sweden, 22-25 August 2012. Proceedings (pp. 156-160). Dublin, Ireland, Voillans, France: Research-publishing.net. Lake, W., Boyd, W., Boyd, W., & Hellmundt, S. (2017). Just another student survey? - Point-ofcontact survey feedback enhances the student experience and lets researchers gather data. Australian Journal of Adult Learning, 57(1), 82-104. Lamey, A. (2015). Video feedback in philosophy. Metaphilosophy, 46(4-5), 691-702. doi: 10.1111/ meta.12155 Langton, N. (2019). Screencast delivery of feedback on writing assignments for beginning Japanese language students: An alternative to the “red pen”. In E. Zimmerman & A. McMeekin (Eds.). Technology-supported learning in and out of the Japanese language classroom. Advances in pedagogy, teaching and research (pp. 31-58). Second Language Acquisition: Vol. 133. Bristol, Blue Ridge Summit, PA: Multilingual Matters. Law, R. (2013). 
Using screencasts to enhance coursework feedback for game programming students. Proceedings of the 18th ACM Conference on Innovation and Technology in Computer Science Education, July 1-3, 2013. Canterbury, England, UK. 274 5 References <?page no="276"?> LeBaron, S. W. M., & Jernick, J. (2000). Evaluation as a dynamic process. Family Medicine, 32(1), 13-14. Lee, A. S.-J., & Ho, W. Y. J. (2021). Case study 5, Macao: Using Google Docs for peer review. In L. Miller & J. G. Wu (Eds.), Language learning with technology. Perspectives from Asia (pp. 123-132). Singapore: Springer. doi: 10.1007/ 978-981-16-2697-5 Lee, A. R., & Bailey, D. R. (2016). Korean EFL students’ perceptions of instructor video and written feedback in a blended learning course. STEM Journal, 17(4), 133-158. doi: 10.16875/ stem.2016.17.4.133 Lee, H. W., & Cha, Y. M. (2021). The effects of digital written feedback on paper-based tests for college students. The Asia-Pacific Education Researcher, 61(2). doi: 10.1007/ s40299-021-00592-8 Lee, H. W., & Lim, K. Y. (2013). Does digital handwriting of instructors using the iPad enhance student learning? The Asia-Pacific Education Researcher, 22(3), 241-245. doi: 10.1007/ s40299-012-0016-2 Lee, I. (2014). Feedback in writing: Issues and challenges. Assessing Writing, 19(2), 1-5. doi: 10.1016/ j.asw.2013.11.009 Leibold, N., & Schwarz, L. M. (2015). The art of giving online feedback. The Journal of Effective Teaching, 15(1), 34-46. Lenards, N. D. (2017). Student perceptions of asynchronous multimodal instructor feedback: A multiple case study. Dissertation manuscript submitted to Northcentral University, Graduate Faculty of the School of Education, San Diego, California, in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy. Letón, E., Molanes-López, E. M., Luque, M., & Conejo, R. (2018). Video podcast and illustrated text feedback in a web-based formative assessment environment. Computer Applications in Engineering Education, 26(2), 187-202. doi: 10.1002/ cae.21869 Li, J., Link, S., & Hegelheimer, V. (2015). Rethinking the role of automated writing evaluation (AWE) feedback in ESL writing instruction. Journal of Second Language Writing, 27(2), 1-18. doi: 10.1016/ j.jslw.2014.10.004 Li, Z. (2021). Teachers in automated writing evaluation (AWE) system-supported ESL writing classes: Perception, implementation, and influence. System, 99(2), 1-14. doi: 10.1016/ j.sys‐ tem.2021.102505 Li, Z., Feng, H.-H., & Saricaoglu, A. (2017). The short-term and long-term effects of AWE feedback on ESL students’ development of grammatical accuracy. CALICO Journal, 34(3), 355-375. doi: 10.1558/ cj.26382 Liang, M.-Y. (2010). Using synchronous online peer response groups in EFL writing: Revisionrelated discourse. Language Learning & Technology, 14(1), 45-64. Liao, H.-C. (2016). Using automated writing evaluation to reduce grammar errors in writing. ELT Journal, 70(3), 308-319. doi: 10.1093/ elt/ ccv058 Liaqat, A., Munteanu, C., & Demmans Epp, C. (2021). Collaborating with mature English language learners to combine peer and automated feedback: A user-centered approach to designing writing support. International Journal of Artificial Intelligence in Education, 31(4), 638-679. doi: 10.1007/ s40593-020-00204-4 275 5 References <?page no="277"?> Lin, W.-C., & Yang, S. C. (2011). Exploring students’ perceptions of integrating Wiki technology and peer feedback into English writing courses. English Teaching: Practice and Critique, 10(2), 88-103. Lin, W.-C., & Yang, S. C. (2013). 
Exploring the roles of Google.doc and peer e-tutors in English writing. English Teaching: Practice and Critique, 12(1), 79-90. Lindqvist, M. H. (2020). The uptake and use of digital technologies and professional development: Exploring the university teacher perspective. In A. Elçi, L. L. Beith, & A. Elçi (Eds.). Handbook of research on faculty development for digital teaching and learning (pp. 505-525). Advances in educational technologies and instructional design (AETID) book series. Hershey, PA: IGI Global. Lipnevich, A. A., & Panadero, E. (2021). A review of feedback models and theories: Descriptions, definitions, and conclusions. Frontiers in Education, 6, 1-29. doi: 10.3389/ feduc.2021.720195 Little, C. (2016). Technological review: Mentimeter smartphone student response system. Com‐ pass: Journal of Learning and Teaching, 9(13), 1-3. Liu, J., & Sadler, R. W. (2003). The effect and affect of peer review in electronic versus traditional modes on L2 writing. Journal of English for Academic Purposes, 2(3), 193-227. doi: 10.1016/ S1475-1585(03)00025-0 Liu, S. H. J. (2015). Learner responses to feedback in text chat synchronous computer mediated communication. In AT Taipei Tech (Ed.), Global communication and beyond. Language, culture, pedagogy, and translation. Selected papers from the APLX 2015 International Conference on Applied Linguistics (pp. 145-158). Taipei, Taiwan. Lloyd, J. (2021). How to mail merge in Microsoft Word. WikiHow. https: / / www.wikihow.com/ Mail-Merge-in-Microsoft-Word Loch, B., & McLoughlin, C. (2011). An instructional design model for screencasting: Engaging students in self-regulated learning. In G. Williams, P. Statham, N. Brown, & B. Cleland (Eds.), Ascilite 2011. Changing demands, changing directions. Proceedings of the Australasian Society for Computers in Learning in Tertiary Education’s (ASCILITE) conference held in Wrest Point, Hobart, Tasmania, Australia, 4-7 December 2011 (pp. 816-821). Hobart: University of Tasmania. Loewen, S., & Erlam, R. (2006). Corrective feedback in the chatroom: An experimental study. Computer Assisted Language Learning, 19(1), 1-14. doi: 10.1080/ 09588220600803311 Lorenzo Galés, N., & Gallon, R. (2019). Educational agility. In M. Kowalczuk-Walędziak, A. Korzeniecka-Bondar, W. Danilewicz, & G. Lauwers (Eds.), Rethinking teacher education for the 21st century. Trends, challenges and new directions (pp. 98-110). Opladen, Berlin & Toronto: Verlag Barbara Budrich. Lu, H. (2021a). Electronic portfolios in higher education: A review of the literature. European Journal of Education and Pedagogy, 2(3), 96-101. doi: 10.24018/ ejedu.2021.2.3.119 Lu, H. (2021b). The implementation of ePortfolios in an online graduate program. European Journal of Open Education and E-learning Studies, 6(2), 167-180. doi: 10.46827/ ejoe.v6i2.4021 Lunt, T., & Curran, J. (2010). ‘Are you listening please? ’: The advantages of electronic audio feedback compared to written feedback. Assessment & Evaluation in Higher Education, 35(7), 759-769. doi: 10.1080/ 02602930902977772 276 5 References <?page no="278"?> Lv, X. (2018). A study on the application of automatic scoring and feedback system in college English writing. International Journal of Emerging Technologies in Learning (iJET), 13(3), 188-196. doi: 10.3991/ ijet.v13i03.8386 Lyster, R., & Ranta, L. (1997). Corrective feedback and learner uptake: Negotiation of form in communicative classrooms. Studies in Second Language Acquisition, 19(1), 37-66. doi: 10.1017/ S0272263197001034 Ma, Q. (2020). 
Examining the role of inter-group peer online feedback on wiki writing in an EAP context. Computer Assisted Language Learning, 33(3), 197-216. doi: 10.1080/ 09588221.2018.1556703 Macgregor, G., Spiers, A., & Taylor, C. (2011). Exploratory evaluation of audio email technology in formative assessment feedback. Research in Learning Technology, 19(1), 39-59. doi: 10.3402/ rlt.v19i1.17119 MacKenzie, A. (2021). Student perceptions of screencast video feedback for summative project assessment tasks in an Engineering Technology Management course. In M. E. Auer & D. Centea (Eds.). Visions and Concepts for Education 4.0. Proceedings of the 9th International Conference on Interactive Collaborative and Blended Learning (ICBL2020) (pp. 219-229). Advances in Intelligent Systems and Computing. Mahboob, A., & Devrim, D. Y. (2011). Providing effective feedback in an online environment. Society of Pakistan English Language Teachers (SPELT) Quarterly, 26(3), 2-15. Mahoney, P., Macfarlane, S., & Ajjawi, R. (2019). A qualitative synthesis of video feedback in higher education. Teaching in Higher Education, 24(2), 157-179. doi: 10.1080/ 13562517.2018.1471457 Mandernach, B. J. (2018). Strategies to maximize the impact of feedback and streamline your time. Journal of Educators Online, 15(3), 1-15. Mann, S. (2015). Using screen capture software to improve the value of feedback on academic assignments in teacher education. In T. S. C. Farrell (Ed.). International perspectives on English language teacher education. Innovations from the field (pp. 160-180). International Perspectives on English Language Teaching. London: Palgrave Macmillan UK. Marriott, P., & Teoh, L. K. (2012). Using screencasts to enhance assessment feedback: Students’ perceptions and preferences. Accounting Education, 21(6), 583-598. doi: 10.1080/ 09639284.2012.725637 Marshall, D. T., Love, S., & Scott, L. (2020). “It’s not like he was being a robot”: Student perceptions of video-based writing feedback in online graduate coursework. International Journal for the Scholarship of Teaching and Learning, 14(1), 1-10. doi: 10.20429/ ijsotl.2020.140110 Martin, M. (2005). Seeing is believing: The role of videoconferencing in distance learning. British Journal of Educational Technology, 36(3), 397-405. doi: 10.1111/ j.1467-8535.2005.00471.x Martin Mota, S., & Baró Vivancos, S. (2018). Screencasting: Its characteristics and some applications for providing feedback in language learning. APAC ELT Journal (87), 33-38. Martínez-Arboleda, A. (2018). Student feedback through desktop capture: Creative screencasting. Paper presented at The Seventh International Conference on E-Learning and E-Tech‐ nologies in Education (ICEEE2018). Lodz: University of Technology, Lodz. https: / / www.resear 277 5 References <?page no="279"?> chgate.net/ publication/ 327835356_Proceedings_of_the_Seventh_International_Conference_ on_E-Learning_and_E-Technologies_in_Education_ICEEE2018_Lodz_Poland_2018 Mathieson, K. (2012). Exploring student perceptions of audiovisual feedback via screen‐ casting in online courses. American Journal of Distance Education, 26(3), 143-156. doi: 10.1080/ 08923647.2012.689166 Mathisen, P. (2012). Video feedback in higher education - A contribution to improving the quality of written feedback. Nordic Journal of Digital Literacy, 7(2), 97-116. Mayer, R. E. (1997). Multimedia learning: Are we asking the right questions? Educational Psychologist, 32(1), 1-19. Mayer, R. E. (2002). Multimedia learning. 
The Annual Report of Educational Psychology in Japan, 41, 27-29. Mayer, R. E. (2005a). Cognitive theory of multimedia learning. In R. E. Mayer (Ed.). The Cambridge handbook of multimedia learning (pp. 31-48). Cambridge handbooks in psychology. Cambridge: Cambridge University Press. Mayer, R. E. (2005b). Introduction to multimedia learning. In R. E. Mayer (Ed.). The Cambridge handbook of multimedia learning (pp. 1-16). Cambridge handbooks in psychology. Cambridge: Cambridge University Press. Mayer, R. E. (2005c). Principles of multimedia learning based on social cues: Personalization, voice, and image principles. In R. E. Mayer (Ed.). The Cambridge handbook of multimedia learning (pp. 201-212). Cambridge handbooks in psychology. Cambridge: Cambridge Univer‐ sity Press. Mayer, R. E. (Ed.). (2005d). The Cambridge handbook of multimedia learning. Cambridge hand‐ books in psychology. Cambridge: Cambridge University Press. Mayer, R. E. (2006). Multimedia learning (8th ed.). Cambridge: Cambridge University Press. Mayer, R. E., & Moreno, R. (1998). A split-attention effect in multimedia learning: Evidence for dual processing systems in working memory. Journal of Educational Psychology, 90(2), 312-320. Mayer, R. E., & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38(1), 43-52. doi: 10.1207/ S15326985EP3801_6 Mayhew, E. (2017). Playback feedback: The impact of screen-captured video feedback on student satisfaction, learning and attainment. European Political Science, 16(2), 179-192. doi: 10.1057/ eps.2015.102 McCartan, S., & Short, A. (2020). Screencasting: A tool to enhance workplace feedback practices and improve employee learning and performance? SocArXiv, 1-38. doi: 10.31235/ osf.io/ an4kg McCarthy, J. (2015). Evaluating written, audio and video feedback in higher education summative assessment tasks. Issues in Educational Research, 25(2), 153-169. McCarthy, J. (2020). Student perceptions of screencast video feedback for summative assessment tasks in the Creative Arts. In C. Dann & S. O’Neill (Eds.). Technology-enhanced formative assessment practices in higher education (pp. 177-192). Advances in higher education and professional development (AHEPD). Hershey, PA: Information Science Reference. McDowell, J. (2020a). A theory-practice research framework for video-enhanced learning, assessment, and feedback. In C. Dann & S. O’Neill (Eds.). Technology-enhanced formative 278 5 References <?page no="280"?> assessment practices in higher education (pp. 100-126). Advances in higher education and professional development (AHEPD). Hershey, PA: Information Science Reference. McDowell, J. (2020b). The student experience of video-enhanced learning, assessment, and feedback. In C. Dann & S. O’Neill (Eds.). Technology-enhanced formative assessment practices in higher education (pp. 20-40). Advances in higher education and professional development (AHEPD). Hershey, PA: Information Science Reference. McDowell, J. (2020c). Towards a dialogic model of video-enhanced learning, assessment, and feedback. In C. Dann & S. O’Neill (Eds.). Technology-enhanced formative assessment practices in higher education (pp. 127-154). Advances in higher education and professional development (AHEPD). Hershey, PA: Information Science Reference. McGarrell, H., & Alvira, R. (2013). Innovation in techniques for teacher commentary on ESL writers’ drafts. Cahiers de l’ILOB/ OLBIWP (OLBI Working Papers), 5, 37-55. 
doi: 10.18192/ olbiwp.v5i0.1117 McLaughlin, P., Kerr, W., & Howie, K. (2007). Fuller, richer feedback, more easily delivered, using tablet PCs. In F. Khandia (Ed.), 11th CAA International Computer Assisted Assessment Conference. Proceedings of the Conference on 10th and 11th July 2007 at Loughborough University (pp. 329-342). Loughborough: Loughborough University. https: / / hdl.handle.net/ 2 134/ 4572 McLeod, R. H., Kim, S., & Resua, K. A. (2019). The effects of coaching with video and email feedback on preservice teachers’ use of recommended practices. Topics in Early Childhood Special Education, 38(4), 192-203. doi: 10.1177/ 0271121418763531 McVey, M. (2008). Writing in an online environment: Student views of “inked” feedback. International Journal of Teaching and Learning in Higher Education, 20(1), 39-50. Merriam-Webster Online. (2021a). Method. https: / / www.merriam-webster.com/ dictionary/ method Merriam-Webster Online. (2021b). Strategy. https: / / www.dictionary.com/ browse/ strategy Merry, S., & Orsmond, P. (2008). Students’ attitudes to and usage of academic feedback provided via audio files. Bioscience Education, 11(1), 1-11. doi: 10.3108/ beej.11.3 Mesfin, G., Ghinea, G., Grønli, T.-M., & Hwang, W.-Y. (2018). Enhanced agility of e-learning adoption in high schools. International Forum of Educational Technology & Society, 21(4), 157-170. Middleton, A. (2011). The changing face of feedback - how staff are using media-enhanced feedback. Educational Developments: The Magazine of the Staff and Educational Development Association Ltd (SEDA), 12(3), 25-27. Middleton, A., & Nortcliffe, A. (2010). Audio feedback design: Principles and emerging practice. International Journal of Continuing Engineering Education and Life-Long Learning, 20(2), 208-223. doi: 10.1504/ IJCEELL.2010.036816 Min, H.-T. (2005). Training students to become successful peer reviewers. System, 33(2), 293-308. doi: 10.1016/ j.system.2004.11.003 Miranty, D., & Widiati, U. (2021). An automated writing evaluation (AWE) in higher education. Pegem Journal of Education and Instruction, 11(4), 126-137. doi: 10.47750/ pegegog.11.04.12 279 5 References <?page no="281"?> Mishra, P., & Koehler, M. J. (2006). Technological Pedagogical Content Knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017-1054. Modise, M. P. (2021). Postgraduate students’ perception of the use of e-portfolios as a teaching tool to support learning in an open and distance education institution. Journal of Learning for Development, 8(2), 283-297. Molloy, E., Boud, D., & Henderson, M. (2020). Developing a learning-centred framework for feedback literacy. Assessment & Evaluation in Higher Education, 45(4), 527-540. doi: 10.1080/ 02602938.2019.1667955 Monitor Lehrerbildung. (2021). Lehrkräfte vom ersten Semester an für die digitale Welt qualifizie‐ ren: Policy Brief November 2021. https: / / www.monitor-lehrerbildung.de/ export/ sites/ default/ .content/ Downloads/ Monitor-Lehrerbildung_Digitale-Welt_Policy-Brief-2021.pdf Monteiro, K. (2014). An experimental study of corrective feedback during video-conferencing. Language Learning & Technology, 18(3), 56-79. Moore, N. S., & Filling, M. L. (2012). iFeedback: Using video technology for improving student writing. Journal of College Literacy & Learning, 38, 3-14. Mork, C.-M. (2014). Benefits of using online student response systems in Japanese EFL class‐ rooms. JALT CALL Journal, 10(2), 127-137. Morra, A. M., & Asís, M. I. (2009). 
The effect of audio and written teacher responses on EFL student revision. Journal of College Reading and Learning, 39(2), 68-81. Morris, C., & Chikwa, G. (2014). Screencasts: How effective are they and how do students engage with them? Active Learning in Higher Education, 15(1), 25-37. doi: 10.1177/ 1469787413514654 Morrison, J. (2013). A technology enhanced approach to improving feedback satisfaction: An investigation into using screencast-video as a means of producing feedback delivered via the Moodle gradebook. Paper presented at the 20th Annual Conference of the Association for Learning Technology (ALTC 2013), 10-12 September 2013, Nottingham, UK. https: / / www.alt.a c.uk/ altc2013 Mory, E. H. (2004). Feedback research revisited. In D. H. Jonassen (Ed.), Handbook of research on educational communications and technology (2nd ed., pp. 745-783). Mahwah, NJ: Erlbaum. Motallebzadeh, K., & Amirabadi, S. (2011). Online interactional feedback in second language writing: Through peer or tutor? Theory and Practice in Language Studies (TPLS), 1(5), 534-540. doi: 10.4304/ tpls.1.5.534-540 Narciss, S. (2008). Feedback strategies for interactive learning tasks. In J. M. Spector, M. D. Merrill, & J. van Merrienboer (Eds.), Handbook of research on educational communications and technology (3rd ed., pp. 125-143). Mahwah, NY: Erlbaum. Narciss, S. (2013). Designing and evaluating tutoring feedback strategies for digital learning environments on the basis of the interactive tutoring feedback model. Digital Education Re‐ view (23), 7-26. Narciss, S. (2018). Feedbackstrategien für interaktive Lernaufgaben. In H. Niegemann & A. Weinberger (Eds.). Handbuch Bildungstechnologie. Konzeption und Einsatz digitaler Lernum‐ gebungen (pp. 1-24). Springer Reference Psychologie. Berlin, Heidelberg: Springer. Nash, R. A., & Winstone, N. E. (2017). Responsibility-sharing in the giving and receiving of as‐ sessment feedback. Frontiers in Psychology, 8(Article 1519), 1-9. doi: 10.3389/ fpsyg.2017.01519 280 5 References <?page no="282"?> Nelson, M. E., & Kern, R. (2012). Language teaching and learning in The Postlinguistic Condition? In M. van Manen (Ed.). The tact of teaching. The meaning of pedagogical thoughtfulness (pp. 47-66). SUNY series, the philosophy of education. Albany: State University of New York Press. Nelson, M. M., & Schunn, C. D. (2009). The nature of feedback: How different types of peer feedback affect writing performance. Instructional Science, 37(4), 375-401. doi: 10.1007/ s11251-008-9053-x Neumann, K. L., & Kopcha, T. J. (2019). Using Google Docs for peer-then-teacher review on middle school students’ writing. Computers and Composition, 54(2), 1-16. doi: 10.1016/ j.compcom.2019.102524 Newby, D., Allan, R., Fenner, A.-B., Jones, B., Komorowska, H., & Soghikyan, K. (2007). European portfolio for student teachers of languages (EPOSTL): A reflection tool for language teacher education. Languages for social cohesion: Language education in a multilingual and multicultural Europe. Strasbourg: Council of Europe Publ. Newby, D., Fenner, A.-B., & Jones, B. (Eds.). (2011). Using the European portfolio for student teachers of languages. Strasbourg: Council of Europe Publishing. Nguyen, P. T. T. (2012). Peer feedback on second language writing through blogs: The case of a Vietnamese EFL classroom. International Journal of Computer-Assisted Language Learning and Teaching, 2(1), 13-23. doi: 10.4018/ ijcallt.2012010102 Nicol, D. (2010). 
From monologue to dialogue: Improving written feedback processes in mass higher education. Assessment & Evaluation in Higher Education, 35(5), 501-517. doi: 10.1080/ 02602931003786559 Nicol, D., Thomson, A., & Breslin, C. (2014). Rethinking feedback practices in higher education: A peer review perspective. Assessment & Evaluation in Higher Education, 39(1), 102-122. doi: 10.1080/ 02602938.2013.795518 Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218. Nnadozie, V., Anyanwu, C. C., Ngwenya, J., & Khanare, F. P. (2020). Divergence and the use of digital technology in learning: Undergraduate students’ experiences of email feedback in a South African university. Journal of University Teaching and Learning Practice, 17(3), 137-148. doi: 10.53761/ 1.17.3.10 Nortcliffe, A., & Middleton, A. (2008). A three year case study of using audio to blend the engineer’s learning environment. Engineering Education, 3(2), 45-57. doi: 10.11120/ ened.2008.03020045 Nortcliffe, A., & Middleton, A. (2011). Smartphone feedback: Using an iPhone to improve the distribution of audio feedback. The International Journal of Electrical Engineering & Education, 48(3), 280-293. doi: 10.7227/ IJEEE.48.3.6 Nourinezhad, S., Hadipourfard, E., Bavali, M., & Kruk, R. (2021). The effect of audio-visual feedback on writing components and writing performance of medical university students in two different modes of instruction, flipped and traditional. Cogent Education, 8(1), 1-21. doi: 10.1080/ 2331186X.2021.1978621 281 5 References <?page no="283"?> Nova, M. (2018). Utilizing Grammarly in evaluating academic writing: A narrative research on EFL students’ experience. Premise Journal of English Education and Applied Linguistics, 7(1), 80-96. Novakovich, J. (2016). Fostering critical thinking and reflection through blog-mediated peer feedback. Journal of Computer Assisted Learning, 32(1), 16-30. doi: 10.1111/ jcal.12114 Nurmukhamedov, U., & Kim, S. H. (2010). ‘Would you perhaps consider …’: Hedged comments in ESL writing. ELT Journal, 64(3), 272-282. doi: 10.1093/ elt/ ccp063 O’Malley, P. J. (2011). Combining screencasting and a Tablet PC to deliver personalised student feedback. New Directions in the Teaching of Physical Sciences (7), 27-30. doi: 10.29311/ ndtps.v0i7.464 O’Dowd, R. (2007). Evaluating the outcomes of online intercultural exchange. ELT Journal, 61(2), 144-152. doi: 10.1093/ elt/ ccm007 Olesova, L. A., Weasenforth, D., Richardson, J. C., & Meloni, C. (2011). Using asynchronous instructional audio feedback in online environments: A mixed methods study. MERLOT Journal of Online Learning and Teaching, 7(1), 30-42. Olsen, J. C. (2022). Teaching portfolio: Examples of feedback on student writing. https: / / blogs.comm ons.georgetown.edu/ jco34/ sample-assignments/ examples-of-feedback-on-student-writing/ Oomen-Early, J. (2008). Using asynchronous audio communication (AAC) in the online class‐ room: A comparative study. MERLOT Journal of Online Learning and Teaching, 4(3), 267-276. Orlando, J. (2016). A comparison of text, voice, and screencasting feedback to online students. American Journal of Distance Education, 30(3), 156-166. doi: 10.1080/ 08923647.2016.1187472 Osborn, S. & Wenson, T. (2011). Use Word’s Track Changes feature to provide better feedback while “going green”. In EdTech Day Conference. Presentation 16. 
http: / / vc.bridgew.edu/ edtec h/ 2011/ sessions/ 16 Özkul, S., & Ortaçtepe, D. (2017). The use of video feedback in teaching process-approach EFL writing. TESOL Journal, 8(4), 862-877. doi: 10.1002/ tesj.362 Palermo, C., & Wilson, J. (2020). Implementing automated writing evaluation in different instructional contexts: A mixed-methods study. Journal of Writing Research, 12(1), 63-108. doi: 10.17239/ jowr-2020.12.01.04 Panke, S. (2014). E-portfolios in higher education settings: A literature review. In E-Learn: World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education (pp. 1530-1539). Paris, S. G., Lipson, M. Y., & Wixson, K. K. (1983). Becoming a strategic reader. Contemporary Educational Psychology, 8(3), 293-316. Parkes, J., Abercrombie, S., & McCarty, T. (2013). Feedback sandwiches affect perceptions but not performance. Advances in health sciences education: theory and practice, 18(3), 397-407. doi: 10.1007/ s10459-012-9377-9 Parton, B. S., Crain-Dorough, M., & Hancock, R. (2010). Using flip camcorders to create video feedback: Is it realistic for professors and beneficial to students? International Journal of Instructional Technology & Distance Learning, 7(1), 15-23. 282 5 References <?page no="284"?> Pearson, J. (2021). Technology enhanced feedback: Using Padlet to give feedback in a lan‐ guage course. https: / / blogs.kcl.ac.uk/ aflkings/ 2021/ 01/ 19/ technology-enhanced-feedback-usi ng-padlet-to-give-feedback-in-a-language-course/ Pedrosa-de-Jesus, H., & Moreira, A. C. (2012). Promoting questioning skills by biology under‐ graduates: The role of assessment and feedback in an online discussion forum. Reflecting Education, 8(1), 57-77. Pegrum, M., & Oakley, G. (2017). The changing landscape of e-portfolios: Reflections on 5 Years of Implementing E-Portfolios in pre-service teacher education. In T. Chaudhuri & B. Cabau (Eds.), E-portfolios in higher education (pp. 21-34). Singapore: Springer. Peled, Y., Bar-Shalom, O., & Sharon, R. (2014). Characterisation of pre-service teachers’ attitude to feedback in a wiki-environment framework. Interactive Learning Environments, 22(5), 578-593. doi: 10.1080/ 10494820.2012.731002 Penna, C. (2019). Three ways to use video feedback to enhance student engagement. The Schol‐ arly Teacher. https: / / www.scholarlyteacher.com/ post/ three-ways-to-use-video-feedback-to -enhance-student-engagement Perkoski, R. (2017). The impact of multimedia feedback on student perceptions: Video screencast with audio compared to text based email. Dissertation, University of Pitts‐ burgh. http: / / d-scholarship.pitt.edu/ 31759/ 1/ Dissertation_Perkoski_Plus%20-%20MNE%20E dits%20(5.8.2017)_done.pdf Pham, V. P. H., & Usaha, S. (2009). Blog-based peer response for EFL writing: A case study in Vietnam. AsiaCall Online Journal, 4(1), 1-29. Pham, V. P. H., & Usaha, S. (2016). Blog-based peer response for L2 writing revision. Computer Assisted Language Learning, 29(4), 724-748. doi: 10.1080/ 09588221.2015.1026355 Phillips, M., Ryan, T., & Henderson, M. (2017). A cross-disciplinary evaluation of digitally recorded feedback in higher education. In H. Partridge, K. Davis, & J. Thomas (Eds.), Me, Us, IT! Proceedings ASCILITE2017: 34th International Conference on Innovation, Practice and Research in the Use of Educational Technologies in Tertiary Education (pp. 364-371). Toowoomba, Australia: University of Southern Queensland. Picardo, J. (2017). How to do IT: Using digital technology to support effective assessment and feedback. 
Teacher reflection. https: / / impact.chartered.college/ article/ picardo-using-digital-te chnology-support-effective-feedback-assessment/ Pichardo, J. I., López-Medina, E. F., Mancha-Cáceres, O., González-Enríquez, I., Hernández- Melián, A., Blázquez-Rodríguez, M., Jiménez, V., Logares, M., Carabantes-Alarcon, D., Ramos- Toro, M., Isorna, E., Cornejo-Valle, M., & Borrás-Gené, O. (2021). Students and teachers using Mentimeter: Technological innovation to face the challenges of the COVID-19 pandemic and post-pandemic in higher education. Education Sciences, 11(Article 667), 1-18. doi: 10.3390/ educsci11110667 Pinsky, L. E., & Wipf, J. E. (2000). A picture is worth a thousand words: Practical use of videotape in teaching. Journal of General Internal Medicine, 15(11), 805-810. doi: 10.1046/ j.1525-1497.2000.05129.x 283 5 References <?page no="285"?> Pitt, E., & Winstone, N. (2020). Towards technology enhanced dialogic feedback. In M. Bearman, P. Dawson, R. Ajjawi, J. Tai, & D. Boud (Eds.). Re-imagining university assessment in a digital world (pp. 79-94). The Enabling Power of Assessment: Vol. 7. Cham, Switzerland: Springer. Poehner, M. E., & Infante, P. (2016). Dynamic Assessment in the language classroom. In D. Tsagari & J. Banerjee (Eds.). Handbook of Second Language Assessment (pp. 275-290). Handbooks of Applied Linguistics: Vol. 12. Berlin, Boston: De Gruyter Mouton. Popham, W. J. (2009). Assessment literacy for teachers: Faddish or fundamental? Theory into Practice, 48(1), 4-11. doi: 10.1080/ 00405840802577536 Porsch, R. (2010). Zur Förderung der Schreibkompetenz: Rückmeldungen. Praxis Fremdsprache‐ nunterricht Englisch, 6, 12-15. Potdevin, F., Vors, O., Huchez, A., Lamour, M., Davids, K., & Schnitzler, C. (2018). How can video feedback be used in physical education to support novice learning in gymnastics? Effects on motor learning, self-assessment and motivation. Physical Education and Sport Pedagogy, 23(6), 559-574. doi: 10.1080/ 17408989.2018.1485138 Price, M., Handley, K., Millar, J., & O’Donovan, B. (2010). Feedback: All that effort, but what is the effect? Assessment & Evaluation in Higher Education, 35(3), 277-289. doi: 10.1080/ 02602930903541007 Prilop, C. N., Weber, K. E., & Kleinknecht, M. (2020). Effects of digital video-based feedback environments on pre-service teachers’ feedback competence. Computers in Human Behavior, 102(4), 120-131. doi: 10.1016/ j.chb.2019.08.011 Rahman, S. A., Rahim Salam, A., & Yusof, M. A. M. (2014). Screencast feedback practice on students’ writing. Paper presented at the Asia-Pacific Social Science Conference (APSSC) 2014, 8-10 January, Seoul, South Korea. https: / / www.researchgate.net/ publication/ 289396637 Ramaprasad, A. (1983). On the definition of feedback. Behavioral Science, 28(1), 4-13. doi: 10.1002/ bs.3830280103 Ranalli, J. (2018). Automated written corrective feedback: How well can students make use of it? Computer Assisted Language Learning, 31(7), 653-674. doi: 10.1080/ 09588221.2018.1428994 Ranta, L., & Lyster, R. (2018). Form-focused instruction. In P. Garrett & J. M. Cots (Eds.). The Routledge handbook of language awareness (pp. 40-56). Routledge handbooks in linguistics. New York: Routledge. Rassaei, E. (2017). Video chat vs. face-to-face recasts, learners’ interpretations and L2 develop‐ ment: A case of Persian EFL learners. Computer Assisted Language Learning, 30(1-2), 133-148. doi: 10.1080/ 09588221.2016.1275702 Rau, F. (2017). Interaktives und kollaboratives Lernen mit sozialen Medien? Spannungsfelder in der Hochschullehre. 
In H. R. Griesehop & E. Bauer (Eds.), Lehren und Lernen online. Lehr- und Lernerfahrungen im Kontext akademischer Online-Lehre (pp. 131-148). Wiesbaden: Springer. Rawle, F., Thuna, M., Zhao, T., & Kaler, M. (2018). Audio feedback: Student and teaching assistant perspectives on an alternative mode of feedback for written assignments. The Canadian Journal for the Scholarship of Teaching and Learning, 9(2), Article 2. doi: 10.5206/ cjsotl-rcacea.2018.2.2 284 5 References <?page no="286"?> Razagifard, P., & Razzaghifard, V. (2011). Corrective feedback in a computer-mediated commu‐ nicative context and the development of second language grammar. Teaching English with Technology, 11(2), 1-17. Redecker, C., & Punie, Y. (2017). European Framework for the Digital Competence of Educators. JRC Science for Policy Report. Luxembourg: Publications Office of the European Union. Reich, J., Murnane, R., & Willett, J. (2012). The state of wiki usage in U.S. K-12 schools: Lever‐ aging Web 2.0 data warehouses to assess quality and equity in online learning environments. Educational Researcher, 41(1), 7-15. doi: 10.3102/ 0013189X11427083 Reinhardt, W., Sievers, M., Magenheim, J., Kundisch, D., Herrmann, P., Beutner, M., & Zoyke, A. (2012). PINGO: Peer instruction for very large groups. In A. Ravenscroft, S. Lindstaedt, C. D. Kloos, & D. Hernández-Leo (Eds.). 21st Century Learning for 21st Century Skills. 7th European Conference on Technology Enhanced Learning, EC-TEL 2012, Saarbrücken, Germany, September 18 - 21, 2012. Proceedings (pp. 507-512). Lecture notes in computer science: Vol. 7563. Berlin, Heidelberg: Springer. doi: 10.1007/ 978-3-642-33263-0_51 Renzella, J., & Cain, A. (2020). Enriching programming student feedback with audio comments. In Proceedings of the ACM/ IEEE 42nd International Conference on Software Engineering: Software Engineering Education and Training (pp. 173-183). Seoul, South Korea: ACM. doi: 10.1145/ 3377814.3381712 Reynolds, B. L., Kao, C.-W., & Huang, Y.-y. (2021). Investigating the effects of perceived feedback source on second language writing performance: A quasi-experimental study. The Asia-Pacific Education Researcher, 30(6), 585-595. doi: 10.1007/ s40299-021-00597-3 Ritchie, S. M. (2016). Self-assessment of video-recorded presentations: Does it improve skills? Active Learning in Higher Education, 17(3), 207-221. doi: 10.1177/ 1469787416654807 Robinson, M., Loch, B., & Croft, T. (2015). Student perceptions of screencast feedback on mathematics assessment. International Journal of Research in Undergraduate Mathematics Education, 1(3), 363-385. doi: 10.1007/ s40753-015-0018-6 Rochera, M. J., Engel, A., & Coll, C. (2021). The effects of teacher’ feedback: A case study of an online discussion forum in Higher Education. Revista de Educación a Distancia (RED), 21(67), 1-25. doi: 10.6018/ red.476901 Rodina, H. (2008). Paperless, painless: Using MS Word tools for feedback in writing assignments. The French Review, 82(1), 106-116. Rodway-Dyer, S., Dunne, E., & Newcombe, M. (2009). Audio and screen visual feedback to support student learning. Paper presented at the ALT-C 2009 “In dreams begins responsibility” - choice, evidence and change, 8 - 10 September 2009, Manchester, UK. Association for Learning Technology (ALT). https: / / repository.alt.ac.uk/ 641/ Ross, J., & Lancastle, S. Increasing student engagement with formative assessment through the application of screen cast video feedback: A case study in Engineering. 
In Proceedings of EDULEARN19 Conference, 1-3 July 2019, Palma, Mallorca, Spain (pp. 3962-3971). http: / / lib.ui b.kz/ edulearn19/ files/ papers/ 1013.pdf Rostami, A., & Hoveidi, A. (2014). Improving descriptive writing skills using blog-based peer feedback. International Journal of Language Learning and Applied Linguistics World, 5(2), 227-234. 285 5 References <?page no="287"?> Rotheram, B. (2007). Using an MP3 recorder to give feedback on student assignments. Educa‐ tional Developments, 8(2), 7-10. Rotheram, B. (2009). Sounds good: Using audio to give assessment feedback. Assessment, Teaching & Learning Journal, 7, 22-24. Rottermond, H., & Gabrion, L. (2021). Feedback as a connector in remote learning environments. Michigan Reading Journal, 53(2), 38-44. Rudolph, J. (2018). A brief review of Mentimeter - A student response system. Journal of Applied Learning & Teaching, 1(1), 35-37. Ruffini, M. (2012). Screencasting to engage learning. Educause Review. http: / / www.educause.ed u/ ero/ article/ screencasting-engage-learning Ryan, T., Henderson, M., & Phillips, M. (2016). “Written feedback doesn’t make sense”: Enhanc‐ ing assessment feedback using technologies. In Proceedings of the 2016 Conference by the Australian Association for Research in Education (AARE). https: / / www.aare.edu.au/ data/ 2016 _Conference/ Full_papers/ 752_Tracii_Ryan.pdf Ryan, T., Henderson, M., & Phillips, M. (2019). Feedback modes matter: Comparing student perceptions of digital and non‐digital feedback modes in higher education. British Journal of Educational Technology, 50(3), 1507-1523. doi: 10.1111/ bjet.12749 Rybakova, K. (2020). Humanizing online assessment: Screencasting as a multimedia feedback tool for first generation college students. In P. M. Sullivan, J. L. Lantz, & B. A. Sullivan (Eds.). Handbook of research on integrating digital technology with literacy pedagogies (pp. 500-518). Advances in educational technologies and instructional design (AETID) book series. Hershey, PA: Information Science Reference. Rybicki, J.-M., & Nieminen, J. (2012). KungFu Writing, a new cloud-based feedback tool. In L. Bradley & S. Thouësny (Eds.), CALL: Using, learning, knowing. EUROCALL Conference: Gothenburg, Sweden, 22-25 August 2012. Proceedings (pp. 254-258). Dublin, Ireland, Voillans, France: Research-publishing.net. Sabbaghan, S. (2017). Enhancing student assessment through veedback. In A. P. Preciado Babb, L. Yeworiew, & S. Sabbaghan (Eds.), Selected Proceedings of the IDEAS Conference 2017: Leading Educational Change (pp. 93-102). Calgary, Canada: Werklund School of Education, University of Calgary. Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18(2), 119-144. doi: 10.1007/ BF00117714 Sadler, D. R. (2010). Beyond feedback: Developing student capability in complex appraisal. Assessment & Evaluation in Higher Education, 35(5), 535-550. doi: 10.1080/ 02602930903541015 Saeed, M. A., & Al Qunayeer, H. S. (2020). Exploring teacher interactive e-feedback on students’ writing through Google Docs: Factors promoting interactivity and potential for learning. The Language Learning Journal, 14(3), 1-18. doi: 10.1080/ 09571736.2020.1786711 Saeed, M. A. (2021). Does teacher feedback mode matter for language students? Asian EFL Journal, 28(1.1). https: / / www.asian-efl-journal.com/ monthly-editions-new/ 2021-monthlyedition/ volume-28-issue-1-1-february-2021/ index.htm Sakai. (2022). Sakai community documentation: Portfolios overview. 
https: / / sakai.screenstepslive .com/ s/ sakai_help/ m/ 17325/ l/ 179732-portfolios-overview 286 5 References <?page no="288"?> Samani, E., & Noordin, N. (2013). A comparative study of the effect of recasts and prompts in syn‐ chronous computer-mediated communication (SCMC) on students’ achievement in grammar. Middle-East Journal of Scientific Research, 15(1), 46-54. doi: 10.5829/ idosi.mejsr.2013.15.1.2274 Sammartano, M. (2017). Voice comments in Google Docs [Video]. https: / / www.youtube.com/ wat ch? v=0VOxr1u8UmA Samuels, L. E. (2006). The effectiveness of web conferencing technology in student-teacher confer‐ encing in the writing classroom. A study of first-year student writers. A thesis submitted to the Graduate Faculty of North Carolina State University in partial fulfillment of the requirements for the Degree of Master of Arts English. https: / / repository.lib.ncsu.edu/ handle/ 1840.16/ 511 Sari, A. B. P. (2019). EFL peer feedback through the chatroom in Padlet. Language and Language Teaching Journal, 22(1), 46-57. doi: 10.24071/ llt.2019.220105 Saricaoglu, A. (2019). The impact of automated feedback on L2 learners’ written causal explanations. ReCALL, 31(2), 189-203. doi: 10.1017/ S095834401800006X Saricaoglu, A., & Bilki, Z. (2021). Voluntary use of automated writing evaluation by content course students. ReCALL, 33(3), 265-277. doi: 10.1017/ S0958344021000021 Satar, H. M., & Özdener, N. (2008). The effects of synchronous CMC on speaking proficiency and anxiety: Text versus voice chat. The Modern Language Journal, 92(4), 595-613. doi: 10.1111/ j.1540-4781.2008.00789.x Sauro, S. (2009). Computer-mediated corrective feedback and the development of L2 grammar. Language Learning & Technology, 13(1), 96-120. http: / / llt.msu.edu/ vol13num1/ sauro.pdf Sayed, O. H. (2010). Developing business management students’ persuasive writing through blog-based peer-feedback. English Language Teaching, 3(3), 54-66. doi: 10.5539/ elt.v3n3p54 Schilling, W. W., & Estell, J. K. (2014). Enhancing student comprehension with video grading. Computers in Education Journal, 5(1), 28-39. https: / / coed.asee.org/ 2014/ 01/ 05/ enhancing-stu dent-comprehension-with-video-grading/ Schluer, J. (in prep.). Synchronous and asynchronous online feedback in remote research colloquia: Development over two years of Covid-19 ELT. In N. Radić, А. Atabekova, M. Freddi, & J. Schmied (Eds.), World universities’ response to COVID-19. Developments and innovation in language teaching and learning. Research-publishing.net. Schluer, J. (2020a). Feedbackvideos erstellen lernen: Praxisbericht zur Förderung digitaler Feed‐ back-Kompetenzen im Lehramtsstudium. Themenspecial “Digitale Medien im Lehramtsstu‐ dium” [Special Issue: Digital media in teacher education.]. https: / / www.e-teaching.org/ praxis / erfahrungsberichte/ feedbackvideos-erstellen-lernen-praxisbericht-zur-foerderung-digitaler -feedback-kompetenzen-im-lehramtsstudium Schluer, J. (2020b). Individual learner support in digital ELT courses: Insights from teacher education. Special Issue: ELT in the Time of the Coronavirus 2020 (Part 2). International Journal of TESOL Studies, 2(3), 41-63. doi: 10.46451/ ijts.2020.09.17 Schluer, J. (2021a). Digitales Feedback mittels Screencasts in der Lehrkräfteausbildung: Re‐ zeptions- und Produktionsperspektiven. In M. Eisenmann & J. Steinbock (Eds.). Sprache, Kulturen, Identitäten: Umbrüche durch Digitalisierung. Dokumentation zum 28. 
Kongress für Fremdsprachendidaktik der Deutschen Gesellschaft für Fremdsprachenforschung Würzburg 2019 287 5 References <?page no="289"?> (pp. 161-175). Beiträge zur Fremdsprachenforschung: Vol. 16. Baltmannsweiler: Schneider Verlag Hohengehren. Schluer, J. (2021b). Introduction to Pingo [Video]. https: / / www.youtube.com/ watch? v=JqlmJf W 4jV0 (short link: https: / / tinyurl.com/ JSchluerPingo) Schluer, J. (2021c, September 22). Lernunterstützung mittels Feedbackvideos: Eine Standortbestim‐ mung. Vortrag auf dem 29. Kongress der Deutschen Gesellschaft für Fremdsprachenforschung (DGFF) zum Thema “Standortbestimmungen”. Essen: Universität Duisburg-Essen. Schluer, J. (2021d). Multimodales Feedback lernförderlich gestalten: Möglichkeiten und Her‐ ausforderungen für (angehende) Fremdsprachenlehrkräfte. Zeitschrift für Fremdsprachenfor‐ schung (ZFF), 32(2), 157-180. Schluer, J. (2021e). Video: An introduction to screencast feedback in EFL teacher education. https: / / www.youtube.com/ watch? v=Z1fLMcc75JE (short link: https: / / tinyurl.com/ JSchluerSCFBint ro) Schluer, J. (2022). Digital feedback overview: An interactive map. https: / / view.genial.ly/ 628228fb e4015300116d254f/ guide-digital-feedback-overview (short link: https: / / tinyurl.com/ Digital FeedbackOverview) Schluer, J. (to appear 2022). Pre-service teachers’ perceptions of their digital feedback literacy development before and during the pandemic. International Journal of TESOL Studies. Schneider, S., Schluer, J., Rey, G. D., Kretzer, E., & Fröhlich, A. (2022, September 10). Optimale Gestaltung von Lehrvideos: Sollten Sie sich zeigen (oder nicht)? Der Einfluss der Sichtbarkeit und Emotionalität von Lehrpersonen als Feedback-Gebende in instruktionalen Feedback-Videos. Vortrag auf dem 52. Kongress der Deutschen Gesellschaft für Psychologie (DGPs). View on | of science, Universität Hildesheim. Schön, D. A. (1983). The reflective practitioner: How professionals think in action. London: Maurice Temple Smith Ltd. Seckman, C. (2018). Impact of interactive video communication versus text-based feedback on teaching, social, and cognitive presence in online learning communities. Nurse Educator, 43(1), 18-22. doi: 10.1097/ NNE.0000000000000448 Sedrakyan, G., Malmberg, J., Verbert, K., Järvelä, S., & Kirschner, P. A. (2020). Linking learning behavior analytics and learning science concepts: Designing a learning analytics dashboard for feedback to support learning regulation. Computers in Human Behavior, 107(4), 105512. doi: 10.1016/ j.chb.2018.05.004 Semke, H. D. (1984). Effects of the red pen. Foreign Language Annals, 17(3), 195-202. doi: 10.1111/ j.1944-9720.1984.tb01727.x Séror, J. (2012). Show me! Enhanced feedback through screencasting technology. TESL Canada Journal/ Revue TESL du Canada, 30(1), 104-116. Sharifi, M., Soleimani, H., & Jafarigohar, M. (2017). E-portfolio evaluation and vocabulary learning: Moving from pedagogy to andragogy. British Journal of Educational Technology, 48(6), 1441-1450. doi: 10.1111/ bjet.12479 Sheen, Y. (2007). The effect of focused written corrective feedback and language aptitude on ESL learners’ acquisition of articles. TESOL Quarterly, 41(2), 255-283. 288 5 References <?page no="290"?> Sheen, Y., & Ellis, R. (2011). Corrective feedback in language teaching. In E. Hinkel (Ed.), Handbook of research in second language teaching and learning. Volume 2 (pp. 593-610). New York: Routledge. Sheffield Hallam University. (2019). 
Sherafati, N., Largani, F. M., & Amini, S. (2020). Exploring the effect of computer-mediated teacher feedback on the writing achievement of Iranian EFL learners: Does motivation count? Education and Information Technologies, 25(5), 4591-4613. doi: 10.1007/s10639-020-10177-5
Shintani, N. (2016). The effects of computer-mediated synchronous and asynchronous direct corrective feedback on writing: A case study. Computer Assisted Language Learning, 29(3), 517-538. doi: 10.1080/09588221.2014.993400
Shintani, N., & Aubrey, S. (2016). The effectiveness of synchronous and asynchronous written corrective feedback on grammatical accuracy in a computer-mediated environment. The Modern Language Journal, 100(1), 296-319. doi: 10.1111/modl.12317
Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153-189. doi: 10.3102/0034654307313795
Siegel, M. (2012). New times for multimodality? Confronting the accountability culture. Journal of Adolescent & Adult Literacy, 55(8), 671-681. doi: 10.1002/JAAL.00082
Silva, M. L. (2012). Camtasia in the classroom: Student attitudes and preferences for video commentary or Microsoft Word comments during the revision process. Computers and Composition, 29(1), 1-22. doi: 10.1016/j.compcom.2011.12.001
Silva, M. L. (2017). Commenting with Camtasia: A descriptive study of the affordances and constraints of peer-to-peer screencast feedback. In S. Plane, C. Bazerman, F. Rondelli, C. Donahue, A. N. Applebee, C. Boré, P. Carlino, M. M. Larruy, P. Rogers, & D. Russell (Eds.), Research on writing. Multiple perspectives (pp. 325-346). International Exchanges on the Study of Writing. Fort Collins, Colorado: The WAC Clearinghouse.
Sipple, S. (2007). Ideas in practice: Developmental writers’ attitudes toward audio and written feedback. Journal of Developmental Education, 30(3), 22-24, 26, 28, 30-31.
Skinner, B., & Austin, R. (1999). Computer conferencing - does it motivate EFL students? ELT Journal, 53(4), 270-279.
Skoyles, A., & Bloxsidge, E. (2017). Have you voted? Teaching OSCOLA with Mentimeter. Legal Information Management, 17(4), 232-238. doi: 10.1017/S1472669617000457
Smith, C. D., Whiteley, H. E., & Smith, S. (1999). Using email for teaching. Computers & Education, 33(1), 15-25. doi: 10.1016/S0360-1315(99)00013-5
Smith, D. (2020). Returning feedback: Generating feedback sheets and returning these to students. https://davethesmith.wordpress.com/rapid-feedback-generator/returning-feedback/
Soden, B. (2016). Combining screencast and written feedback to improve the assignment writing of TESOL taught master’s students. The European Journal of Applied Linguistics and TEFL, 5(1), 213-236.
Soden, B. (2017). The case of screencast feedback: Barriers to the use of learning technology. Innovative Practice in Higher Education, 3(1), 1-21.
Solhi, M., & Eğinli, İ. (2020). The effect of recorded oral feedback on EFL learners’ writing. Dil ve Dilbilimi Çalışmaları Dergisi, 16(1), 1-13. doi: 10.17263/jlls.712628
Soltanpour, F., & Valizadeh, M. (2018). The effect of individualized technology-mediated feedback on EFL learners’ argumentative essays. International Journal of Applied Linguistics and English Literature, 7(3), 125-136. doi: 10.7575/aiac.ijalel.v.7n.3p.125
Soria, S., Gutiérrez-Colón, M., & Frumuselu, A. D. (2020). Feedback and mobile instant messaging: Using WhatsApp as a feedback tool in EFL. International Journal of Instruction, 13(1), 797-812. doi: 10.29333/iji.2020.13151a
Sotillo, S. (2005). Corrective feedback via instant messenger learning activities in NS-NNS and NNS-NNS dyads. CALICO Journal, 22(3), 467-496. doi: 10.1558/cj.v22i3.467-496
Sotillo, S. (2010). Quality and type of corrective feedback, noticing, and learner uptake in synchronous computer-mediated text-based and voice chats. In M. Pütz & L. Sicola (Eds.), Cognitive processing in second language acquisition. Inside the learner’s mind (pp. 351-370). Converging Evidence in Language and Communication. Amsterdam, Philadelphia: John Benjamins.
Speicher, O., & Stollhans, S. (2015). Feedback on feedback - does it work? In F. Helm, L. Bradley, M. Guarda, & S. Thouësny (Eds.), Critical CALL. Proceedings of the 2015 EUROCALL Conference, Padova, Italy (pp. 507-511). Dublin, Ireland: Research-publishing.net.
Spencer, D. (2012). Anhang Storyboards und Skripte. In J. Handke & A. Sperl (Eds.), Das Inverted Classroom Model. Begleitband zur ersten deutschen ICM-Konferenz (pp. 157-163). München: Oldenbourg Verlag.
Spencer, D. H., & Hiltz, S. R. (2003). A field study of use of synchronous chat in online courses. In Proceedings of 36th Annual Hawaii International Conference on System Sciences. Big Island, HI, USA: IEEE. doi: 10.1109/HICSS.2003.1173742
Sprague, A. (2016). Restoring student interest in reading teacher feedback through the use of video feedback in the ESL writing classroom. Ohio Journal of English Language Arts, 56(1), 23-27.
Sprague, A. (2017). Analyzing the feedback preferences and learning styles of second-language students in ESOL writing courses at Bowling Green State University. A dissertation submitted to the Graduate College of Bowling Green State University in partial fulfillment of the requirements for the degree of Doctor of Philosophy. https://etd.ohiolink.edu/apexprod/rws_olink/r/1501/10
Stannard, R. (2006). The spelling mistake: Scene one, take one. The Times Higher Education, 8(1772), 18-19.
Stannard, R. (2007). Using screen capture software in student feedback. The English Subject Centre Archive. http://english.heacademy.ac.uk/2016/01/16/using-screen-capture-software-in-student-feedback/
Stannard, R. (2008). Screen capture software for feedback in language education. In M. Thomas (Ed.), Proceedings of the 2nd International Wireless Ready Symposium. Interactivity, Collaboration & Feedback in Language Learning Technologies (pp. 16-20). NUCB Graduate School. http://wirelessready.nucba.ac.jp/Stannard.pdf
Stannard, R. (2015). 10 ways to use screen capture technology. ETpedia. https://www.myetpedia.com/screen-capture-technology-in-elt/
Stannard, R. (2019). A review of screen capture technology feedback research. Studia Universitatis Babeș-Bolyai Philologia, 64(2), 61-72. doi: 10.24193/subbphilo.2019.2.05
Stannard, R., & Mann, S. (2018). Using screen capture feedback to establish social presence and increase student engagement: A genuine innovation in feedback. In C. H. Xiang (Ed.), Cases on audio-visual media in language education (pp. 93-116). Advances in educational technologies and instructional design (AETID) book series. Hershey, Pennsylvania: IGI Global.
Stannard, R., & Sallı, A. (2019). Using screen capture technology in teacher education. In S. Walsh & S. Mann (Eds.). The Routledge Handbook of English Language Teacher Education (pp. 459-472). Routledge handbooks in applied linguistics. London, New York, NY: Routledge. Stevenson, M., & Phakiti, A. (2014). The effects of computer-generated feedback on the quality of writing. Assessing Writing, 19(2), 51-65. doi: 10.1016/ j.asw.2013.11.007 Stieglitz, G. (2013). Screencasting: Informing students, shaping instruction. UAE Journal of Educational Technology and eLearning, 4(1), 58-62. Stockwell, J. (2008). Audio feedback for students: JISC audio mini-project case study. https: / / www.advance-he.ac.uk/ knowledge-hub/ audio-feedback-students-jisc-audio-miniproject-case-study Stoneham, R., & Prichard, M. (2013). Look, Listen & Learn! Do students actually look at and/ or listen to online feedback? Compass: The Journal of Learning and Teaching at the University of Greenwich, 7, 1-3. Storch, N. (2017). Peer corrective feedback in computer-mediated collaborative writing. In N. Hossein & E. Kartchava (Eds.). ESL & Applied Linguistics Professional Series. Corrective feedback in second language teaching and learning. Research, theory, applications, implications (pp. 66-79). Routledge. doi: 10.4324/ 9781315621432-6 Strobl, C., Ailhaud, E., Benetos, K., Devitt, A., Kruse, O., Proske, A., & Rapp, C. (2019). Digital support for academic writing: A review of technologies and pedagogies. Computers & Education, 131(1), 33-48. doi: 10.1016/ j.compedu.2018.12.005 Sugar, W., Brown, A., & Luterbach, K. (2010). Examining the anatomy of a screencast: Uncovering common elements and instructional strategies. International Review of Research in Open and Distance Learning, 11(3), 1-20. Sull, E. C. (2014). Responding to online student e-mails and other posts. Distance Learning, 11(3), 1-4. Sullivan, P. (2020). Using Google Apps as a tool to advance student learning via productive small group discussions and teacher feedback in an online environment. In R. E. Ferdig, E. Baumgartner, R. Hartshorne, R. Kaplan-Rakowski, & C. Mouza (Eds.), Teaching, technology, and teacher education during the COVID-19 pandemic. Stories from the field (pp. 667-671). Association for the Advancement of Computing in Education (AACE). https: / / www.learn techlib.org/ p/ 216903/ 291 5 References <?page no="293"?> Sun, Y., & Doman, E. (2018). Peer assessment. In J. I. Liontas (Ed.), The TESOL encyclopedia of English language teaching (pp. 1-7). Hoboken, NJ: Wiley-Blackwell. Svinicki, M. D. (2001). Encouraging your students to give feedback. New Directions for Teaching and Learning, 87, 17-24. Swanson, A. C. & Tucker, J. (2012). Use of Jing and JoinMe to provide asynchronous and synchro‐ nous audio and video feedback in the online classroom. Paper presented at the 2012 ADEIL Con‐ ference at Grand Junction, CO. https: / / www.researchgate.net/ publication/ 280601538_Use_of_ Jing_and_JoinMe_to_provide_Asynchronous_and_Synchronous_Audio_and_Video_Feedba ck_in_the_Online_Classroom Swartz, B. & Gachago, D. (2018). Students’ perceptions of screencast feedback in postgraduate research supervision. Proceedings of the International Conference on e-Learning, Cape Town. https: / / www.researchgate.net/ publication/ 344567034_Students%27_Perceptions_of_ Screencast_Feedback_in_Postgraduate_Research_Supervision Tafazoli, D., Nosratzadeh, H., & Hosseini, N. (2014). Computer-mediated corrective feedback in ESP courses: Reducing grammatical errors via email. 
Procedia - Social and Behavioral Sciences, 136, 355-359. doi: 10.1016/ j.sbspro.2014.05.341 Tansomboon, C., Gerard, L. F., Vitale, J. M., & Linn, M. C. (2017). Designing automated guidance to promote productive revision of science explanations. International Journal of Artificial Intelligence in Education, 27(4), 729-757. doi: 10.1007/ s40593-017-0145-0 Tanveer, A., Malghani, M., Khosa, D., & Khosa, M. (2018). Efficacy of written corrective feedback as a tool to reduce learners’ errors on L2 writing. International Journal of English Linguistics, 8(5), 166-180. Tärning, B. (2018). Review of feedback in digital applications - Does the feedback they provide support learning? Journal of Information Technology Education: Research, 17, 247-283. doi: 10.28945/ 4104 Taylor, L. (2013). Communicating the theory, practice and principles of language testing to test stakeholders: Some reflections. Language Testing, 30(3), 403-412. doi: 10.1177/ 0265532213480338 teaCh. (2019). Using the English version of teaCh: Step-by-step guide. http: / / wp.feedbackschule.d e/ wp-content/ uploads/ 2019/ 01/ Using-the-English-Version-of-teach.pdf The New London Group. (1996). A pedagogy of multiliteracies: Designing social futures. Harvard Educational Review, 66(1), 60-92. Thompson, A. D., & Mishra, P. (2007). Breaking News: TPCK becomes TPACK! Journal of Computing in Teacher Education, 24(2), 38. Thompson, R., & Lee, M. J. (2012). Talking with students through screencasting: Experimenta‐ tions with video feedback to improve student learning. The Journal of Interactive Technology and Pedagogy, 1. https: / / jitp.commons.gc.cuny.edu/ talking-with-students-through-screencas ting-experimentations-with-video-feedback-to-improve-student-learning/ Thouësny, S. (2012). Scoring rubrics and Google scripts: A means to smoothly provide language learners with fast corrective feedback and grades. In L. Bradley & S. Thouësny (Eds.), CALL: Using, learning, knowing. EUROCALL Conference: Gothenburg, Sweden, 22-25 August 2012. Proceedings (pp. 286-291). Dublin, Ireland, Voillans, France: Research-publishing.net. 292 5 References <?page no="294"?> Tochon, F. (2008). A brief history of video feedback and its role in foreign language education. CALICO Journal, 25(3), 420-435. doi: 10.1558/ cj.v25i3.420-435 Tokdemir Demirel, E., & Güneş Aksu, M. (2019). The application of technology to feedback in academic writing classes: The use of screencasting feedback and student attitudes. Ufuk Üniversitesi Sosyal Bilimler Enstitüsü Dergisi, 8(16), 183-203. Turner, W., & West, J. (2013). Assessment for “digital first language” speakers: Online video assessment and feedback in higher education. International Journal of Teaching and Learning in Higher Education, 25(3), 288-296. Tyrer, C. (2021). The voice, text, and the visual as semiotic companions: An analysis of the materiality and meaning potential of multimodal screen feedback. Education and Information Technologies, 26(4), 4241-4260. doi: 10.1007/ s10639-021-10455-w Udeshinee, W. A. P., Knutsson, O., Barbutiu, S. M., & Jayathilake, C. (2021). Text chat as a mediating tool in providing teachers’ corrective feedback in the ESL context: Social and cultural challenges. Asian EFL Journal Research Articles, 28(1), 171-195. UNESCO. (2018). UNESCO ICT Competency Framework for Teachers: Version 3. Paris: UNESCO. https: / / unesdoc.unesco.org/ ark: / 48223/ pf0000265721 Vahedipour, R., & Rezvani, E. (2017). Impact of wiki-based feedback on grammatical accuracy of Iranian EFL learners’ writing skill. 
Vallely, K. S. A., & Gibson, P. (2018). Engaging students on their devices with Mentimeter. Compass: Journal of Learning and Teaching, 11(2), 1-6. doi: 10.21100/compass.v11i2.843
van der Zijden, J., Scheerens, J., & Wijsman, L. (2021). Experiences and understanding of screencast feedback on written reports in the Bachelor Pharmacy. Transformative Dialogues: Teaching and Learning Journal, 14(1), 46-67.
van Oldenbeek, M., Winkler, T. J., Buhl-Wiggers, J., & Hardt, D. (2019). Nudging in blended learning: Evaluation of email-based progress feedback in a flipped classroom information systems course. In Association for Information Systems (Ed.), Proceedings of the 27th European Conference on Information Systems (ECIS). Stockholm & Uppsala, Sweden: AIS Electronic Library (AISeL).
Vasantha Raju, N., & Harinarayana, N. S. (2016). Online survey tools: A case study of Google Forms. Paper presented at the National Conference on Scientific, Computational & Information Research Trends in Engineering, GSSS-IETW, Mysore. https://www.researchgate.net/publication/326831738_Online_survey_tools_A_case_study_of_Google_Forms
Vincelette, E. J., & Bostic, T. (2013). Show and tell: Student and instructor perceptions of screencast assessment. Assessing Writing, 18(4), 257-277. doi: 10.1016/j.asw.2013.08.001
Vo, H. T. T. (2017). Multilingual students’ perceptions of and experiences with instructor feedback methods in a U.S. first-year composition class. A thesis submitted in partial fulfillment of the requirements for the degree of Master of Arts in English: Teaching English as a Second Language. https://cornerstone.lib.mnsu.edu/cgi/viewcontent.cgi?article=1690&context=etds
Voelkel, S., & Bennett, D. (2014). New uses for a familiar technology: Introducing mobile phone polling in large classes. Innovations in Education and Teaching International, 51(1), 46-58. doi: 10.1080/14703297.2013.770267
Voelkel, S., & Mello, L. V. (2014). Audio feedback - better feedback? Bioscience Education, 22(1), 16-30. doi: 10.11120/beej.2014.00022
Voerman, L., Meijer, P. C., Korthagen, F. A. J., & Simons, R. J. (2012). Types and frequencies of feedback interventions in classroom interaction in secondary education. Teaching and Teacher Education, 28(8), 1107-1115. doi: 10.1016/j.tate.2012.06.006
Voghoei, S., Tonekaboni, N. H., Yazdansepas, D., Soleymani, S., Farahani, A., & Arabnia, H. R. (2020). Personalized feedback emails: A case study on online introductory computer science courses. In Proceedings of the 2020 ACM Southeast Conference (ACMSE 2020) (pp. 18-25). Tampa, FL, USA: ACM. doi: 10.1145/3374135.3385274
Vogt, K., & Froehlich, V. (2018). Providing feedback. In D. Tsagari, K. Vogt, V. Froelich, I. Csépes, A. Fekete, A. Green, L. Hamp-Lyons, N. Sifakis, & S. Kordia (Eds.), Handbook of assessment for language teachers (pp. 129-147). Nicosia, Cyprus.
Vojak, C., Kline, S., Cope, B., McCarthey, S., & Kalantzis, M. (2011). New spaces and old places: An analysis of writing assessment software. Computers and Composition, 28(2), 97-111. doi: 10.1016/j.compcom.2011.04.004
Vonderwell, S. (2003). An examination of asynchronous communication experiences and perspectives of students in an online course: A case study. The Internet and Higher Education, 6(1), 77-90. doi: 10.1016/S1096-7516(02)00164-1
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, Mass.: Harvard University Press.
Waer, H. (2021). The effect of integrating automated writing evaluation on EFL writing apprehension and grammatical knowledge. Innovation in Language Learning and Teaching, 1-25. doi: 10.1080/17501229.2021.1914062
Walker, A. S. (2017). I hear what you’re saying: The power of screencasts in peer-to-peer review. Journal of Writing Analytics, 1, 356-391.
Wallace, S. (2015a). Summative assessment. In S. Wallace (Ed.), A dictionary of education (2nd ed.). Oxford Reference. Oxford: Oxford University Press.
Wallace, S. (2015b). Testing. In S. Wallace (Ed.), A dictionary of education (2nd ed.). Oxford Reference. Oxford: Oxford University Press.
Waltemeyer, S., & Cranmore, J. (2018). Screencasting technology to increase engagement in online higher education courses. https://dl.acm.org/doi/fullHtml/10.1145/3302261.3236693
Walters, K. (2020). Tired of long video calls and emails? Try asynchronous video. https://www.vidyard.com/blog/save-time-with-asynchronous-video/?utm_source=community&utm_content=blog
Wampfler, P. (2019). Spracharbeit mit Online-Tools. Deutschunterricht (1), 36-38.
Wang, J., & Bai, L. (2021). Unveiling the scoring validity of two Chinese automated writing evaluation systems: A quantitative study. International Journal of English Linguistics, 11(2), 68-84. doi: 10.5539/ijel.v11n2p68
Wannemacher, K., Jungermann, I., Scholz, J., Tercanli, H., & von Villiez, A. (2016). Digitale Lernszenarien im Hochschulbereich: Arbeitspapier Nr. 15. Im Auftrag der Themengruppe „Innovationen in Lern- und Prüfungsszenarien“, koordiniert vom CHE im Hochschulforum Digitalisierung. Berlin: Hochschulforum Digitalisierung. https://hochschulforumdigitalisierung.de/sites/default/files/dateien/HFD%20AP%20Nr%2015_Digitale%20Lernszenarien.pdf
Ware, P., Kern, R., & Warschauer, M. (2016). The development of digital literacies. In P. K. Matsuda & R. M. Manchón (Eds.), Handbook of second and foreign language writing (pp. 307-328). Boston, Berlin: Walter de Gruyter.
Warnock, S. (2008). Responding to student writing with audio-visual feedback. In T. Carter, M. A. Clayton, A. D. Smith, & T. G. Smith (Eds.), Writing and the iGeneration. Composition in the computer-mediated classroom (pp. 201-227). Fountainhead Press X series for professional development. Southlake, Tex.: Fountainhead Press.
Warschauer, M., & Grimes, D. (2008). Automated writing assessment in the classroom. Pedagogies: An International Journal, 3(1), 22-36. doi: 10.1080/15544800701771580
Warschauer, M., & Ware, P. (2006). Automated writing evaluation: Defining the classroom research agenda. Language Teaching Research, 10(2), 157-180. doi: 10.1191/1362168806lr190oa
Watling, C., & Lingard, L. (2019). Giving feedback on others’ writing. Perspectives on Medical Education, 8(1), 25-27. doi: 10.1007/s40037-018-0492-z
Weaver, M. R. (2006). Do students value feedback? Student perceptions of tutors’ written responses. Assessment & Evaluation in Higher Education, 31(3), 379-394. doi: 10.1080/02602930500353061
Webb, M. (2010). Beginning teacher education and collaborative formative e-assessment. Assessment & Evaluation in Higher Education, 35(5), 597-618. doi: 10.1080/02602931003782541
Weßels, D. (2021). Verführerische Werkzeuge: Plagiate und KI-gestützte Textproduktion an Hochschulen. Forschung & Lehre (12), 1018-1019.
West, J., & Turner, W. (2016). Enhancing the assessment experience: Improving student perceptions, engagement and understanding using online video feedback. Innovations in Education and Teaching International, 53(4), 400-410. doi: 10.1080/14703297.2014.1003954
White, A. (2021). Providing post-results feedback to students - mark breakdowns and results commentary. https://amandalovestoaudit.com/2021/07/providing-post-results-feedback-to-students-mark-breakdowns-and-results-commentary
Whitehurst, J. (2014). Screencast feedback for clear and effective revisions of high-stakes process assignments. https://cccc.ncte.org/cccc/owi-open-resource/screencast-feedback
Wierzbicka, A. (1985). Different cultures, different languages, different speech acts. Journal of Pragmatics, 9(2-3), 145-178. doi: 10.1016/0378-2166(85)90023-2
Wiesemes, R., & Wang, R. (2010). Video conferencing for opening doors in initial teacher education: Sociocultural processes of mimicking and improvisation. International Journal of Media, Technology and Lifelong Learning, 6(1), 1-15.
Wigham, C. R., & Chanier, T. (2013). Interactions between text chat and audio modalities for L2 communication and feedback in the synthetic world Second Life. Computer Assisted Language Learning, 28(3), 260-283. doi: 10.1080/09588221.2013.851702
Wildemann, A., & Hosenfeld, I. (2020). Bundesweite Elternbefragung zu Homeschooling während der Covid-19-Pandemie: Erkenntnisse zur Umsetzung des Homeschoolings in Deutschland. https://www.zepf.eu/wp-content/uploads/2020/06/Bericht_HOMEschooling2020.pdf
Wilson, J., & Czik, A. (2016). Automated essay evaluation software in English Language Arts classrooms: Effects on teacher feedback, student motivation, and writing quality. Computers & Education, 100(7), 94-109. doi: 10.1016/j.compedu.2016.05.004
Wilson, J., & Roscoe, R. D. (2020). Automated writing evaluation and feedback: Multiple metrics of efficacy. Journal of Educational Computing Research, 58(1), 87-125. doi: 10.1177/0735633119830764
Winchell, Z. (2018). E-portfolios and their uses in higher education. Wiley Education Services. https://ctl.wiley.com/e-portfolios-and-their-uses-in-higher-education/
Winstone, N., & Carless, D. (2020). Designing effective feedback processes in higher education: A learning-focused approach. Society for Research into Higher Education (SRHE). Abingdon, Oxon: Routledge.
Winstone, N. E., & Boud, D. (2019). Developing assessment feedback: From occasional survey to everyday practice. In S. Lygo-Baker, I. M. Kinchin, & N. E. Winstone (Eds.), Engaging student voices in higher education (pp. 109-123). Cham: Springer International Publishing. doi: 10.1007/978-3-030-20824-0_7
Wisniewski, B., Zierer, K., & Hattie, J. (2020). The power of feedback revisited: A meta-analysis of educational feedback research. Frontiers in Psychology, 10(Article 3087), 1-14. doi: 10.3389/fpsyg.2019.03087
Woo, M. M., Chu, S. K. W., & Li, X. (2013). Peer-feedback and revision process in a wiki mediated collaborative writing. Educational Technology Research and Development, 61(2), 279-309. doi: 10.1007/s11423-012-9285-y
Wood, J. M. (2019). A dialogic, technology-mediated approach to supporting feedback engagement in a higher education context: Perceived effects on learners’ feedback recipience. Dissertation submitted to the University College London, Institute of Education. https://discovery.ucl.ac.uk/id/eprint/10090843/1/Wood_000_Thesis.pdf
Wood, J. M. (2021). A dialogic technology-mediated model of feedback uptake and literacy. Assessment & Evaluation in Higher Education, 27(1), 1-18. doi: 10.1080/02602938.2020.1852174
Wood, J. M. (2022). Supporting the uptake process with dialogic peer screencast feedback: A sociomaterial perspective. Teaching in Higher Education, 14(1), 1-23. doi: 10.1080/13562517.2022.2042243
Woodard, W. J. (2016). Teaching techniques: Audiovisual feedback in EFL/ESL writing classes. English Teaching Forum, 54(2), 29-32.
Woods, R., & Keeler, J. (2001). The effect of instructor’s use of audio e-mail messages on student participation in and perceptions of online learning: A preliminary case study. Open Learning: The Journal of Open, Distance and e-Learning, 16(3), 263-278. doi: 10.1080/02680510120084977
Wright, E. (2022). How to insert audio files in Microsoft Word [Video]. https://www.youtube.com/watch?v=Z-kOZqdOZP8
Wu, W.-S. (2006). The effect of blog peer review and teacher feedback on the revisions of EFL writers. Journal of Education and Foreign Languages and Literature, 3, 125-139.
Xie, F., Chakraborty, B., Ong, M. E. H., Goldstein, B. A., & Liu, N. (2020). AutoScore: A machine learning-based automatic clinical score generator and its application to mortality prediction using electronic health records. JMIR Medical Informatics, 8(10). doi: 10.2196/21798
Xie, Y., Ke, F., & Sharma, P. (2008). The effect of peer feedback for blogging on college students’ reflective learning processes. Internet and Higher Education, 11(1), 18-25. doi: 10.1016/j.iheduc.2007.11.001
Xu, Y. (2018). Not just listening to the teacher’s voice: A case study of a university English teacher’s use of audio feedback on social media in China. Frontiers in Education, 3(65), 1-9. doi: 10.3389/feduc.2018.00065
Xu, Y., & Carless, D. (2016). ‘Only true friends could be cruelly honest’: Cognitive scaffolding and social-affective support in teacher feedback literacy. Assessment & Evaluation in Higher Education, 42(7), 1082-1094. doi: 10.1080/02602938.2016.1226759
Yim, S., Zheng, B., & Warschauer, M. (2017). Feedback and revision in cloud-based writing: Variations across feedback source and task type. Writing & Pedagogy, 9(3), 517-553. doi: 10.1558/wap.32209
Yohon, T., & Zimmerman, D. (2004). Strategies for online critiquing of student assignments. Journal of Business and Technical Communication, 18(2), 220-232. doi: 10.1177/1050651903260851
Yoon, S. Y., & Lee, C.-H. (2009). A study on voice recordings and feedback through BBS in teaching and learning pronunciation. Multimedia-Assisted Language Learning, 12(2), 187-216. doi: 10.15702/MALL.2009.12.2.187
Yu, F.-Y., & Yu, H.-J. J. (2002). Incorporating e-mail into the learning process: Its impact on student academic achievement and attitudes. Computers & Education, 38(1-3), 117-126. doi: 10.1016/S0360-1315(01)00085-9
Zaky, H. (2021). Google Docs in undergraduate online composition class: Investigating learning styles’ impact on writing corrective feedback. Asian Journal of Interdisciplinary Research, 4(1), 65-84. doi: 10.34256/ajir2117
Zamel, V. (1985). Responding to student writing. TESOL Quarterly, 19(1), 79-101.
Zhai, N., & Ma, X. (2021). Automated writing evaluation (AWE) feedback: A systematic investigation of college students’ acceptance. Computer Assisted Language Learning, 9(4), 1-26. doi: 10.1080/09588221.2021.1897019
Zhang, H., Song, W., Shen, S., & Huang, R. (2014). The effects of blog-mediated peer feedback on learners’ motivation, collaboration, and course satisfaction in a second language writing course. Australasian Journal of Educational Technology, 30(6), 670-685. doi: 10.14742/ajet.860
Zhang, M. (2013). Contrasting automated and human scoring of essays. R & D Connections, 21, 1-10.
Zhang, S. (2021). Review of automated writing evaluation systems. Journal of China Computer-Assisted Language Learning, 1(1), 170-176. doi: 10.1515/jccall-2021-2007
Zhang, Y. (2018). Analysis of using multimodal feedback in writing instruction from EFL learners’ perspective. English Language and Literature Studies, 8(4), 21-29. doi: 10.5539/ells.v8n4p21
Zhang, Z. (2020). Engaging with automated writing evaluation (AWE) feedback on L2 writing: Student perceptions and revisions. Assessing Writing, 43(2), 1-14. doi: 10.1016/j.asw.2019.100439
Zhu, C. (2012). Providing formative feedback to students via emails and feedback strategies based on student metacognition. Reflecting Education, 8(1), 78-93.
Zhu, Q., & Carless, D. (2018). Dialogue within peer feedback processes: Clarification and negotiation of meaning. Higher Education Research & Development, 37(4), 883-897. doi: 10.1080/07294360.2018.1446417
Ziegler, N., & Mackey, A. (2017). Interactional feedback in synchronous computer-mediated communication. In H. Nassaji & E. Kartchava (Eds.), Corrective feedback in second language teaching and learning: Research, theory, applications, implications (pp. 80-94). ESL & Applied Linguistics Professional Series. Abingdon, Oxon, New York, NY: Routledge. doi: 10.4324/9781315621432-7
Zierer, K., & Wisniewski, B. (2018). Using student feedback for successful teaching. Abingdon, Oxon, New York, NY: Routledge.
Zourou, K. (2011). Towards a typology of corrective feedback moves in an asynchronous distance language learning environment. In W. M. Chan, K. N. Chin, M. Nagami, & T. Suthiwan (Eds.), Media in foreign language teaching and learning (pp. 217-242). Studies in second and foreign language education: Vol. 5. New York: Walter de Gruyter.