Learning in conversation with Generative AI

In language learning, AI-facilitated simulations let learners practise in various scenarios, such as ordering from a waiter in French at a Parisian café or discussing travel plans for the Great Wall of China with an explorer in Mandarin. This immersive practice not only boosts conversational fluency and listening skills but also introduces learners to the cultural nuances of language use, such as the many different terms for 'straw' used in Spanish across Latin American countries. Students can ask a general AI chatbot to run such simulations, or they might use a specialised tool like Duolingo Roleplay, which provides tailored conversational partners for this purpose. A learner who wants to go deeper culturally or historically can engage in immersive game-based problem-solving, as described in the Immersive Language and Culture section of this report.

Simulation and role-playing with GenAI can also be useful for professional skill development; for example, preparing for job interviews, or practising medical diagnosis based on (simulated) patient interviews. By taking on various roles, GenAI can simulate different stakeholders in a variety of professional settings, each with different characteristics (e.g., securing an advantageous contract to produce snack bars with a counterpart who takes a competing approach to negotiation), providing learners with a diverse range of interactive experiences.

GenAI can also take on the role of an intellectual sparring partner, particularly in fields requiring robust argumentation such as law, philosophy, and ethics. By challenging learners to defend their viewpoints, identify logical fallacies, and counter opposing arguments, GenAI can help them both improve a specific argument and sharpen their critical thinking and debate skills.

Generative AI as a curriculum partner for teachers

GenAI also has the potential to become a collaborator for teachers in the production of educational resources and activities. This use of AI shows how a dialogue between a teacher or course designer and AI can support the creation of learning content such as lesson objectives, assessment rubrics and interactive activities. Doing this well requires careful, creative and iterative prompting of the AI, and critical evaluation of the responses it generates, both for content veracity and for the inclusiveness of the resulting course designs.

[Image: A female student in a classroom pointing to an AI human-like robot in front of her.]
To partner with AI in producing useful course resources, it helps to leverage known prompting strategies4, 5 such as:

• providing detailed information about the learning context
• giving examples of the kinds of activities or materials you are looking for
• asking the AI to take on a persona (e.g., 'you are an expert curriculum developer...')
• splitting complex tasks into several smaller tasks
• providing recipe-like instructions of the steps the AI should follow
• adding key instructions at the end of a prompt.

It can be valuable to provide the AI with pedagogical principles to follow in its creations, for example designing for inclusivity, active learner engagement, and student collaboration. Specific prompts have also been developed to support educators in these tasks (for example, see the GitHub repository 'Prompts for Education'), such as a prompt for planning lessons or generating ideas for assignments.

Early evaluation of efforts to create curriculum with AI suggests that it can aid with brainstorming ideas, creating an outline, and assessing adherence to specific writing guidance (such as readability and inclusive language)6. Yet GenAI outputs need to be verified by topic experts who can comment on how the prompts are designed, the quality of the materials produced, and their suitability for teaching. Even with best-in-class models, the content created still needs human checking and adjustment, ranging from small tweaks to wholesale regeneration, to make it suitable for the learning context it is being designed for (e.g., an online university course, an elementary classroom, a science museum).

Challenges, barriers, limitations

While the possibilities for learning through dialogue with GenAI are diverse and exciting, there are also important challenges to note. First, students have shown varying levels of engagement, ranging from rich, in-depth learning conversations to simple requests to be given 'the answer'. Some students resist interaction, for example replying 'idk' ('I don't know') to every question the chatbot asks. This highlights the need for guidance (for students and teachers) in AI literacy, including how to 'talk' to GenAI to foster productive dialogue and how to critically assess the GenAI's responses. The latter is crucial because GenAI models represent how things are talked about generally, not verified disciplinary knowledge. As a result, these models are known to produce 'hallucinations' – responses that sound believable but are actually incorrect or outdated. Educators need to be aware of these concerns and identify how to overcome them, for example by instructing the AI to answer using information from specific resources (as shown in the sketch below). GenAI responses may also oversimplify complex concepts or add irrelevant details to an explanation. Together, these create a danger of generating misconceptions and/or shallow learning. Research has also shown that people may tend to rely on AI, rather than learn from AI or work in collaboration with AI7.

Another concern is the risk of perpetuating social biases in the training data, such as returning fewer examples of women and people of colour when asked about scientists who have made a difference in a given field8, 9. One example of a model built on a more diverse training set, including sources that draw on indigenous oral tradition, is Latimer.ai.
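As a concrete illustration of both the prompting strategies listed above and the mitigation of instructing the AI to answer from specified resources, here is a minimal sketch using the OpenAI Python client. The model name, course notes, and all prompt wording are illustrative assumptions, not drawn from the studies cited in this section.

```python
# A minimal sketch, not a tested recipe: the model name, course notes,
# and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

COURSE_NOTES = "...verified course material pasted here..."

system_prompt = (
    "You are an expert curriculum developer "                     # persona
    "working on a first-year distance-learning science course. "  # learning context
    "Follow these steps: (1) restate the learning objective, "
    "(2) propose three activities, (3) draft a short rubric. "    # recipe-like steps
    "Base every claim only on the course notes provided; if the "
    "notes do not cover something, say so explicitly."            # answer from specific resources
)

user_prompt = (
    f"Course notes:\n{COURSE_NOTES}\n\n"
    "Example activity we like: a 20-minute pair discussion that "
    "ends in a shared one-paragraph summary.\n\n"                 # an example of what we want
    "Objective: students can explain photosynthesis at an "
    "introductory level.\n"
    "Key instruction: keep the reading level suitable for "
    "beginners."                                                  # key instruction at the end
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
)
print(response.choices[0].message.content)
```

In practice, an educator would iterate on a prompt like this and critically evaluate each output for veracity and inclusiveness, as discussed above.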
Unequal access to the latest and most advanced GenAI models, which are generally not free, can also exacerbate educational inequalities, giving some students or educators access to better tutors, curriculum partners, or simulators than others. Finally, privacy concerns surrounding the use and storage of student data by GenAI platforms must be addressed to safeguard students' personal information.

Conclusions

This pedagogy has the potential to support students' critical questioning and argumentation skills. These will become increasingly important as GenAI-generated information becomes more prevalent and demands a more critical stance. GenAI is expected to improve over time, presenting new and unanticipated capabilities, which may have further implications for how students learn and engage with GenAI outputs.
References

1. An overview of the Socratic approach and dialogic teaching: Prendergast, D. (2021) 'An investigation into the impact of dialogic teaching and Socratic questioning on the development of children's understanding of complex historical concepts'. Available at: https://durham-repository.worktribe.com/output/1140584 (Accessed 26 June 2024).

2. A brief overview of what ChatGPT is and how it can be used in education: University of Central Arkansas, Center for Excellence in Teaching and Academic Leadership (2024) 'Chat GPT: What is it?'. Available at: https://uca.edu/cetal/chat-gpt/#:~:text=One%20of%20the%20key%20features,stories%2C%20essays%2C%20and%20poems. (Accessed 26 June 2024).

3. University students' perceptions of using a GenAI digital assistant in their studies: Rienties, B., Domingue, J., Duttaroy, S., Herodotou, C., Tessarolo, F., and Whitelock, D. (2024) 'I would love this to be like an assistant, not the teacher: a voice of the customer perspective of what distance learning students want from an Artificial Intelligence Digital Assistant', arXiv preprint arXiv:2403.15396. Available at: https://arxiv.org/abs/2403.15396 (Accessed 26 June 2024).

4. A guide on how to create good prompts: OpenAI developer platform (2024) Prompt engineering. Available at: https://platform.openai.com/docs/guides/prompt-engineering/six-strategie (Accessed 26 June 2024).

5. Some advanced techniques in prompt design and prompt engineering: Microsoft Azure (2024) Prompt engineering techniques. Available at: https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/advanced-prompt-engineering?pivots=programming-language-chat-completions (Accessed 26 June 2024).

6. A paper showing early findings from using GenAI in the design of an online course: Ullman, T., Bektik, D., Edwards, C., Herodotou, C., and Whitelock, D. (2024) 'Teaching with Generative AI: Moving Forward with Content Creation', EDEN 24 Annual Conference, Graz, Austria.

7. Research study examining how students interact with AI in relation to their learning agency: Darvishi, A., Khosravi, H., Sadiq, S., Gašević, D., and Siemens, G. (2023) 'Impact of AI assistance on student agency', Computers & Education, 210(104967). Available at: https://doi.org/10.1016/j.compedu.2023.104967 (Accessed 26 June 2024).

8. A paper discussing bias in large language models in relation to stigmatised groups: Mei, K., Fereidooni, S., and Caliskan, A. (2023) 'Bias against 93 stigmatized groups in masked language models and downstream sentiment classification tasks', Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, pp. 1699–1710.

9. A paper discussing bias in large language models in relation to queer people: Dhingra, H., Jayashanker, P., Moghe, S., and Strubell, E. (2023) 'Queer people are people first: Deconstructing sexual identity stereotypes in large language models', arXiv preprint. Available at: https://arxiv.org/abs/2307.00101 (Accessed 26 June 2024).

Resources

• An example of a GenAI virtual tutor assistant called Khanmigo, produced by Khan Academy: Available at: https://blog.khanacademy.org/parents-khanmigo/ (Accessed 26 June 2024).

• A video discussing how a GenAI virtual tutor works, featuring Khan Academy Chief Learning Officer Kristen DiCerbo and GSV Ventures VP Claire Zau: Available at: https://www.youtube.com/watch?v=h6QOR1nRgXI (Accessed 26 June 2024).
• A personalised language learning tool, Duolingo, that makes use of GenAI to support language learning: Available at: https://blog.duolingo.com/duolingo-max/ (Accessed 26 June 2024).

• A prompt library to turn GPT-4, Claude, Gemini or Bing into tutors, feedback engines, negotiation simulators and more, written by Dr Ethan Mollick and Dr Lilach Mollick: Available at: https://www.moreusefulthings.com/student-exercises (Accessed 26 June 2024).

• Two papers that provide examples of prompts: Mollick, E., and Mollick, L. (2023) 'Assigning AI: Seven approaches for students, with prompts'. Available at: https://arxiv.org/abs/2306.10052 (Accessed 26 June 2024). Mollick, E., and Mollick, L. (2023) 'Using AI to Implement Effective Teaching Strategies in Classrooms: Five Strategies, Including Prompts', The Wharton School Research Paper. Available at: https://ssrn.com/abstract=4391243 (Accessed 26 June 2024).
Talking AI ethics with young people
Affording children and young people their rights related to AI and education

Introduction

Digital technologies, including developments in artificial intelligence (AI) and automation, have brought many changes to the world. This is especially true for young people, many of whom are growing up in an increasingly digitally-enabled environment, with technologies being a key aspect of the ecosystem of their everyday lives. The impact is felt in all aspects of their family life, play and leisure activities, communication, and education.

Over the years, various concerns have been raised around children and young people's participation in digital spaces and use of digital tools. These concerns have intensified following the recent rapid development and deployment of AI, owing to a growing recognition of strong commercial interests within industry and to evidence that children's long-standing protections are being ignored online (e.g., rights to privacy)1. As a result, advocates have pushed to centre children and young people's rights and interests in the current global debates around AI (e.g., work by the 5Rights Foundation). Representing and addressing children's interests and rights in the emerging digital landscape is a pressing issue.

This pedagogy centres on the critical importance of foregrounding the voices of children and young people in the design, development and implementation of AI tools in education. It places a stronger emphasis on pedagogical practice that involves talking to children about AI, and on understanding how children themselves might approach the issues outlined above so that they can participate equally, knowledgeably and safely. This pedagogy is about supporting children's 'lived experiences' and voices, and ensuring that these occupy a prominent place in discussions and initiatives related to AI in education.

More than an afterthought: Putting youth perspectives on AI ethics first

The United Nations Convention on the Rights of the Child (UNCRC)2, adopted in 1989, called for greater levels of child and youth engagement, and for children and young people to have a voice. This is still highly relevant today. Children and young people's views on their digital lives matter, and they have a lot to say about the elements of the digital world they engage with. Children, for example, gain agency, pleasure and value from the digital world, but become frustrated when digital design, provision and regulation fail to meet their needs3. Children not only have the right to express their views on such matters but also the right to have those views taken seriously by adults and actioned in ways that change and improve their lives. Yet, as Mukherjee and Livingstone3 stressed in the report 'Children and Young People's Voices', commissioned by the Digital Futures Commission and 5Rights Foundation, children's interests and rights often come as an 'afterthought'. To address this, children's perspectives need to come first and be framed within a holistic rights-based framework informed by advances in childhood ethics and rights.

In a recent interview, Professor Sonia Livingstone, based at the London School of Economics4, explained that there have been two main ways of addressing such issues. On the one hand, she says, it is about recognising the risks associated with children being exposed to materials and matters thought to be inappropriate and harmful to them.
Examples include pornography, deepfakes, exploitation, child trafficking, sexual violence, addiction, radicalisation and so on. On the other hand, it is about recognising the opportunities offered by digital technologies, such as expanding and augmenting access to educational opportunities and offering ways to communicate with friends and family, near and distant.
In the same interview4, Livingstone referred to children demanding greater digital literacy: 'they want to understand the technology better, they want to know how to sift truth from falsehood, they want to understand the business models. And they want their parents and teachers and others who might assist them to have those digital skills as well. Otherwise, they face a key barrier'. It is indeed critical to learn from children and young people's experiences of engaging with digital technologies, and to listen to what they want and expect from the key players who could make a difference to their (digital) lives. It is also about translating their ideas into meaningful action and advocacy work.

From talking to doing: Engaging with Ethics in AI and Education

Several initiatives are under way worldwide to shape the regulatory environment and associated legislation, the most important development being the recently passed Artificial Intelligence Act (AI Act), a European Union regulation that aims to establish a common regulatory and legal framework for AI. Many initiatives also look to establish ethical principles for the adoption of AI in education5, including the processes needed to understand and make pedagogical choices that are ethical, and to account for the ever-present possibility of unintended consequences.

An area of concern is the generation and use of educational data, where data, indicators and metrics increasingly define school and university cultures. Data has always been collected from children and young people in education settings and can offer valuable insights. However, the volume and types of data processed have grown exponentially in recent years. Young people have little choice and control over this, and they, their parents, and schools/educators are often unaware of the extent or purposes of this data processing within or beyond the education system. Children and young people's education data is therefore a key area to focus on when engaging with ethics.

A relevant example comes from an initiative at the University of Technology Sydney (UTS)6. This involved a novel community consultation process, using the principles and methods of Deliberative Democracy, around the question: 'What principles should govern UTS' use of analytics and artificial intelligence to improve teaching and learning for all, while minimising the possibility of harmful outcomes?' The consultation was implemented with UTS students, tutors, academics and experts, who engaged in a series of five online workshops over several weeks. The outputs offer a representative expression of the community's values, interests and concerns in response to this brief, with the aim of influencing how these technologies are deployed responsibly in the university.

Another example relates to making national policies around AI more accessible and relevant to young people. A team of educators in the United States sought to redesign the White House's Blueprint for an AI Bill of Rights for and with younger audiences7. Students were asked to critically examine existing policies and ethical frameworks, and to apply them to the ways that they use, and are used by, AI in their own daily lives.
In their local context, this meant asking students to adapt a framework like the Blueprint, redesigning and/or remixing the text to suit their needs and creating art and text to feature the rights that are most important to them. The same pedagogical approach may be applied in other contexts, with other local or national policies. It can also include involving students in collaboratively setting policies and expectations for the use of technology in classrooms alongside educators.
Challenges, barriers, limitations

A main challenge relates to the fast-changing environment of AI in education. Those placed in a position of responsibility, such as educators, parents and other caregivers, often lack the knowledge and capacity to assist young people themselves, so they too need support. This also applies to policy makers, who need the skills and knowledge to introduce a rights-based approach to the digital environment they are regulating. An orchestrated approach is needed, in which industry, government and other key players collaborate to offer accessible family- and child-friendly resources, designs and support (which may rapidly become obsolete given ongoing developments).

Another challenge is the variation in how children's rights are defined and protected worldwide, with major differences across national legislative frameworks. Many countries have enacted, or are considering, national legislation to protect children's rights, which will enable the monitoring of breaches and the enforcement of legislation when it comes to children's data. This means that technology companies operating globally need to work within diverse local frameworks, requiring substantial investment on their side to navigate them. It may also affect efforts to encourage industry stakeholders to consider ethics and children's rights as they develop tools and content.

Finally, a challenge relates to moving from ideation about AI ethics and rights towards advocacy and change. It is easy to ask young people to dream about how they want their world to look, but much harder to translate that into action (such as legal protections and meaningful opportunities). Barriers to translating talk into action include young people's status (especially dependent upon geopolitical context) and corporate control of many AI tools and of related documentation about how they work and were trained.

[Image: Young people interact with 'Glow', an installation which generates photorealistic images of faces and allows various attributes of these to be altered.]
Conclusions

This pedagogy draws on a rights-based approach to children and young people's participation in the digital environment and necessitates action to mitigate risks, avoid harm and optimise opportunities for youth, including taking part in co-design processes. The pedagogy is phrased as an action (i.e., 'talking') to emphasise human action and values in determining how communities and their practices, socio-technical designs and systems, economic logics and political factors help or fail to serve children and young people's interests. It offers a practical step that may contribute towards realising children's rights in the digital environment. This may begin with 'reaching out' to young people and inviting them to the discussion but, more importantly, it is about expanding and creating dialogic spaces, widening participation in those spaces, helping young people understand their rights, and providing opportunities for them to engage with ethical decision-making and policy-setting around technologies that impact them. It is also about listening to, and acting upon, points raised by young people to reach a stronger understanding of how they negotiate the digital environment.

References

1. United Nations (2021) Artificial Intelligence and Privacy, and Children's Privacy – Report of the Special Rapporteur on the Right to Privacy, A/HRC/46/37. Available at: https://www.ohchr.org/en/documents/thematic-reports/ahrc4637-artificial-intelligence-and-privacy-and-childrens-privacy (Accessed 1 July 2024).

2. UN Convention on the Rights of the Child. Available at: https://www.unicef.org.uk/what-we-do/un-convention-child-rights/ (Accessed 1 July 2024).

3. Mukherjee, S. and Livingstone, S. (2020) 'Children and Young People's Voices', Digital Futures Commission. London: 5Rights Foundation. Available at: https://digitalfuturescommission.org.uk/wp-content/uploads/2020/10/Children-and-Young-Peoples-Voices.pdf (Accessed 1 July 2024).

4. Millwood Hargrave, A. (2024) 'Children's rights in the digital age', InterMedia, 52(1). Available at: https://iicintermedia.org/vol-52-issue-1/childrens-rights-in-the-digital-age/ (Accessed 1 July 2024).

5. Holmes, W., Porayska-Pomsta, K., Holstein, K., Sutherland, E., Baker, T., Buckingham Shum, S., Santos, O. C., Rodrigo, M. T., Cukurova, M., Bittencourt, I. I. and Koedinger, K. R. (2021) 'Ethics of AI in Education: Towards a Community-Wide Framework', International Journal of Artificial Intelligence in Education. Available at: https://doi.org/10.1007/s40593-021-00239-1 (Accessed 1 July 2024).

6. Buckingham Shum, S. (2022) 'What principles should govern UTS' use of analytics and artificial intelligence to improve teaching and learning for all, while minimising the possibility of harmful outcomes?' [Blog], 31 January 2022. Available at: https://simon.buckinghamshum.net/2022/01/deliberative-democracy-for-edtech-ethics/ (Accessed 1 July 2024).

7. Burriss, S. K. and EngageAI Staff (2023) 'Redesigning an AI Bill of Rights For & With Young People', EngageAI Institute Nexus Blog. Available at: https://engageai.org/2023/10/18/redesigning-an-ai-bill-of-rights-for-with-young-people/ (Accessed 1 July 2024).
Resources

• Guidance for policy makers: Child Online Safety Global Toolkit. Available at: https://childonlinesafetytoolkit.org/ (Accessed 1 July 2024).

• 5Rights Foundation is a London-based non-governmental, non-profit charitable organisation that works towards creating a digital environment fit for children and young people. The website includes resources published around three main areas of work: data and privacy, child-centred design and children's rights: 5Rights Foundation. Available at: https://5rightsfoundation.com/ (Accessed 1 July 2024).

• Blog post by Gazal Shekhawat and Sonia Livingstone, who call attention to the interests of children in the global AI arms race and provide five key sources of AI guidance: AI and children's rights: a guide to the transnational guidance. Available at: https://blogs.lse.ac.uk/parenting4digitalfuture/2023/11/08/ai-rights/ (Accessed 1 July 2024).

• European Union Artificial Intelligence Act: EU AI Act: first regulation on artificial intelligence. Available at: https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence (Accessed 1 July 2024).

• Essays by various authors, published by 5Rights Foundation and the Digital Futures Commission: Education Data Futures: Critical, Regulatory, and Practical Reflections. Available at: https://educationdatafutures.digitalfuturescommission.org.uk/ (Accessed 1 July 2024).

• Curricular resources from Day of AI: AI is for everyone. Available at: https://www.dayofai.org/teacher-pages/ai-blueprint-bill-of-rights (Accessed 1 July 2024).

• UNICEF's manifesto: The Case for Better Governance of Children's Data: A Manifesto. Available at: https://www.unicef.org/globalinsight/media/1741/file/UNICEF%20Global%20Insight%20Data%20Governance%20Manifesto.pdf (Accessed 1 July 2024).

• A report presenting the views and perspectives of young people on AI and its impact on children's rights: Chaudron, S. and Di Gioia, R. (2022) 'Artificial Intelligence and the Rights of the Child – Young People's Views and Perspectives', EUR 31094 EN, Publications Office of the European Union, Luxembourg, ISBN 978-92-76-53101-2, doi:10.2760/272950, JRC129099. Available at: https://publications.jrc.ec.europa.eu/repository/handle/JRC129099 (Accessed 1 July 2024).
AI-enhanced multimodal writing
Extending multimodal authoring and developing critical reflection

Introduction

As educators, we understand that meaning-making is not limited to one mode of expression; we sing, draw, speak, write, move, and gesture as we learn and communicate. Learners are more likely to understand new concepts when multiple modes of communication – sound, visuals, movement, and texts – are used. This perspective on learning and meaning-making is not new; however, digital technologies and generative AI (GenAI) have made it possible to use multiple modes for composing narrative, argument, explanation, interpretation, and other forms traditionally assigned to print texts. Generative AI – a form of artificial intelligence that can generate images, sounds, texts, videos, and other modes based on user prompts – is a tool that can support composers of multimodal texts.

Thinking differently about arguments

In a professional development activity, a group of teachers created a multimodal project arguing for a new way of dealing with refuse near a local riverbank, incorporating images, videos, voice-over commentaries, statistics, and music. They did this to see whether they could set a similar task for their students. The teachers explained that, while effective, this form of argument required them to think differently, and they needed more time to complete the task. One of the teachers explained: 'I thought using image and sound in my essays would be easy, but I found that using those media required that I think and plan in ways I didn't when I wrote my usual essays.' The key message from this experience is that educators should consider the purpose, processes and tools, as well as new ways of working, when they design learning tasks that ask learners to create multimodal compositions. New generative AI tools may be able to support both educators designing multimodal tasks and learners creating them.

The use of multiple modes of communication is already part of adolescents' out-of-school lives; they use video, image, text, sound, and animations to create stories and communicate. The New London Group1 identified five modes of meaning-making:

1. oral and written language
2. still and moving images
3. music and sound effects
4. facial expression and body language
5. spatial layout and organisation of objects.

Previous technologies already afforded combinations of these modes to make meaning, but multimodal AI increases both production speed and access to those modes. Earlier digital tools often required some skill with the mode, such as artistic or musical skill to develop a new song, whereas new AI tools allow students with little skill in these areas to create. This speeds up both design and revision, making the modes more accessible. Nevertheless, the use of AI requires a new and more sophisticated set of skills as students consider how to prompt the AI and how the modes work together.

Multimodal composing is the process of meaning-making across different modalities in digital formats, such as videos, collages, and soundscapes for immersive environments. These compositions incorporate more than one mode, so meaning is constructed within and across the modes. For example, consider the case of an elementary school student who wrote a story about finding a puppy. She included images of a skinny, hungry puppy in a box, accompanied by sad music.
However, at the end, she included pictures of the puppy playing with toys, accompanied by happy, upbeat music. The images provide visual impact while the music adds emotional weight to the written words. The student used generative AI to create the images so that the reader could see what she was imagining. Using AI provided opportunities to create multiple layers of meaning in the story.
The student noted that she had described the image she wanted so that the AI could provide it. She explained that she had to revise her prompt several times because the outputs the AI produced were not what she had imagined, or because the image of the puppy it created had too many legs. Additionally, both students and teachers identified prompt engineering and revising (the process of crafting prompts for the AI and amending them) as unexpectedly important parts of the composing process. As one teacher explained, 'I realised that I had to help my students learn how to write and revise prompts as part of multimodal composing with AI.'

Considering classroom use

Teacher planning and reasoning

Educators first need to determine their instructional goals for incorporating AI into multimodal writing tasks and what tools are available for students to use. They should also evaluate the critical and ethical use of the AI tools. As they incorporate AI in multimodal writing instruction, they should consider the task their students will be asked to complete, its instructional purpose, how the AI tool might support learning, and what other supports learners will need to use the AI tool in completing their task. The following questions provide helpful guidance:

Task
1. What are the skills embedded in the task? How might AI support the development of those skills?
2. What supports do students need to accomplish this task? Are these supports that AI might be able to provide?

Tools
3. What multimodal elements does AI provide? What supports might students need to use these elements effectively?
4. What ethical considerations need to be discussed in advance of using AI?

Reflection
5. Does AI support the communicative purpose of the task?
6. How might reflection on the composing process (e.g., revisions of prompts) illustrate student thinking?
7. How might the iterative process of revising prompts and discussing choices help students move beyond a simple use of AI as a tool for cutting corners or cheating, towards thinking of it as a tool to support their learning?

Recursive process of prompt writing and revising

Crafting a prompt for a composing task with AI is a component of the writing process. A prompt (the writer's text-based input for the AI) is a tool that writers carefully craft to achieve their intended results from the AI. For example, in a university class, undergraduate students might examine literature by using image and text. The students can use AI tools to create images that provide commentary about an intriguing aspect of the text. They can then reflect on how the AI-generated image might be improved and revise their initial prompt in order to create a better image. Following this pattern, a student analysed Ophelia, the AI character in The Infinity Courts2, focusing on the character's motivation. His first prompt, 'Depict a mind trapped in a machine', produced an unsatisfactory output, so he amended his prompt to say, 'Depict an AI trapped and suffering inside a machine.' He revised his prompt multiple times to create an image that provided the commentary he wanted to make. The generative AI art tool allowed recursive critical thought and revision of his artwork, and the engineering of the prompt revealed the depth of his understanding of the AI character.

[Image generated by the prompt 'Depict a mind trapped in a machine'.]

For a different class, an engineering student designed a magical waste disposal system for the Harry Potter Wizarding World, using generative AI image creators to design the individual components and show what the unit looked like once installed at Hogwarts. He noted that he revised his prompts more than ten times for each piece. Capturing the prompts and revision choices as part of the assignment allowed the teacher to gain a better understanding of the students' thinking (see the sketch below for one way such a history might be captured).

Retelling and Remixing Stories

In another university class, students were asked to retell a story by creating a multimodal composition through the use of generative AI. Before composing, students learned that a retelling is a new version of a story that can include a variety of elements, such as updating the text for a new time period or setting; extending the text with new information, perspectives (genders, ethnicity, language, etc.), characters or settings; and/or remixing the genre of the original text and adding multiple modes to it (visuals, sounds, text, and movement). Students selected from a list of free AI tools for different modes (e.g., ChatGPT, Stable Diffusion, Craiyon, Riffusion, Kapwing). The teachers created short tutorials on how to use each tool, shared examples of remixes with AI, and integrated a workshop model for students to share their work.

A theme across the student experiences was how AI shaped the creative possibilities of the multimodal design. Some AI tools limited what the composers could create while others opened up new possibilities. One student felt she had to sacrifice her vision of creating a song because the free music generators did not provide the options she needed, including being able to download an entire song. She adapted by creating multiple short snippets of songs. Another student described how the AI took her in new directions that changed her final retelling of Romeo and Juliet. She had originally thought AI would limit students' creativity; however, her experience made her realise that the use of AI could in fact boost creativity, as it helped her experiment with different visions. Across both examples, students developed critical reflection as an aspect of the work they produced and in response to the multiple tools they used.

Challenges, barriers, limitations

While AI is an exciting new type of digital tool, there are challenges and potential barriers to its use. Most AI tools require consistent, reliable online access. Not all schools and students have reliable, affordable internet connections, potentially extending the existing digital divide. Additionally, there are potential ethical issues, as there may be biases in the algorithms that novice users of AI are not aware of or cannot control. AI tools may not disclose their data selection process or the corpus of work they were trained on to create the works they produce. If an AI does not provide the provenance of its images, there is a potential for copyright issues. AI can infringe on the digital copyrights of human artists by undercutting their work or using it without authorisation3. If an AI creates an image in response to the prompt 'create a painting in the style of' a famous or working artist, or 'write a short story in the style of J. K. Rowling', it could create ethical issues for those working artists and writers.
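Building on the classroom examples above, the prompt-and-revision history that teachers found so informative can be captured with a simple log. The sketch below is a minimal illustration assuming the OpenAI Python client and image model; the students described here used other tools, and any image generator with an API could be substituted.

```python
# A minimal sketch of logging a student's prompt revisions for assessment.
# Assumes the OpenAI Python client and image model; these are placeholder
# choices, not the tools used in the classroom examples above.
from openai import OpenAI

client = OpenAI()
revision_log = []  # (prompt, image_url) pairs for the teacher to review

prompt = input("Describe the image you want: ")
while prompt.strip():
    result = client.images.generate(model="dall-e-3", prompt=prompt, n=1)
    revision_log.append((prompt, result.data[0].url))
    print(f"Draft {len(revision_log)}: {result.data[0].url}")
    prompt = input("Revise the prompt (leave blank to finish): ")

# The log itself becomes part of the submitted assignment.
for i, (p, url) in enumerate(revision_log, start=1):
    print(f"{i}. {p} -> {url}")
```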
Conclusions

Generative AI offers a great many multimodal communication options, enabling authors to produce multimodal pieces more quickly and to expand their meaning-making. Multimodal AI tools allow authors to focus on the product rather than on the technical skills needed for specific multimedia elements. Multimodal composing with AI requires reflection on, and critique of, both process and product as authors engage with a new composing tool. As generative AI continues to develop, authors will continue to find ways to achieve their creative vision within the confines of the tools, as well as critically examining the ethical implications of AI use. The ethical use of this new innovation requires teachers and learners to explore, create, critique, and reflect on the process of composing and on the product. This cycle of exploration and reflection will develop users as more critical and adaptable practitioners of generative multimodal AI, allowing for new composing possibilities.
References

1. A classic article on multiliteracies, from The New London Group: Cazden, C., Cope, B., Fairclough, N., Gee, J. et al. (1996) 'A pedagogy of multiliteracies: Designing social futures', Harvard Educational Review, 66(1), pp. 60–92. Available at: https://www.sfu.ca/~decaste/newlondon.htm (Accessed 4 July 2024).

2. A science fiction/fantasy book: Bowman, A. D. (2021) The Infinity Courts. Simon & Schuster.

3. An article on the Ars Technica website: Edwards, B. (2024) 'Billie Eilish, Pearl Jam, 200 artists say AI poses existential threat to their livelihoods', Ars Technica. Available at: https://arstechnica.com/information-technology/2024/04/billie-eilish-pearl-jam-200-artists-say-ai-poses-existential-threat-to-their-livelihoods/ (Accessed 4 July 2024).

Resources

• A free AI video creation platform: Kapwing. Available at: https://www.kapwing.com/6540f8469e9045e0bd5db2e0/studio/editor (Accessed 4 July 2024).

• A free AI art creation tool: Stable Diffusion Online. Available at: https://stablediffusionweb.com/#ai-image-generator (Accessed 4 July 2024).

• An example of AI generating incorrect images – Gemini AI producing historical inaccuracies: 'Google's hidden AI diversity prompts lead to outcry over historically inaccurate images', article by Benj Edwards on Ars Technica. Available at: https://arstechnica.com/information-technology/2024/02/googles-hidden-ai-diversity-prompts-lead-to-outcry-over-historically-inaccurate-images/ (Accessed 4 July 2024).

• Demo of an AI song generating tool, from a team in San Francisco: Riffusion – Create any music you imagine. Available at: https://www.riffusion.com (Accessed 4 July 2024).
Intelligent textbooks
Making reading engaging, 'smart' and comprehensive

Introduction

Intelligent textbooks are digital or web-based books that combine content reading with interactive elements such as automatic question answering. Such online books can ask readers questions to prompt deep learning in a section highlighted by the reader1. With the advent of artificial intelligence (AI), intelligent texts have become 'smarter', enabling personalised forms of learning by tracking reader behaviour, such as page navigation and dwell time (the amount of time a reader spends on a page), and by adapting content in real time to meet the needs of the reader, for example by raising questions about the content. Intelligent textbooks also allow for greater interactivity and engagement between the reader and the text through generative exercises that help learners generate knowledge and ideas. They hold great promise for making education and training more accessible, affordable, efficient, and adaptable to individual learning needs2.

The use of intelligent textbooks is timely. Over the last two decades, a shift from printed textbooks to digital textbooks has been observed. Digital textbooks became interactive through technologies such as multimedia, digital glossaries, self-assessments, and social annotation. They became more widely used, especially following the shift to online learning due to COVID-19. As a result, learners are more willing to study online and are more knowledgeable about how to use digital tools and content3.

Generation Effects

An important theoretical basis for intelligent textbooks is the 'generation effect': remembering improves when ideas are generated by a person's mind rather than simply read4. Studies have found that generation effects are a stable construct in learning5 and that generating ideas, as compared to passively processing them, can increase learning6. Intelligent textbooks, unlike traditional paper texts and ordinary digital texts, can help students generate knowledge while reading by making the reading process interactive. The simplest approach is for intelligent texts to support 'read to write' approaches to learning (reading and analysing exemplary texts in preparation for writing). However, intelligent textbooks with these supports have only recently become feasible, with the advent of large language models (LLMs) that can read text and provide opportunities for AI-generated feedback and dialogues.

For example, the Intelligent Textbooks for Enhanced Lifelong Learning (iTELL) framework generates intelligent texts that require learners to construct responses to AI-generated questions and to produce summaries of the texts they have read. Students are given the opportunity to revise their responses based on AI-generated feedback, providing opportunities for deliberate practice. Within iTELL, learners can also use a 'think aloud' protocol to explain their understanding of the text, using self-explanation dialogues that are prompted and mediated by generative AI. In this way, iTELL transforms reading into an interactive learning experience7.
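To make this mechanism concrete, the sketch below shows how an intelligent textbook might score a constructed response with an LLM and return revision-oriented feedback. It is an illustration in the spirit of iTELL rather than its actual implementation; it assumes the OpenAI Python client, and the model name and 0-2 rubric are placeholder assumptions.

```python
# A minimal sketch in the spirit of iTELL's constructed responses, not its
# actual implementation. Assumes the OpenAI Python client; the model name
# and the 0-2 rubric are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def score_constructed_response(section: str, question: str, answer: str) -> str:
    """Ask an LLM to judge a short answer against the section it came from."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a reading tutor. Judge whether the student's "
                    "short answer is supported by the textbook section. "
                    "Reply with a score from 0 to 2 and one sentence of "
                    "feedback that prompts revision without giving the "
                    "answer away."
                ),
            },
            {
                "role": "user",
                "content": (
                    f"Section:\n{section}\n\nQuestion: {question}\n"
                    f"Student answer: {answer}"
                ),
            },
        ],
    )
    return response.choices[0].message.content

print(score_constructed_response(
    "Photosynthesis converts light energy into chemical energy stored in glucose.",
    "What does photosynthesis convert, and into what?",
    "It turns sunlight into food for the plant.",
))
```

The key design choice here, which mirrors the approach described above, is that the feedback prompts revision rather than revealing the answer, preserving the generation effect.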
A few studies have been published about student experiences with intelligent textbooks. For instance, business students used an interactive and adaptive digital textbook for three semesters and assessed its impact on their learning experience8. The textbook could track the topics mastered by students and suggest areas needing further practice. It also offered students reports documenting their progress. Students reported satisfaction with the textbook and indicated that it helped them master their class work. They also rated it more favourably than hardcopy textbooks because of the adaptive features that helped them understand key concepts.

Recent studies with iTELL also indicate increased efficiency in learning. For example, iTELL asks students to produce 'constructed responses' (i.e., short answers to short questions) for segments of text, based on questions automatically derived using generative AI. The constructed responses are automatically scored using LLMs, and feedback is provided to users immediately. Over 80% of students using iTELL reported that the constructed responses were relevant and helped with learning. The top student responses indicated that the automatically generated constructed-response items were informative, helpful, and supportive9.

A recent study with iTELL in an introductory computer science class demonstrates the potential for intelligent textbooks to increase learning gains over digital texts10. The intelligent text was developed from an introductory programming textbook. This version of iTELL required students to complete constructed-response items and summaries, both of which were scored automatically by LLMs that provided qualitative feedback to students. Within the class, 121 students elected to use the iTELL version of the textbook and 356 used a digital text. Survey results indicated that students using iTELL responded positively to the constructed-response and summary items and felt both helped them learn. An analysis of pre-test and post-test scores for students who used iTELL and those who did not showed small learning gains for the iTELL students.

[Image: Screenshot of the iTELL interface.]
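Finally, the behaviour tracking and real-time adaptation described in the introduction to this section can be pictured with a small sketch. The event fields, thresholds, and adaptation rules below are illustrative assumptions rather than details of any system studied above.

```python
# A minimal sketch of real-time adaptation driven by reading behaviour.
# The event fields, thresholds, and rules are illustrative assumptions,
# not taken from any system studied above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PageEvent:
    page: str
    dwell_seconds: float                 # time spent on the page
    quiz_correct: Optional[bool] = None  # None if the page had no question

def next_action(event: PageEvent, expected_seconds: float) -> str:
    """Decide how the textbook adapts after the reader leaves a page."""
    if event.quiz_correct is False:
        return "offer a reworded explanation and a second question"
    if event.dwell_seconds < 0.3 * expected_seconds:
        return "insert a constructed-response question before moving on"
    return "continue to the next section"

# A reader who skimmed the 'loops' page gets extra practice.
print(next_action(PageEvent("loops", dwell_seconds=12.0), expected_seconds=90.0))
```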