Nicola Fern

Digital learning professional, PhD student


AIED2025 Interactive Event Session

Implementation of a RAG-LLM Contextual Agent for iVR Learning

Download the submitted article as a PDF: AIED Presentation.pdf
Table Of Contents
  1. Implementation of a RAG-LLM Contextual Agent for iVR Learning
  2. Video overview
  3. System aims
  4. System design diagrams
    • Component overview
    • Context and response flows
  5. Article references

Video overview

System aims

  1. To allow learners to initiate a chat session and pose questions using their
    voice.
  2. To allow both textual and spoken responses from the LLM.
  3. To give learners control to pause and replay the LLM's speech.
  4. To integrate with a customised Retrieval-Augmented Generation (RAG) model
    for persistent context and knowledge.
  5. To incorporate ad hoc context within the LLM prompt, depending on the game
    state or the specific object the learner is asking about.
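Aims 2 and 3 imply a playback queue that tracks where speech has got to, so the learner can pause, resume, or hear a passage again. The system itself runs against Wit.ai's TTS rather than in Python, but as an illustration only, a minimal sketch of such a queue (all names here are hypothetical, not the project's actual classes) might look like:

```python
from dataclasses import dataclass, field

@dataclass
class TTSQueueManager:
    """Hypothetical sketch: queue LLM response chunks for speech
    playback with pause and replay controls (aims 2-3)."""
    chunks: list = field(default_factory=list)  # sentences awaiting playback
    position: int = 0                           # index of the next chunk to speak
    paused: bool = False

    def enqueue_response(self, llm_response: str) -> None:
        # Split the response into sentence-sized chunks so each can be
        # synthesised and replayed independently.
        self.chunks.extend(
            s.strip() + "." for s in llm_response.split(".") if s.strip()
        )

    def next_chunk(self):
        # Return the next chunk to hand to the TTS engine, or None if
        # playback is paused or the queue is exhausted.
        if self.paused or self.position >= len(self.chunks):
            return None
        chunk = self.chunks[self.position]
        self.position += 1
        return chunk

    def pause(self) -> None:
        self.paused = True

    def resume(self) -> None:
        self.paused = False

    def replay_last(self) -> None:
        # Step back one chunk so it is spoken again on the next tick.
        self.position = max(0, self.position - 1)
```

Splitting responses into sentence-sized chunks also means synthesis can begin before the full LLM response has been spoken, which keeps perceived latency down.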

System design diagrams

Component overview

The system uses an external LLM service (Eden AI) to process queries, chosen for its flexibility and ease of use, which made it straightforward to test different models. A local model was not considered: the application is intended for live teaching on standalone headsets, where reliable unsupervised use is crucial. The system also integrates with Wit.ai to provide dictation and text-to-speech (TTS) functions.

A block diagram titled 'RAG AI System Structure' displays the architecture of the conversational AI system, organised into five main sections:

  1. Co-ordinator: the 'LLM Chat Manager', responsible for event registration,
    chat request processing, and conversation flow coordination.
  2. User Interface: 'Chat UI Manager' and 'Rate Limit Status Display',
    handling the chat UI and token-tracker feedback.
  3. Input/Output: 'Wit.ai (Dictation)' with the 'Dictation Manager', and
    'Wit.ai (Text-to-Speech)' with the 'TTS Manager' and 'TTS Queue Manager',
    which process transcriptions, split LLM responses, and control playback.
  4. Processing: 'Context Manager' and 'LLM Service' (EdenAI RAG Chatbot),
    which manage context sets and LLM communications.
  5. Rate Management: 'API Rate Limiter Service' and 'Token Bucket Rate
    Limiter', which keep dictation and TTS calls within Wit.ai limits.

Arrows and grouping brackets illustrate the relationships and responsibilities between modules.
Fig. 1 System structure
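The Rate Management section names a token bucket, a standard scheme in which each API call spends tokens from a bucket that refills at a fixed rate, so short bursts are allowed but the average request rate stays within the provider's limits. As an illustrative sketch only (not the project's actual implementation), the core logic is:

```python
import time

class TokenBucketRateLimiter:
    """Illustrative token bucket: calls spend tokens; tokens refill at a
    fixed rate, keeping dictation/TTS traffic within provider limits."""

    def __init__(self, capacity: float, refill_per_sec: float,
                 clock=time.monotonic):
        self.capacity = capacity
        self.tokens = capacity          # start with a full bucket
        self.refill_per_sec = refill_per_sec
        self.clock = clock              # injectable clock for testing
        self.last = clock()

    def _refill(self) -> None:
        # Credit tokens for the time elapsed since the last check,
        # capped at the bucket's capacity.
        now = self.clock()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now

    def try_acquire(self, cost: float = 1.0) -> bool:
        # Spend `cost` tokens and allow the request, or refuse it
        # without blocking if the bucket is too empty.
        self._refill()
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

A non-blocking `try_acquire` suits an interactive VR app: a refused request can be queued or surfaced in the 'Rate Limit Status Display' rather than stalling the frame loop.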

Context and response flows

The RAG system holds the base chatbot instructions and persistent context, while activity-specific instructions and object-specific context are prepended to the student's query at runtime.

A flowchart titled 'RAG AI Implementation Diagrams – Context and Response Flows' illustrates the interactions between components in API access and the in-app workflow. A key at the bottom states that pink shapes are 'Instructor Determined', blue 'Student Determined', and yellow 'AI Determined'.

  1. API Access section (left): 'Persistent Context' (pink) and 'Role
    Instructions' (pink) both feed into the 'RAG AI' module.
  2. In-App section (right): the 'Student – Generates Query' box (blue) points
    to both the 'Context Manager' and the central 'LLM Chat Integration' box.
  3. The 'Context Manager' also receives arrows from the 'Instructions' (pink)
    and 'Object-specific context' (pink) modules.
  4. The 'LLM Chat Integration' box (labelled 'Combines Instruction / Context /
    Query') points to 'Response output (Text/TTS)' (yellow).
  5. A dotted arrow connects 'RAG AI' to 'LLM Chat Integration', indicating
    that the chat manager sends the combined prompt and receives the response.
Fig. 2 Diagram showing the context and response flows of the system, highlighting the input/output control for each element.
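The 'Combines Instruction / Context / Query' step can be sketched as a simple prompt-assembly function. This is an illustration under stated assumptions, not the project's code: the function name and layout are hypothetical, and the persistent context and base instructions are assumed to live in the RAG service itself, so only the per-query parts are assembled in-app.

```python
def build_prompt(role_instructions: str,
                 activity_instructions: str,
                 object_context: str,
                 student_query: str) -> str:
    """Hypothetical sketch of the 'LLM Chat Integration' step:
    instructor-determined instructions and ad hoc object context are
    prepended to the student's query before it reaches the RAG-backed LLM."""
    parts = [role_instructions, activity_instructions]
    if object_context:
        # Only present when the learner is asking about a specific object.
        parts.append(f"Context: {object_context}")
    parts.append(f"Student question: {student_query}")
    # Drop any empty sections and separate the rest with blank lines.
    return "\n\n".join(p for p in parts if p)
```

Keeping the instructor-determined parts outside the student's reach matches the colour coding in Fig. 2: only the final line of the assembled prompt is student-determined.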

Article references

Ding, S., & Chen, Y. (2025). RAG-VR: Leveraging retrieval-augmented generation for 3D question answering in VR environments. https://www.semanticscholar.org/paper/af54e8314d03df54d1e1857096b053692e325cbc

Izquierdo-Domenech, J., Linares-Pellicer, J., & Ferri-Molla, I. (2024). Virtual reality and language models, a new frontier in learning. International Journal of Interactive Multimedia and Artificial Intelligence, 8(5), 46. https://doi.org/10.9781/ijimai.2024.02.007

Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis, M., Yih, W., Rocktäschel, T., Riedel, S., & Kiela, D. (2020). Retrieval-augmented generation for knowledge-intensive NLP tasks. Proceedings of the 34th International Conference on Neural Information Processing Systems, 9459–9474.

Marquardt, A., Golchinfar, D., & Vaziri, D. (2025). RAGatar: Enhancing LLM-driven avatars with RAG for knowledge-adaptive conversations in virtual reality. 1604–1605. https://doi.org/10.1109/VRW66409.2025.00447

Maslych, M., Pumarada, C., Ghasemaghaei, A., & LaViola, J. J. (2024). Takeaways from applying LLM capabilities to multiple conversational avatars in a VR pilot study. ArXiv, abs/2501.00168. https://doi.org/10.48550/arXiv.2501.00168

Németh, R., Tátrai, A., Szabó, M., & Tamási, Á. (2024). Using a RAG-enhanced large language model in a virtual teaching assistant role: Experiences from a pilot project in statistics education. Hungarian Statistical Review, 7(2), 3–27. https://doi.org/10.35618/hsr2024.02.en003

Prasongpongchai, T., Pataranutaporn, P., Kanapornchai, C., Lapapirojn, A., Ouppaphan, P., Winson, K., Lertsutthiwong, M., & Maes, P. (2024). Interactive AI-generated virtual instructors enhance learning motivation and engagement in financial education. In A. M. Olney, I.-A. Chounta, Z. Liu, O. C. Santos, & I. I. Bittencourt (Eds.), Artificial Intelligence in Education. Posters and Late Breaking Results, Workshops and Tutorials, Industry and Innovation Tracks, Practitioners, Doctoral Consortium and Blue Sky (Vol. 2151, pp. 217–225). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-64312-5_26

Thway, M., Recatala-Gomez, J., Lim, F. S., Hippalgaonkar, K., & Ng, L. W. T. (2024). Battling Botpoop using GenAI for higher education: A study of a retrieval augmented generation chatbot's impact on learning. ArXiv, abs/2406.07796. https://doi.org/10.48550/arXiv.2406.07796

Zhang, R., Zou, D., & Cheng, G. (2023). A review of chatbot-assisted learning: Pedagogical approaches, implementations, factors leading to effectiveness, theories, and future directions. Interactive Learning Environments, 32(8), 4529–4557. https://doi.org/10.1080/10494820.2023.2202704
2author%22%2C%22firstName%22%3A%22Maung%22%2C%22lastName%22%3A%22Thway%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jose%22%2C%22lastName%22%3A%22Recatala-Gomez%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Fun%20Siong%22%2C%22lastName%22%3A%22Lim%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Kedar%22%2C%22lastName%22%3A%22Hippalgaonkar%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Leonard%20W.%20T.%22%2C%22lastName%22%3A%22Ng%22%7D%5D%2C%22abstractNote%22%3A%22%22%2C%22date%22%3A%222024%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.48550%5C%2FarXiv.2406.07796%22%2C%22ISSN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.semanticscholar.org%5C%2Fpaper%5C%2F79a6d1bd9dfad6c858369015cae56cf1e1ad3f9d%22%2C%22collections%22%3A%5B%22D2WGRHRC%22%2C%224B93DUF2%22%5D%2C%22dateModified%22%3A%222025-04-23T15%3A43%3A07Z%22%7D%7D%2C%7B%22key%22%3A%22MGRNGC8L%22%2C%22library%22%3A%7B%22id%22%3A5189002%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22edenai%22%2C%22parsedDate%22%3A%222025-03-17%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3Bedenai.%20%282025%29.%20%26lt%3Bi%26gt%3Bedenai%5C%2Funity-plugin%26lt%3B%5C%2Fi%26gt%3B.%20Eden%20AI.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fgithub.com%5C%2Fedenai%5C%2Funity-plugin%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fgithub.com%5C%2Fedenai%5C%2Funity-plugin%26lt%3B%5C%2Fa%26gt%3B%20%28Original%20work%20published%202023%29%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22computerProgram%22%2C%22title%22%3A%22edenai%5C%2Funity-plugin%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22programmer%22%2C%22name%22%3A%22edena
i%22%7D%5D%2C%22abstractNote%22%3A%22The%20Unity%20EdenAI%20Plugin%20simplifies%20integrating%20AI%20tasks%20like%20text-to-speech%2C%20chatbots%20and%20other%20generative%20AI%20into%20Unity%20applications%20using%20the%20EdenAI%20API.%22%2C%22versionNumber%22%3A%22%22%2C%22date%22%3A%222025-03-17T18%3A42%3A15Z%22%2C%22system%22%3A%22%22%2C%22company%22%3A%22Eden%20AI%22%2C%22programmingLanguage%22%3A%22C%23%22%2C%22ISBN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fgithub.com%5C%2Fedenai%5C%2Funity-plugin%22%2C%22collections%22%3A%5B%227ME8Z5N3%22%5D%2C%22dateModified%22%3A%222025-04-01T17%3A37%3A43Z%22%7D%7D%2C%7B%22key%22%3A%226DRDGIID%22%2C%22library%22%3A%7B%22id%22%3A5189002%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Ortega%5Cu2010Ochoa%20et%20al.%22%2C%22parsedDate%22%3A%222023-12-06%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BOrtega%26%23x2010%3BOchoa%2C%20E.%2C%20Arguedas%2C%20M.%2C%20%26amp%3B%20Daradoumis%2C%20T.%20%282023%29.%20Empathic%20pedagogical%20conversational%20agents%3A%20A%20systematic%20literature%20review.%20%26lt%3Bi%26gt%3BBritish%20Journal%20of%20Educational%20Technology%26lt%3B%5C%2Fi%26gt%3B%2C%20%26lt%3Bi%26gt%3B55%26lt%3B%5C%2Fi%26gt%3B%283%29%2C%20886%26%23x2013%3B909.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1111%5C%2Fbjet.13413%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1111%5C%2Fbjet.13413%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Empathic%20pedagogical%20conversational%20agents%3A%20A%20systematic%20literature%20review%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%
3A%22Elvis%22%2C%22lastName%22%3A%22Ortega%5Cu2010Ochoa%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Marta%22%2C%22lastName%22%3A%22Arguedas%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Thanasis%22%2C%22lastName%22%3A%22Daradoumis%22%7D%5D%2C%22abstractNote%22%3A%22Artificial%20intelligence%20%28AI%29%20and%20natural%20language%20processing%20technologies%20have%20fuelled%20the%20growth%20of%20Pedagogical%20Conversational%20Agents%20%28PCAs%29%20with%20empathic%20conversational%20capabilities.%20However%2C%20no%20systematic%20literature%20review%20has%20explored%20the%20intersection%20between%20conversational%20agents%2C%20education%20and%20emotion.%20Therefore%2C%20this%20study%20aimed%20to%20outline%20the%20key%20aspects%20of%20designing%2C%20implementing%20and%20evaluating%20these%20agents.%20The%20data%20sources%20were%20empirical%20studies%2C%20including%20peer-reviewed%20conference%20papers%20and%20journal%20articles%2C%20and%20the%20most%20recent%20publications%2C%20from%20the%20ACM%20Digital%20Library%2C%20IEEE%20Xplore%2C%20ProQuest%2C%20ScienceDirect%2C%20Scopus%2C%20SpringerLink%2C%20Taylor%20%26amp%3B%20Francis%20Online%2C%20Web%20of%20Science%20and%20Wiley%20Online%20Library.%20The%20remaining%20papers%20underwent%20a%20rigorous%20quality%20assessment.%20A%20filter%20study%20meeting%20the%20objective%20was%20based%20on%20keywords.%20Comparative%20analysis%20and%20synthesis%20of%20results%20were%20used%20to%20handle%20data%20and%20combine%20study%20outcomes.%20Out%20of%201162%20search%20results%2C%2013%20studies%20were%20selected.%20The%20results%20indicate%20that%20agents%20promote%20dialogic%20learning%2C%20proficiency%20in%20knowledge%20domains%2C%20personalized%20feedback%20and%20empathic%20abilities%20as%20essential%20design%20principles.%20Most%20implementations%20employ%20a%20quantitative%20approach%2C%20and%20two%20variables%20are%20used%20for%20evaluation.%20Feedback%20types%20play%20a%20vital%20role%2
0in%20achieving%20positive%20results%20in%20learning%20performance%20and%20student%20perceptions.%20The%20main%20limitations%20and%20gaps%20are%20the%20time%20range%20for%20literature%20selection%2C%20the%20level%20of%20integration%20of%20the%20empathic%20field%20and%20the%20lack%20of%20a%20detailed%20development%20stage%20report.%20Moreover%2C%20future%20directions%20are%20the%20ethical%20implications%20of%20agents%20operating%20beyond%20scheduled%20learning%20times%20and%20the%20adoption%20of%20Responsible%20AI%20principles.%20In%20conclusion%2C%20this%20review%20provides%20a%20comprehensive%20framework%20of%20empathic%20PCAs%2C%20mostly%20in%20their%20evaluation.%20The%20systematic%20review%20registration%20number%20is%20osf.io%5C%2F3xk6a.Practitioner%20notesWhat%20is%20already%20known%20about%20this%20topic%20Emotions%20play%20a%20pivotal%20role%20in%20shaping%20the%20interaction%20process%2C%20making%20it%20essential%20to%20consider%20them%20when%20designing%20methodological%20strategies%20or%20learning%20tools.%20Empathic%20Pedagogical%20Conversational%20Agents%20%28PCAs%29%20have%20emerged%20as%20a%20crucial%20approach%20for%20enhancing%20and%20personalizing%20the%20learning%20experience%20%2824%5C%2F7%29%20for%20pupils%20and%20supporting%20human%20teachers%20in%20their%20teaching%20process.%20Despite%20the%20creation%20of%20numerous%20empathic%20PCAs%2C%20there%20is%20a%20scarcity%20of%20Systematic%20Literature%20Reviews%20%28SLRs%29%20on%20their%20application%20in%20the%20educational%20field%2C%20particularly%20concerning%20the%20integration%20of%20emotional%20abilities%20in%20combination%20with%20the%20competencies%20of%20each%20subject.%20What%20this%20paper%20adds%20It%20offers%20new%20insights%20into%20the%20design%20principles%20underlying%20the%20integration%20of%20the%20empathic%20field.%20It%20reviews%20different%20approaches%20for%20incorporating%20students%26%23039%3B%20prior%20knowledge%20in%20real%20time.%20It%20provides%20a%20comprehensive%20an
d%20up-to-date%20overview%20of%20the%20research%20designs%20used%20for%20implementation%2C%20including%20quantitative%2C%20qualitative%20and%20mixed%20methods.%20It%20examines%20the%20factors%20that%20influence%20the%20effectiveness%20of%20empathic%20PCA%20in%20teaching%20and%20learning.%20It%20evaluates%20the%20types%20of%20feedback%20that%20enhance%20the%20impact%20of%20the%20empathic%20field%20on%20learning%20outcomes.%20Implications%20for%20practice%20and%5C%2For%20policy%20It%20is%20crucial%20to%20grasp%20the%20topics%20that%20this%20paper%20introduces%20in%20order%20to%20effectively%20integrate%20new%20learning%20tools%20into%20any%20context.%20Techno-pedagogical%20designers%20seeking%20to%20gain%20insights%20into%20empathic%20PCAs%20will%20find%20immense%20value%20in%20this%20SLR%2C%20as%20it%20comprehensively%20covers%20each%20stage%20of%20the%20process.%20For%20future%20research%20endeavours%2C%20this%20study%20offers%20a%20wealth%20of%20ideas%20to%20draw%20upon%2C%20enabling%20researchers%20to%20address%20the%20challenges%20outlined%20and%20explore%20new%20avenues%20of%20investigation.%22%2C%22date%22%3A%222023-12-6%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1111%5C%2Fbjet.13413%22%2C%22ISSN%22%3A%221467-8535%22%2C%22url%22%3A%22%22%2C%22collections%22%3A%5B%22JIGNPJL6%22%2C%22XNETGP47%22%5D%2C%22dateModified%22%3A%222025-04-01T17%3A11%3A49Z%22%7D%7D%2C%7B%22key%22%3A%22ZCZGX2AY%22%2C%22library%22%3A%7B%22id%22%3A5189002%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Meta%22%2C%22parsedDate%22%3A%222024-08-19%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BMeta.%20%282024%2C%20August%2019%29.%20%26lt%3Bi%26gt%3BVoice%20SDK%20Overview%26lt%3B%5C%2Fi%26gt%3B.%20Meta%20Horizon%20OS%20Developers.%20%26lt%3Ba%20class%3D%26%2
3039%3Bzp-ItemURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdevelopers.meta.com%5C%2Fhorizon%5C%2Fdocumentation%5C%2Funity%5C%2Fvoice-sdk-overview%5C%2F%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdevelopers.meta.com%5C%2Fhorizon%5C%2Fdocumentation%5C%2Funity%5C%2Fvoice-sdk-overview%5C%2F%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22webpage%22%2C%22title%22%3A%22Voice%20SDK%20Overview%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22name%22%3A%22Meta%22%7D%5D%2C%22abstractNote%22%3A%22This%20section%20provides%20an%20overview%20of%20the%20Voice%20SDK.%22%2C%22date%22%3A%2219%5C%2F08%5C%2F2024%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdevelopers.meta.com%5C%2Fhorizon%5C%2Fdocumentation%5C%2Funity%5C%2Fvoice-sdk-overview%5C%2F%22%2C%22language%22%3A%22en%22%2C%22collections%22%3A%5B%227ME8Z5N3%22%5D%2C%22dateModified%22%3A%222025-04-01T16%3A35%3A32Z%22%7D%7D%5D%7D
Article references

Ding, S., & Chen, Y. (2025). RAG-VR: Leveraging retrieval-augmented generation for 3D question answering in VR environments. https://www.semanticscholar.org/paper/af54e8314d03df54d1e1857096b053692e325cbc
edenai. (2025). edenai/unity-plugin [Computer software]. Eden AI. https://github.com/edenai/unity-plugin (Original work published 2023)
Izquierdo-Domenech, J., Linares-Pellicer, J., & Ferri-Molla, I. (2024). Virtual reality and language models, a new frontier in learning. International Journal of Interactive Multimedia and Artificial Intelligence, 8(5), 46. https://doi.org/10.9781/ijimai.2024.02.007
Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis, M., Yih, W., Rocktäschel, T., Riedel, S., & Kiela, D. (2020). Retrieval-augmented generation for knowledge-intensive NLP tasks. Proceedings of the 34th International Conference on Neural Information Processing Systems, 9459–9474.
Marquardt, A., Golchinfar, D., & Vaziri, D. (2025). RAGatar: Enhancing LLM-driven avatars with RAG for knowledge-adaptive conversations in virtual reality. 1604–1605. https://doi.org/10.1109/VRW66409.2025.00447
Maslych, M., Pumarada, C., Ghasemaghaei, A., & LaViola, J. J. (2024). Takeaways from applying LLM capabilities to multiple conversational avatars in a VR pilot study. ArXiv, abs/2501.00168. https://doi.org/10.48550/arXiv.2501.00168
Meta. (2024, August 19). Voice SDK overview. Meta Horizon OS Developers. https://developers.meta.com/horizon/documentation/unity/voice-sdk-overview/
Németh, R., Tátrai, A., Szabó, M., & Tamási, Á. (2024). Using a RAG-enhanced large language model in a virtual teaching assistant role: Experiences from a pilot project in statistics education. Hungarian Statistical Review, 7(2), 3–27. https://doi.org/10.35618/hsr2024.02.en003
Ortega‐Ochoa, E., Arguedas, M., & Daradoumis, T. (2023). Empathic pedagogical conversational agents: A systematic literature review. British Journal of Educational Technology, 55(3), 886–909. https://doi.org/10.1111/bjet.13413
Prasongpongchai, T., Pataranutaporn, P., Kanapornchai, C., Lapapirojn, A., Ouppaphan, P., Winson, K., Lertsutthiwong, M., & Maes, P. (2024). Interactive AI-generated virtual instructors enhance learning motivation and engagement in financial education. In A. M. Olney, I.-A. Chounta, Z. Liu, O. C. Santos, & I. I. Bittencourt (Eds.), Artificial Intelligence in Education. Posters and Late Breaking Results, Workshops and Tutorials, Industry and Innovation Tracks, Practitioners, Doctoral Consortium and Blue Sky (Vol. 2151, pp. 217–225). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-64312-5_26
Thway, M., Recatala-Gomez, J., Lim, F. S., Hippalgaonkar, K., & Ng, L. W. T. (2024). Battling Botpoop using GenAI for higher education: A study of a retrieval augmented generation chatbot's impact on learning. ArXiv, abs/2406.07796. https://doi.org/10.48550/arXiv.2406.07796
Zhang, R., Zou, D., & Cheng, G. (2023). A review of chatbot-assisted learning: Pedagogical approaches, implementations, factors leading to effectiveness, theories, and future directions. Interactive Learning Environments, 32(8), 4529–4557. https://doi.org/10.1080/10494820.2023.2202704
Diagram content is licensed under a Creative Commons Attribution-ShareAlike 4.0 International license.

©2025 Nicola Fern