is to have the model highlight errors and problems that the instructor can focus on. The model can also explain why corrections are needed and offer suggestions such as grammatical changes. Javaid et al. explored the use of ChatGPT for automatically grading essays (Javaid et al., 2023). ChatGPT requires a great deal of specialized direction for these types of tasks, but it can generally perform language translation, text summarization, and many other useful functions (Javaid et al., 2023). There is currently no single refined solution that attempts to do everything we would hope our program to do. The closest solution comes from the director of the MS Applied Artificial Intelligence program at the University of San Diego, Ebrahim Tarshizi (Tarshizi, 2023). Dr. Tarshizi created an assistant based on OpenAI's A.I. assistant feature, which uses the GPT-4 model in the background. This assistant is capable of ingesting files in PDF format that are passed to it and using them as specialized references while performing tasks. Although many of these technologies address issues raised in this paper, implementing a RAG system, explained further on, gives the bot a focused and specialized knowledge set that allows it to answer questions more adequately and effectively. Combined with the power of OpenAI's GPT model, such an assistant offers a significant amount of capability and versatility.

RAG is a method developed to boost a model's ability to respond to questions in greater depth. LLMs are known to store information in their weights, but that stored information is more surface level than what knowledge-intensive tasks require, which makes performance inadequate on queries that demand in-depth and accurate responses (Lewis et al., 2020). RAG works by storing additional relevant information in a database and fetching from that store the documents most similar to a user query. The fetched documents are fed to the LLM along with the query, so the LLM can draw on the provided information when responding. This also helps prevent hallucinations, in which the model responds with information that was never seen in its training data.
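The following is a minimal sketch of the RAG flow described above, not the project's actual implementation. The embed() function here is a stand-in for a real embedding model, and the final prompt would be sent to an LLM such as OpenAI's GPT model; both are simplified so the example runs on its own.

```python
import math

def embed(text: str) -> list[float]:
    # Placeholder embedding: a real system would call an embedding model.
    # Character-frequency counts are used here so the sketch is self-contained.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord('a')] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    # Similarity measure used to compare the query against stored documents.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# 1. Store relevant course documents and their embeddings (the retrieval store).
documents = [
    "The final project is due in week 14 and must include a written report.",
    "Office hours are held on Tuesdays from 4 to 6 pm.",
    "Late submissions lose 10 percent per day unless an extension is granted.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query: str, k: int = 2) -> list[str]:
    # 2. Fetch the k stored documents most similar to the user query.
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def build_prompt(query: str) -> str:
    # 3. Feed the retrieved documents to the LLM along with the query.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("When is the final project due?"))
```

Because the LLM is handed the retrieved passages directly, its answer is grounded in the stored documents rather than in whatever is encoded in its weights, which is what reduces hallucination.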
