M.S. AAI Capstone Chronicles 2024

A.S.LINGUIST


Figure 4

Top 20 most common words in the questions and answers of the conversations dataset
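The word frequencies plotted in Figure 4 can be reproduced with a simple token count. A minimal sketch, assuming the dataset is a plain list of question and answer strings (the sample texts below are hypothetical placeholders, not the actual dataset):

```python
from collections import Counter
import re

def top_words(texts, n=20):
    """Return the n most common lowercase words across a list of texts."""
    counts = Counter()
    for text in texts:
        # Tokenize on letters and apostrophes; ignore punctuation and digits
        counts.update(re.findall(r"[a-z']+", text.lower()))
    return counts.most_common(n)

# Hypothetical question/answer pairs standing in for the conversations dataset
questions = ["What sign means hello?", "How do I sign thank you?"]
answers = ["Wave your open hand to sign hello.", "Touch your chin to sign thank you."]
print(top_words(questions + answers, n=5))
```

In practice one would also filter stopwords before counting, since function words otherwise dominate the top of the ranking.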

Background Information

Other groups working in both academic research (Berkeley School of Information, n.d.; Huang et al., 2021; Parsdani et al., 2018; Voinea, 2017) and commercial projects (Aquino, 2023; Lenovo StoryHub, 2023) have already developed AI applications similar to ours. The main differences among these projects lie in (i) the type of gesture recognition the system is able to perform, (ii) the format of the outputs returned by the chatbot, (iii) the topics on which the chatbot can hold a conversation, and (iv) the machine learning methods adopted to develop the sign language interpreter and the chatbot.

Concerning gesture recognition, it can be static (Aquino, 2023) or dynamic (Lenovo StoryHub, 2023). In static gesture recognition, hand gestures are given as a sequence of images, each showing a specific sign; in dynamic gesture recognition, hand gestures, and in some cases facial expressions, must be captured and recognized from a live video (Huang et al., 2021). Also, the outputs returned by the chatbot can be text only (Voinea, 2017) or both audio

