to place these values into a vector, $y$, which contains output values of the past, present, and future data (Li et al., 2020). To enhance the interpretability of our Bidirectional LSTM, we applied LIME (Local Interpretable Model-Agnostic Explanations), which aids in understanding the predictions of black-box models. LIME helps users comprehend why a text may be misleading, fostering learning and awareness of potential red flags. It approximates the complex model's output with a family of small, interpretable linear models, keeping explanations simple while regularizing model complexity (DeepFindr, 2021). LIME produces an explanation by minimizing the following loss function:

$$\xi(x) = \underset{g \in G}{\operatorname{argmin}} \; \mathcal{L}(f, g, \pi_x) + \Omega(g)$$

Here $f$ is the complex model, $g$ is a simple model drawn from the family of interpretable models $G$, and $x$ is the input. $\mathcal{L}(f, g, \pi_x)$ measures how well the simple model approximates the complex model in the local neighborhood of the input, defined by the proximity measure $\pi_x$. $\Omega(g)$ regularizes the complexity of the simple model to keep the explanation simple. The $\operatorname{argmin}_{g \in G}$ term selects the simple model that jointly minimizes the two objectives: the local approximation error and the complexity measure (DeepFindr, 2021).
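The implementation is not reproduced here, but the sketch below shows how LIME is typically applied to a text classifier with the `lime` package's `LimeTextExplainer`. The `predict_proba` wrapper, the class names, and the sample article are hypothetical stand-ins for the trained Bidirectional LSTM pipeline; in practice the wrapper would tokenize, pad, and run the model.

```python
# Minimal sketch of LIME applied to a fake-news text classifier,
# assuming a trained model hidden behind a predict_proba function.
import numpy as np
from lime.lime_text import LimeTextExplainer


def predict_proba(texts):
    """Hypothetical stand-in for the tokenizer + BiLSTM pipeline.

    LIME requires a function mapping a list of raw strings to an
    array of class probabilities with shape (len(texts), 2).
    """
    # Toy heuristic so the example runs end to end: score a text as
    # "fake" in proportion to how many trigger words it contains.
    trigger_words = {"confirm", "entirely", "shocking"}
    probs = []
    for t in texts:
        hits = sum(w.strip(".,").lower() in trigger_words for w in t.split())
        p_fake = min(0.1 + 0.3 * hits, 0.9)
        probs.append([1.0 - p_fake, p_fake])
    return np.array(probs)


explainer = LimeTextExplainer(class_names=["real", "fake"])

article = "Scientists confirm the moon is made entirely of cheese."
explanation = explainer.explain_instance(
    article,            # input x to explain
    predict_proba,      # black-box model f
    num_features=10,    # complexity budget for the simple model, Omega(g)
    num_samples=1000,   # perturbed samples drawn in the locality, pi_x
)

# Each (word, weight) pair shows how strongly a token pushes the
# prediction toward or away from the "fake" class.
print(explanation.as_list())
```

Internally, `explain_instance` perturbs the input by masking words, weights the perturbed samples by proximity to the original text, and fits a sparse linear model to the black-box predictions, which is exactly the minimization of $\mathcal{L}(f, g, \pi_x) + \Omega(g)$ described above.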