AAI_2025_Capstone_Chronicles_Combined
Figure 6. Training test results: confusion matrix of the CNN model
Although the overall accuracy is high, the confusion matrix shows that the model is more aggressive when flagging images as “Fake”: only 8% of “Fake” images are misclassified as “Real”, while 18% of “Real” images are misclassified as “Fake”. This overly cautious model could still serve many non-critical applications, but users should be aware of its bias. Inspecting samples of the misclassified images shows that some errors stem from problems in the dataset itself; some images, for example, are completely unintelligible. Others, however, look like they should have been classified correctly: they have no obvious defects and show no artifacts that are clearly the product of a generative AI.
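The asymmetry described above can be quantified directly from the confusion matrix as per-class error rates. The sketch below uses a hypothetical 2×2 matrix constructed to match the reported 8% and 18% rates (the actual counts from the evaluation are not shown here); the class order and totals are assumptions for illustration.

```python
import numpy as np

# Hypothetical confusion matrix matching the rates reported above.
# Rows = true class, columns = predicted class; order: [Fake, Real].
cm = np.array([
    [92,  8],   # true Fake: 8% misclassified as Real
    [18, 82],   # true Real: 18% misclassified as Fake
])

# Per-class error rate: off-diagonal mass divided by the row total.
row_totals = cm.sum(axis=1)
error_rates = (row_totals - np.diag(cm)) / row_totals
print({c: float(e) for c, e in zip(["Fake", "Real"], error_rates)})
# → {'Fake': 0.08, 'Real': 0.18}

# Overall accuracy: diagonal mass over all samples.
accuracy = np.trace(cm) / cm.sum()
print(round(float(accuracy), 2))  # → 0.87
```

A high overall accuracy can coexist with a large gap between the two error rates, which is why reporting the full matrix, and not just accuracy, matters for a detector with this kind of bias.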