The Relationship Between Digital Literacy, Exposure to AI-Generated Deepfake Videos, and the Ability to Identify Deepfakes Among Generation X

Main Article Content

Darman Fauzan Dhahir
Ndoheba Kenda
Dida Dirgahayu

Abstract

This study explores the relationship between digital literacy, exposure to AI-generated deepfake videos, and the ability to identify deepfakes among Generation X in Indonesia, currently aged 43 to 58. It also analyzes the impact of deepfake identification ability on the cognitive, affective, and behavioral aspects of internet use. Using random sampling, the study surveyed 199 respondents drawn from a population of 42 million Generation X internet users in Indonesia. The sample size was determined with Slovin's formula at a 90% confidence level and a 7.1% margin of error. Descriptive analysis showed a moderate level of digital literacy and relatively low exposure to deepfakes; the ability to identify deepfakes, however, was found to be low. Inferential statistical analysis indicated that neither digital literacy nor deepfake exposure had a significant effect on the ability to identify deepfakes. Furthermore, the ability to identify deepfakes did not significantly affect cognition, affect, or behavior. Although digital literacy remains important, these findings reinforce the assumptions of Generation Theory and Media Dependency Theory. The results also suggest that specialized training in media-manipulation technologies is needed to improve deepfake detection. The study implies that digital-literacy efforts should be broadened to include technical skills and critical thinking relevant to manipulative media such as deepfakes.
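The reported sample size can be reproduced from the abstract's figures with Slovin's formula, n = N / (1 + Ne²). A minimal sketch (the round-up convention is an assumption, but it matches the reported n = 199):

```python
import math


def slovin(population: int, margin_of_error: float) -> int:
    """Slovin's formula: n = N / (1 + N * e^2), rounded up to a whole respondent."""
    return math.ceil(population / (1 + population * margin_of_error ** 2))


# Figures from the abstract: 42 million Generation X internet users, e = 7.1%
n = slovin(42_000_000, 0.071)
print(n)  # 199
```

Because N is large, the formula is dominated by 1/e², so the result is nearly independent of the exact population size.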

Article Details

Section
Communication
Author Biography

Darman Fauzan Dhahir, National Research and Innovation Agency of Indonesia

He is a researcher in the Digital Society Research Group at the National Research and Innovation Agency of Indonesia. He received a master's degree in communication science from the University of Hasanuddin in Makassar, Indonesia. He works in applied communication fields such as journalism, media, public relations, and educational, healthcare, and environmental communication.

References

Ahmed, S. (2021). Fooled by the fakes: Cognitive differences in perceived claim accuracy and sharing intention of non-political deepfakes. Personality and Individual Differences, 182, 111074. https://doi.org/10.1016/j.paid.2021.111074

Ahmed, S. (2023). Navigating the maze: Deepfakes, cognitive ability, and social media news skepticism. New Media & Society, 25(5), 1108–1129. https://doi.org/10.1177/14614448211019198

Ahmed, S., Ng, S. W. T., & Bee, A. W. T. (2023). Understanding the role of fear of missing out and deficient self-regulation in sharing of deepfakes on social media: Evidence from eight countries. Frontiers in Psychology, 14. https://doi.org/10.3389/fpsyg.2023.1127507

Ameen, N., Hosany, S., & Taheri, B. (2023). Generation Z’s psychology and new‐age technologies: Implications for future research. Psychology & Marketing, 40(10), 2029–2040. https://doi.org/10.1002/mar.21868

Barari, S., Munger, K., & Lucas, C. (2024). Political Deepfakes are as Credible as Other Fake Media and (Sometimes) Real Media. The Journal of Politics, 732990. https://doi.org/10.1086/732990

Blancaflor, E., Anave, D. M., Cunanan, T. S., Frias, T., & Velarde, L. N. (2023). A Literature Review of the Legislation and Regulation of Deepfakes in the Philippines. Proceedings of the 2023 14th International Conference on E-Business, Management and Economics, 392–397. https://doi.org/10.1145/3616712.3616722

Burnham, S. L. F., & Arbeit, M. R. (2023). Social Media Literacy to Confront Far-Right Content: Saying “No” to Neutrality. Human Development, 67(3), 117–134. https://doi.org/10.1159/000531765

Caldwell, M., Andrews, J. T. A., Tanay, T., & Griffin, L. D. (2020). AI-enabled future crime. Crime Science, 9(1), 14. https://doi.org/10.1186/s40163-020-00123-8

Cetindamar, D., Abedin, B., & Shirahada, K. (2024). The Role of Employees in Digital Transformation: A Preliminary Study on How Employees’ Digital Literacy Impacts Use of Digital Technologies. IEEE Transactions on Engineering Management, 71, 7837–7848. https://doi.org/10.1109/TEM.2021.3087724

Chadwick, A., & Stanyer, J. (2022). Deception as a Bridging Concept in the Study of Disinformation, Misinformation, and Misperceptions: Toward a Holistic Framework. Communication Theory, 32(1), 1–24. https://doi.org/10.1093/ct/qtab019

Chen, H., Lin, Y., Li, B., & Tan, S. (2023). Learning Features of Intra-Consistency and Inter-Diversity: Keys Toward Generalizable Deepfake Detection. IEEE Transactions on Circuits and Systems for Video Technology, 33(3), 1468–1480. https://doi.org/10.1109/TCSVT.2022.3209336

Cinelli, M., Etta, G., Avalle, M., Quattrociocchi, A., Di Marco, N., Valensise, C., Galeazzi, A., & Quattrociocchi, W. (2022). Conspiracy theories and social media platforms. Current Opinion in Psychology, 47, 101407. https://doi.org/10.1016/j.copsyc.2022.101407

Diakopoulos, N., & Johnson, D. (2021). Anticipating and addressing the ethical implications of deepfakes in the context of elections. New Media & Society, 23(7), 2072–2098. https://doi.org/10.1177/1461444820925811

Dobber, T., Metoui, N., Trilling, D., Helberger, N., & De Vreese, C. (2021). Do (Microtargeted) Deepfakes Have Real Effects on Political Attitudes? The International Journal of Press/Politics, 26(1), 69–91. https://doi.org/10.1177/1940161220944364

Dourado, T. (2023). Who Posts Fake News? Authentic and Inauthentic Spreaders of Fabricated News on Facebook and Twitter. Journalism Practice, 17(10), 2103–2122. https://doi.org/10.1080/17512786.2023.2176352

Eiserbeck, A., Maier, M., Baum, J., & Abdel Rahman, R. (2023). Deepfake smiles matter less—The psychological and neural impact of presumed AI-generated faces. Scientific Reports, 13(1), 16111. https://doi.org/10.1038/s41598-023-42802-x

Dai, F., & Li, Z. (2024). Research on 2D Animation Simulation Based on Artificial Intelligence and Biomechanical Modeling. EAI Endorsed Transactions on Pervasive Health and Technology, 10. https://doi.org/10.4108/eetpht.10.5907

Federspiel, F., Mitchell, R., Asokan, A., Umana, C., & McCoy, D. (2023). Threats by artificial intelligence to human health and human existence. BMJ Global Health, 8(5), e010435. https://doi.org/10.1136/bmjgh-2022-010435

Fosco, C., Josephs, E., Andonian, A., & Oliva, A. (2022). Deepfake Caricatures: Human-guided Motion Magnification Improves Deepfake Detection by Humans and Machines. Journal of Vision, 22(14), 4079. https://doi.org/10.1167/jov.22.14.4079

Godulla, A., Hoffmann, C. P., & Seibert, D. M. A. (2021). Dealing with deepfakes—An interdisciplinary examination of the state of research and implications for communication studies. Studies in Communication and Media, 10(1), 73–96. https://doi.org/10.5771/2192-4007-2021-1-72

Goh, D. H. (2024). “He looks very real”: Media, knowledge, and search‐based strategies for deepfake identification. Journal of the Association for Information Science and Technology, 75(6), 643–654. https://doi.org/10.1002/asi.24867

Guess, A. M., & Munger, K. (2023). Digital literacy and online political behavior. Political Science Research and Methods, 11(1), 110–128. https://doi.org/10.1017/psrm.2022.17

Hameleers, M., Van Der Meer, T. G. L. A., & Dobber, T. (2022). You Won’t Believe What They Just Said! The Effects of Political Deepfakes Embedded as Vox Populi on Social Media. Social Media + Society, 8(3), 205630512211163. https://doi.org/10.1177/20563051221116346

Hameleers, M., Van Der Meer, T. G. L. A., & Dobber, T. (2024). They Would Never Say Anything Like This! Reasons To Doubt Political Deepfakes. European Journal of Communication, 39(1), 56–70. https://doi.org/10.1177/02673231231184703

Hancock, J. T., & Bailenson, J. N. (2021). The Social Impact of Deepfakes. Cyberpsychology, Behavior, and Social Networking, 24(3), 149–152. https://doi.org/10.1089/cyber.2021.29208.jth

Harris, K. R. (2021). Video on demand: What deepfakes do and how they harm. Synthese, 199(5–6), 13373–13391. https://doi.org/10.1007/s11229-021-03379-y

Ienca, M. (2023). On Artificial Intelligence and Manipulation. Topoi, 42(3), 833–842. https://doi.org/10.1007/s11245-023-09940-3

Ismail, A., Elpeltagy, M., Zaki, M. S., & Eldahshan, K. (2022). An integrated spatiotemporal-based methodology for deepfake detection. Neural Computing and Applications, 34(24), 21777–21791. https://doi.org/10.1007/s00521-022-07633-3

Karpinska-Krakowiak, M., & Eisend, M. (2024). Realistic Portrayals of Untrue Information: The Effects of Deepfaked Ads and Different Types of Disclosures. Journal of Advertising, 1–11. https://doi.org/10.1080/00913367.2024.2306415

Law, N., Woo, D., De La Torre, J., & Wong, G. (2018). A global framework of reference on digital literacy skills for indicator 4.4.2 (UIS/2018/ICT/IP/51). UNESCO Institute for Statistics.

Li, M., & Wan, Y. (2023). Norms or fun? The influence of ethical concerns and perceived enjoyment on the regulation of deepfake information. Internet Research, 33(5), 1750–1773. https://doi.org/10.1108/INTR-07-2022-0561

Lissitsa, S. (2024). Generations X, Y, Z: The effects of personal and positional inequalities on critical thinking digital skills. Online Information Review. https://doi.org/10.1108/OIR-09-2023-0453

Liu, Y., Chen, W., Liu, L., & Lew, M. S. (2019). SwapGAN: A Multistage Generative Approach for Person-to-Person Fashion Style Transfer. IEEE Transactions on Multimedia, 21(9), 2209–2222. https://doi.org/10.1109/TMM.2019.2897897

Long, T. Q., Hoang, T. C., & Simkins, B. (2023). Gender gap in digital literacy across generations: Evidence from Indonesia. Finance Research Letters, 58, 104588. https://doi.org/10.1016/j.frl.2023.104588

Lorenz-Spreen, P., Geers, M., Pachur, T., Hertwig, R., Lewandowsky, S., & Herzog, S. M. (2021). Boosting people’s ability to detect microtargeted advertising. Scientific Reports, 11(1), 15541. https://doi.org/10.1038/s41598-021-94796-z

Lorenz-Spreen, P., Lewandowsky, S., Sunstein, C. R., & Hertwig, R. (2020). How behavioural sciences can promote truth, autonomy and democratic discourse online. Nature Human Behaviour, 4(11), 1102–1109. https://doi.org/10.1038/s41562-020-0889-7

Lu, W., Liu, L., Zhang, B., Luo, J., Zhao, X., Zhou, Y., & Huang, J. (2023). Detection of Deepfake Videos Using Long-Distance Attention. IEEE Transactions on Neural Networks and Learning Systems, 1–14. https://doi.org/10.1109/TNNLS.2022.3233063

MacLean, P., Cahillane, M., & Smy, V. (2024). Lifting the lid on manipulative website contents: A framework mapping contextual and informational feature combinations against associated social cognitive vulnerabilities. Social and Personality Psychology Compass, 18(2), e12947. https://doi.org/10.1111/spc3.12947

Malik, A., Kuribayashi, M., Abdullahi, S. M., & Khan, A. N. (2022). DeepFake Detection for Human Face Images and Videos: A Survey. IEEE Access, 10, 18757–18775. https://doi.org/10.1109/ACCESS.2022.3151186

Marron, M. B. (2015). New Generations Require Changes Beyond the Digital. Journalism & Mass Communication Educator, 70(2), 123–124. https://doi.org/10.1177/1077695815588912

McCosker, A. (2022). Making sense of deepfakes: Socializing AI and building data literacy on GitHub and YouTube. New Media & Society. https://doi.org/10.1177/14614448221093943

Millière, R. (2022). Deep learning and synthetic media. Synthese, 200(3), 231. https://doi.org/10.1007/s11229-022-03739-2

Mo, S., Lu, P., & Liu, X. (2022). AI-Generated Face Image Identification with Different Color Space Channel Combinations. Sensors, 22(21), Article 21. https://doi.org/10.3390/s22218228

Mustak, M., Salminen, J., Mäntymäki, M., Rahman, A., & Dwivedi, Y. K. (2023). Deepfakes: Deceptions, mitigations, and opportunities. Journal of Business Research, 154, 113368. https://doi.org/10.1016/j.jbusres.2022.113368

Nas, E., & De Kleijn, R. (2024). Conspiracy thinking and social media use are associated with ability to detect deepfakes. Telematics and Informatics, 87, 102093. https://doi.org/10.1016/j.tele.2023.102093

Naskar, G., Mohiuddin, S., Malakar, S., Cuevas, E., & Sarkar, R. (2024). Deepfake detection using deep feature stacking and meta-learning. Heliyon, 10(4), e25933. https://doi.org/10.1016/j.heliyon.2024.e25933

Neethirajan, S. (2021). Is Seeing Still Believing? Leveraging Deepfake Technology for Livestock Farming. Frontiers in Veterinary Science, 8, 740253. https://doi.org/10.3389/fvets.2021.740253

Nieweglowska, M., Stellato, C., & Sloman, S. A. (2023). Deepfakes: Vehicles for Radicalization, Not Persuasion. Current Directions in Psychological Science, 32(3), 236–241. https://doi.org/10.1177/09637214231161321

Patel, Y., Tanwar, S., Bhattacharya, P., Gupta, R., Alsuwian, T., Davidson, I. E., & Mazibuko, T. F. (2023). An Improved Dense CNN Architecture for Deepfake Image Detection. IEEE Access, 11, 22081–22095. https://doi.org/10.1109/ACCESS.2023.3251417

Prezja, F., Paloneva, J., Pölönen, I., Niinimäki, E., & Äyrämö, S. (2022). DeepFake knee osteoarthritis X-rays from generative adversarial neural networks deceive medical experts and offer augmentation potential to automatic classification. Scientific Reports, 12(1), 18573. https://doi.org/10.1038/s41598-022-23081-4

Qureshi, J., & Khan, S. (2024). Deciphering Deception—The Impact of AI Deepfakes on Human Cognition and Emotion. Journal of Advances in Artificial Intelligence, 2(1). https://doi.org/10.18178/JAAI.2024.2.1.101-107

Schmitt, K. L., & Woolf, K. D. (2018). Cognitive development in digital contexts. Journal of Children and Media, 1–3. https://doi.org/10.1080/17482798.2018.1522116

Shahzad, H. F., Rustam, F., Flores, E. S., Luís Vidal Mazón, J., De La Torre Diez, I., & Ashraf, I. (2022). A Review of Image Processing Techniques for Deepfakes. Sensors, 22(12), 4556. https://doi.org/10.3390/s22124556

Shin, S. Y., & Lee, J. (2022). The Effect of Deepfake Video on News Credibility and Corrective Influence of Cost-Based Knowledge about Deepfakes. Digital Journalism, 10(3), 412–432. https://doi.org/10.1080/21670811.2022.2026797

Sivathanu, B., Pillai, R., & Metri, B. (2023). Customers’ online shopping intention by watching AI-based deepfake advertisements. International Journal of Retail & Distribution Management, 51(1), 124–145. https://doi.org/10.1108/IJRDM-12-2021-0583

Tahir, R., Batool, B., Jamshed, H., Jameel, M., Anwar, M., Ahmed, F., Zaffar, M. A., & Zaffar, M. F. (2021). Seeing is Believing: Exploring Perceptual Differences in DeepFake Videos. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–16. https://doi.org/10.1145/3411764.3445699

Tinmaz, H., Lee, Y.-T., Fanea-Ivanovici, M., & Baber, H. (2022). A systematic review on digital literacy. Smart Learning Environments, 9(1), 21. https://doi.org/10.1186/s40561-022-00204-y

Tolosana, R., Vera-Rodriguez, R., Fierrez, J., Morales, A., & Ortega-Garcia, J. (2020). Deepfakes and beyond: A Survey of face manipulation and fake detection. Information Fusion, 64, 131–148. https://doi.org/10.1016/j.inffus.2020.06.014

Trinh, L., Tsang, M., Rambhatla, S., & Liu, Y. (2021). Interpretable and Trustworthy Deepfake Detection via Dynamic Prototypes. 2021 IEEE Winter Conference on Applications of Computer Vision (WACV), 1972–1982. https://doi.org/10.1109/WACV48630.2021.00202

Twomey, J., Ching, D., Aylett, M. P., Quayle, M., Linehan, C., & Murphy, G. (2023). Do deepfake videos undermine our epistemic trust? A thematic analysis of tweets that discuss deepfakes in the Russian invasion of Ukraine. PLOS ONE, 18(10), e0291668. https://doi.org/10.1371/journal.pone.0291668

Vaccari, C., & Chadwick, A. (2020). Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News. Social Media + Society, 6(1), 205630512090340. https://doi.org/10.1177/2056305120903408

Van Der Sloot, B., & Wagensveld, Y. (2022). Deepfakes: Regulatory challenges for the synthetic society. Computer Law & Security Review, 46, 105716. https://doi.org/10.1016/j.clsr.2022.105716

Vasist, P. N., & Krishnan, S. (2023). Engaging with deepfakes: A meta-synthesis from the perspective of social shaping of technology theory. Internet Research, 33(5), 1670–1726. https://doi.org/10.1108/INTR-06-2022-0465

Waqas, N., Safie, S. I., Kadir, K. A., Khan, S., & Kaka Khel, M. H. (2022). DEEPFAKE Image Synthesis for Data Augmentation. IEEE Access, 10, 80847–80857. https://doi.org/10.1109/ACCESS.2022.3193668

Xia, C., & Johnson, N. F. (2024). Nonlinear spreading behavior across multi-platform social media universe. Chaos: An Interdisciplinary Journal of Nonlinear Science, 34(4), 043149. https://doi.org/10.1063/5.0199655

Xiao, S., Zhang, Z., Yang, J., Wen, J., & Li, Y. (2023). Forgery Detection by Weighted Complementarity between Significant Invariance and Detail Enhancement. ACM Transactions on Multimedia Computing, Communications, and Applications. https://doi.org/10.1145/3605893

Yang, Z., Liang, J., Xu, Y., Zhang, X.-Y., & He, R. (2023). Masked Relation Learning for DeepFake Detection. IEEE Transactions on Information Forensics and Security, 18, 1696–1708. https://doi.org/10.1109/TIFS.2023.3249566

Zhao, C., Wang, C., Hu, G., Chen, H., Liu, C., & Tang, J. (2023). ISTVT: Interpretable Spatial-Temporal Video Transformer for Deepfake Detection. IEEE Transactions on Information Forensics and Security, 18, 1335–1348. https://doi.org/10.1109/TIFS.2023.3239223