A Taxonomy of Risks in Model Use and Data Integrity for Emerging Threats in AI

Authors: Burcu SAKIZ
Pages: 179–196
Abstract: Artificial Intelligence (AI) has revolutionized sectors such as healthcare, education, smart cities, and finance by enabling advanced data processing and decision-making through deep learning and large language models (LLMs). However, the inherent complexities of AI architectures, including neural networks and transformer models, introduce significant vulnerabilities that threaten privacy, security, and trust. This chapter presents a comprehensive taxonomy of AI threats, categorized into model-based, data-based, and usage scenario risks, to address these challenges systematically. Model-based threats, such as prompt injection and model poisoning, exploit architectural flaws, allowing malicious inputs or backdoors to compromise system integrity. Data-based threats, including training data leakage, model inversion, and model extraction, endanger sensitive information and intellectual property, violating regulations like the GDPR. Usage scenario risks, such as hallucinations, automation bias, and AI-enhanced cyber threats, emerge during deployment, impacting reliability in critical applications. Drawing on scholarly literature and real-world incidents, the chapter examines these risks across the AI lifecycle, from data collection to operational use. To counter these vulnerabilities, the chapter proposes a Responsible AI Mitigation Framework, integrating technical, governance, and human-centered strategies. Technical defenses, like adversarial training and differential privacy, reduce attack success, though computational costs pose barriers for small organizations. Governance measures, aligned with the EU AI Act and NIST guidelines, emphasize audits and compliance to ensure ethical deployment. Human-centered approaches, including staff training and transparent interfaces, mitigate automation bias, enhancing decision-making accuracy.
The framework advocates continuous monitoring and red-teaming to detect threats in real time, fostering resilience in dynamic environments. Despite these strategies, challenges such as regulatory silos and high implementation costs persist, necessitating scalable, cost-effective solutions. This chapter contributes to AI governance by offering a structured approach to secure, ethical AI development, addressing emerging threats such as quantum computing risks. Future research priorities include scalable defenses, automated governance, and global regulatory harmonization to ensure AI's equitable and trustworthy integration across industries, safeguarding its transformative potential.
Keywords: Artificial Intelligence Security, Cybersecurity Threats, Model-Based Risks, Data Integrity, Responsible AI, Mitigation Framework
DOI: 10.5281/zenodo.16010141
PDF URL: https://www.izmirakademi.org/books/The_Age_of_Generative_Artificial_Intelligence/cp8/pdf/cp8.pdf