Special Session I

Submission Deadline: June 10, 2026
Data Security and Privacy Protection in Artificial Intelligence


Chair: Yijing Lin, Beijing University of Posts and Telecommunications, China
Co-chair: Qing Fan, North China Electric Power University, China
Topics (include but are not limited to):
  • Theories and methods for data security in artificial intelligence
  • Data access control and permission management for artificial intelligence
  • Security and privacy protection mechanisms for multi-source heterogeneous data fusion
  • Data security and privacy protection in federated learning
  • Federated unlearning, machine unlearning, and the right to data deletion
  • Data security risks and protection mechanisms in generative artificial intelligence
  • Trustworthy artificial intelligence and explainable security mechanisms
  • Data ownership, circulation, and security governance for artificial intelligence
  • Cross-domain data sharing and collaborative security in artificial intelligence systems
  • Data security and privacy protection in multi-agent systems
   
Summary:
  With the rapid development of artificial intelligence, new intelligent paradigms such as large models, generative artificial intelligence, multi-agent systems, and embodied intelligence are continuously emerging, and artificial intelligence is increasingly being integrated into key fields such as intelligent transportation, smart manufacturing, smart healthcare, financial technology, the low-altitude economy, and social governance. As the core foundation of artificial intelligence, data plays a crucial role in collection, storage, processing, sharing, training, inference, and application. However, while artificial intelligence accelerates the release of data value, it also introduces a series of new challenges, including data leakage, privacy infringement, data misuse, model theft, adversarial attacks, insufficient compliance of training data, and unclear accountability in cross-domain data circulation. These challenges seriously hinder the secure and trustworthy development of artificial intelligence.
  This forum focuses on data security and privacy protection in artificial intelligence. It will discuss theoretical foundations, methodological innovations, system implementations, and representative applications concerning data security risks, privacy-preserving mechanisms, trustworthy governance methods, and key enabling technologies across the full lifecycle of artificial intelligence. The forum is intended to provide a high-level platform for academic and industrial exchange, promote interdisciplinary integration among artificial intelligence, security, communications, computing, control, and governance, and foster AI paradigms that jointly consider security, privacy, trustworthiness, and usability. It is expected to provide both theoretical support and practical references for the healthy development and industrial deployment of artificial intelligence technologies.
   
