This page lists all events that ABCP has (co-)organised or will be (co-)organising. Some events (co-)sponsored or otherwise supported by ABCP may also be included.
This hybrid webinar is jointly organised by the Institute of Cyber Security for Society (iCSS) at the University of Kent, the Specialty Committee on AI and Cyber Security of the ABCP (Association of British Chinese Professors), and the CBAIA (China-Britain Artificial Intelligence Association). The in-person part of the event will take place at JLT, Jennison Building, University of Kent, Canterbury, Kent, CT2 7NZ, UK.
It will lay the foundation for a follow-up joint ABCP-CBAIA whole-day Workshop on AI Safety at UCL in London on Saturday 3 May 2025. More details about the workshop will be released after the talk.
Dr Jindong Gu, Senior Research Scientist, Google & Senior Research Fellow, University of Oxford, UK
In recent years, generative AI (GenAI), such as large language models and text-to-image models, has received significant attention across various domains. Ensuring responsible generation is critical for real-world deployment. In this talk, I will present five key and practical considerations for responsible GenAI: generating truthful content, avoiding toxicity, refusing harmful instructions, preventing training data leakage, and ensuring content identifiability. By providing a unified perspective on textual and visual generative models, this talk offers insights into best practices, emerging research directions, and challenges in building safe and ethical GenAI systems.
This talk is based on the following preprint authored by the speaker:
Jindong Gu (2024) A Survey on Responsible Generative AI: What to Generate and What Not. arXiv:2404.05783v2 [cs.CY], 77 pages. https://doi.org/10.48550/arXiv.2404.05783
Dr Jindong Gu is a Senior Research Scientist at Google, specialising in advancing the reliability and safety of AI technologies. He is also affiliated with the University of Oxford as a Senior Research Fellow in the Torr Vision Group. Prior to that, he received his PhD degree from the University of Munich in the Tresp Lab. His research goal is to build responsible AI; specifically, he is interested in the interpretability, robustness, privacy, and safety of AI models. In this area, he has regularly published papers, served as a Reviewer and Area Chair, and organised workshops at leading conferences, including NeurIPS, ICLR, and CVPR. More information about him can be found at https://jindonggu.github.io/.
Meeting ID: 397 111 226 771
Passcode: vk93e5Ni