Prof. Zhi-Hua Zhou (ACM/AAAI/IEEE Fellow)
Nanjing University, China
Zhi-Hua Zhou is Professor of Computer Science and Artificial Intelligence at Nanjing University. His research interests are mainly in machine learning and data mining, with significant contributions to ensemble learning, weakly supervised learning, and multi-label learning. He has authored the books "Ensemble Methods: Foundations and Algorithms" and "Machine Learning", and has published more than 200 papers in top-tier journals and conferences. Many of his inventions have been successfully transferred to industry. He founded ACML (the Asian Conference on Machine Learning), and has served as Program Chair for conferences including AAAI-19 and IJCAI-21, General Chair for conferences including ICDM'16 and SDM'22, and Senior Area Chair for NeurIPS and ICML. He is series editor of Springer LNAI, serves on the advisory board of AI Magazine, and is an associate editor of journals including AIJ, MLJ, IEEE TPAMI, and ACM TKDD. He is a Fellow of the ACM, AAAI, AAAS, and IEEE, and a recipient of awards including the National Natural Science Award of China and the IEEE Computer Society Edward J. McCluskey Technical Achievement Award.
Speech Title: The Long March of Theoretical Exploration of Boosting
Abstract: AdaBoost is a famous mainstream ensemble learning approach that has greatly influenced machine learning and related areas. A fundamental and fascinating mystery of AdaBoost lies in the phenomenon that it seems resistant to overfitting, which has inspired many theoretical investigations. In this talk, we will briefly introduce the long history of learning-theory studies and debates about Boosting, where the recently concluded result discloses the importance of minimizing the margin variance while maximizing the margin mean during the learning process. This provides new inspiration for the design of powerful learning algorithms such as ODMs (Optimal margin Distribution Machines).
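To make the margin-distribution idea concrete, a schematic objective can be written as follows. This is an illustrative sketch only, not the exact formulation from the talk or the ODM paper: gamma_i denotes the normalized margin of example (x_i, y_i), and lambda is a trade-off parameter introduced here for illustration.

% Illustrative margin-distribution objective: reward a large margin mean
% while penalizing a large margin variance (lambda balances the two terms).
\[
\gamma_i = \frac{y_i\,\mathbf{w}^{\top}\phi(\mathbf{x}_i)}{\lVert \mathbf{w}\rVert}, \qquad
\bar{\gamma} = \frac{1}{m}\sum_{i=1}^{m}\gamma_i, \qquad
\max_{\mathbf{w}}\; \bar{\gamma} \;-\; \lambda\,\frac{1}{m}\sum_{i=1}^{m}\bigl(\gamma_i - \bar{\gamma}\bigr)^2 .
\]

In contrast to the classical large-margin view, which optimizes only the minimum margin, an objective of this moment-based form controls the whole margin distribution, which is the perspective behind ODMs.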
Prof. Ivor W. Tsang (IEEE Fellow)
Prof. Ivor W. Tsang has been Director of the A*STAR Centre for Frontier AI Research (CFAR) since January 2022. Previously, he was a Professor of Artificial Intelligence at the University of Technology Sydney (UTS) and Research Director of the Australian Artificial Intelligence Institute (AAII), the largest AI institute in Australia, which played a key role in driving UTS to rank 10th globally and 1st in Australia for AI research in the latest AI Research Index. Prof. Tsang works at the forefront of big data analytics and artificial intelligence. His research focuses on transfer learning, deep generative models, learning with weak supervision, and big data analytics for data with extremely high dimensions in features, samples, and labels. His work is recognised internationally for its outstanding contributions to these fields.
Prof. Tsang serves on the editorial boards of the Journal of Machine Learning Research, Machine Learning, the Journal of Artificial Intelligence Research, IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Transactions on Artificial Intelligence, IEEE Transactions on Big Data, and IEEE Transactions on Emerging Topics in Computational Intelligence. He serves as a Senior Area Chair/Area Chair for NeurIPS, ICML, AAAI, and IJCAI, and on the steering committee of ACML. Recently, Prof. Tsang was named an IEEE Fellow for his outstanding contributions to large-scale machine learning and transfer learning.
Speech Title: Robust Rank Aggregation and Its Application
Abstract: In rank aggregation (RA), a collection of preferences from different users is summarized into a total order under the assumption that users are homogeneous. Model misspecification in RA arises when this homogeneity assumption fails to hold in complex real-world situations. Existing robust RA methods usually resort to augmenting the ranking model to account for additional noise, treating the collected preferences as noisy perturbations of idealized preferences. Since the majority of robust RA methods rely on specific perturbation assumptions, they cannot generalize well to the agnostic noise-corrupted preferences encountered in the real world. In this talk, I first survey the literature on robust RA methods, and then present CoarsenRank, which is robust to model misspecification. Specifically, the properties of CoarsenRank are as follows: (1) CoarsenRank is designed for mild model misspecification, assuming that ideal preferences (consistent with the model assumption) exist in a neighborhood of the actual preferences. (2) CoarsenRank then performs regular RA over a neighborhood of the preferences instead of directly over the original data set; it therefore enjoys robustness against model misspecification within that neighborhood. (3) The neighborhood of the data set is defined via its empirical data distribution. (4) CoarsenRank is further instantiated as Coarsened Thurstone, Coarsened Bradley-Terry, and Coarsened Plackett-Luce, corresponding to three popular probabilistic ranking models, and a tractable optimization strategy is introduced for each instantiation. Finally, I present applications of RA in neuroscience, deep generative models, and contrastive learning.
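For readers unfamiliar with the base models being coarsened, the following is a minimal, self-contained sketch of standard (non-robust) Bradley-Terry rank aggregation from pairwise preferences, using Hunter's (2004) MM updates. This is background only, not the speaker's code, and it omits the coarsening step that gives CoarsenRank its robustness; the function name and the toy `wins` matrix are illustrative assumptions.

import numpy as np

def bradley_terry_mm(wins, n_iters=200, tol=1e-8):
    """Fit Bradley-Terry scores from a pairwise win-count matrix.

    wins[i, j] = number of times item i was preferred over item j.
    Returns a score vector theta (normalized to sum to 1); a higher
    score means the item ranks higher in the aggregated total order.
    """
    n = wins.shape[0]
    comparisons = wins + wins.T        # n_ij: total i-vs-j comparisons
    total_wins = wins.sum(axis=1)      # W_i: total wins of item i
    theta = np.ones(n)
    for _ in range(n_iters):
        # MM update: theta_i <- W_i / sum_j n_ij / (theta_i + theta_j)
        denom = comparisons / (theta[:, None] + theta[None, :])
        np.fill_diagonal(denom, 0.0)
        new_theta = total_wins / denom.sum(axis=1)
        new_theta /= new_theta.sum()   # fix the arbitrary scale
        if np.max(np.abs(new_theta - theta)) < tol:
            theta = new_theta
            break
        theta = new_theta
    return theta

# Toy example: 3 items, item 0 usually wins its comparisons.
wins = np.array([[0, 8, 9],
                 [2, 0, 6],
                 [1, 4, 0]], dtype=float)
scores = bradley_terry_mm(wins)
ranking = np.argsort(-scores)          # aggregated total order

CoarsenRank, roughly speaking, replaces conditioning on the observed preferences themselves with conditioning on a neighborhood of their empirical distribution, so a maximum-likelihood routine like the one above is applied in a way that tolerates mild misspecification.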