Tutorial #1: Neural Network Design and Large Language Models (NASL2M)

Presenter: Nelishia Pillay (University of Pretoria)


Keywords: Neural architecture search, large language models

Description: Neural Architecture Search (NAS) has proven essential for generating neural network architectures to solve image classification, segmentation and language translation problems. With the rapid development of large language models (LLMs), a synergistic relationship has emerged between NAS and LLMs: NAS has been effective in developing efficient architectures that ease the deployment of LLMs, while LLMs have in turn been used to drive NAS. This tutorial examines this synergistic relationship.

The tutorial first gives an overview of NAS, covering its purpose, the approaches used, performance evaluation (including performance estimation with proxies, surrogates and predictors), efficient NAS (ENAS) and NAS benchmarks. It will then provide an overview of LLMs, describing the different LLMs and their related challenges. The use of NAS in the design of LLMs, including LLM distillation, LLM compression, hardware-efficient LLMs and fair LLMs, will be presented. The tutorial will then look at how LLMs can be used to improve NAS, examining architecture generation, parameter tuning, knowledge transfer, performance prediction and LLM hybrids.
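For readers unfamiliar with the basic NAS loop, the following is a minimal sketch of random search over a toy search space with a stand-in proxy score; the search space, the proxy_score heuristic and all names are illustrative assumptions, not material from the tutorial.

```python
import random

# Toy search space: each architecture is a (depth, width, kernel size) choice.
SEARCH_SPACE = {
    "depth": [2, 4, 6, 8],
    "width": [16, 32, 64, 128],
    "kernel": [3, 5, 7],
}

def sample_architecture(rng):
    """Draw one candidate architecture uniformly from the search space."""
    return {name: rng.choice(choices) for name, choices in SEARCH_SPACE.items()}

def proxy_score(arch):
    """Stand-in for a cheap performance estimate (proxy, surrogate or
    predictor). In practice this would be a few epochs of training, a
    zero-cost proxy, or a learned predictor; here it is a made-up heuristic."""
    return arch["depth"] * 0.1 + arch["width"] * 0.01 - arch["kernel"] * 0.05

def random_search(n_trials=20, seed=0):
    """Evaluate n_trials random candidates and keep the best-scoring one."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(n_trials):
        arch = sample_architecture(rng)
        score = proxy_score(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

if __name__ == "__main__":
    arch, score = random_search()
    print(f"Best architecture: {arch} (proxy score {score:.2f})")
```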


Tutorial #2: Blockchain, Semantic Web and Decentralized AI (BCSWDAI)

Presenter: Abdullah Uz Tansel (Baruch College and The Graduate Center, City University of New York)


Keywords: Agentic AI, Blockchain, Databases, Decentralized Autonomous Organizations (DAO), Decentralized Identity, Decentralized AI, Intelligent Agents, Agent Communication Protocols, Trust

Description: The exchange of trusted information and knowledge among people plays an essential role in every aspect of their lives: socially, economically, and politically. Over the past 70 years, networks have transformed our lives in unimaginable ways by making digital information ubiquitous. This trend will surely continue at an accelerating pace in the years ahead, driven by innovative technologies such as Blockchain, Web3, the Semantic Web and AI. The tutorial covers Blockchain, the Semantic Web and Decentralized AI, a synergistic combination for innovative applications.

Blockchain is a foundational innovation for keeping tamper-proof (trusted) data in a permanent, immutable, decentralized, global, and trustless ledger. It is a new and rapidly evolving field that combines distributed computing, databases, networks, cryptography, and economics. Because data is securely recorded and shared in a blockchain database system, it allows people, organizations, and machines to digitize their current relationships as well as to form new, secure digital ones. Moreover, new advances in Web 3.0 are taking place in which individuals, organizations and machines are empowered by a superior system of digital identity and trust, enabling new services and products in many domains.
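As a toy illustration of why chained hashing makes recorded data tamper-evident (a drastic simplification of a real blockchain, and not part of the tutorial material), the sketch below links each record to the hash of its predecessor:

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents, including the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    """Append a new block that commits to the hash of the previous block."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash({k: v for k, v in block.items() if k != "hash"})
    chain.append(block)
    return chain

def verify_chain(chain):
    """Recompute every hash; any edited block breaks the links after it."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append_block(chain, {"from": "alice", "to": "bob", "amount": 5})
append_block(chain, {"from": "bob", "to": "carol", "amount": 2})
print(verify_chain(chain))          # True
chain[0]["data"]["amount"] = 500    # tamper with history
print(verify_chain(chain))          # False
```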

The Semantic Web enables the explicit representation of knowledge in ontologies and allows machines to deduce implicit knowledge from it, paving the way for machines to process knowledge and make decisions.
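As a small illustration of that idea, the sketch below encodes a few explicit facts in Turtle and uses a SPARQL property-path query to deduce a fact that is never stated directly; the rdflib Python library and the example ontology are assumed stand-ins, not material from the tutorial.

```python
from rdflib import Graph

# A tiny ontology: explicit facts plus a class hierarchy, in Turtle.
TTL = """
@prefix ex:   <http://example.org/> .
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:Dog    rdfs:subClassOf ex:Mammal .
ex:Mammal rdfs:subClassOf ex:Animal .
ex:rex    rdf:type ex:Dog .
"""

# Nothing states directly that ex:rex is an Animal; the query walks the
# rdfs:subClassOf hierarchy to deduce it from the explicit triples.
QUERY = """
PREFIX ex:   <http://example.org/>
PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?individual WHERE {
  ?individual rdf:type/rdfs:subClassOf* ex:Animal .
}
"""

g = Graph()
g.parse(data=TTL, format="turtle")
for row in g.query(QUERY):
    print(row.individual)   # -> http://example.org/rex
```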


Tutorial #3: Adaptive Machine Learning

Presenters: Heitor Murilo Gomes (Victoria University of Wellington), Anton Lee (Victoria University of Wellington), Yibin Sun (University of Waikato)


Keywords: Artificial Intelligence, Machine Learning, Data Streams, Online Continual Learning, Concept Drifts

Description: Adaptive Machine Learning (AML) is a hands-on tutorial that introduces real-time, incremental learning techniques for streaming and continually evolving data. Using CapyMOA, an open-source Python library, participants will explore practical tools and algorithms that adapt to changing data distributions, enabling robust, low-latency learning in dynamic environments. Ideal for researchers and practitioners aiming to build scalable, adaptive solutions.
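CapyMOA provides its own stream and evaluation APIs; the minimal sketch below only illustrates the prequential (test-then-train) pattern that such stream learners follow, using scikit-learn's SGDClassifier and a synthetic drifting stream as stand-ins rather than the tutorial's actual code.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def drifting_stream(n_samples, seed=0):
    """Synthetic binary stream whose decision boundary flips halfway through
    (an abrupt concept drift)."""
    rng = np.random.default_rng(seed)
    for t in range(n_samples):
        x = rng.normal(size=2)
        label = int(x[0] + x[1] > 0)
        if t >= n_samples // 2:      # concept drift: the boundary flips
            label = 1 - label
        yield x.reshape(1, -1), np.array([label])

model = SGDClassifier()
correct, seen = 0, 0

# Prequential loop: predict on each arriving example first,
# then use that example to update the model incrementally.
for x, y in drifting_stream(2000):
    if seen > 0:
        correct += int(model.predict(x)[0] == y[0])
    model.partial_fit(x, y, classes=[0, 1])
    seen += 1

print(f"Prequential accuracy: {correct / (seen - 1):.3f}")
```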


Tutorial #4: Neural Network Reprogrammability: Towards A Unified Framework for Parameter-Efficient Model Adaptation (NNR-PRICAI2025)

Presenters: Dr. Feng Liu (The University of Melbourne / RIKEN AIP), Dr. Zesheng Ye (The University of Melbourne)


Keywords: Machine Learning, Deep Learning, Foundation Models, Model Adaptation, Parameter-Efficient Fine-Tuning (PEFT), Prompt Engineering, Transfer Learning, Trustworthy AI, Neural Network Reprogrammability, Model Reprogramming (MR), Prompt Tuning (PT), In-Context Learning (ICL), Chain-of-Thought Reasoning (CoT), large-language models (LLMs), Vision-Language Models (VLMs)

Description: The era of large-scale foundation models (e.g., LLMs, VLMs) presents a critical challenge: adapting them to new tasks via traditional fine-tuning becomes prohibitively expensive, which creates barriers for researchers and practitioners with limited resources. This tutorial introduces Neural Network Reprogrammability, a new perspective and paradigm for reusing pre-trained models without modifying their parameters, and thus without costly retraining. With this concept, we demonstrate how seemingly disparate parameter-efficient fine-tuning techniques, namely model reprogramming (MR), prompt tuning (PT), and in-context learning (ICL), which were previously studied in isolation, share fundamental principles that can be unified under a coherent framework.

The tutorial provides both theoretical foundations and practical insights, illustrating how to repurpose pre-trained models for new tasks by harnessing reprogrammability, while reducing computational costs by orders of magnitude compared to traditional fine-tuning. Attendees will learn to leverage the inherent input sensitivity of neural networks for constructive model adaptation, master concrete techniques for aligning pre-trained models' outputs to new tasks, and explore applications beyond visual recognition and text generation to diverse domains like healthcare and time-series analysis.
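As a rough sketch of the model reprogramming idea only (with a tiny placeholder backbone and random data, not the presenters' method or code), the example below keeps a "pre-trained" model frozen and trains just an additive input perturbation plus an output label mapping:

```python
import torch
import torch.nn as nn

# Frozen "pre-trained" backbone (a stand-in; in practice a large model
# loaded from a checkpoint).
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
for p in backbone.parameters():
    p.requires_grad = False

class Reprogrammer(nn.Module):
    """Model reprogramming: a trainable additive input perturbation plus an
    output-label mapping; the backbone's weights are never updated."""
    def __init__(self, backbone, n_target_classes=2):
        super().__init__()
        self.backbone = backbone
        self.delta = nn.Parameter(torch.zeros(1, 3, 32, 32))  # trainable input "program"
        # Map source-class logits to target classes (here a linear map; a
        # fixed many-to-one assignment is also common).
        self.label_map = nn.Linear(10, n_target_classes)

    def forward(self, x):
        return self.label_map(self.backbone(x + self.delta))

model = Reprogrammer(backbone)
trainable = [p for p in model.parameters() if p.requires_grad]
opt = torch.optim.Adam(trainable, lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# One illustrative update on random data standing in for the target task.
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 2, (8,))
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
print(f"loss: {loss.item():.3f}, trainable params: {sum(p.numel() for p in trainable)}")
```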


Tutorial #5: Evolutionary Neural Architecture Search: Methods and Theory

Presenters: Yanan Sun (Sichuan University), Zeqiong Lv (Sichuan University)


Keywords: Machine Learning, Deep Learning, Evolutionary Computation, Neural Architecture Search

Description: Deep Neural Networks (DNNs), as the cornerstone of deep learning, have demonstrated great success in diverse real-world applications such as image classification, natural language processing and speech recognition, to name a few. The architectures of DNNs play a crucial role in their performance and are usually designed manually, requiring rich expertise. However, such a design process is labor-intensive because of the trial and error involved, and it is hard to carry out in practice because the required expertise is rare.

Neural Architecture Search (NAS) is a technique that automatically designs promising DNN architectures by formulating the design process as an optimization problem. Among the optimizers used to solve NAS, Evolutionary Computation (EC) methods have demonstrated powerful ability and have drawn increasing attention.
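As a minimal sketch of this formulation (with an invented architecture encoding and a stand-in fitness function, not an algorithm from the tutorial), the example below evolves layer-width configurations with tournament selection and point mutation:

```python
import random

# Illustrative encoding: an architecture is a list of layer widths.
WIDTH_CHOICES = [16, 32, 64, 128]

def random_architecture(rng, n_layers=4):
    return [rng.choice(WIDTH_CHOICES) for _ in range(n_layers)]

def mutate(arch, rng):
    """Point mutation: resample the width of one randomly chosen layer."""
    child = list(arch)
    child[rng.randrange(len(child))] = rng.choice(WIDTH_CHOICES)
    return child

def fitness(arch):
    """Stand-in for validation accuracy after (proxy) training; in a real
    ENAS run this is the expensive step that performance estimation
    techniques aim to cheapen."""
    return sum(arch) / (1 + max(arch) - min(arch))

def evolve(pop_size=10, generations=20, seed=0):
    """Simple evolutionary loop: tournament selection, mutation,
    and replacement of the worst individual."""
    rng = random.Random(seed)
    population = [random_architecture(rng) for _ in range(pop_size)]
    for _ in range(generations):
        parent = max(rng.sample(population, 3), key=fitness)  # tournament
        child = mutate(parent, rng)
        worst = min(range(pop_size), key=lambda i: fitness(population[i]))
        if fitness(child) > fitness(population[worst]):
            population[worst] = child
    return max(population, key=fitness)

best = evolve()
print("Best architecture:", best, "fitness:", round(fitness(best), 2))
```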

This tutorial will provide a comprehensive introduction to NAS techniques based on EC, i.e., evolutionary neural architecture search (ENAS), for automatically designing the architectures of DNNs. Specifically, it will survey more than 200 recent papers on ENAS methods, organized around their core components, to systematically show their design principles and the justifications behind them. From this tutorial, the audience is expected to become familiar with ENAS in four aspects.


Tutorial #6: Science in the Fifth Paradigm: A Tutorial on Science Discovery with Artificial Intelligence (S5-TDAI)

Presenters: Tri Minh Nguyen (Deakin University), Truyen Tran (Deakin University), Sherif Abdulkader Tawfik Abbas (Deakin University)


Keywords: Scientific Discovery, AI for Science (AI4Science)

Description: This tutorial provides a comprehensive overview of the emerging fifth paradigm of scientific discovery, driven by artificial intelligence. We will cover the landscape from foundational AI methodologies—including geometric deep learning, self-supervised learning, and generative models—to their application across the scientific workflow. The tutorial will detail how AI is used to generate hypotheses, design and steer experiments, and interpret vast datasets. State-of-the-art breakthroughs will be showcased through case studies in materials science, drug discovery, and climate science. We will also address grand challenges such as data quality, model generalizability, and causality. Attendees will gain a principled understanding of the opportunities and pitfalls of AI for Science.