Research Webzine of the KAIST College of Engineering since 2014
Fall 2025 Vol. 25

The KAIST Data Intelligence Lab proposes Fairness-aware Sample Weighting (FSW), a new learning framework that computes sample weights to prevent unfair forgetting in class-incremental learning while jointly considering both accuracy and fairness.
PhD students Jaeyoung Park and Minsu Kim from the KAIST Data Intelligence Lab, led by Prof. Steven Euijong Whang, have developed a new learning framework that enables training accurate and fair AI models in class-incremental learning environments where unfair forgetting can occur. The proposed approach aims to reduce performance degradation during class-incremental learning, particularly when such degradation disproportionately affects specific groups.
Continual learning refers to a learning paradigm in which models are trained on data that arrive sequentially over time, rather than being trained on all data at once. Among various continual learning settings, class-incremental learning addresses scenarios where new classes are introduced at each learning stage, making it highly relevant to real-world applications. For example, consider a smartphone photo recognition system that incrementally learns to identify new people over time. The system may first recognize family members and later be updated to recognize friends. During this process, the model must not only learn the new classes but also retain its knowledge of previously learned individuals.
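To make the setting concrete, the following minimal sketch trains a toy classifier on two tasks in sequence and shows how accuracy on earlier classes can collapse after the update. The data, model, and training setup here are illustrative stand-ins, not the authors' code:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(class_ids, n=100):
    """Each class is a 2-D Gaussian blob centered at a distinct point."""
    xs, ys = [], []
    for c in class_ids:
        center = 3.0 * torch.tensor([c % 2, c // 2]).float()
        xs.append(torch.randn(n, 2) + center)
        ys.append(torch.full((n,), c))
    return torch.cat(xs), torch.cat(ys)

model = nn.Linear(2, 4)  # 4 total classes across all tasks
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Tasks arrive sequentially: classes {0, 1} first, then {2, 3};
# earlier tasks' data are not revisited.
for class_ids in ([0, 1], [2, 3]):
    x, y = make_task(class_ids)
    for _ in range(200):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# After the second task, accuracy on the first task's classes drops.
x0, y0 = make_task([0, 1])
acc = (model(x0).argmax(1) == y0).float().mean().item()
print(f"accuracy on earlier classes after the update: {acc:.2f}")
```

Running the sketch typically prints an accuracy far below the model's pre-update performance on classes 0 and 1, which is exactly the forgetting effect described above.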
However, most prior studies have focused primarily on maintaining overall accuracy, and how fairness issues emerge during the learning process has received relatively little attention. Returning to the example above, when the system is updated using conventional methods without fairness considerations, it may disproportionately forget how to recognize certain previously learned individuals, even though overall recognition accuracy remains high.
The researchers observed that forgetting in class-incremental learning does not affect all groups equally. When newly introduced data conflict with previously learned representations, prediction performance can degrade more severely for certain classes or sensitive groups. The team defines this phenomenon as unfair forgetting.
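One simple way to surface unfair forgetting is to compare per-class accuracy before and after an incremental update. The sketch below uses an illustrative disparity measure; the metric name and formula are ours, not necessarily the paper's exact definition:

```python
import numpy as np

def per_class_accuracy(y_true, y_pred, classes):
    """Accuracy restricted to each class separately."""
    return {c: float(np.mean(y_pred[y_true == c] == c)) for c in classes}

def forgetting_disparity(acc_before, acc_after):
    """Max minus min per-class accuracy drop; 0 means forgetting is even."""
    drops = [acc_before[c] - acc_after[c] for c in acc_before]
    return max(drops) - min(drops)

# Example: after an update, class 1 is forgotten far more than class 0,
# even though the average accuracy (0.77) may still look acceptable.
acc_before = {0: 0.95, 1: 0.94}
acc_after = {0: 0.93, 1: 0.61}
print(forgetting_disparity(acc_before, acc_after))  # ~0.31
```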
Figure 1: (a) A toy dataset for class-incremental learning. (b) Learning a new class (Class 2) causes unfair forgetting only on a specific previous class (Class 1). (c) The gradient of the new class (g2) conflicts with that of the previous class (g1). Fairness-aware Sample Weighting (FSW) adjusts g2 to reduce the conflict. (d) Unfair forgetting is mitigated with minimal loss in new-class accuracy.
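The gradient-conflict picture in Figure 1(c) can be stated in a few lines of code: two gradients conflict when their dot product is negative. The sketch below removes the conflict by projecting the new-class gradient off the old one, a standard remedy used here purely for illustration; FSW itself reduces the conflict by reweighting training samples rather than projecting gradients:

```python
import torch

g1 = torch.tensor([1.0, 0.0])   # gradient from a previously learned class
g2 = torch.tensor([-0.8, 0.6])  # gradient from the new class

# A negative dot product means the new-class update pushes the model
# against the old class, i.e., the gradients conflict.
if torch.dot(g1, g2) < 0:
    # Project g2 onto the plane orthogonal to g1 to remove the conflict.
    g2 = g2 - (torch.dot(g1, g2) / torch.dot(g1, g1)) * g1

print(torch.dot(g1, g2))  # tensor(0.): the conflict is gone
```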
To mitigate unfair forgetting in class-incremental learning, the researchers proposed a new learning framework called Fairness-aware Sample Weighting (FSW). FSW assigns each training sample a weight by jointly evaluating its impact on model accuracy and fairness, using linear programming to downweight samples that may harm disadvantaged groups. Through this mechanism, models can learn new classes while avoiding training dynamics that unfairly harm existing groups.
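As a rough illustration of the weighting step, the sketch below solves a small linear program with scipy.optimize.linprog. The per-sample accuracy and fairness scores, the harm budget, and the overall formulation are hypothetical stand-ins; the paper's actual optimization may differ:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical per-sample scores: a[i] is sample i's estimated benefit to
# accuracy, f[i] its estimated harm to fairness (negative = beneficial).
a = np.array([0.9, 0.8, 0.7, 0.6])
f = np.array([0.1, 0.9, -0.2, 0.8])

# Maximize total accuracy benefit (linprog minimizes, hence -a) while
# capping total fairness harm and keeping each weight in [0, 1].
res = linprog(
    c=-a,                    # objective: maximize a @ w
    A_ub=f.reshape(1, -1),   # constraint: f @ w <= harm budget
    b_ub=[0.3],
    bounds=[(0.0, 1.0)] * len(a),
)
print(np.round(res.x, 2))  # [1. 0.44 1. 0.]
```

The solver keeps full weight on samples that help both objectives and pushes the weight of the most fairness-harming samples toward zero, mirroring the downweighting behavior described above.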
Figure 2: Across diverse datasets, FSW achieves a better balance between accuracy and fairness compared to existing class-incremental learning methods.
The effectiveness of FSW was validated through experiments on datasets from diverse domains, including image recognition, natural language processing, and tabular data analysis. These experiments demonstrate that FSW is not limited to a specific task or data modality. Across a wide range of datasets, FSW consistently achieved substantial improvements in fairness metrics compared to existing class-incremental learning methods, while maintaining overall prediction accuracy at a comparable level.
This study highlights that fairness in class-incremental learning should not be treated as a post-hoc concern, but rather as a core component of the learning process itself. The proposed framework provides an important foundation for building trustworthy AI systems in continual learning environments.
This research will be presented under the title "Fair Class-Incremental Learning using Sample Weighting" at the ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD) 2026, a top conference in data science and data mining.