Incremental Learning: Maintaining Discrimination and Fairness in Class Incremental Learning (CVPR 2020)

Abstract

Knowledge distillation; a major cause of catastrophic forgetting is that the weights in the last fully connected layer are highly biased in class-incremental learning.
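The bias claim can be checked directly by comparing the L2 norms of the classifier's per-class weight vectors. Below is a minimal sketch with synthetic weights (the array shapes, scales, and the `mean_row_norm` helper are illustrative assumptions, not the paper's code): new-class rows are given larger magnitudes, mimicking the imbalance the paper reports after incremental training.

```python
import numpy as np

# Hypothetical final FC layer: one weight vector (row) per class.
# 20 "old" classes and 5 newly added classes; the larger scale on the
# new rows imitates the bias that appears after an incremental step.
rng = np.random.default_rng(0)
old_w = rng.normal(scale=1.0, size=(20, 64))
new_w = rng.normal(scale=2.0, size=(5, 64))

def mean_row_norm(w):
    """Mean L2 norm of the class-weight vectors (rows)."""
    return float(np.linalg.norm(w, axis=1).mean())

# New-class norms dwarf old-class norms, so logits (and hence
# predictions) skew toward the new classes.
print(mean_row_norm(old_w), mean_row_norm(new_w))
```

In a real model the same comparison would be run on the trained FC weight matrix, grouped by the task in which each class was introduced.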

Introduction


Conclusion

Maintains discrimination via knowledge distillation and maintains fairness via a method called weight aligning.
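Weight aligning can be sketched as rescaling the new-class weight vectors so that their average norm matches that of the old classes. The snippet below is a simplified illustration under that reading (the synthetic weights and the `weight_align` helper are my assumptions, not the authors' released code):

```python
import numpy as np

# Synthetic biased classifier: 20 old classes, 5 new classes whose
# weight vectors came out with larger norms after incremental training.
rng = np.random.default_rng(1)
old_w = rng.normal(scale=1.0, size=(20, 64))
new_w = rng.normal(scale=2.0, size=(5, 64))

def weight_align(old_w, new_w):
    """Rescale new-class weights so their mean L2 norm matches the old ones.

    gamma = mean_norm(old) / mean_norm(new); multiplying the new rows by
    gamma equalizes the average magnitudes, removing the logit bias
    toward new classes without retraining.
    """
    norms_old = np.linalg.norm(old_w, axis=1)
    norms_new = np.linalg.norm(new_w, axis=1)
    gamma = norms_old.mean() / norms_new.mean()
    return gamma * new_w

aligned_new_w = weight_align(old_w, new_w)
```

Because scaling a vector scales its norm by the same factor, the mean norm of the aligned new-class rows equals the old-class mean exactly, which is what makes this a cheap post-hoc correction.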

Key points: code is open-sourced; the approach builds on "Large Scale Incremental Learning" (CVPR 2019) and "Learning a Unified Classifier via Rebalancing"; it is an experiment-driven paper, also based on the rehearsal strategy; it finds a focused entry point for a contribution and backs it with strong experimental results.
