CUB-200-2011 Leaderboard
Update this leaderboard
References
[1] Wang, Yaming, Vlad I. Morariu, and Larry S. Davis. "Learning a discriminative filter bank within a CNN for fine-grained recognition." In CVPR. 2018.
[2] Yang, Ze, et al. "Learning to navigate for fine-grained classification." In ECCV. 2018.
[3] Dubey, Abhimanyu, et al. "Pairwise confusion for fine-grained visual classification." In ECCV. 2018.
[4] Chen, Yue, et al. "Destruction and construction learning for fine-grained image recognition." In CVPR. 2019.
[5] Luo, Wei, et al. "Cross-X learning for fine-grained visual categorization." In ICCV. 2019.
[6] Rao, Yongming, et al. "Counterfactual attention learning for fine-grained visual categorization and re-identification." In ICCV. 2021.
[7] Du, Ruoyi, et al. "Progressive learning of category-consistent multi-granularity features for fine-grained visual classification." IEEE Transactions on Pattern Analysis and Machine Intelligence (2021).
[8] Touvron, Hugo, et al. "Training data-efficient image transformers & distillation through attention." In ICML. 2021.
[9] Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. "'Why should I trust you?' Explaining the predictions of any classifier." In SIGKDD. 2016.
[10] Fong, Ruth C., and Andrea Vedaldi. "Interpretable explanations of black boxes by meaningful perturbation." In ICCV. 2017.
[11] Sundararajan, Mukund, Ankur Taly, and Qiqi Yan. "Axiomatic attribution for deep networks." In ICML. 2017.
[12] Smilkov, Daniel, et al. "Smoothgrad: removing noise by adding noise." arXiv preprint arXiv:1706.03825 (2017).
[13] Schulz, Karl, et al. "Restricting the Flow: Information Bottlenecks for Attribution." In ICLR. 2020.
[14] Goyal, Yash, et al. "Counterfactual visual explanations." In ICML. 2019.
[15] Huang, Zixuan, and Yin Li. "Interpretable and accurate fine-grained recognition via region grouping." In CVPR. 2020.
[16] Khakzar, Ashkan, et al. "Neural response interpretation through the lens of critical pathways." In CVPR. 2021.
[17] Chang, Dongliang, et al. "Making a Bird AI Expert Work for You and Me." arXiv preprint arXiv:2112.02747 (2021).
Contact
changdongliang@bupt.edu.cn