[Paper Review] Learning Attention from Attention: Efficient Self-Refinement Transformer for Face Super-Resolution
by 미뇽도리, 2023. 12. 26.
Paper link: https://www.ijcai.org/proceedings/2023/0115.pdf
Reason for selection: a face super-resolution method built on an attention mechanism.
[발표 PPT]
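Since the post selects this paper for its use of attention in face super-resolution, a minimal sketch of the generic scaled dot-product attention that transformer models build on may help. This is an illustrative NumPy example only, not the paper's self-refinement variant; all shapes and names here are assumptions for the toy example.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Generic scaled dot-product attention (illustrative only;
    not the paper's self-refinement transformer)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # (n_q, n_k) similarity logits
    # numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each query output is a weighted sum of values

# toy example: 4 query tokens, 6 key/value tokens, feature dim 8 (all assumed)
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(6, 8))
V = rng.normal(size=(6, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)
```

In a super-resolution transformer, Q, K, and V would come from learned projections of image-patch features rather than random arrays.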