
Why Are Transformers So Popular in Computer Vision? (2)

Posted by MSRAsia · 2021-10-21

Reason 2: Complementing Convolution

Convolution is a local operation: a convolution layer typically models only the relationships among neighboring pixels. The Transformer, by contrast, is a global operation: a Transformer layer can model the relationships among all pixels, so the two are highly complementary. The earliest work to exploit this complementarity was the non-local network [19], in which a small number of Transformer self-attention units were inserted at a few points in the original network to complement the convolutional backbone; this proved broadly effective for problems such as object detection, semantic segmentation, and video action recognition.
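
To make this concrete, below is a minimal sketch of a non-local style self-attention block in the spirit of [19], written in PyTorch; the layer layout and the 1x1 projections are illustrative, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalBlock(nn.Module):
    """A self-attention unit that can be dropped into a conv network,
    letting every pixel attend to every other pixel (a global operation)."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, 1)
        self.key = nn.Conv2d(channels, channels, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.out = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (B, HW, C)
        k = self.key(x).flatten(2)                    # (B, C, HW)
        v = self.value(x).flatten(2).transpose(1, 2)  # (B, HW, C)
        attn = F.softmax(q @ k / c ** 0.5, dim=-1)    # (B, HW, HW): all-pairs weights
        y = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + self.out(y)                        # residual keeps conv features intact
```

The residual connection is what makes the block complementary: the convolutional features pass through unchanged, and the global attention output is added on top.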

Later work found that, in vision, non-local networks struggle to truly learn second-order relationships between pixels [28]. To address this, researchers have proposed improvements to the model, such as the disentangled non-local network [29].

Reason 3: Stronger Modeling Capability

Convolution can be viewed as template matching: the same template is used to filter every position in the image. The attention unit in a Transformer, by contrast, performs adaptive filtering, with the template weights determined by the compatibility of each pair of pixels. This adaptive computation gives the module stronger modeling capability.
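
The difference shows up in a toy 1-D comparison, assuming PyTorch and illustrative shapes: the convolution applies one fixed template everywhere, while the attention weights are recomputed at every position from query-key compatibility.

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 8, 16)                  # (batch, channels, positions)

# Template matching: the same 3-tap kernel filters every position.
w = torch.randn(8, 8, 3)
y_conv = F.conv1d(x, w, padding=1)         # (1, 8, 16)

# Adaptive filtering: each position's weights come from query-key compatibility.
q = k = v = x.transpose(1, 2)              # (1, 16, 8): one token per position
attn = F.softmax(q @ k.transpose(1, 2) / 8 ** 0.5, dim=-1)  # (1, 16, 16)
y_attn = attn @ v                          # (1, 16, 8)
```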

The earliest methods to apply such an adaptive computation module to vision backbones were the local relation network LR-Net [30] and SASA [31]. Both restrict the self-attention computation to a local sliding window and, at the same theoretical computational complexity, achieve better accuracy than ResNet. In practice, however, despite matching ResNet's theoretical complexity, they run much slower. A major reason is that different queries use different key sets, as shown in Figure 2 (left), which is unfriendly to memory access.
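
The memory-access pattern can be sketched as follows, assuming PyTorch; `F.unfold` here stands in for the per-query key gathering, and the window size is illustrative. Every pixel gets copied into the key set of each of its k*k neighbors, which is what hurts real-world speed.

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 8, 16, 16)                        # (B, C, H, W)
b, c, h, w = x.shape
k = 7                                                # sliding-window size

q = x.flatten(2).transpose(1, 2)                     # (B, HW, C): one query per pixel
# Each query's keys are its own k*k neighborhood, duplicated in memory.
keys = F.unfold(x, kernel_size=k, padding=k // 2)    # (B, C*k*k, HW)
keys = keys.view(b, c, k * k, h * w).permute(0, 3, 2, 1)  # (B, HW, k*k, C)
attn = torch.softmax((q.unsqueeze(2) * keys).sum(-1) / c ** 0.5, dim=-1)  # (B, HW, k*k)
```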

Swin Transformer introduced a new local window design: shifted windows. This approach partitions the image into non-overlapping windows so that, within a given window, all queries share the same key set, which yields much better real-world speed. In the next layer, the window configuration shifts toward the lower right by half a window, creating connections between pixels that fell in different windows in the previous layer.
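
Below is a minimal sketch of the window partition and the half-window shift, assuming PyTorch; the helper name and tensor sizes are illustrative, not Swin's exact code. Because attention runs within each partitioned window, all queries in a window share one key set.

```python
import torch

def window_partition(x, ws):
    # x: (B, H, W, C) -> (num_windows * B, ws * ws, C)
    b, h, w, c = x.shape
    x = x.view(b, h // ws, ws, w // ws, ws, c)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, c)

x = torch.randn(1, 8, 8, 32)        # (B, H, W, C)
ws = 4                              # window size

windows = window_partition(x, ws)   # layer l: regular, non-overlapping windows

# Layer l+1: shift the feature map by half a window before partitioning, so
# pixels from neighboring layer-l windows now fall in the same window.
x_shifted = torch.roll(x, shifts=(-ws // 2, -ws // 2), dims=(1, 2))
shifted_windows = window_partition(x_shifted, ws)   # attention runs within these
```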

Reason 4: Scalability to Large Models and Large Data

In NLP, Transformer models have demonstrated strong scalability to large models and large data. In Figure 6, the blue curve shows how rapidly NLP model sizes have grown in recent years. Everyone has witnessed the astonishing capabilities of large models such as Microsoft's Turing model, Google's T5 model, and OpenAI's GPT-3 model.

The advent of vision Transformers has laid an important foundation for scaling up vision models. The largest vision model to date is Google's 15-billion-parameter ViT-MoE model [32], and these large models have set new records on ImageNet-1K classification.

Figure 6: The evolution of model sizes in NLP and in computer vision

Reason 5: Better Connecting Vision and Language

In earlier vision problems, researchers typically handled only tens or hundreds of object categories; for example, the COCO detection task covers 80 object categories, while the ADE20K semantic segmentation task covers 150. The invention and development of vision Transformer models brings vision models and NLP models closer together, which facilitates joint vision-language modeling and thereby connects vision tasks to the full range of concepts expressed in language. The pioneering work in this direction includes OpenAI's CLIP [33] and DALL-E [34] models.
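
As a rough illustration of such joint modeling, here is a toy sketch of a CLIP-style [33] image-text contrastive objective, assuming PyTorch; the two encoders are stubbed with random embeddings, and the temperature is a placeholder value.

```python
import torch
import torch.nn.functional as F

# Stand-ins for the outputs of an image encoder and a text encoder.
img_emb = F.normalize(torch.randn(4, 512), dim=-1)
txt_emb = F.normalize(torch.randn(4, 512), dim=-1)

logits = img_emb @ txt_emb.t() / 0.07     # cosine similarities / temperature
labels = torch.arange(4)                  # matched image-text pairs on the diagonal
loss = (F.cross_entropy(logits, labels) +
        F.cross_entropy(logits.t(), labels)) / 2
```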

Given the many advantages above, we believe vision Transformers will open a new era of computer vision modeling, and we look forward to academia and industry working together to further explore the new opportunities and challenges this modeling approach brings to the vision field.

References:

[1] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. ICLR 2021

[2] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. ICCV 2021

[3] Ze Liu, Jia Ning, Yue Cao, Yixuan Wei, Zheng Zhang, Stephen Lin, Han Hu. Video Swin Transformer. Tech report 2021

[4] Zhenda Xie, Yutong Lin, Zhuliang Yao, Zheng Zhang, Qi Dai, Yue Cao, Han Hu. Self-Supervised Learning with Swin Transformers. Tech report 2021

[5] Chunyuan Li, Jianwei Yang, Pengchuan Zhang, Mei Gao, Bin Xiao, Xiyang Dai, Lu Yuan, Jianfeng Gao. Efficient Self-supervised Vision Transformers for Representation Learning. Tech report 2021

[6] Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, Radu Timofte. SwinIR: Image Restoration Using Swin Transformer. Tech report 2021

[7] https://github.com/layumi/Person_reID_baseline_pytorch

[8] Hu Cao, Yueyue Wang, Joy Chen, Dongsheng Jiang, Xiaopeng Zhang, Qi Tian, Manning Wang. Swin-Unet: Unet-like Pure Transformer for Medical Image Segmentation. Tech report 2021

[9] Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou. Training data-efficient image transformers & distillation through attention. Tech report 2021

[10] Yawei Li, Kai Zhang, Jiezhang Cao, Radu Timofte, Luc Van Gool. LocalViT: Bringing Locality to Vision Transformers. Tech report 2021

[11] Xiangxiang Chu, Zhi Tian, Yuqing Wang, Bo Zhang, Haibing Ren, Xiaolin Wei, Huaxia Xia, Chunhua Shen. Twins: Revisiting the Design of Spatial Attention in Vision Transformers. Tech report 2021

[12] Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao. Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions. ICCV 2021

[13] Li Yuan, Yunpeng Chen, Tao Wang, Weihao Yu, Yujun Shi, Zihang Jiang, Francis EH Tay, Jiashi Feng, Shuicheng Yan. Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet. Tech report 2021

[14] Pengchuan Zhang, Xiyang Dai, Jianwei Yang, Bin Xiao, Lu Yuan, Lei Zhang, Jianfeng Gao. Multi-Scale Vision Longformer: A New Vision Transformer for High-Resolution Image Encoding. Tech report 2021

[15] Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang. CvT: Introducing Convolutions to Vision Transformers. ICCV 2021

[16] Xiaoyi Dong, Jianmin Bao, Dongdong Chen, Weiming Zhang, Nenghai Yu, Lu Yuan, Dong Chen, Baining Guo. CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows. Tech report 2021

[17] Jianwei Yang, Chunyuan Li, Pengchuan Zhang, Xiyang Dai, Bin Xiao, Lu Yuan, Jianfeng Gao. Focal Self-attention for Local-Global Interactions in Vision Transformers. Tech report 2021

[18] Zilong Huang, Youcheng Ben, Guozhong Luo, Pei Cheng, Gang Yu, Bin Fu. Shuffle Transformer: Rethinking Spatial Shuffle for Vision Transformer. Tech report 2021

[19] Xiaolong Wang, Ross Girshick, Abhinav Gupta, Kaiming He. Non-local Neural Networks. CVPR 2018

[20] Yuhui Yuan, Lang Huang, Jianyuan Guo, Chao Zhang, Xilin Chen, Jingdong Wang. OCNet: Object Context for Semantic Segmentation. IJCV 2021

[21] Han Hu, Jiayuan Gu, Zheng Zhang, Jifeng Dai, Yichen Wei. Relation Networks for Object Detection. CVPR 2018

[22] Jiarui Xu, Yue Cao, Zheng Zhang, Han Hu. Spatial-Temporal Relation Networks for Multi-Object Tracking. ICCV 2019

[23] Yihong Chen, Yue Cao, Han Hu, Liwei Wang. Memory Enhanced Global-Local Aggregation for Video Object Detection. CVPR 2020

[24] Jiajun Deng, Yingwei Pan, Ting Yao, Wengang Zhou, Houqiang Li, Tao Mei. Relation Distillation Networks for Video Object Detection. ICCV 2019

[25] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko. End-to-End Object Detection with Transformers. ECCV 2020

[26] Jiayuan Gu, Han Hu, Liwei Wang, Yichen Wei, Jifeng Dai. Learning Region Features for Object Detection. ECCV 2018

[27] Cheng Chi, Fangyun Wei, Han Hu. RelationNet++: Bridging Visual Representations for Object Detection via Transformer Decoder. NeurIPS 2020

[28] Yue Cao, Jiarui Xu, Stephen Lin, Fangyun Wei, Han Hu. GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond. ICCV workshop 2019

[29] Minghao Yin, Zhuliang Yao, Yue Cao, Xiu Li, Zheng Zhang, Stephen Lin, Han Hu. Disentangled Non-Local Neural Networks. ECCV 2020

[30] Han Hu, Zheng Zhang, Zhenda Xie, Stephen Lin. Local Relation Networks for Image Recognition. ICCV 2019

[31] Prajit Ramachandran, Niki Parmar, Ashish Vaswani, Irwan Bello, Anselm Levskaya, Jonathon Shlens. Stand-Alone Self-Attention in Vision Models. NeurIPS 2019

[32] Carlos Riquelme, Joan Puigcerver, Basil Mustafa, Maxim Neumann, Rodolphe Jenatton, André Susano Pinto, Daniel Keysers, Neil Houlsby. Scaling Vision with Sparse Mixture of Experts. Tech report 2021

[33] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. Learning Transferable Visual Models from Natural Language Supervision. Tech report 2021

[34] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever. Zero-Shot Text-to-Image Generation. Tech report 2021
