Title | Journal | Journal Categories | Citations | Publication Date |
---|---|---|---|---|
MixMAE: Mixed and masked autoencoder for efficient pretraining of hierarchical vision transformers | | | | 2022 |
ImageNet-21K pretraining for the masses | | | | 2021 |
A large-scale study of representation learning with the visual task adaptation benchmark | | | | 2019 |
Neural architecture search with reinforcement learning | | | | 2016 |
Unsupervised representation learning with deep convolutional generative adversarial networks | | | | 2015 |