Awesome Meta-Learning Papers

A summary of meta-learning papers, organized by research area and sorted by arXiv submission date.

Topics

Survey

Meta-Learning in Neural Networks: A Survey [paper]

  • Timothy Hospedales, Antreas Antoniou, Paul Micaelli, Amos Storkey

Meta-Learning [paper]

  • Joaquin Vanschoren

Meta-Learning: A Survey [paper]

  • Joaquin Vanschoren

Meta-learners’ learning dynamics are unlike learners’ [paper]

  • Neil C. Rabinowitz

Few-shot learning

Joint Distribution Matters: Deep Brownian Distance Covariance for Few-Shot Classification [paper]

  • Jiangtao Xie, Fei Long, Jiaming Lv, Qilong Wang, Peihua Li --CVPR 2022

Learning Prototype-oriented Set Representations for Meta-Learning [paper]

  • Dandan Guo, Long Tian, Minghe Zhang, Mingyuan Zhou, Hongyuan Zha --ICLR 2022

On the Role of Pre-training for Meta Few-Shot Learning [paper]

  • Chia-You Chen, Hsuan-Tien Lin, Gang Niu, Masashi Sugiyama --arXiv 2021

BOIL: Towards Representation Change for Few-shot Learning [paper]

  • Jaehoon Oh, Hyungjun Yoo, ChangHwan Kim, Se-Young Yun --ICLR 2021

On Episodes, Prototypical Networks, and Few-Shot Learning [paper]

  • Steinar Laenen, Luca Bertinetto --NeurIPS 2021

Bayesian Meta-Learning for the Few-Shot Setting via Deep Kernels [paper]

  • Massimiliano Patacchiola, Jack Turner, Elliot J. Crowley, Michael O'Boyle, Amos Storkey --NeurIPS 2020

Laplacian Regularized Few-Shot Learning [paper]

  • Imtiaz Masud Ziko, Jose Dolz, Eric Granger, Ismail Ben Ayed --ICML 2020

Few-shot Sequence Learning with Transformer

  • Lajanugen Logeswaran, Ann Lee, Myle Ott, Honglak Lee, Marc'Aurelio Ranzato, Arthur Szlam --NeurIPS 2020 #Meta-Learning

Prototype Rectification for Few-Shot Learning [paper]

  • Jinlu Liu, Liang Song, Yongqiang Qin --ECCV 2020

When Does Self-supervision Improve Few-shot Learning? [paper]

  • Jong-Chyi Su, Subhransu Maji, Bharath Hariharan --ECCV 2020

Cross Attention Network for Few-shot Classification [paper]

  • Ruibing Hou, Hong Chang, Bingpeng Ma, Shiguang Shan, Xilin Chen --NeurIPS 2019

Learning to Learn via Self-Critique [paper]

  • Antreas Antoniou, Amos Storkey --NeurIPS 2019

Learning from the Past: Continual Meta-Learning with Bayesian Graph Neural Networks [paper]

  • Yadan Luo, Zi Huang, Zheng Zhang, Ziwei Wang, Mahsa Baktashmotlagh, Yang Yang --AAAI 2020

Few-Shot Learning with Global Class Representations [paper]

  • Tiange Luo, Aoxue Li, Tao Xiang, Weiran Huang, Liwei Wang --ICCV 2019

TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning [paper]

  • Sung Whan Yoon, Jun Seo, Jaekyun Moon --ICML 2019

Learning to Learn with Conditional Class Dependencies [paper]

  • Xiang Jiang, Mohammad Havaei, Farshid Varno, Gabriel Chartrand, Nicolas Chapados, Stan Matwin --ICLR 2019

Finding Task-Relevant Features for Few-Shot Learning by Category Traversal [paper]

  • Hongyang Li, David Eigen, Samuel Dodge, Matthew Zeiler, Xiaogang Wang --CVPR 2019

TAFE-Net: Task-Aware Feature Embeddings for Low Shot Learning [paper]

  • Xin Wang, Fisher Yu, Ruth Wang, Trevor Darrell, Joseph E. Gonzalez --CVPR 2019

Variational Prototyping-Encoder: One-Shot Learning with Prototypical Images [paper]

  • Junsik Kim, Tae-Hyun Oh, Seokju Lee, Fei Pan, In So Kweon --CVPR 2019

LCC: Learning to Customize and Combine Neural Networks for Few-Shot Learning [paper]

  • Yaoyao Liu, Qianru Sun, An-An Liu, Yuting Su, Bernt Schiele, Tat-Seng Chua --CVPR 2019

Meta-Learning with Differentiable Convex Optimization [paper]

  • Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, Stefano Soatto --CVPR 2019

Dense Classification and Implanting for Few-Shot Learning [paper]

  • Yann Lifchitz, Yannis Avrithis, Sylvaine Picard, Andrei Bursuc --CVPR 2019

Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples

  • Eleni Triantafillou, Tyler Zhu, Vincent Dumoulin, Pascal Lamblin, Kelvin Xu, Ross Goroshin, Carles Gelada, Kevin Swersky, Pierre-Antoine Manzagol, Hugo Larochelle -- arXiv 2019

Adaptive Cross-Modal Few-Shot Learning [paper]

  • Chen Xing, Negar Rostamzadeh, Boris N. Oreshkin, Pedro O. Pinheiro --arXiv 2019

Meta-Learning with Latent Embedding Optimization [paper]

  • Andrei A. Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, Raia Hadsell -- ICLR 2019

A Closer Look at Few-shot Classification [paper]

  • Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Frank Wang, Jia-Bin Huang -- ICLR 2019

Learning to Propagate Labels: Transductive Propagation Network for Few-shot Learning [paper]

  • Yanbin Liu, Juho Lee, Minseop Park, Saehoon Kim, Eunho Yang, Sung Ju Hwang, Yi Yang -- ICLR 2019

Dynamic Few-Shot Visual Learning without Forgetting [paper]

  • Spyros Gidaris, Nikos Komodakis --arXiv 2019

Adaptive Posterior Learning: few-shot learning with a surprise-based memory module

  • Tiago Ramalho, Marta Garnelo --ICLR 2019

How To Train Your MAML [paper]

  • Antreas Antoniou, Harrison Edwards, Amos Storkey -- ICLR 2019

TADAM: Task dependent adaptive metric for improved few-shot learning [paper]

  • Boris N. Oreshkin, Pau Rodriguez, Alexandre Lacoste --NeurIPS 2018

Few-shot Learning with Meta Metric Learners

  • Yu Cheng, Mo Yu, Xiaoxiao Guo, Bowen Zhou --NIPS 2017 workshop on Meta-Learning

Learning Embedding Adaptation for Few-Shot Learning [paper]

  • Han-Jia Ye, Hexiang Hu, De-Chuan Zhan, Fei Sha --arXiv 2018

Meta-Transfer Learning for Few-Shot Learning [paper]

  • Qianru Sun, Yaoyao Liu, Tat-Seng Chua, Bernt Schiele -- arXiv 2018

Task-Agnostic Meta-Learning for Few-shot Learning

  • Muhammad Abdullah Jamal, Guo-Jun Qi, and Mubarak Shah --arXiv 2018

Few-Shot Learning with Graph Neural Networks [paper]

  • Victor Garcia, Joan Bruna -- ICLR 2018

Prototypical Networks for Few-shot Learning [paper]

  • Jake Snell, Kevin Swersky, Richard S. Zemel --NIPS 2017
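
As a concrete anchor for the metric-based entries above, here is a minimal JAX sketch of the Prototypical Networks classification rule: class prototypes are mean support embeddings and queries are scored by a softmax over negative squared Euclidean distances. The `embed` function, the toy episode, and all names are placeholders added for illustration, not the paper's code.

```python
import jax.numpy as jnp
from jax.nn import log_softmax

def embed(x):
    # Stand-in for the learned embedding network f_phi used in the paper.
    return x

def prototypes(x_support, y_support, num_classes):
    z = embed(x_support)                                   # (N, D) support embeddings
    return jnp.stack([z[y_support == c].mean(axis=0)       # one mean per class -> (C, D)
                      for c in range(num_classes)])

def query_log_probs(x_query, protos):
    z = embed(x_query)                                     # (M, D) query embeddings
    d2 = ((z[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)  # (M, C) squared distances
    return log_softmax(-d2, axis=-1)                       # nearest prototype wins

# toy 3-way episode with 2-D "features"
x_s = jnp.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.], [9., 0.], [9., 1.]])
y_s = jnp.array([0, 0, 1, 1, 2, 2])
protos = prototypes(x_s, y_s, num_classes=3)
print(query_log_probs(jnp.array([[5., 5.5]]), protos))     # highest log-prob for class 1
```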

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks [paper]

  • Chelsea Finn, Pieter Abbeel, Sergey Levine -- ICML 2017
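
Many of the gradient-based methods in this section build on MAML's bi-level structure; the sketch below illustrates it in JAX on a toy linear-regression task. The model, step sizes, and data are invented for illustration and make no attempt to reproduce the paper's experiments.

```python
import jax
import jax.numpy as jnp

def predict(params, x):
    w, b = params
    return x @ w + b

def loss(params, x, y):
    return jnp.mean((predict(params, x) - y) ** 2)

def inner_adapt(params, x_support, y_support, alpha=0.01):
    # One SGD step on the task's support set (task-specific adaptation).
    grads = jax.grad(loss)(params, x_support, y_support)
    return [p - alpha * g for p, g in zip(params, grads)]

def maml_objective(params, x_s, y_s, x_q, y_q):
    # Query loss of the adapted parameters; differentiating this w.r.t. the
    # initial `params` yields the (second-order) meta-gradient.
    return loss(inner_adapt(params, x_s, y_s), x_q, y_q)

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
params = [jnp.zeros((1, 1)), jnp.zeros(1)]                  # meta-initialization
x_s = jax.random.normal(k1, (5, 1)); y_s = 3.0 * x_s + 1.0  # one toy task: support set
x_q = jax.random.normal(k2, (5, 1)); y_q = 3.0 * x_q + 1.0  # same task: query set
meta_grads = jax.grad(maml_objective)(params, x_s, y_s, x_q, y_q)
params = [p - 0.1 * g for p, g in zip(params, meta_grads)]  # outer (meta) update
```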

Large-scale dataset

Image Deformation Meta-Networks for One-Shot Learning [paper]

  • Zitian Chen, Yanwei Fu, Yu-Xiong Wang, Lin Ma, Wei Liu, Martial Hebert --CVPR 2019

Class imbalance

Balanced Meta-Softmax for Long-Tailed Visual Recognition [paper]

  • Jiawei Ren, Cunjun Yu, Shunan Sheng, Xiao Ma, Haiyu Zhao, Shuai Yi, Hongsheng Li --NeurIPS 2020
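
For orientation, the snippet below sketches only the Balanced Softmax term from this line of work as commonly described (logits shifted by log class counts before the softmax); the meta-learned sampler that gives the paper its name is not shown, and all names and numbers are illustrative.

```python
import jax.numpy as jnp
from jax.nn import log_softmax

def balanced_softmax_loss(logits, labels, class_counts):
    # Shift each logit by the log class frequency so the softmax accounts for
    # the long-tailed label distribution during training.
    adjusted = logits + jnp.log(class_counts)              # (N, C) + (C,)
    logp = log_softmax(adjusted, axis=-1)
    nll = -jnp.take_along_axis(logp, labels[:, None], axis=-1)
    return nll.mean()

# toy batch: 3 classes with training counts 1000 / 100 / 10
logits = jnp.array([[2.0, 1.0, 0.5], [0.1, 0.3, 0.2]])
labels = jnp.array([0, 2])
print(balanced_softmax_loss(logits, labels, jnp.array([1000., 100., 10.])))
```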

MESA: Boost Ensemble Imbalanced Learning with MEta-SAmpler [paper]

  • Zhining Liu, Pengfei Wei, Jing Jiang, Wei Cao, Jiang Bian, Yi Chang --NeurIPS 2020

Learning to Balance: Bayesian Meta-Learning for Imbalanced and Out-of-distribution Tasks [paper]

  • Donghyun Na, Hae Beom Lee, Hayeon Lee, Saehoon Kim, Minseop Park, Eunho Yang, Sung Ju Hwang --ICLR 2020

Meta-weight-net: Learning an explicit mapping for sample weighting [paper]

  • Jun Shu, Qi Xie, Lixuan Yi, Qian Zhao, Sanping Zhou, Zongben Xu, Deyu Meng --NeurIPS 2019

Learning to Reweight Examples for Robust Deep Learning [paper]

  • Mengye Ren, Wenyuan Zeng, Bin Yang, Raquel Urtasun --ICML 2018

Learning to Model the Tail [paper]

  • Yu-Xiong Wang, Deva Ramanan, Martial Hebert --NeurIPS 2017

Video retargeting

MetaPix: Few-Shot Video Retargeting [paper]

  • Jessica Lee, Deva Ramanan, Rohit Girdhar --ICLR 2020

Object detection

Few-shot Object Detection via Feature Reweighting [paper]

  • Bingyi Kang, Zhuang Liu, Xin Wang, Fisher Yu, Jiashi Feng, Trevor Darrell --ICCV 2019

Segmentation

PANet: Few-Shot Image Semantic Segmentation with Prototype Alignment [paper]

  • Kaixin Wang, Jun Hao Liew, Yingtian Zou, Daquan Zhou, Jiashi Feng --ICCV 2019

NLP

Meta-Learning for Few-Shot NMT Adaptation [paper]

  • Amr Sharaf, Hany Hassan, Hal Daumé III --arXiv 2020

Learning to Few-Shot Learn Across Diverse Natural Language Classification Tasks [paper]

  • Trapit Bansal, Rishikesh Jha, Andrew McCallum --arXiv 2020

Compositional generalization through meta sequence-to-sequence learning [paper]

  • Brenden M. Lake --NeurIPS 2019

Few-Shot Representation Learning for Out-Of-Vocabulary Words [paper]

  • Ziniu Hu, Ting Chen, Kai-Wei Chang, Yizhou Sun --ACL 2019

Reinforcement learning

Offline Meta-Reinforcement Learning with Online Self-Supervision [paper]

  • Vitchyr Pong, Ashvin Nair, Laura Smith, Catherine Huang, Sergey Levine --ICML 2022

System-Agnostic Meta-Learning for MDP-based Dynamic Scheduling via Descriptive Policy [paper]

  • Hyun-Suk Lee --AISTATS 2022

Meta Learning MDPs with Linear Transition Models [paper]

  • Robert Müller, Aldo Pacchiano --AISTATS 2022

CoMPS: Continual Meta Policy Search [paper]

  • Glen Berseth, Zhiwei Zhang, Grace Zhang, Chelsea Finn, Sergey Levine --ICLR 2022

Modeling and Optimization Trade-off in Meta-learning [paper]

  • Katelyn Gao, Ozan Sener --NeurIPS 2020

Information-theoretic Task Selection for Meta-Reinforcement Learning [paper]

  • Ricardo Luna Gutierrez, Matteo Leonetti --NeurIPS 2020

On the Global Optimality of Model-Agnostic Meta-Learning: Reinforcement Learning and Supervised Learning [paper]

  • Lingxiao Wang, Qi Cai, Zhuoran Yang, Zhaoran Wang --ICML 2020

Generalized Reinforcement Meta Learning for Few-Shot Optimization [paper]

  • Raviteja Anantha, Stephen Pulman, Srinivas Chappidi --ICML 2020

VariBAD: A Very Good Method for Bayes-Adaptive Deep RL via Meta-Learning [paper]

  • Luisa Zintgraf, Kyriacos Shiarlis, Maximilian Igl, Sebastian Schulze, Yarin Gal, Katja Hofmann, Shimon Whiteson --ICLR 2020

Reinforcement Learning with Competitive Ensembles of Information-Constrained Primitives [paper]

  • Anirudh Goyal, Shagun Sodhani, Jonathan Binas, Xue Bin Peng, Sergey Levine, Yoshua Bengio --ICLR 2020

Meta-learning curiosity algorithms [paper]

  • Ferran Alet*, Martin F. Schneider*, Tomas Lozano-Perez, Leslie Pack Kaelbling --ICLR 2020

Meta-Q-Learning [paper]

  • Rasool Fakoor, Pratik Chaudhari, Stefano Soatto, Alexander J. Smola --ICLR 2020

Guided Meta-Policy Search [paper]

  • Russell Mendonca, Abhishek Gupta, Rosen Kralev, Pieter Abbeel, Sergey Levine, Chelsea Finn

AutoML

Learning meta-features for AutoML [paper]

  • Herilalaina Rakotoarison, Louisot Milijaona, Andry RASOANAIVO, Michele Sebag, Marc Schoenauer --ICLR 2022

Towards Fast Adaptation of Neural Architectures with Meta Learning [paper]

  • Dongze Lian, Yin Zheng, Yintao Xu, Yanxiong Lu, Leyu Lin, Peilin Zhao, Junzhou Huang, Shenghua Gao --ICLR 2020

Graph HyperNetworks for Neural Architecture Search [paper]

  • Chris Zhang, Mengye Ren, Raquel Urtasun --ICLR 2019

Fast Task-Aware Architecture Inference

  • Efi Kokiopoulou, Anja Hauth, Luciano Sbaiz, Andrea Gesmundo, Gabor Bartok, Jesse Berent --arXiv 2019

Bayesian Meta-network Architecture Learning

  • Albert Shaw, Bo Dai, Weiyang Liu, Le Song --arXiv 2018

Task-dependent

Meta-Learning with Fewer Tasks through Task Interpolation [paper]

  • Huaxiu Yao, Linjun Zhang, Chelsea Finn --ICLR 2022

Meta-Regularization by Enforcing Mutual-Exclusiveness [paper]

  • Edwin Pan, Pankaj Rajak, Shubham Shrivastava --arXiv 2021

Task-Robust Model-Agnostic Meta-Learning [paper]

  • Liam Collins, Aryan Mokhtari, Sanjay Shakkottai --NeurIPS 2020

Multimodal Model-Agnostic Meta-Learning via Task-Aware Modulation [paper]

  • Risto Vuorio, Shao-Hua Sun, Hexiang Hu, Joseph J. Lim --NeurIPS 2019

Meta-Learning with Warped Gradient Descent [paper]

  • Sebastian Flennerhag, Andrei A. Rusu, Razvan Pascanu, Hujun Yin, Raia Hadsell --arXiv 2019

TAFE-Net: Task-Aware Feature Embeddings for Low Shot Learning [paper]

  • Xin Wang, Fisher Yu, Ruth Wang, Trevor Darrell, Joseph E. Gonzalez --CVPR 2019

TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning [paper]

  • Sung Whan Yoon, Jun Seo, Jaekyun Moon --ICML 2019

Meta-Learning with Latent Embedding Optimization [paper]

  • Andrei A. Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, Raia Hadsell -- ICLR 2019

Fast Task-Aware Architecture Inference

  • Efi Kokiopoulou, Anja Hauth, Luciano Sbaiz, Andrea Gesmundo, Gabor Bartok, Jesse Berent --arXiv 2019

Task2Vec: Task Embedding for Meta-Learning

  • Alessandro Achille, Michael Lam, Rahul Tewari, Avinash Ravichandran, Subhransu Maji, Charless Fowlkes, Stefano Soatto, Pietro Perona --arXiv 2019

TADAM: Task dependent adaptive metric for improved few-shot learning

  • Boris N. Oreshkin, Pau Rodriguez, Alexandre Lacoste --NeurIPS 2018

MetaReg: Towards Domain Generalization using Meta-Regularization [paper]

  • Yogesh Balaji, Swami Sankaranarayanan, Rama Chellappa -- NIPS 2018

Heterogeneous task

Statistical Model Aggregation via Parameter Matching [paper]

  • Mikhail Yurochkin, Mayank Agarwal, Soumya Ghosh, Kristjan Greenewald, Trong Nghia Hoang --NeurIPS 2019

Hierarchically Structured Meta-learning [paper]

  • Huaxiu Yao, Ying Wei, Junzhou Huang, Zhenhui Li --ICML 2019

Hierarchical Meta Learning [paper]

  • Yingtian Zou, Jiashi Feng --arXiv 2019

Data augmentation & regularization

MetAug: Contrastive Learning via Meta Feature Augmentation [paper]

  • Jiangmeng Li, Wenwen Qiang, Changwen Zheng, Bing Su, Hui Xiong --ICML 2022

MetaInfoNet: Learning Task-Guided Information for Sample Reweighting [paper]

  • Hongxin Wei, Lei Feng, Rundong Wang, Bo An --arXiv 2020

Meta Dropout: Learning to Perturb Latent Features for Generalization [paper]

  • Hae Beom Lee, Taewook Nam, Eunho Yang, Sung Ju Hwang --ICLR 2020

Learning to Reweight Examples for Robust Deep Learning [paper]

  • Mengye Ren, Wenyuan Zeng, Bin Yang, Raquel Urtasun --ICML 2018

Lifelong learning

Optimizing Reusable Knowledge for Continual Learning via Metalearning [paper]

  • Julio Hurtado, Alain Raymond-Saez, Alvaro Soto --NeurIPS 2021

Learning where to learn: Gradient sparsity in meta and continual learning [paper]

  • Johannes von Oswald, Dominic Zhao, Seijin Kobayashi, Simon Schug, Massimo Caccia, Nicolas Zucchet, João Sacramento --NeurIPS 2021

Online-Within-Online Meta-Learning [paper]

  • Giulia Denevi, Dimitris Stamos, Carlo Ciliberto, Massimiliano Pontil

Reconciling meta-learning and continual learning with online mixtures of tasks [paper]

  • Ghassen Jerfel, Erin Grant, Thomas L. Griffiths, Katherine Heller --NeurIPS 2019

Meta-Learning Representations for Continual Learning [paper]

  • Khurram Javed, Martha White --NeurIPS 2019

Online Meta-Learning [paper]

  • Chelsea Finn, Aravind Rajeswaran, Sham Kakade, Sergey Levine --ICML 2019

Hierarchically Structured Meta-learning [paper]

  • Huaxiu Yao, Ying Wei, Junzhou Huang, Zhenhui Li --ICML 2019

A Neural-Symbolic Architecture for Inverse Graphics Improved by Lifelong Meta-Learning [paper]

  • Michael Kissner, Helmut Mayer --arXiv 2019

Incremental Learning-to-Learn with Statistical Guarantees [paper]

  • Giulia Denevi, Carlo Ciliberto, Dimitris Stamos, Massimiliano Pontil --arXiv 2018

Domain generalization

Meta-learning curiosity algorithms [paper]

  • Ferran Alet*, Martin F. Schneider*, Tomas Lozano-Perez, Leslie Pack Kaelbling --ICLR 2020

Domain Generalization via Model-Agnostic Learning of Semantic Features [paper]

  • Qi Dou, Daniel C. Castro, Konstantinos Kamnitsas, Ben Glocker

Learning to Generalize: Meta-Learning for Domain Generalization [paper]

  • Da Li, Yongxin Yang, Yi-Zhe Song, Timothy M. Hospedales --AAAI 2018

Bayesian inference

Stochastic Deep Networks with Linear Competing Units for Model-Agnostic Meta-Learning [paper]

  • Konstantinos I. Kalais, Sotirios Chatzis --ICML 2022

Meta-Learning with Variational Bayes [paper]

  • Lucas D. Lingle --arXiv 2021

Meta-Learning Acquisition Functions for Transfer Learning in Bayesian Optimization [paper]

  • Michael Volpp, Lukas Froehlich, Kirsten Fischer, Andreas Doerr, Stefan Falkner, Frank Hutter, Christian Daniel --ICLR 2020

Bayesian Meta Sampling for Fast Uncertainty Adaptation [paper]

  • Zhenyi Wang, Yang Zhao, Ping Yu, Ruiyi Zhang, Changyou Chen --ICLR 2020

Meta-Learning Mean Functions for Gaussian Processes [paper]

  • Vincent Fortuin, Heiko Strathmann, Gunnar Rätsch --NeurIPS 2019 workshop

Learning to Balance: Bayesian Meta-Learning for Imbalanced and Out-of-distribution Tasks [paper]

  • Donghyun Na, Hae Beom Lee, Hayeon Lee, Saehoon Kim, Minseop Park, Eunho Yang, Sung Ju Hwang --ICLR 2020

Meta-Learning without Memorization [paper]

  • Mingzhang Yin, George Tucker, Mingyuan Zhou, Sergey Levine, Chelsea Finn --ICLR 2020

Meta-Amortized Variational Inference and Learning [paper]

  • Mike Wu, Kristy Choi, Noah Goodman, Stefano Ermon --arXiv 2019

Amortized Bayesian Meta-Learning [paper]

  • Sachin Ravi, Alex Beatson --ICLR 2019

Neural Processes [paper]

  • Marta Garnelo, Jonathan Schwarz, Dan Rosenbaum, Fabio Viola, Danilo J. Rezende, S.M. Ali Eslami, Yee Whye Teh

Meta-Learning Probabilistic Inference For Prediction [paper]

  • Jonathan Gordon, John Bronskill, Matthias Bauer, Sebastian Nowozin, Richard E. Turner --ICLR 2019

Meta-Learning Priors for Efficient Online Bayesian Regression [paper]

  • James Harrison, Apoorva Sharma, Marco Pavone --WAFR 2018

Probabilistic Model-Agnostic Meta-Learning [paper]

  • Chelsea Finn, Kelvin Xu, Sergey Levine --arXiv 2018

Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions [paper]

  • Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas --ICLR 2018

Bayesian Model-Agnostic Meta-Learning [paper]

  • Taesup Kim, Jaesik Yoon, Ousmane Dia, Sungwoong Kim, Yoshua Bengio, Sungjin Ahn -- NIPS 2018

Meta-learning by adjusting priors based on extended PAC-Bayes theory [paper]

  • Ron Amit, Ron Meir --ICML 2018

Neural process

Neural Variational Dropout Processes [paper]

  • Insu Jeon, Youngjin Park, Gunhee Kim --ICLR 2022

Neural ODE Processes [paper]

  • Alexander Norcliffe, Cristian Bodnar, Ben Day, Jacob Moss, Pietro Liò --ICLR 2021

Convolutional Conditional Neural Processes [paper]

  • Jonathan Gordon, Wessel P. Bruinsma, Andrew Y. K. Foong, James Requeima, Yann Dubois, Richard E. Turner --ICLR 2020

Bootstrapping Neural Processes [paper]

  • Juho Lee, Yoonho Lee, Jungtaek Kim, Eunho Yang, Sung Ju Hwang, Yee Whye Teh --NeurIPS 2020

MetaFun: Meta-Learning with Iterative Functional Updates [paper]

  • Jin Xu, Jean-Francois Ton, Hyunjik Kim, Adam R. Kosiorek, Yee Whye Teh --ICML 2020

Sequential Neural Processes [paper]

  • Gautam Singh, Jaesik Yoon, Youngsung Son, Sungjin Ahn --NeurIPS 2019

Neural Processes [paper]

  • Marta Garnelo, Jonathan Schwarz, Dan Rosenbaum, Fabio Viola, Danilo J. Rezende, S.M. Ali Eslami, Yee Whye Teh --arXiv 2018

Conditional Neural Processes [paper]

  • Marta Garnelo, Dan Rosenbaum, Chris J. Maddison, Tiago Ramalho, David Saxton, Murray Shanahan, Yee Whye Teh, Danilo J. Rezende, S. M. Ali Eslami --ICML 2018

Configuration transfer

Online Hyperparameter Meta-Learning with Hypergradient Distillation [paper]

  • Hae Beom Lee, Hayeon Lee, JaeWoong Shin, Eunho Yang, Timothy Hospedales, Sung Ju Hwang --ICLR 2022

Bayesian Meta-Learning for the Few-Shot Setting via Deep Kernels [paper]

  • Massimiliano Patacchiola, Jack Turner, Elliot J. Crowley, Michael O'Boyle, Amos Storkey --NeurIPS 2020

Meta-Learning for Few-Shot NMT Adaptation [paper]

  • Amr Sharaf, Hany Hassan, Hal Daumé III --arXiv 2020

Fast Context Adaptation via Meta-Learning [paper]

  • Luisa M Zintgraf, Kyriacos Shiarlis, Vitaly Kurin, Katja Hofmann, Shimon Whiteson --ICML 2019

Zero-Shot Knowledge Distillation in Deep Networks [paper]

  • Gaurav Kumar Nayak, Konda Reddy Mopuri, Vaisakh Shaj, R. Venkatesh Babu, Anirban Chakraborty --ICML 2019

Toward Multimodal Model-Agnostic Meta-Learning [paper]

  • Risto Vuorio, Shao-Hua Sun, Hexiang Hu, Joseph J. Lim --arXiv 2019

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks [paper]

  • Chelsea Finn, Pieter Abbeel, Sergey Levine -- ICML 2017

Semi/Unsupervised learning

Unsupervised Learning via Meta-Learning [paper]

  • Kyle Hsu, Sergey Levine, Chelsea Finn -- ICLR 2019

Meta-Learning Update Rules for Unsupervised Representation Learning [paper]

  • Luke Metz, Niru Maheswaranathan, Brian Cheung, Jascha Sohl-Dickstein --ICLR 2019

Meta-Learning for Semi-Supervised Few-Shot Classification [paper]

  • Mengye Ren, Eleni Triantafillou, Sachin Ravi, Jake Snell, Kevin Swersky, Joshua B. Tenenbaum, Hugo Larochelle, Richard S. Zemel --ICLR 2018

Gradient-Based Meta-Learning with Learned Layerwise Metric and Subspace [paper]

  • Yoonho Lee, Seungjin Choi --ICML 2018

Self-supervised learning

MAML is a Noisy Contrastive Learner in Classification [paper]

  • Chia Hsiang Kao, Wei-Chen Chiu, Pin-Yu Chen --ICLR 2022

Contrastive Learning is Just Meta-Learning [paper]

  • Renkun Ni, Manli Shu, Hossein Souri, Micah Goldblum, Tom Goldstein --ICLR 2022

Learning curves

Transferring Knowledge across Learning Processes [paper]

  • Sebastian Flennerhag, Pablo G. Moreno, Neil D. Lawrence, Andreas Damianou --ICLR 2019

Meta-Curvature [paper]

  • Eunbyung Park, Junier B. Oliva --NeurIPS 2019

Hyperparameter

LCC: Learning to Customize and Combine Neural Networks for Few-Shot Learning [paper]

  • Yaoyao Liu, Qianru Sun, An-An Liu, Yuting Su, Bernt Schiele, Tat-Seng Chua --CVPR 2019

Gradient-based Hyperparameter Optimization through Reversible Learning [paper]

  • Dougal Maclaurin, David Duvenaud, Ryan P. Adams --ICML 2015
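
To make the notion of a hypergradient concrete, the toy JAX sketch below differentiates a validation loss through a short, naively unrolled SGD run with respect to the learning rate. Maclaurin et al.'s contribution is doing this memory-efficiently by reversing the SGD dynamics; this sketch stores the unroll instead, and its data and hyperparameters are invented.

```python
import jax
import jax.numpy as jnp

def mse(w, x, y):
    return jnp.mean((x @ w - y) ** 2)

def val_loss_after_training(lr, w0, x_tr, y_tr, x_val, y_val, steps=20):
    w = w0
    for _ in range(steps):                       # naively unrolled inner SGD
        w = w - lr * jax.grad(mse)(w, x_tr, y_tr)
    return mse(w, x_val, y_val)                  # validation loss of the trained weights

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
x_tr = jax.random.normal(k1, (32, 3)); y_tr = x_tr @ jnp.ones(3)
x_val = jax.random.normal(k2, (32, 3)); y_val = x_val @ jnp.ones(3)

# d(validation loss)/d(learning rate), obtained by backprop through the unroll
hypergrad = jax.grad(val_loss_after_training)(0.05, jnp.zeros(3), x_tr, y_tr, x_val, y_val)
print(hypergrad)
```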

Model compression

N2N Learning: Network to Network Compression via Policy Gradient Reinforcement Learning

  • Anubhav Ashok, Nicholas Rhinehart, Fares Beainy, Kris M. Kitani --ICLR 2018

Kernel learning

Deep Kernel Transfer in Gaussian Processes for Few-shot Learning [paper]

  • Massimiliano Patacchiola, Jack Turner, Elliot J. Crowley, Michael O’Boyle, Amos Storkey --arXiv 2020

Deep Mean Functions for Meta-Learning in Gaussian Processes [paper]

  • Vincent Fortuin, Gunnar Rätsch --arXiv 2019

Kernel Learning and Meta Kernels for Transfer Learning [paper]

  • Ulrich Ruckert

Robustness

A Closer Look at the Training Strategy for Modern Meta-Learning [paper]

  • Jiaxin Chen, Xiao-Ming Wu, Yanke Li, Qimai Li, Li-Ming Zhan, Fu-lai Chung --NeurIPS 2020

Task-Robust Model-Agnostic Meta-Learning [paper]

  • Liam Collins, Aryan Mokhtari, Sanjay Shakkottai --NeurIPS 2020

FeatureBoost: A Meta-Learning Algorithm that Improves Model Robustness [paper]

  • Joseph O'Sullivan, John Langford, Rich Caruana, Avrim Blum --ICML 2000

Optimization

Sharp-MAML: Sharpness-Aware Model-Agnostic Meta Learning [paper]

  • Momin Abbas, Quan Xiao, Lisha Chen, Pin-Yu Chen, Tianyi Chen --ICML 2022

Bootstrapped Meta-Learning [paper]

  • Sebastian Flennerhag, Yannick Schroecker, Tom Zahavy, Hado van Hasselt, David Silver, Satinder Singh --ICLR 2022

Learning where to learn: Gradient sparsity in meta and continual learning [paper]

  • Johannes von Oswald, Dominic Zhao, Seijin Kobayashi, Simon Schug, Massimo Caccia, Nicolas Zucchet, João Sacramento --NeurIPS 2021

Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML [paper]

  • Aniruddh Raghu, Maithra Raghu, Samy Bengio, Oriol Vinyals --ICLR 2020

Empirical Bayes Transductive Meta-Learning with Synthetic Gradients [paper]

  • Shell Xu Hu, Pablo G. Moreno, Yang Xiao, Xi Shen, Guillaume Obozinski, Neil D. Lawrence, Andreas Damianou --ICLR 2020

Transferring Knowledge across Learning Processes [paper]

  • Sebastian Flennerhag, Pablo G. Moreno, Neil D. Lawrence, Andreas Damianou --ICLR 2019

MetaInit: Initializing learning by learning to initialize [paper]

  • Yann N. Dauphin, Samuel Schoenholz --NeurIPS 2019

Meta-Learning with Implicit Gradients [paper]

  • Aravind Rajeswaran*, Chelsea Finn*, Sham Kakade, Sergey Levine --NeurIPS 2019

Model-Agnostic Meta-Learning using Runge-Kutta Methods [paper]

  • Daniel Jiwoong Im, Yibo Jiang, Nakul Verma --arXiv

Learning to Optimize in Swarms [paper]

  • Yue Cao, Tianlong Chen, Zhangyang Wang, Yang Shen --arXiv 2019

Meta-Learning with Warped Gradient Descent [paper]

  • Sebastian Flennerhag, Andrei A. Rusu, Razvan Pascanu, Hujun Yin, Raia Hadsell --ICLR 2020

Learning to Generalize to Unseen Tasks with Bilevel Optimization [paper]

  • Hayeon Lee, Donghyun Na, Hae Beom Lee, Sung Ju Hwang --arXiv 2019

Learning to Optimize [paper]

  • Ke Li, Jitendra Malik --ICLR 2017

Gradient-based Hyperparameter Optimization through Reversible Learning [paper]

  • Dougal Maclaurin, David Duvenaud, Ryan P. Adams --ICML 2015

Continuous time

Continuous-Time Meta-Learning with Forward Mode Differentiation [paper]

  • Tristan Deleu, David Kanaa, Leo Feng, Giancarlo Kerg, Yoshua Bengio, Guillaume Lajoie, Pierre-Luc Bacon --ICLR 2022

Meta-learning using privileged information for dynamics [paper]

  • Ben Day, Alexander Norcliffe, Jacob Moss, Pietro Liò --ICLR 2020 #Learning to Learn and SimDL

Theory

Near-Optimal Task Selection with Mutual Information for Meta-Learning [paper]

  • Yizhou Chen, Shizhuo Zhang, Bryan Kian Hsiang Low --AISTATS 2022

Learning Tensor Representations for Meta-Learning [paper]

  • Samuel Deng, Yilin Guo, Daniel Hsu, Debmalya Mandal --AISTATS 2022

Is Bayesian Model-Agnostic Meta Learning Better than Model-Agnostic Meta Learning, Provably? [paper]

  • Lisha Chen, Tianyi Chen --AISTATS 2022

Unraveling Model-Agnostic Meta-Learning via The Adaptation Learning Rate [paper]

  • Yingtian Zou, Fusheng Liu, Qianxiao Li --ICLR 2022

Task Relatedness-Based Generalization Bounds for Meta Learning [paper]

  • Jiechao Guan, Zhiwu Lu --ICLR 2022

How Tight Can PAC-Bayes be in the Small Data Regime? [paper]

  • Andrew Y. K. Foong, Wessel P. Bruinsma, David R. Burt, Richard E. Turner --NeurIPS 2021

A Representation Learning Perspective on the Importance of Train-Validation Splitting in Meta-Learning [paper]

  • Nikunj Saunshi, Arushi Gupta, Wei Hu --ICML 2021

Bilevel Optimization: Convergence Analysis and Enhanced Design [paper]

  • Kaiyi Ji, Junjie Yang, Yingbin Liang --ICML 2021

How Important is the Train-Validation Split in Meta-Learning? [paper]

  • Yu Bai, Minshuo Chen, Pan Zhou, Tuo Zhao, Jason D. Lee, Sham Kakade, Huan Wang, Caiming Xiong --ICML 2021

Information-Theoretic Generalization Bounds for Meta-Learning and Applications [paper]

  • Sharu Theresa Jose, Osvaldo Simeone --arXiv 2021

Modeling and Optimization Trade-off in Meta-learning [paper]

  • Katelyn Gao, Ozan Sener --NeurIPS 2020

A Closer Look at the Training Strategy for Modern Meta-Learning [paper]

  • Jiaxin Chen, Xiao-Ming Wu, Yanke Li, Qimai Li, Li-Ming Zhan, Fu-lai Chung --NeurIPS 2020

Why Does MAML Outperform ERM? An Optimization Perspective [paper]

  • Liam Collins, Aryan Mokhtari, Sanjay Shakkottai --arXiv 2020

Transfer Meta-Learning: Information-Theoretic Bounds and Information Meta-Risk Minimization [paper]

  • Sharu Theresa Jose, Osvaldo Simeone, Giuseppe Durisi --arXiv 2020

The Advantage of Conditional Meta-Learning for Biased Regularization and Fine-Tuning [paper]

  • Giulia Denevi, Massimiliano Pontil, Carlo Ciliberto --NeurIPS 2020

Convergence of Meta-Learning with Task-Specific Adaptation over Partial Parameters [paper]

  • Kaiyi Ji, Jason D. Lee, Yingbin Liang, H. Vincent Poor --NeurIPS 2020

Meta-learning for mixed linear regression [paper]

  • Weihao Kong, Raghav Somani, Zhao Song, Sham Kakade, Sewoong Oh --ICML 2020

Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time

  • Ferran Alet, Kenji Kawaguchi, Maria Bauza, Nurallah Giray Kuru, Tomás Lozano-Pérez, Leslie Pack Kaelbling --NeurIPS 2020 #Meta-Learning

A Theoretical Analysis of the Number of Shots in Few-Shot Learning [paper]

  • Tianshi Cao, Marc T Law, Sanja Fidler --ICLR 2020

Efficient Meta Learning via Minibatch Proximal Update [paper]

  • Pan Zhou, Xiaotong Yuan, Huan Xu, Shuicheng Yan, Jiashi Feng --NeurIPS 2019

On the Convergence Theory of Gradient-Based Model-Agnostic Meta-Learning Algorithms [paper]

  • Alireza Fallah, Aryan Mokhtari, Asuman Ozdaglar --arXiv 2019

Meta-learners' learning dynamics are unlike learners' [paper]

  • Neil C. Rabinowitz --arXiv 2019

Regret bounds for meta Bayesian optimization with an unknown Gaussian process prior [paper]

  • Zi Wang, Beomjoon Kim, Leslie Pack Kaelbling --NeurIPS 2018

Incremental Learning-to-Learn with Statistical Guarantees [paper]

  • Giulia Denevi, Carlo Ciliberto, Dimitris Stamos, Massimiliano Pontil --UAI 2018

Meta-learning by adjusting priors based on extended PAC-Bayes theory [paper]

  • Ron Amit, Ron Meir --ICML 2018

Meta-Learning and Universality: Deep Representations and Gradient Descent can Approximate any Learning Algorithm [paper]

  • Chelsea Finn, Sergey Levine --ICLR 2018

On the Convergence of Model-Agnostic Meta-Learning [paper]

  • Noah Golmant

Fast Rates by Transferring from Auxiliary Hypotheses [paper]

  • Ilja Kuzborskij, Francesco Orabona --arXiv 2014

Algorithmic Stability and Meta-Learning [paper]

  • Andreas Maurer --JMLR 2005

Online convex optimization

PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees [paper]

  • Jonas Rothfuss, Vincent Fortuin, Martin Josifoski, Andreas Krause --ICML 2021

Meta-learning with Stochastic Linear Bandits [paper]

  • Leonardo Cella, Alessandro Lazaric, Massimiliano Pontil --arXiv 2020

Bayesian Online Meta-Learning with Laplace Approximation [paper]

  • Pau Ching Yap, Hippolyt Ritter, David Barber --arXiv 2020

Online Meta-Learning on Non-convex Setting [paper]

  • Zhenxun Zhuang, Yunlong Wang, Kezi Yu, Songtao Lu --arXiv 2019

Adaptive Gradient-Based Meta-Learning Methods [paper]

  • Mikhail Khodak, Maria-Florina Balcan, Ameet Talwalkar --NeurIPS 2019

Learning-to-Learn Stochastic Gradient Descent with Biased Regularization [paper]

  • Giulia Denevi, Carlo Ciliberto, Riccardo Grazzi, Massimiliano Pontil --NeurIPS 2019

Provable Guarantees for Gradient-Based Meta-Learning

  • Mikhail Khodak, Maria-Florina Balcan, Ameet Talwalkar --arXiv 2019