By Dr Nivash Jeevanandam
Few-Shot Learning is an example of meta-learning. During meta-training, a learner is exposed to several related tasks so that, at meta-testing time, it can generalize to unseen tasks from only a few examples.
Humans can identify new object classes from a small number of examples. However, most machine learning techniques require thousands of samples to produce comparable results.
Over the past ten years, computer vision researchers have typically relied on millions of images to solve general problems, which has tied model performance tightly to the amount of available data. To address this, researchers developed few-shot learning, which focuses on training models on far less data without sacrificing performance.
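To make "far less data" concrete: few-shot problems are usually framed as N-way K-shot episodes, in which the model receives only K labelled examples for each of N classes (the support set) and is evaluated on a handful of held-out queries. The sketch below shows how such an episode could be sampled; the function and parameter names are illustrative, not taken from the article.

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=1, n_query=5):
    """Sample one N-way K-shot episode from a labelled dataset.

    dataset: a list of (example, label) pairs covering many classes,
    with at least k_shot + n_query examples per class.
    """
    by_class = defaultdict(list)
    for x, y in dataset:
        by_class[y].append(x)

    classes = random.sample(list(by_class), n_way)            # pick N classes
    support, query = [], []
    for cls in classes:
        examples = random.sample(by_class[cls], k_shot + n_query)
        support += [(x, cls) for x in examples[:k_shot]]      # K labelled shots
        query += [(x, cls) for x in examples[k_shot:]]        # held-out queries
    return support, query  # the model adapts on support and is scored on query
```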
Meta-learning
Most few-shot learning algorithms use meta-learning, also known as learning to learn. Meta-learning, a branch of machine learning, is inspired by theories of human development and focuses on learning priors from past experience that can facilitate efficient downstream learning of new tasks. For instance, a simple learner only understands how to complete a single classification task, whereas a meta-learner learns how to achieve multiple related classification tasks. Consequently, the meta-learner can adapt to a similar but unseen task more quickly and effectively than a simple learner with no prior experience of that task.
A meta-learning procedure generally involves learning on two levels: within tasks and across tasks. Learning to classify accurately within a specific dataset, for instance, is rapid learning that happens within a single task. This rapid learning is guided by knowledge acquired gradually across tasks, which captures how task structure varies across target domains. Related strategies such as transfer, multitask, and ensemble learning are comparable to, but distinct from, meta-learning.
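As a rough illustration of the two levels, the sketch below follows a metric-based meta-learner in the spirit of prototypical networks; this is an assumed example, not a method named in the article. Within one task, each class is summarised by the mean embedding of its few support examples and queries are labelled by the nearest prototype (rapid, within-task learning); across many such tasks, the embedding function itself would be trained (slow, across-task learning).

```python
import numpy as np

def embed(x):
    # Placeholder embedding: in practice this is a neural network whose
    # weights are optimised over many episodes (the across-task learning).
    return np.asarray(x, dtype=float)

def classify_queries(support, queries):
    """support: list of (example, label) pairs; queries: list of examples."""
    # Within-task learning: one prototype per class from the support set.
    prototypes = {}
    for label in {y for _, y in support}:
        vectors = [embed(x) for x, y in support if y == label]
        prototypes[label] = np.mean(vectors, axis=0)

    # Label each query by its nearest class prototype.
    predictions = []
    for q in queries:
        distances = {lbl: np.linalg.norm(embed(q) - proto)
                     for lbl, proto in prototypes.items()}
        predictions.append(min(distances, key=distances.get))
    return predictions
```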
Transfer learning
In transfer learning, a model is first trained on a source task in a source domain where plenty of training data is available. The trained model is then fine-tuned, or retrained, on the target task in the target domain. Knowledge is transferred from the source task to the target task, so the more similar the two tasks are, the better the transfer works. Multitask learning, in contrast, learns several tasks at once: it starts with no prior knowledge and tries to maximize performance on all of the tasks simultaneously.
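One common transfer-learning recipe, sketched here with PyTorch/torchvision purely as an illustration (the article does not prescribe a framework or model): reuse a network pre-trained on a large source dataset, freeze its feature extractor, and retrain only a new classification head on the small target task.

```python
import torch
import torchvision

# Source-task knowledge: a ResNet-18 pre-trained on ImageNet.
model = torchvision.models.resnet18(weights="DEFAULT")

# Freeze the feature extractor so the source features are reused as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the head for a (hypothetical) 5-class target task.
num_target_classes = 5
model.fc = torch.nn.Linear(model.fc.in_features, num_target_classes)

# Only the new head is updated during fine-tuning on the small target dataset.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```

Unfreezing some of the later layers is a common variation when a little more target data is available.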
In ensemble learning, multiple models, such as classifiers or experts, are strategically generated and combined to solve a single problem. A meta-learner, in contrast, gathers experience by completing several related tasks and then applies that knowledge to new ones. These methods can be, and frequently are, combined meaningfully with meta-learning systems, as the contrast sketched below illustrates.
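As a minimal, purely illustrative contrast: an ensemble combines several already-trained models on the same problem, for example by majority vote, whereas a meta-learner reuses experience gathered across related tasks.

```python
from collections import Counter

def ensemble_predict(models, x):
    """models: callables that each map an input to a class label."""
    votes = [model(x) for model in models]
    return Counter(votes).most_common(1)[0][0]   # majority vote

# Three toy "experts", each with a slightly different decision rule.
experts = [lambda x: "cat" if x > 0.5 else "dog",
           lambda x: "cat" if x > 0.3 else "dog",
           lambda x: "cat" if x > 0.7 else "dog"]
print(ensemble_predict(experts, 0.6))   # -> "cat" (two of three experts agree)
```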
Significance
Applications
Computer vision
Few-shot learning is most widely used in computer vision, where it tackles problems for which only a handful of labelled images are available.
NLP
Few-shot learning lets natural language processing (NLP) applications complete tasks for which only a small amount of labelled text is available.
Audio Processing
Acoustic signal processing analyses data that carries information about voices and other sounds, and few-shot learning makes tasks such as voice cloning and voice conversion feasible from only a few audio samples.
Robotics
To behave more like humans, robots need to generalize from just a few examples. Few-shot learning is therefore essential for teaching robots to perform specific tasks from a handful of demonstrations.
About the author
Senior Research Writer at INDIAai