Ask INDIAai: Why are GPUs used for deep learning model training? – INDIAai


By Dr Nivash Jeevanandam
Ask INDIAai all your technical questions. This series answers questions from students, professionals, and the general public on many aspects of AI and technological advancement. You can ask questions and have your doubts resolved; the expert team at INDIAai will answer your inquiries.
Send your queries to:
Why are GPUs used for deep learning model training? – Akansha, Pune.
Generally, GPUs are around three times as fast as CPUs for this workload. GPUs are fast because they can multiply and add matrices quickly in parallel, but much of the advantage also comes from their high memory bandwidth. In short, in order of importance:
GPUs were designed from the ground up to render high-resolution graphics and images, a workload that requires little switching between tasks. Instead, GPUs focus on concurrency: breaking complex tasks (such as the repeated calculations behind lighting, shading, and texture effects) into smaller tasks that run in parallel. This support for parallel computing is more than a power boost. Moore's Law predicts that CPU power will double roughly every two years; GPUs, by contrast, sidestep that constraint by tailoring their hardware and computing configurations to each problem.
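The parallelism described above can be sketched with plain NumPy (a hypothetical illustration, not GPU code): every element of a matrix product is an independent dot product, which is exactly the kind of work a GPU distributes across thousands of threads.

```python
import numpy as np

def matmul_elementwise(a, b):
    """Each C[i, j] is an independent dot product; on a GPU,
    thousands of these would be computed concurrently."""
    n, k = a.shape
    k2, m = b.shape
    assert k == k2, "inner dimensions must match"
    c = np.zeros((n, m))
    for i in range(n):          # on a GPU, these two loops become
        for j in range(m):      # a grid of parallel threads
            c[i, j] = np.dot(a[i, :], b[:, j])
    return c

rng = np.random.default_rng(0)
a = rng.standard_normal((8, 4))
b = rng.standard_normal((4, 6))

# The vectorised @ operator hands the whole computation to optimised
# (and, on GPU frameworks, massively parallel) kernels in one call.
assert np.allclose(matmul_elementwise(a, b), a @ b)
```

Deep-learning frameworks dispatch exactly this operation to GPU kernels, which is why matrix-heavy training benefits so much from the hardware.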
GPUs have been central to the progress of deep learning and parallel computing, and with all of these changes, Nvidia has been a leader and pioneer in the field, giving developers both the hardware and the software they need. It is perfectly feasible to start building neural networks with only a CPU; modern GPUs, however, can greatly speed up the process and make learning much more enjoyable.
Will quantum computing be beneficial to AI? – Prasanna, Chennai.
One of the biggest problems in AI right now is that teaching machines to do useful things is hard. For example, we might have a model that can tell when a picture shows a dog, but it will take tens of thousands of images to teach that model to distinguish between a beagle, a poodle, and a Great Dane. This is what AI researchers call "training": the process of teaching AI programmes to predict what will happen in new situations.
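To make "training" concrete, here is a minimal, hypothetical sketch (toy logistic regression, not the image models discussed above): the model repeatedly sees labelled examples and nudges its parameters to reduce prediction error.

```python
import numpy as np

# Toy illustration of training: 200 labelled examples with 2 features,
# and a label that depends on whether the features sum to a positive value.
rng = np.random.default_rng(42)
X = rng.standard_normal((200, 2))            # examples
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # true labels

w = np.zeros(2)                               # model parameters (start untrained)
lr = 0.5                                      # learning rate

for _ in range(100):                          # training iterations
    p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probabilities
    grad = X.T @ (p - y) / len(y)             # gradient of the loss
    w -= lr * grad                            # nudge parameters downhill

p = 1.0 / (1.0 + np.exp(-X @ w))              # predictions after training
accuracy = np.mean((p > 0.5) == y)
```

Real image classifiers follow the same loop, just with millions of parameters and far more data, which is why training cost dominates the field.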
Quantum computing could make the training process faster and more accurate, because AI researchers would be able to use more data than ever. Whereas a classical bit is either a 1 or a 0, a qubit can hold a combination of the two, so a quantum computer can work with far more data at once and reach conclusions that regular computers cannot match. In other words, by feeding models more data, researchers could train them to be more accurate and better at making decisions.
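The "combination of 1s and 0s" can be illustrated with a small state-vector simulation (a hedged classical sketch, not a real quantum computer): an n-qubit register is described by 2^n amplitudes at once, which is where the exponential data capacity comes from.

```python
import numpy as np

def uniform_superposition(n_qubits):
    """State with equal amplitude on every basis string of n qubits --
    what applying a Hadamard gate to each qubit would produce."""
    dim = 2 ** n_qubits
    return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

state = uniform_superposition(10)   # just 10 qubits...
assert state.shape[0] == 1024       # ...span 1024 amplitudes simultaneously

# A valid quantum state: squared amplitudes sum to 1 (a probability distribution).
assert np.isclose(np.sum(np.abs(state) ** 2), 1.0)
```

Simulating n qubits classically costs memory exponential in n, which hints at why quantum hardware could process data volumes that classical machines cannot.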
What are the most significant problems still to be solved in computer vision? – Sai Krishna, Hyderabad.
For autonomous cars, the most important issue in computer vision is running most of the algorithms in real time in a complex, cluttered environment. Deep learning has made it relatively easy to build a highly accurate visual model; making it work in real time in complex environments like Indian roads, however, is still hard.
Similarly, weather and lighting are obvious concerns for cameras: many capabilities available during the day are unavailable at night, and optical landmark localization fails badly in snow. These remain significant open issues in computer vision.
Which country is using AI in agriculture? – Nisha, Bengaluru
The growth of artificial intelligence (AI) in the agriculture market is driven by increasing data generation through sensors and aerial imagery of crops, rising crop productivity through deep-learning technology, and government encouragement of modern agricultural techniques. However, the high cost of collecting accurate field data restrains industry expansion. Nevertheless, AI in the agriculture industry is projected to grow, owing to the increasing use of uncrewed aerial vehicles (drones) on farms in developing nations such as China, Brazil, and India.
Regarding the geographical distribution of publications, around 54.8% originate from China, placing it first among the top 20 countries. After China, the U.S., India, Iran, and France produce the most research on the application of AI in sustainable agriculture. The remainder of the list consists of highly developed nations, including Italy, the United Kingdom, Germany, Spain, Australia, the Netherlands, Turkey, Canada, Switzerland, and Portugal, as well as developing Asian economies such as Malaysia, Indonesia, and Pakistan.
About the author
Senior Research Writer at INDIAai