Blog Posts

A collection of articles

Attention Please!
Attention in machine learning is like the brain's spotlight: it highlights the parts of the input that matter most. Introduced in 2014, it scores relevance using "queries," "keys," and "values." Imagine a model yelling, "Hey, brain, remember that detail!" while tackling tasks like translating long sentences. (A minimal sketch of the mechanism follows after this entry.)
7/22/2024
  • #CNN
  • #Attention
  • #TensorFlow
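As a taste of the mechanics, here is a minimal TensorFlow sketch of scaled dot-product attention. Note the hedge: the dot-product scoring shown here was popularized by later Transformer work; the 2014 original used an additive score, but the query/key/value roles are the same. The shapes and variable names are illustrative, not code from the post itself.

```python
import tensorflow as tf

def scaled_dot_product_attention(queries, keys, values):
    """Weight each value by how well its key matches the query."""
    d_k = tf.cast(tf.shape(keys)[-1], tf.float32)
    # Similarity between every query and every key, scaled for stability.
    scores = tf.matmul(queries, keys, transpose_b=True) / tf.sqrt(d_k)
    # Softmax turns scores into attention weights that sum to 1.
    weights = tf.nn.softmax(scores, axis=-1)
    # Weighted sum of values: the "spotlight" output.
    return tf.matmul(weights, values), weights

# Toy example: batch of 1, sequence of 4 tokens, dimension 8.
q = tf.random.normal((1, 4, 8))
k = tf.random.normal((1, 4, 8))
v = tf.random.normal((1, 4, 8))
output, attn = scaled_dot_product_attention(q, k, v)
print(output.shape, attn.shape)  # (1, 4, 8) (1, 4, 4)
```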
1x1 Convolution
Why would anyone use a 1x1 convolution? And why is it called 1x1? It sounds like a pixel-sized joke! When it first showed up, people probably thought it was a prank. But hold on: let's dive into this mysterious little guy and unravel the math magic behind it. (A short sketch follows after this entry.)
7/11/2024
  • #CNN
  • #TensorFlow
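For a concrete taste (a sketch with made-up shapes, not code from the post), here is a 1x1 convolution in TensorFlow acting as a per-pixel channel mixer:

```python
import tensorflow as tf

# A 1x1 convolution mixes channels at each pixel without looking at its
# neighbours; here it compresses 64 channels down to 16.
x = tf.random.normal((1, 32, 32, 64))  # (batch, height, width, channels)
conv_1x1 = tf.keras.layers.Conv2D(filters=16, kernel_size=1)
y = conv_1x1(x)
print(y.shape)  # (1, 32, 32, 16): spatial size unchanged, channels reduced
```

Because the kernel covers a single pixel, the layer is just a learned linear map across channels, which is why it is often used to cheapen computation before larger convolutions.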