Seminar on Transformers – Revolutionizing Automatic Text Understanding

Transformers are among the go-to models in Artificial Intelligence today. Their ability to adapt swiftly to new tasks, often referred to as transfer learning, is one of the primary reasons they quickly overtook most AI leaderboards and revolutionized the AI sector. In this talk, we highlight the capabilities of Transformers and how they rapidly became useful for tasks ranging from vision, audio, and music applications all the way to playing chess and doing math!


Event Info:

  • 14th September, 2022
  • 10:00 am – 12:00 pm
  • SEECS, NUST, H-12
  • Organizer(s): Dr. Faisal Shafait and Dr. Imran Malik

Speaker

Prof. Dr. Adrian Ulges

Prof. Dr. Adrian Ulges graduated from TU Kaiserslautern, receiving a diploma in computer science in 2005 and completing his Ph.D. in computer science in 2009. From 2005 to 2011, he was an active researcher at the German Research Center for Artificial Intelligence (DFKI) in Kaiserslautern, Germany. His research interests are in machine learning, computer vision, and multimedia analysis. He worked at Google as an intern in Mountain View in 2005 and as a visiting scientist in Zurich in 2011. Since 2013, he has been a full professor at RheinMain University of Applied Sciences (HSRM), with interests in applied mathematics and machine learning. Adrian received a Google Research Award in 2010, and his publication record includes over 30 peer-reviewed papers.