
AMS 691  Topics in Applied Mathematics

Varying topics selected from the list below if sufficient interest is shown. Several topics may be taught concurrently in different sections: Advanced Operational Methods in Applied Mathematics, Approximate Methods in Boundary Value Problems in Applied Mathematics, Control Theory and Optimization, Foundations of Passive Systems Theory, Game Theory, Mixed Boundary Value Problems in Elasticity, Partial Differential Equations, Quantitative Genetics, Stochastic Modeling, Topics in Quantitative Finance.

 

AMS 691.01:   Recent Progress in AI/ML:  Applications, Architectures, and Systems 

This course will cover recent progress in AI/ML across applications, architectures, and systems. The course will be as self-contained as possible. If you are unsure whether your background is sufficient, please email the instructor with a brief description of it.

0-3 credits
ABCF Grading

No required course materials

Topics (subject to change):
• Overview of recent AI/ML applications
• ChatGPT overview
• Techniques behind ChatGPT: the transformer architecture (see the sketch after this list)
• Systems behind ChatGPT: GPU clusters, accelerators
• Algorithms behind ChatGPT: reinforcement learning
• Other applications based on ChatGPT
• Survey of models competing with the transformer
• Survey of key systems development
• Survey of algorithmic innovations
• Sustainable AI and AI for sustainability
• Other topics (time permitting): responsible AI, secure AI, edge AI
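
For concreteness, here is a minimal numpy sketch of the core operation inside the transformer architecture mentioned above, single-head scaled dot-product attention. It is an illustration only, not course material; the shapes, names, and random inputs are arbitrary choices.

    # Minimal single-head scaled dot-product attention, the core operation
    # of the transformer. Shapes and names (seq_len, d_model) are
    # illustrative choices, not course-specified.
    import numpy as np

    def attention(Q, K, V):
        # Q, K, V: (seq_len, d_model) query, key, and value matrices.
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)                    # scaled pairwise similarity
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
        return weights @ V                               # weighted sum of values

    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 8))        # 4 tokens, 8-dimensional embeddings
    print(attention(x, x, x).shape)    # (4, 8): one output vector per token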

Learning outcomes:
• Understand current trends in AI/ML applications
• Understand the basic concepts and popular applications of ChatGPT
• Understand the challenges and opportunities in the development of next-generation AI/ML applications
• Conduct a course project on related topics

 

******************************

AMS 691.02:  Natural Language Processing

This course will introduce fundamental concepts in natural language processing (NLP). NLP includes a range of research problems that involve computing with natural language. Some are user-facing applications, such as spam classification, question answering, summarization, and machine translation. Others serve supporting roles, such as part-of-speech tagging and syntactic parsing. Solutions to these challenges combine machine learning (especially deep learning) techniques, algorithms, and principles from linguistics. NLP also provides the fundamental building blocks for large language models (LLMs), which power an even more diverse set of applications (including but not limited to language) in the era of generative AI.

Prerequisites:

Basic knowledge of calculus, linear algebra, and probability; programming proficiency (no specific language required but Python is preferred); a machine learning course is recommended but not required.

Topics

      • Words: definition, tokenization, morphology, word senses
      • Lexical semantics: distributional semantics, word embeddings, word clustering
      • Text Classification: classifiers, linear models, features, naive Bayes, loss functions, training linear classifiers via stochastic gradient descent (see the sketch after this list)
      • Neural networks: MLP, CNN, RNN and Transformers, fine-tuning
      • Language Modeling: n-gram models, smoothing, neural network-based language modeling
      • Sequence Labeling: part-of-speech tagging, named entity recognition, hidden Markov models, conditional random fields, dynamic programming, Viterbi
      • Syntax: weighted context-free grammars, dependency syntax, inference algorithms
      • Semantics: compositionality, semantic role labeling, frame semantics, lambda calculus, semantic parsing, grounded semantics
      • Pragmatics: phenomena, rational speech act model
      • Cross-lingual NLP: translation, decoding, lexicon induction, unsupervised translation
      • Large language models (LLMs): background, challenges, prompting
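
To illustrate the text-classification topic above, here is a minimal numpy sketch of a bag-of-words logistic-regression classifier trained with stochastic gradient descent. It is an illustration only, not course material; the toy documents, learning rate, and number of epochs are arbitrary choices.

    # Minimal binary text classifier: bag-of-words features + logistic loss,
    # trained with stochastic gradient descent on toy data.
    import numpy as np

    docs = ["good great fun", "great movie", "boring bad", "bad awful boring"]
    labels = np.array([1, 1, 0, 0])

    vocab = sorted({w for d in docs for w in d.split()})
    index = {w: i for i, w in enumerate(vocab)}

    def featurize(doc):
        x = np.zeros(len(vocab))
        for w in doc.split():
            if w in index:
                x[index[w]] += 1.0          # word-count features
        return x

    X = np.stack([featurize(d) for d in docs])
    w, b, lr = np.zeros(len(vocab)), 0.0, 0.5

    for epoch in range(100):
        for x, y in zip(X, labels):                   # one example at a time: SGD
            p = 1.0 / (1.0 + np.exp(-(x @ w + b)))    # sigmoid probability
            grad = p - y                              # gradient of the logistic loss
            w -= lr * grad * x
            b -= lr * grad

    print(featurize('great fun') @ w + b > 0)   # True: predicted positive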

 Learning Outcomes:

  • Understand key challenges of computing with natural language
  • Understand and apply solutions to standard NLP tasks
  • Be able to implement basic neural network architectures for core NLP tasks using deep learning toolkits
  • Be able to derive dynamic programming algorithms to perform inference in structured output spaces, and to analyze their computational properties
  • Understand common types of syntactic and semantic analysis, and how they are used in downstream applications
  • Recognize and characterize the errors made by NLP systems
  • Understand how modern large language models (LLMs) work, along with how to use them effectively

 

******************************

AMS 691.03:   Fundamentals of Reinforcement Learning

A deep understanding of reinforcement learning (RL) is essential for machine learning researchers, data scientists, and practicing engineers working in areas such as artificial intelligence, machine learning, data/network science, natural language processing, and computer vision. RL has found applications in everyday life, from AlphaGo and AlphaFold to autonomous driving and healthcare. This course will provide an introduction to the field of RL, with an emphasis on hands-on experience. Students are expected to become well versed in the key ideas and techniques of RL through a combination of lectures and written and coding assignments, and will advance their understanding of RL through a course project. The topics that will be covered (time permitting) include, but are not limited to:
• Markov Decision Processes (MDPs);
• Value Functions;
• Policy Iteration and Value Iteration;
• Monte Carlo Methods;
• Temporal Difference (TD) Learning;
• SARSA and Q-Learning (see the sketch after this list);
• TD(𝜆);
• (Linear) Function Approximation;
• Policy Gradient Algorithms;
• Other topics (e.g., Multi-Agent RL, RL Theory, Deep RL).
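
As a flavor of the hands-on component, here is a minimal numpy sketch of tabular Q-learning, one of the topics listed above, on a toy chain environment. It is an illustration only, not course material; the environment and the hyperparameters (alpha, gamma, epsilon) are arbitrary choices.

    # Minimal tabular Q-learning on a toy chain MDP: the agent moves left or
    # right along 6 states and receives reward 1 only on reaching the last state.
    import numpy as np

    n_states, n_actions = 6, 2           # actions: 0 = left, 1 = right
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, eps = 0.1, 0.95, 0.1
    rng = np.random.default_rng(0)

    def step(s, a):
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        reward = 1.0 if s2 == n_states - 1 else 0.0
        return s2, reward, s2 == n_states - 1        # next state, reward, done

    for episode in range(500):
        s, done = 0, False
        while not done:
            # epsilon-greedy behavior policy; act randomly while Q is still flat
            if rng.random() < eps or Q[s].max() == Q[s].min():
                a = int(rng.integers(n_actions))
            else:
                a = int(Q[s].argmax())
            s2, r, done = step(s, a)
            # Q-learning update: bootstrap from the greedy value of the next state
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
            s = s2

    print(Q[:-1].argmax(axis=1))   # greedy action per non-terminal state: all 1 (right)

Because Q-learning bootstraps from the greedy value of the next state, it learns the greedy policy even while the behavior policy is still exploring.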

Prerequisites
• Calculus and Linear Algebra (You should be comfortable taking derivatives and understanding matrix-vector operations and notation.)
• Basic Probability and Statistics (You should know the basics of probability, mean, standard deviation, etc.)
• Python: All programming in the assignments and the project will be in Python (e.g., using NumPy and TensorFlow). There will be roughly two programming problems in the assignments. You are expected to be proficient in Python or eager to learn it on your own. This course will NOT teach programming.
• We will be formulating cost functions, taking derivatives, and performing optimization with gradient descent (a minimal example follows this list).
• It helps to have seen Markov decision processes and RL before in an AI or ML course, but we will quickly cover the basics.
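
As an indication of the expected comfort with cost functions and gradient descent, here is a minimal numpy example that fits a linear model by minimizing a squared-error cost with gradient descent. The data and step size are arbitrary illustrative choices.

    # Gradient descent on the mean squared-error cost J(w) = ||Xw - y||^2 / (2n):
    # the "formulate a cost, take derivatives, optimize" pattern referred to above.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    w_true = np.array([2.0, -1.0, 0.5])
    y = X @ w_true + 0.01 * rng.normal(size=100)

    w, lr = np.zeros(3), 0.1
    for _ in range(200):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of the cost with respect to w
        w -= lr * grad                      # gradient descent step
    print(np.round(w, 2))                   # close to [ 2.  -1.   0.5]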

ABCF grading
3 credits

No course materials required.

 

******************************

AMS 691.04 & 05:   Seminar on Internship and Professional Development for DS Students


AMS 691 is a zero-credit seminar course designed to support DS students in meeting internship requirements, fostering a sense of community, and providing opportunities for interaction and professional development. The course will feature seminars led by invited speakers and student presentations on their internship projects. It also serves as a platform for students to engage in discussions and activities together.

Prerequisites:  None

0 Credits

Learning Outcomes:

  • Understand and meet internship requirements for DS students.
  • Gain insights from professional seminars and peer presentations.
  • Develop a sense of community and collaboration among DS students.
  • Participate in professional development activities.