
CSE6392
Advanced Topics in Scalable Learning
Dept. of Computer Science and Engineering
Dr. Junzhou Huang


[ Administrative Basics | Course Description | Outline of Lectures ]

Administrative Basics

Lecture

NH 111 | Friday 1:00-3:50 PM
Instructor

Junzhou Huang | ERB 650 | Office hours: Friday 3:50-6:00 PM
Prerequisites

Basic math and programming background; a basic machine learning and computer vision background is preferred
Textbook

None

Course Description

This course provides an overview of the current state of the art in machine learning techniques for computer vision, data mining, and bioinformatics by studying a set of cutting-edge advanced topics in these areas. The selected research topics reflect the current state of these fields. The main objective of this course is to review cutting-edge learning research on big data through lectures covering the underlying statistical and mathematical concepts and deep learning algorithms, paper reading, and implementation. The instructor will work with students on building ideas, performing experiments, and writing papers. Students may submit their results to a learning, mining, or vision related conference, or simply explore the topics for fun.

The course is application-driven and includes advanced topics in machine learning, computer vision, and bioinformatics, such as different learning techniques and advanced vision tools across applications. It also includes selected topics in machine learning theory and techniques. The course will provide participants with a thorough background in current research in these areas and promote greater awareness and interaction among research groups within the university. The course material is well suited for students in computer science, computer engineering, electrical engineering, and biomedical engineering.


Outline of Lectures

Week 1.

Fri Jan 19: Introduction

Course Objectives and Administration (Slides)

Week 2.

Fri Jan 26: Graph Neural Networks (Slides)

Week 3.

Fri Feb 2:

"Attention is All You Need", NIPS 2017, presented by Wenqi Jia and Haotian Ma

"Can Generalist Foundation Models Outcompete Special-Purpose Tuning? Case Study in Medicine", presented by Neeharika Katragadda

Week 4.

Fri Feb 9:

"Semi-Supervised Classification with Graph Convolutional Networks", presented by Tong Chen and Junqi Qu

"The Unreasonable Effectiveness of Easy Training Data for Hard Tasks", presented by Sai Shreyashwi Admala and Vindhya Meghana Lutukurthy

Week 5.

Fri Feb 16:

"Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction", presented by Pranav Reddy Gudipati and Netra Palnati

"Deep Convolutional Networks on Graph-Structured Data", presented by Sai Sree Rachamalla and Shuaihua Zhao

Week 6.

Fri Feb 23:

"WikiChat: Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia", presented by Akshay Basavalingaiah and Pushpak Reddy Peram.

"Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks", presented by Jithin Krishna Kongara and Likhitha Balay

Week 7.

Fri Mar 1:

"LoRA: Low-Rank Adaptation of Large Language Models", presented by Niharika Gaddam and Kurma Teja Sambasivarao

"Graph Attention Networks", presented by Nikhil Yadav and Srishti Madan Raikar

Week 8.

Fri Mar 8:

"LaMDA: Language Models for Dialog Applications", presented by Jayaram Sivalanka Venkata and Manoj Kumar Nallamala

"Adversarial Attacks on Neural Networks for Graph Data", presented by Skandan Pubbi Setty Sathish Kumar and Anusha Komarlu Pradeep Kumar

Week 9.

Fri Mar 15: Spring Break

Week 10.

Fri Mar 22:

"Mixtral of Experts", presented by Raghavendra Koganti and Srinivasa Sai Satwik Chakkirala

"BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", presented by Prem Kumar Rohan and Ankith Reddy Avula

Week 11.

Fri Mar 29:

"Training Language Models to Follow Instructions with Human Feedback", presented by Sampath Kumar Gottummukkala and Venkata Sai Devendranath Ganji

"Efficient and Effective Training of Language and Graph Neural Network Model", presented by Manogna Shadhidhara and Varaha Krishna Arangi

Week 12.

Fri Apr 5:

"Graph Structure of Neural Networks", presented by Nandini Nidumolu and Himani Nizam

"GPT-3: Its Nature, Scope, Limits, and Consequences", presented by Teja Sri Mallu and Mary Pranavi Allam

Week 13.

Fri Apr 12:

"BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models", presented by Thao Dang and Xulin Chen

"Strategies for Pre-Training Graph Neural Networks", presented by Sneha Gande and Meghana katraju

Week 14.

Fri Apr 19:

"Graph structure learning for Robust Graph Neural Network", presented by Lavanya Chapulmadgollu Ramu and Prudhvi Sola

"GNNExplainer: Generating Explanations for Graph Neural Networks", presented by Tejaswini Seeram and Srilekhya Bukkapatnam

"Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering", presented by Prathima Jagini and Venkata Sai Hemanth Narayana

Week 15.

Fri Apr 26:

"Woodpecker: Hallucination Correction for Multimodal Large Language Models", presented by Umesh Chandra Karagatla and Nikhita Allada

"Neural Discrete Representation Learning", presented by Parisa Boodaghimalidarreh and Bhanu Prakash Lingamaneni


Each group has at most two members. Each group will select at least one paper from the paper list below and will then be scheduled to present the selected paper(s) in class. Each student's final grade will be based mainly on the performance of their presentation.

Paper List:

Deep Graph Learning

  1. M. Henaff, et al., "Deep Convolutional Networks on Graph-Structured Data", 2015
  2. M. Defferrard, et al., "Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering", NIPS 2016
  3. T. Kipf and M. Welling, "Semi-Supervised Classification with Graph Convolutional Networks", ICLR 2017
  4. J. Gilmer, et al., "Neural Message Passing for Quantum Chemistry", ICML 2017
  5. W. Hamilton, et al., "Inductive Representation Learning on Large Graphs", NIPS 2017
  6. R. Li, et al., "Adaptive Graph Convolutional Neural Networks", AAAI 2018
  7. P. Veličković, et al., "Graph Attention Networks", ICLR 2018
  8. W. Huang, et al., "Adaptive Sampling Towards Fast Graph Representation Learning", NeurIPS 2018
  9. J. You, et al., "Graph Structure of Neural Networks", ICML 2020
  10. Y. Rong, et al., "DropEdge: Towards Deep Graph Convolutional Networks on Node Classification", ICLR 2020
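
For students new to these papers, the following is a minimal sketch of the graph convolution layer from Kipf and Welling (paper 3 above), written in PyTorch. The dense-matrix formulation and the toy graph are illustrative assumptions, not the authors' released implementation.

    import torch

    def gcn_layer(A, H, W):
        """One graph convolution: H' = relu(D^-1/2 (A + I) D^-1/2 H W).

        A: (n, n) adjacency matrix, H: (n, d_in) node features,
        W: (d_in, d_out) learnable weights.
        """
        n = A.size(0)
        A_hat = A + torch.eye(n)                   # add self-loops
        deg = A_hat.sum(dim=1)                     # degrees of the self-looped graph
        D_inv_sqrt = torch.diag(deg.pow(-0.5))     # D^-1/2
        A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt   # symmetric normalization
        return torch.relu(A_norm @ H @ W)

    # Toy usage: a 4-node path graph, 3-d input features, 2-d output.
    A = torch.tensor([[0., 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
    H, W = torch.randn(4, 3), torch.randn(3, 2)
    print(gcn_layer(A, H, W).shape)  # torch.Size([4, 2])

Practical implementations use sparse adjacency matrices and sampled neighborhoods rather than this dense form; papers 5 and 8 address exactly that scaling problem.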

Reliability, Explainability, and Privacy Protection

  1. D. Zügner, et al., "Adversarial Attacks on Neural Networks for Graph Data", KDD 2018
  2. H. Chang, et al., "A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models", AAAI 2020
  3. D. Zhu, et al., "Robust Graph Convolutional Networks Against Adversarial Attacks", KDD 2019
  4. W. Jin, et al., "Graph Structure Learning for Robust Graph Neural Network", KDD 2020
  5. H. Chang, et al., "Not All Low-Pass Filters are Robust in Graph Convolutional Networks", NeurIPS 2021
  6. R. Ying, et al., "GNNExplainer: Generating Explanations for Graph Neural Networks", NeurIPS 2019
  7. D. Luo, et al., "Parameterized Explainer for Graph Neural Network", NeurIPS 2020
  8. J. Yu, et al., "Graph Information Bottleneck for Subgraph Recognition", ICLR 2021
  9. W. Lin, et al., "Generative Causal Explanations for Graph Neural Networks", ICML 2021
  10. Y. Wu, et al., "Discovering Invariant Rationales for Graph Neural Networks", ICLR 2022
  11. J. Yu, et al., "Improving Subgraph Recognition with Variational Graph Information Bottleneck", CVPR 2022
  12. C. Chen, et al., "FedGL: Federated Graph Learning Framework with Global Self-Supervision", arXiv:2105.03170, 2021
  13. C. Wu, et al., "FedGNN: Federated Graph Neural Network for Privacy-Preserving Recommendation", arXiv:2102.04925, 2021
  14. Z. Zhang, et al., "Inference Attacks Against Graph Neural Networks", USENIX Security 2022
  15. S. Sajadmanesh, et al., "Locally Private Graph Neural Networks", ACM SIGSAC 2021
  16. H. Peng, et al., "Differentially Private Federated Knowledge Graphs Embedding", CIKM 2021
  17. Z. Xiang, Z. Xiong and B. Li, "CBD: A Certified Backdoor Detector Based on Local Dominant Probability", NeurIPS 2023
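
To make "adversarial attack" concrete, here is a minimal gradient-sign perturbation of node features in the spirit of the robustness papers above. This is a simplified feature-space variant for illustration only; Zügner et al. (paper 1) attack the graph structure itself, and the forward function below is a hypothetical stand-in for any node classifier.

    import torch

    def fgsm_feature_attack(forward, H, labels, eps=0.01):
        """One gradient-sign step on node features: H_adv = H + eps * sign(dL/dH).

        forward: any differentiable map from features (n, d) to logits (n, c).
        """
        H = H.clone().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(forward(H), labels)
        loss.backward()                            # gradient of the loss w.r.t. H
        return (H + eps * H.grad.sign()).detach()  # worst-case step in an L-inf ball

    # Toy usage: a linear classifier over 4 nodes, 3 features, 2 classes.
    W = torch.randn(3, 2)
    H = torch.randn(4, 3)
    labels = torch.tensor([0, 1, 0, 1])
    H_adv = fgsm_feature_attack(lambda X: X @ W, H, labels)
    print((H_adv - H).abs().max())  # each entry moved by at most eps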

Training and Pre-training

  1. W. Hu, et al., "Strategies for Pre-Training Graph Neural Networks", ICLR 2020
  2. Y. Rong, et al., "GROVER: Self-Supervised Message Passing Transformer on Large-scale Molecular Graphs", NeurIPS 2020
  3. C. Ying, et al., "Do Transformers Really Perform Bad for Graph Representation?", NeurIPS 2021
  4. C. Zheng, et al., "ByteGNN: Efficient Graph Neural Network Training at Large Scale", VLDB 2022
  5. D. Chen, et al., "Structure-Aware Transformer for Graph Representation Learning", ICML 2022
  6. E. Chien, et al., "Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction", ICLR 2022
  7. V. Ioannidis, et al., "Efficient and Effective Training of Language and Graph Neural Network Models", arXiv:2206.10781
  8. K. Duan, et al., "A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking", NeurIPS 2022
  9. Z. Liu, et al., "RSC: Accelerating Graph Neural Networks Training via Randomized Sparse Computations", arXiv:2210.10737
  10. Y. Xie, et al., "Self-Supervised Learning of Graph Neural Networks: A Unified Review", TPAMI 2023
  11. Y. Chebotar, et al., "Q-Transformer: Scalable Offline Reinforcement Learning via Autoregressive Q-Functions", arXiv:2309.10150
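
Several of the pre-training papers above share one self-supervised recipe: corrupt part of the input and train the network to reconstruct it. The sketch below shows that idea for node features; the encoder is a placeholder, and the masking scheme is deliberately simpler than what the papers use.

    import torch

    def masked_feature_loss(encoder, H, mask_rate=0.15):
        """Mask a random subset of nodes and reconstruct their features.

        encoder: maps corrupted features (n, d) to reconstructions (n, d).
        """
        n = H.size(0)
        idx = torch.randperm(n)[: max(1, int(mask_rate * n))]  # nodes to hide
        H_corrupt = H.clone()
        H_corrupt[idx] = 0.0                                   # zero out masked rows
        return ((encoder(H_corrupt)[idx] - H[idx]) ** 2).mean()

    # Toy usage: a linear "encoder" over 10 nodes with 3-d features.
    encoder = torch.nn.Linear(3, 3)
    loss = masked_feature_loss(encoder, torch.randn(10, 3))
    loss.backward()  # gradients flow into the encoder, which is the point of pre-training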

LLMs

  1. A. Vaswani, et al., "Attention is All You Need", NIPS 2017
  2. J. Devlin, et al., "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", NAACL-HLT 2019
  3. L. Floridi and M. Chiriatti, "GPT-3: Its Nature, Scope, Limits, and Consequences", Minds and Machines, 2020
  4. P. Lewis, et al., "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks", NeurIPS 2020
  5. R. Thoppilan, et al., "LaMDA: Language Models for Dialog Applications", 2022
  6. L. Ouyang, et al., "Training Language Models to Follow Instructions with Human Feedback", NeurIPS 2022
  7. E. Hu, et al., "LoRA: Low-Rank Adaptation of Large Language Models", ICLR 2022
  8. J. Li, et al., "BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models", ICML 2023
  9. H. Nori, et al., "Can Generalist Foundation Models Outcompete Special-Purpose Tuning? Case Study in Medicine", 2023
  10. P. Hase, et al., "The Unreasonable Effectiveness of Easy Training Data for Hard Tasks", arXiv:2401.06751, 2024
  11. A. Jiang, et al., "Mixtral of Experts", arXiv:2401.04088, 2024
  12. M. Nikdan, et al., "RoSA: Accurate Parameter-Efficient Fine-Tuning via Robust Adaptation", arXiv:2401.04679, 2024
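
The single most reused piece of machinery in this list is the scaled dot-product attention of Vaswani et al. (paper 1). A minimal single-head sketch; the shapes are toy choices for illustration:

    import math
    import torch

    def attention(Q, K, V):
        """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
        d_k = Q.size(-1)
        scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)  # query-key similarities
        weights = torch.softmax(scores, dim=-1)            # each row sums to 1
        return weights @ V                                 # weighted mix of the values

    # Toy usage: 5 tokens, width 8.
    Q, K, V = torch.randn(5, 8), torch.randn(5, 8), torch.randn(5, 8)
    print(attention(Q, K, V).shape)  # torch.Size([5, 8])

The full Transformer runs many such heads in parallel over learned projections of the input and stacks them into deep layers; BERT, GPT-3, LaMDA, and Mixtral all build on this primitive.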

Hallucination

  1. S. Semnani, et al., "WikiChat: Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia", EMNLP 2023
  2. Y. Li, et al., "Evaluating Object Hallucination in Large Vision-Language Models", EMNLP 2023
  3. F. Liu, et al., "Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning", arXiv:2306.14565
  4. S. Yin, et al., "Woodpecker: Hallucination Correction for Multimodal Large Language Models", arXiv:2310.16045
  5. Y. Zhou, et al., "Analyzing and Mitigating Object Hallucination in Large Vision-Language Models", arXiv:2310.00754
  6. Q. Yu, et al., "HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data", arXiv:2311.13614
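
The grounding step shared by WikiChat (paper 1) and retrieval-augmented generation (paper 4 in the LLMs list) reduces to nearest-neighbor search over passage embeddings. A minimal sketch, assuming the embeddings come from some external encoder:

    import torch

    def retrieve_top_k(query_vec, passage_vecs, k=2):
        """Return the indices of the k passages most similar to the query."""
        q = query_vec / query_vec.norm()                           # unit-normalize query
        P = passage_vecs / passage_vecs.norm(dim=1, keepdim=True)  # unit-normalize passages
        return (P @ q).topk(k).indices                             # cosine-similarity top-k

    # Toy usage: 6 passages embedded in 4-d. The retrieved passages would be
    # prepended to the prompt so the model's claims can be checked against them.
    passages, query = torch.randn(6, 4), torch.randn(4)
    print(retrieve_top_k(query, passages))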



Other Information

Americans with Disabilities Act

The University of Texas at Arlington is on record as being committed to both the spirit and letter of federal equal opportunity legislation; reference Public Law 93-112, the Rehabilitation Act of 1973 as amended. With the passage of federal legislation entitled the Americans with Disabilities Act (ADA), pursuant to Section 504 of the Rehabilitation Act, there is renewed focus on providing this population with the same opportunities enjoyed by all citizens. As a faculty member, I am required by law to provide "reasonable accommodation" to students with disabilities, so as not to discriminate on the basis of that disability. Responsibility rests primarily with the student to inform faculty at the beginning of the semester and to provide authorized documentation through designated administrative channels.

Academic Integrity

It is the philosophy of The University of Texas at Arlington that academic dishonesty is a completely unacceptable mode of conduct and will not be tolerated in any form. All persons involved in academic dishonesty will be disciplined in accordance with University regulations and procedures. Discipline may include suspension or expulsion from the University. "Scholastic dishonesty includes but is not limited to cheating, plagiarism, collusion, the submission for credit of any work or materials that are attributable in whole or in part to another person, taking an examination for another person, any act designed to give unfair advantage to a student or the attempt to commit such acts." (Regents' Rules and Regulations, Part One, Chapter VI, Section 3, Subsection 3.2, Subdivision 3.22)

Grade Appeal Policy

If you do not believe a grade on a particular assignment is correct, you may appeal the grade in writing (email) within 5 class days. Grade appeals must be made first to the appropriate GTA, then to your instructor if necessary. Please refer to the UTA Catalog for detailed guidance on grade appeals.

Student Support Services Available

The University of Texas at Arlington provides a variety of resources and programs to help you develop academic skills, deal with personal situations, better understand concepts and information related to your courses, and achieve academic success. These programs include major-based learning centers, developmental education, advising and mentoring, personal counseling, admission and transition, and federally funded programs. Students requiring assistance academically, personally, or socially should contact the Office of Student Success Programs at 817-272-6107 or visit www.uta.edu/resources for more information and appropriate referrals.
