
CSE6392
Advanced Topics in Scalable Learning
Dept. Computer Science and Engineering
Dr. Junzhou Huang


[ Administrative Basics | Course Description | Outline of Lectures ]

Administrative Basics

Lecture

NH 109 | Friday 1:00-3:50 PM
Instructor

Junzhou Huang | ERB 650 | Office hours: Friday 3:50-6:00 PM
Prerequisites

Basic math and programming background; basic learning and vision background preferred
Textbook

None

Course Description

This course will provide an overview of the current state of the art in machine learning techniques for computer vision, data mining, and bioinformatics by studying a set of cutting-edge advanced topics in these areas. The selected research topics reflect the current state of these fields. The main objective of this course is to review cutting-edge learning research on big data through lectures covering the underlying statistical and mathematical concepts and deep learning algorithms, paper reading, and implementation. The instructor will work with students on building ideas, performing experiments, and writing papers. Students may choose to submit their results to a learning-, mining-, or vision-related conference, or simply treat the project as exploration.

The course is application-driven and includes advanced topics in machine learning, computer vision, and bioinformatics, such as different learning techniques and advanced vision tools across applications. It will also include selected topics in machine learning theory and techniques. The course will give participants a thorough background in current research in these areas and promote greater awareness of, and interaction between, the multiple research groups within the university. The course material is well suited for students in computer science, computer engineering, electrical engineering, and biomedical engineering.


Outline of Lectures

Week 1.

Fri Jan 17: Introduction

Course Objectives and Administration (Slides)

Week 2.

Fri Jan 24: Graph Neural Networks (Slides)

Week 3.

Fri Jan 31:

"xxxx", presented by xxxx

"xxxx", presented by xxxx

"xxxx", presented by xxxx

Week 4.

Fri Feb 7:

 

Week 5.

Fri Feb 14:

 

Week 6.

Fri Feb 21:

 

Week 7.

Fri Feb 28:

 

Week 8.

Fri Mar 7:

 

Week 9.

Fri Mar 14: Spring Break

Week 10.

Fri Mar 21:

 

Week 11.

Fri Mar 28:

 

Week 12.

Fri Apr 4:

 

Week 13.

Fri Apr 11:

 

Week 14.

Fri Apr 18:

 

Week 15.

Fri Apr 25:

 

Each group has at most two members. Each group will select at least one paper from the following list and will be scheduled to present the selected paper(s) in class. A student's final grade will be based mainly on the quality of her/his presentation.

Paper List:

Deep Graph Learning

  1. M. Henaff, et al., "Deep Convolutional Networks on Graph-Structured Data", 2015
  2. M. Defferrard, et al., "Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering", NIPS 2016
  3. T. Kipf and M. Welling, "Semi-Supervised Classification with Graph Convolutional Networks", ICLR 2017
  4. J. Gilmer, et al., "Neural Message Passing for Quantum Chemistry", ICML 2017
  5. W. Hamilton, et al., "Inductive Representation Learning on Large Graphs", NIPS 2017
  6. R. Li, et al., "Adaptive Graph Convolutional Neural Networks", AAAI 2018
  7. P. Veličković, et al., "Graph Attention Networks", ICLR 2018
  8. W. Huang, et al., "Adaptive Sampling Towards Fast Graph Representation Learning", NeurIPS 2018
  9. J. You, et al., "Graph Structure of Neural Networks", ICML 2020
  10. Y. Rong, et al., "DropEdge: Towards Deep Graph Convolutional Networks on Node Classification", ICLR 2020
  11. B. Zhang, et al., "Beyond Weisfeiler-Lehman: A Quantitative Framework for GNN Expressiveness", ICLR 2024
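
For students new to the area, the common core of several papers above (notably Kipf and Welling, paper 3) is a layer-wise propagation rule: add self-loops to the adjacency matrix, normalize it symmetrically by node degree, and use it to mix each node's features with those of its neighbors before a learned linear transform and nonlinearity. Below is a minimal NumPy sketch of one such layer on toy data; it is an illustration of the idea, not any paper's reference implementation.

    import numpy as np

    def gcn_layer(A, H, W):
        """One graph convolution: ReLU(D^-1/2 (A+I) D^-1/2 H W).
        A: (n, n) adjacency, H: (n, d_in) features, W: (d_in, d_out) weights."""
        A_hat = A + np.eye(A.shape[0])            # add self-loops
        d = A_hat.sum(axis=1)                     # degrees (with self-loops)
        D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^-1/2
        A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
        return np.maximum(A_norm @ H @ W, 0.0)    # aggregate, transform, ReLU

    # Toy example: a 4-node path graph, 3-dim inputs, 2-dim outputs.
    rng = np.random.default_rng(0)
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    H = rng.standard_normal((4, 3))
    W = rng.standard_normal((3, 2))
    print(gcn_layer(A, H, W))                     # (4, 2) node embeddings

Stacking two such layers (with a softmax on the last) is already the full model Kipf and Welling use for semi-supervised node classification.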

Reliability, Explainability, and Privacy Protection

  1. D. Zügner, et al., "Adversarial Attacks on Neural Networks for Graph Data", KDD 2018
  2. H. Chang, et al., "A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models", AAAI 2020
  3. D. Zhu, et al., "Robust Graph Convolutional Networks Against Adversarial Attacks", KDD 2019
  4. W. Jin, et al., "Graph Structure Learning for Robust Graph Neural Networks", KDD 2020
  5. H. Chang, et al., "Not All Low-Pass Filters are Robust in Graph Convolutional Networks", NeurIPS 2021
  6. R. Ying, et al., "GNNExplainer: Generating Explanations for Graph Neural Networks", NeurIPS 2019
  7. D. Luo, et al., "Parameterized Explainer for Graph Neural Network", NeurIPS 2020
  8. J. Yu, et al., "Graph Information Bottleneck for Subgraph Recognition", ICLR 2021
  9. W. Lin, et al., "Generative Causal Explanations for Graph Neural Networks", ICML 2021
  10. Y. Wu, et al., "Discovering Invariant Rationales for Graph Neural Networks", ICLR 2022
  11. J. Yu, et al., "Improving Subgraph Recognition with Variational Graph Information Bottleneck", CVPR 2022
  12. C. Chen, et al., "FedGL: Federated Graph Learning Framework with Global Self-Supervision", arXiv:2105.03170, 2021
  13. C. Wu, et al., "FedGNN: Federated Graph Neural Network for Privacy-Preserving Recommendation", arXiv:2102.04925, 2021
  14. Z. Zhang, et al., "Inference Attacks Against Graph Neural Networks", USENIX Security 2022
  15. S. Sajadmanesh, et al., "Locally Private Graph Neural Networks", ACM SIGSAC 2021
  16. H. Peng, et al., "Differentially Private Federated Knowledge Graphs Embedding", CIKM 2021
  17. Z. Xiang, Z. Xiong and B. Li, "CBD: A Certified Backdoor Detector Based on Local Dominant Probability", NeurIPS 2023
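
A unifying idea in the privacy papers above (e.g., Sajadmanesh, et al., paper 15) is local perturbation: each node randomizes its own features before sharing them, so the server never sees raw data. That paper uses a multi-bit encoding mechanism; the sketch below substitutes the simpler, generic Laplace mechanism for local differential privacy, purely to illustrate the idea.

    import numpy as np

    def laplace_perturb(x, epsilon, rng, lo=0.0, hi=1.0):
        """Release a locally differentially private view of feature vector x.
        Coordinates are clipped to [lo, hi]; with L1 sensitivity d*(hi-lo),
        Laplace noise of scale d*(hi-lo)/epsilon gives epsilon-LDP."""
        x = np.clip(x, lo, hi)
        scale = x.size * (hi - lo) / epsilon
        return x + rng.laplace(0.0, scale, size=x.shape)

    # Each node perturbs its own features; only noisy copies leave the node.
    rng = np.random.default_rng(1)
    features = rng.uniform(0, 1, size=(5, 4))     # 5 nodes, 4 features each
    noisy = np.array([laplace_perturb(f, epsilon=2.0, rng=rng) for f in features])
    print(noisy.round(2))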

Training and Pre-training

  1. W. Hu, et al., "Strategies for Pre-Training Graph Neural Networks", ICLR 2020
  2. Y. Rong, et al., "GROVER: Self-Supervised Message Passing Transformer on Large-scale Molecular Graphs", NeurIPS 2020
  3. C. Ying, et al., "Do Transformers Really Perform Bad for Graph Representation?", NeurIPS 2021
  4. C. Zheng, et al., "ByteGNN: Efficient Graph Neural Network Training at Large Scale", VLDB 2022
  5. D. Chen, et al., "Structure-Aware Transformer for Graph Representation Learning", ICML 2022
  6. E. Chien, et al., "Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction", ICLR 2022
  7. V. Ioannidis, et al., "Efficient and Effective Training of Language and Graph Neural Network Models", arXiv:2206.10781
  8. K. Duan, et al., "A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking", NeurIPS 2022
  9. Z. Liu, et al., "RSC: Accelerating Graph Neural Networks Training via Randomized Sparse Computations", arXiv:2210.10737
  10. Y. Xie, et al., "Self-Supervised Learning of Graph Neural Networks: A Unified Review", TPAMI 2023
  11. Y. Chebotar, et al., "Q-Transformer: Scalable Offline Reinforcement Learning via Autoregressive Q-Functions", arXiv:2309.10150
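
A recurring idea in the pre-training papers above (e.g., papers 1, 2, and 10) is a self-supervised objective: corrupt part of the input and train the model to recover it from context. The sketch below shows a generic masked-feature reconstruction loss in that spirit; it is not the exact objective of any listed paper, and the identity "encoder" is a stand-in for a real GNN.

    import numpy as np

    def masked_feature_loss(H, mask_rate, encode, rng):
        """Hide a random subset of node feature rows, encode the corrupted
        matrix, and score reconstruction of the hidden rows (MSE)."""
        mask = rng.uniform(size=H.shape[0]) < mask_rate
        if not mask.any():
            mask[0] = True                            # mask at least one node
        H_corrupt = H.copy()
        H_corrupt[mask] = 0.0                         # zero out masked rows
        H_recon = encode(H_corrupt)                   # model's reconstruction
        return np.mean((H_recon[mask] - H[mask]) ** 2)

    rng = np.random.default_rng(2)
    H = rng.standard_normal((6, 4))                   # 6 nodes, 4 features
    identity_encoder = lambda X: X                    # placeholder "model"
    print(masked_feature_loss(H, mask_rate=0.3, encode=identity_encoder, rng=rng))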

LLMs

  1. A. Vaswani, et al., "Attention is All You Need", NIPS 2017
  2. J. Devlin, et al., "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", NAACL-HLT 2019
  3. L. Floridi and M. Chiriatti, "GPT-3: Its Nature, Scope, Limits, and Consequences", Minds and Machines, 2020
  4. P. Lewis, et al., "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks", NeurIPS 2020
  5. R. Thoppilan, et al., "LaMDA: Language Models for Dialog Applications", 2022
  6. L. Ouyang, et al., "Training Language Models to Follow Instructions with Human Feedback", NeurIPS 2022
  7. E. Hu, et al., "LoRA: Low-Rank Adaptation of Large Language Models", ICLR 2022
  8. J. Li, et al., "BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models", ICML 2023
  9. H. Nori, et al., "Can Generalist Foundation Models Outcompete Special-Purpose Tuning? Case Study in Medicine", 2023
  10. P. Hase, et al., "The Unreasonable Effectiveness of Easy Training Data for Hard Tasks", arXiv:2401.06751, January 2024
  11. A. Jiang, et al., "Mixtral of Experts", arXiv:2401.04088, January 2024
  12. M. Nikdan, et al., "RoSA: Accurate Parameter-Efficient Fine-Tuning via Robust Adaptation", arXiv:2401.04679, January 2024
  13. T. Jiang, et al., "E5-V: Universal Embeddings with Multimodal Large Language Models", arXiv:2407.12580
  14. D. Kondratyuk, et al., "VideoPoet: A Large Language Model for Zero-Shot Video Generation", ICML 2024
  15. S. Zhao, et al., "Probabilistic Inference in Language Models via Twisted Sequential Monte Carlo", ICML 2024
  16. I. Amos, et al., "Never Train from Scratch: Fair Comparison of Long-Sequence Models Requires Data-Driven Priors", ICLR 2024
  17. E. Hu, et al., "Amortizing Intractable Inference in Large Language Models", ICLR 2024
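
Nearly everything in this list builds on the scaled dot-product attention of Vaswani, et al. (paper 1): Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V. A self-contained NumPy sketch with toy dimensions:

    import numpy as np

    def attention(Q, K, V):
        """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
        Q: (n, d_k) queries, K: (m, d_k) keys, V: (m, d_v) values."""
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                 # (n, m) similarities
        scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
        return weights @ V                              # weighted sum of values

    rng = np.random.default_rng(3)
    Q = rng.standard_normal((4, 8))   # 4 query positions, head dim 8
    K = rng.standard_normal((6, 8))   # 6 key positions
    V = rng.standard_normal((6, 8))
    print(attention(Q, K, V).shape)   # (4, 8)

Multi-head attention simply runs several such maps in parallel on learned projections of Q, K, and V and concatenates the results.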


Hallucination

  1. S. Semnani, et al., "WikiChat: Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia", EMNLP 2023
  2. Y. Li, et al., "Evaluating Object Hallucination in Large Vision-Language Models", EMNLP 2023
  3. F. Liu, et al., "Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning", arXiv:2306.14565
  4. S. Yin, et al., "Woodpecker: Hallucination Correction for Multimodal Large Language Models", arXiv:2310.16045
  5. Y. Zhou, et al., "Analyzing and Mitigating Object Hallucination in Large Vision-Language Models", arXiv:2310.00754
  6. Q. Yu, et al., "HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data", arXiv:2311.13614
  7. Z. Xu, et al., "Hallucination is Inevitable: An Innate Limitation of Large Language Models", arXiv:2401.11817
  8. D. Alber, et al., "Medical Large Language Models are Vulnerable to Data-Poisoning Attacks", Nature Medicine, January 2025
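
As a concrete example of what papers 2 and 5 measure: metrics in the CHAIR family score the fraction of objects a model mentions that are absent from the image's ground-truth annotations. A toy sketch, where the vocabulary, caption, and object list are invented for illustration:

    def hallucination_rate(caption, true_objects, vocab):
        """Of the known object words the model mentioned, what fraction
        are absent from the image's ground-truth object set?"""
        mentioned = {w for w in caption.lower().split() if w in vocab}
        if not mentioned:
            return 0.0
        hallucinated = mentioned - set(true_objects)
        return len(hallucinated) / len(mentioned)

    # Hypothetical case: the model mentions a dog that is not in the image.
    vocab = {"cat", "dog", "sofa", "table"}
    caption = "A cat and a dog are sitting on the sofa"
    true_objects = ["cat", "sofa"]
    print(hallucination_rate(caption, true_objects, vocab))  # 1 of 3 -> 0.33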

Other Information

Americans with Disabilities Act

The University of Texas at Arlington is on record as being committed to both the spirit and letter of federal equal opportunity legislation; reference Public Law 93-112, The Rehabilitation Act of 1973 as amended. With the passage of federal legislation entitled the Americans with Disabilities Act (ADA), pursuant to section 504 of The Rehabilitation Act, there is renewed focus on providing this population with the same opportunities enjoyed by all citizens. As a faculty member, I am required by law to provide "reasonable accommodation" to students with disabilities, so as not to discriminate on the basis of that disability. Student responsibility primarily rests with informing faculty at the beginning of the semester and providing authorized documentation through designated administrative channels.

Academic Integrity

It is the philosophy of The University of Texas at Arlington that academic dishonesty is a completely unacceptable mode of conduct and will not be tolerated in any form. All persons involved in academic dishonesty will be disciplined in accordance with University regulations and procedures. Discipline may include suspension or expulsion from the University. "Scholastic dishonesty includes but is not limited to cheating, plagiarism, collusion, the submission for credit of any work or materials that are attributable in whole or in part to another person, taking an examination for another person, any act designed to give unfair advantage to a student or the attempt to commit such acts." (Regents' Rules and Regulations, Part One, Chapter VI, Section 3, Subsection 3.2, Subdivision 3.22)

Grade Appeal Policy

If you do not believe a grade on a particular assignment is correct, you may appeal the grade in writing (email) within 5 class days. Grade appeals must be made to the appropriate GTA first, then to your instructor if necessary. Please refer to the UTA Catalog for detailed guidance on grade appeals.

Student Support Services Available

The University of Texas at Arlington provides a variety of resources and programs to help you develop academic skills, deal with personal situations, better understand concepts and information related to your courses, and achieve academic success. These programs include major-based learning centers, developmental education, advising and mentoring, personal counseling, admission and transition, and federally funded programs. Students requiring assistance academically, personally, or socially should contact the Office of Student Success Programs at 817-272-6107 or visit www.uta.edu/resources for more information and appropriate referrals.
