WWW 2025 Tutorial:
Rethink Deep Learning with
Invariance in Data Representation

The Chinese University of Hong Kong, Cornell University, City University of Hong Kong

Date: 13:30 - 15:00, Tuesday, April 29, 2025
Location: Room C3.4, ICC Sydney, Australia

Overview

Integrating invariance into data representations is a principled design choice for intelligent systems and web applications. Representations play a fundamental role: both systems and applications are built on meaningful representations of digital inputs, rather than on the raw data. The proper design or learning of such representations relies on priors with respect to the task of interest. Here, the concept of symmetry from the Erlangen Program may be the most fruitful prior: informally, a symmetry of a system is a transformation that leaves a certain property of that system invariant. Symmetry priors are ubiquitous; for example, translation is a symmetry of object classification, since the object category is invariant under translation.
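
To make the symmetry prior concrete, the following minimal sketch (our illustration in Python, not part of the tutorial materials) shows a classic hand-crafted representation realizing translation invariance: the magnitude of the 2D Fourier spectrum, which is unchanged when the input is circularly translated.

import numpy as np

# Stand-in for a digital input and a circularly translated copy of it.
rng = np.random.default_rng(0)
image = rng.random((32, 32))
shifted = np.roll(image, shift=(5, -3), axis=(0, 1))

def representation(x):
    # Spectral magnitude: circular translation only changes the phase of
    # the Fourier spectrum, so the magnitude is translation-invariant.
    return np.abs(np.fft.fft2(x))

assert not np.allclose(image, shifted)                               # raw data change
assert np.allclose(representation(image), representation(shifted))  # representation does not

Parts 3 to 5 of the tutorial trace how such hand-crafted invariants evolved into, and now complement, learned representations.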

The quest for invariance is as old as pattern recognition and data mining themselves. Invariant design was the cornerstone of many representations in the era before deep learning, such as SIFT. In the early era of deep learning, the invariance principle was largely ignored and replaced by a purely data-driven paradigm, exemplified by the CNN. This neglect did not last long, however, before data-driven representations encountered bottlenecks in robustness, interpretability, and efficiency. The invariance principle has since returned in the era of rethinking deep learning, forming a new field known as Geometric Deep Learning (GDL).
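
A minimal sketch of the GDL recipe (again our own illustration, assuming a finite symmetry group) appears below: averaging any feature over the orbit of the group, here the four 90-degree rotations, turns an arbitrary feature into an exactly invariant one. This group-averaging construction underlies the group-equivariant networks covered in the Part 5 readings.

import numpy as np

def feature(x):
    # An arbitrary, non-invariant feature; a stand-in for a learned network.
    return float((x * np.arange(x.size).reshape(x.shape)).sum())

def invariant_feature(x):
    # Group averaging over the four 90-degree rotations (the cyclic group C4):
    # rotating the input merely permutes the terms of the average.
    return np.mean([feature(np.rot90(x, k)) for k in range(4)])

rng = np.random.default_rng(1)
x = rng.random((8, 8))
assert not np.isclose(feature(x), feature(np.rot90(x)))                  # plain feature varies
assert np.isclose(invariant_feature(x), invariant_feature(np.rot90(x)))  # averaged one does not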

In this tutorial, we will give a historical perspective on invariance in data representations. More importantly, we will identify the key research dilemmas, promising works, future directions, and web applications.

Schedule

Materials: [Full Slides] [Tutorial Proposal] [Video Teaser]

Time Part Slides
5 min Part 0: Opening remarks [Part 0 Slides]
20 min Part 1: Background and challenges [Part 1 Slides]
20 min Part 2: Preliminaries of invariance [Part 2 Slides]
10 min Q&A / Break
30 min Part 3: Invariance in the era before deep learning [Part 3 Slides]
10 min Part 4: Invariance in the early era of deep learning [Part 4 Slides]
30 min Q&A / Coffee Break
50 min Part 5: Invariance in the era of rethinking deep learning [Part 5 Slides]
20 min Part 6: Conclusions and discussions [Part 6 Slides]
10 min Q&A

Reading List

Papers in bold are the focus of the reading list.


Part 1: Background and challenges

  1. C Buckner. Understanding adversarial examples requires a theory of artefacts for deep learning. Nature Machine Intelligence, 2020.
  2. X Li, C Cao, Y Shi, et al. A survey of data-driven and knowledge-aware explainable AI. TKDE, 2020.
  3. E Strubell, A Ganesh, A McCallum. Energy and policy considerations for modern deep learning research. AAAI, 2020.
  4. H Liu, M Chaudhary, H Wang. Towards trustworthy and aligned machine learning: A data-centric survey with causality perspectives. arXiv preprint arXiv:2307.16851, 2023.
  5. F Klein. A comparative review of recent researches in geometry. Bulletin of the American Mathematical Society, 1893.
  6. H Weyl. Symmetry. Princeton University Press, 2015.
  7. Y LeCun, Y Bengio, G Hinton. Deep learning. Nature, 2015.
  8. Y Bengio, A Courville, P Vincent. Representation learning: A review and new perspectives. TPAMI, 2013.

Part 2: Preliminaries of invariance

  1. K Lenc, A Vedaldi. Understanding image representations by measuring their equivariance and equivalence. CVPR, 2015.
  2. MM Bronstein, J Bruna, Y LeCun, et al. Geometric deep learning: Going beyond Euclidean data. IEEE Signal Processing Magazine, 2017.

Part 3: Invariance in the era before deep learning

  1. K Mikolajczyk, C Schmid. A performance evaluation of local descriptors. TPAMI, 2005.
  2. J Flusser, B Zitova, T Suk. Moments and Moment Invariants in Pattern Recognition. John Wiley & Sons, 2009.
  3. MK Hu. Visual pattern recognition by moment invariants. TIT, 1962.
  4. A Khotanzad, YH Hong. Invariant image recognition by Zernike moments. TPAMI, 1990.
  5. S Qi, Y Zhang, C Wang, et al. A survey of orthogonal moments for image representation: Theory, implementation, and evaluation. ACM Computing Surveys, 2023.
  6. S Qi, Y Zhang, C Wang, et al. Representing noisy images without denoising. TPAMI, 2024.
  7. AV Oppenheim, JS Lim. The importance of phase in signals. Proceedings of the IEEE, 1981.
  8. S Mallat. A Wavelet Tour of Signal Processing. Elsevier, 1999.
  9. DG Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 2004.
  10. T Lindeberg. Scale-Space Theory in Computer Vision. Springer Science & Business Media, 1993.
  11. A Iscen, G Tolias, PH Gosselin, et al. A comparison of dense region detectors for image search and fine-grained classification. TIP, 2015.
  12. E Tola, V Lepetit, P Fua. DAISY: An efficient dense descriptor applied to wide-baseline stereo. TPAMI, 2009.
  13. S Qi, Y Zhang, C Wang, et al. A principled design of image representation: Towards forensic tasks. TPAMI, 2023.

Part 4: Invariance in the early era of deep learning

  1. A Krizhevsky, I Sutskever, GE Hinton. ImageNet classification with deep convolutional neural networks. NIPS, 2012.
  2. F Rosenblatt. The perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review, 1958.
  3. K Fukushima, S Miyake. Neocognitron: A new algorithm for pattern recognition tolerant of deformations and shifts in position. Pattern Recognition, 1982.
  4. Y LeCun, B Boser, J Denker, et al. Handwritten digit recognition with a back-propagation network. NIPS, 1989.

Part 5: Invariance in the era of rethinking deep learning

  1. MM Bronstein, J Bruna, T Cohen, et al. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges. arXiv preprint arXiv:2104.13478, 2021.
  2. J Bruna, S Mallat. Invariant scattering convolution networks. TPAMI, 2013.
  3. J Bruna, S Mallat. Rotation, scaling and deformation invariant scattering for texture discrimination. CVPR, 2013.
  4. T Cohen, M Welling. Group equivariant convolutional networks. ICML, 2016.
  5. M Weiler, FA Hamprecht, M Storath. Learning steerable filters for rotation equivariant CNNs. CVPR, 2018.
  6. EJ Bekkers. B-spline CNNs on Lie groups. ICLR, 2020.
  7. M Zaheer, S Kottur, S Ravanbakhsh, et al. Deep sets. NIPS, 2017.
  8. CR Qi, H Su, K Mo, et al. PointNet: Deep learning on point sets for 3D classification and segmentation. CVPR, 2017.
  9. TN Kipf, M Welling. Semi-supervised classification with graph convolutional networks. ICLR, 2017.
  10. P Veličković, G Cucurull, A Casanova, et al. Graph attention networks. ICLR, 2018.
  11. J Gilmer, SS Schoenholz, PF Riley, et al. Neural message passing for quantum chemistry. ICML, 2017.
  12. CK Joshi. Transformers are graph neural networks. The Gradient, 2020. https://thegradient.pub/transformers-are-graph-neural-networks/
  13. S Qi, Y Zhang, C Wang, et al. Hierarchical invariance for robust and interpretable vision tasks at larger scales. arXiv preprint arXiv:2402.15430, 2025.
  14. T Wang, Y Zhang, S Qi, et al. Security and privacy on generative data in AIGC: A survey. ACM Computing Surveys, 2024.
  15. Y Zhang, Y Sun, S Qi, et al. Atkscopes: Multiresolution adversarial perturbation as a unified attack on perceptual hashing and beyond. USENIX Security, 2025.

BibTeX

@inproceedings{www25-invariance-tutorial,
  title={Rethink Deep Learning with Invariance in Data Representation},
  author={Qi, Shuren and Wang, Fei and Zeng, Tieyong and Fan, Fenglei},
  booktitle={Companion Proceedings of the ACM Web Conference 2025},
  year={2025}
}