Towards-graph-foundation-models

Despite the success of foundation models in NLP, there has been no clear breakthrough for foundation models on graphs. Why are foundation models for graphs important to research?

In this blog post, I briefly share some recent readings on the exploration of foundation models for graphs.

We summarize these attempts into several pipelines. For each pipeline, we ask the following question:

Pipeline 1: Graph transformer

Transformers are the backbone of foundation models in NLP, and their natural extension, graph transformers, are candidates for foundation models on graphs. Compared to GNNs, which carry strong structural inductive biases, transformers incorporate no inductive bias at all (absent positional encodings). On one hand, transformers may therefore achieve better generalization and scaling; on the other hand, they are much harder to train with limited labeled data. As a result, designing a graph transformer amounts to finding a balance point for inductive bias on the spectrum between pure self-attention and GNNs.
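To make the trade-off concrete, below is a minimal sketch in PyTorch of a single self-attention block over node features, together with Laplacian positional encodings as one common way to reintroduce structural bias. The names `GraphTransformerBlock` and `laplacian_pe` are illustrative assumptions, not taken from any specific paper or library.

```python
# A minimal sketch contrasting a "pure" graph transformer block (no structural
# bias) with Laplacian positional encodings that inject graph structure.
import torch
import torch.nn as nn


class GraphTransformerBlock(nn.Module):
    """One self-attention block over node features.

    Without positional encodings, attention treats the nodes as an unordered
    set, i.e. it carries no structural inductive bias about the graph.
    """

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_nodes, dim) node features
        h, _ = self.attn(x, x, x)          # dense all-pairs attention, graph-agnostic
        x = self.norm1(x + h)
        return self.norm2(x + self.ffn(x))


def laplacian_pe(adj: torch.Tensor, k: int) -> torch.Tensor:
    """Smallest-k eigenvectors of the graph Laplacian as positional encodings."""
    deg = adj.sum(dim=-1)
    lap = torch.diag(deg) - adj
    _, eigvecs = torch.linalg.eigh(lap)    # eigenvalues returned in ascending order
    return eigvecs[:, :k]                  # (num_nodes, k) structural coordinates


# Usage: adding (projected) Laplacian eigenvectors to node features reintroduces
# a structural inductive bias, nudging the model back toward GNN-like behavior.
n, d, k = 10, 64, 8
adj = (torch.rand(n, n) > 0.7).float()
adj = ((adj + adj.t()) > 0).float()                # symmetrize: undirected graph
x = torch.randn(1, n, d)
pe = laplacian_pe(adj, k) @ torch.randn(k, d)      # random projection to feature dim
out = GraphTransformerBlock(d)(x + pe.unsqueeze(0))
```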

For a more detailed list of GT-related papers, you may check the awesome graph transformers list.