1 A crawler collects links from a set of predefined domains.
2 A job collects social activity (shares, likes, etc.) for each link.
3 The activity is aggregated, stored, and used as a training dataset.
4 An ML model is trained on features extracted from the links' titles.
5 The ML model is used to predict a given title's sharing likelihood.
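Steps 1–3 can be sketched as a minimal, stdlib-only Python program. Everything here is a hypothetical stand-in: the allow-listed domains, the sample page HTML, and the `fetch_activity` stub, which replaces real social-API calls (share/like counts) with deterministic dummy numbers.

```python
import zlib
from collections import defaultdict
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hypothetical allow-list of crawl domains (step 1).
ALLOWED_DOMAINS = {"example.com", "blog.example.org"}

class LinkCollector(HTMLParser):
    """Collects <a href> targets whose domain is in the allow-list."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            if urlparse(href).netloc in ALLOWED_DOMAINS:
                self.links.append(href)

def fetch_activity(link):
    # Stub for a real social-API call (step 2); returns
    # deterministic dummy counts derived from the URL.
    seed = zlib.crc32(link.encode())
    return {"shares": seed % 100, "likes": seed % 250}

def aggregate(links):
    # Step 3: per-link activity totals, ready to store as training data.
    totals = defaultdict(lambda: defaultdict(int))
    for link in links:
        for metric, count in fetch_activity(link).items():
            totals[link][metric] += count
    return totals

# Sample page standing in for a fetched document.
page = '<a href="https://example.com/post">post</a><a href="https://other.net/x">x</a>'
parser = LinkCollector()
parser.feed(page)
dataset = aggregate(parser.links)
```

Only the allow-listed link survives the crawl; the off-domain one is dropped before any activity is fetched.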
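Steps 4–5 might look like the following sketch: hashed bag-of-words features over each title, fed into a hand-rolled logistic regression that outputs a sharing likelihood. The sample titles and their 0/1 labels are invented toy data; in the real pipeline the labels would come from the aggregated activity, and the feature set and model would likely be richer.

```python
import math
import re
import zlib

DIM = 32  # hashed feature space size (assumption)

def featurize(title):
    # Hashed bag-of-words over lowercase tokens (feature hashing).
    vec = [0.0] * DIM
    for tok in re.findall(r"[a-z']+", title.lower()):
        vec[zlib.crc32(tok.encode()) % DIM] += 1.0
    return vec

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, epochs=200, lr=0.5):
    # Plain logistic regression via stochastic gradient descent.
    w, b = [0.0] * DIM, 0.0
    for _ in range(epochs):
        for title, label in samples:
            x = featurize(title)
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - label  # gradient of log-loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(title, w, b):
    # Step 5: sharing likelihood for a given title, in [0, 1].
    x = featurize(title)
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Toy labels: 1 = widely shared, 0 = not (hypothetical data).
samples = [
    ("10 amazing tricks you must see", 1),
    ("you won't believe this one trick", 1),
    ("quarterly earnings report filed", 0),
    ("minutes of the committee meeting", 0),
]
w, b = train(samples)
score = predict("amazing tricks you must see", w, b)
```

Feature hashing keeps the feature vector fixed-width without a vocabulary pass; `zlib.crc32` is used instead of the builtin `hash` so the mapping is stable across runs.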