1 A web crawler collects links from top sites.
2 A job collects social activity (shares, likes, etc.) for each link.
3 The activity is aggregated, stored, and used as a training dataset.
4 An ML model is trained on features extracted from the links' titles.
5 The ML model is used to predict a title's sharing likelihood.
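The steps above can be sketched end to end in a toy form. This is a minimal, hypothetical illustration using only the standard library: the event data, the feature choices, and the function names (`aggregate_activity`, `title_features`, `train_logistic`, `predict_share_likelihood`) are all assumptions for demonstration, not a prescribed design.

```python
import math

def aggregate_activity(events):
    """Step 3 sketch: sum (shares, likes) per link URL."""
    totals = {}
    for url, shares, likes in events:
        s, l = totals.get(url, (0, 0))
        totals[url] = (s + shares, l + likes)
    return totals

def title_features(title):
    """Step 4 sketch: simple numeric features from a link's title."""
    words = title.lower().split()
    return [
        1.0,                                    # bias term
        min(len(words), 10) / 10.0,             # scaled word count
        1.0 if "?" in title else 0.0,           # question-style headline
        1.0 if {"top", "best", "secret"} & set(words) else 0.0,  # listicle cue
    ]

def train_logistic(rows, labels, lr=0.5, epochs=500):
    """Step 4 sketch: fit a tiny logistic model by gradient descent."""
    w = [0.0] * len(rows[0])
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            for i, xi in enumerate(x):
                w[i] += lr * (y - p) * xi
    return w

def predict_share_likelihood(w, title):
    """Step 5 sketch: probability that a given title will be shared."""
    z = sum(wi * xi for wi, xi in zip(w, title_features(title)))
    return 1.0 / (1.0 + math.exp(-z))

# Toy training set (illustrative): titles labeled 1 if widely shared.
training = [
    ("top 10 secret tips for travel?", 1),
    ("the best budget laptops", 1),
    ("quarterly earnings summary", 0),
    ("minutes of the board meeting", 0),
]
weights = train_logistic(
    [title_features(t) for t, _ in training],
    [y for _, y in training],
)
```

In a real system the aggregation would run as a batch or streaming job over the crawler's output, and the model would be trained on far richer features (full title text, source site, posting time) with an off-the-shelf library rather than hand-rolled gradient descent.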