Update README.md
- Then the ***Tweet Collector*** (written in Python, like the rest of the project) collects tweets from the generator. It organizes them into partial cascades and sends them to the estimator at different observation windows. When a cascade is over, it sends its final information to the Predictor so the true size of the cascade can be compared with its prediction (see the first sketch after this list).
- The ***Predictor*** receives the estimated popularity of a cascade from the Hawkes Estimator and its real final size from the Collector. Using a random forest model, it adjusts the estimator's prediction and sends it to an alerts topic. It then sends the prediction error to a stats topic, and a new sample to a samples topic to train the random forest model. There is one random forest model per time window (see the second sketch below).
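
Below is a minimal sketch of the Collector loop, assuming Kafka messaging via `kafka-python`, JSON-encoded tweets carrying a cascade id `cid`, a timestamp `t`, a magnitude `m` and an optional `type` field marking the end of a cascade, and the topic names `tweets`, `cascade_series` and `cascade_properties`. All of these names and fields are illustrative assumptions, not the project's actual interface:

```python
import json
from collections import defaultdict
from kafka import KafkaConsumer, KafkaProducer

OBS_WINDOWS = [600, 1200]      # observation windows in seconds (illustrative)

consumer = KafkaConsumer(
    "tweets",                                        # assumed input topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

cascades = defaultdict(list)   # cascade id -> [(timestamp, magnitude), ...]
sent = defaultdict(set)        # cascade id -> windows already forwarded

for msg in consumer:
    tweet = msg.value
    cid = tweet["cid"]

    if tweet.get("type") == "end":
        # cascade is over: give the Predictor its true final size
        producer.send("cascade_properties",          # assumed topic
                      {"cid": cid, "n_tot": len(cascades.pop(cid, []))})
        sent.pop(cid, None)
        continue

    cascades[cid].append((tweet["t"], tweet["m"]))
    elapsed = tweet["t"] - cascades[cid][0][0]
    for T_obs in OBS_WINDOWS:
        if elapsed >= T_obs and T_obs not in sent[cid]:
            # forward the partial cascade collected so far for this window
            sent[cid].add(T_obs)
            producer.send("cascade_series",          # assumed topic
                          {"cid": cid, "T_obs": T_obs,
                           "tweets": cascades[cid]})
```

Keeping per-cascade state in plain dictionaries is the simplest choice for a sketch; a real deployment would also expire stale cascades that never receive an end marker.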
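
And a hedged sketch of the Predictor under the same assumptions. Topic names (`cascade_estimation`, `cascade_properties`, `alerts`, `stats`, `samples`), message fields (`params`, `n_est`, `n_tot`) and the multiplicative correction factor are illustrative choices; the fitted random forests, one per observation window, are assumed to be supplied and refreshed by a separate training component consuming the samples topic:

```python
import json
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "cascade_estimation", "cascade_properties",      # assumed topic names
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

forests = {}     # T_obs -> fitted regression forest (updated by the trainer)
estimates = {}   # (cid, T_obs) -> last message received from the estimator

def corrected_size(T_obs, est):
    """Scale the estimator's size n_est by the forest's learned factor, if any."""
    forest = forests.get(T_obs)
    if forest is None:
        return est["n_est"]                   # no trained model yet
    omega = forest.predict([est["params"]])[0]
    return omega * est["n_est"]

for msg in consumer:
    value = msg.value
    if msg.topic == "cascade_estimation":
        # new estimate: correct it and raise an alert
        key = (value["cid"], value["T_obs"])
        estimates[key] = value
        producer.send("alerts", {"cid": value["cid"], "T_obs": value["T_obs"],
                                 "n_tot": corrected_size(value["T_obs"], value)})
    else:
        # true final size from the Collector: emit error and training sample
        n_true = value["n_tot"]
        for (cid, T_obs), est in list(estimates.items()):
            if cid != value["cid"]:
                continue
            n_pred = corrected_size(T_obs, est)
            producer.send("stats", {"cid": cid, "T_obs": T_obs,
                                    "ARE": abs(n_pred - n_true) / n_true})
            producer.send("samples", {"cid": cid, "T_obs": T_obs,
                                      "X": est["params"],
                                      "w": n_true / est["n_est"]})
            del estimates[(cid, T_obs)]
```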