Abstract

Poisoning attacks compromise the data used to train machine learning (ML) models, degrading their overall performance, manipulating predictions on specific test samples, or implanting backdoors. This article surveys these attacks and discusses strategies to mitigate them, either through fundamental security principles or through defensive mechanisms tailored to ML.
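To make the first attack class concrete, the following sketch illustrates one of the simplest poisoning strategies mentioned above: flipping a fraction of training labels to degrade overall test performance. It is an illustrative example, not a method from the article; the dataset, model, and flip fractions are arbitrary assumptions, and scikit-learn is assumed to be available.

```python
# Illustrative sketch (not from the article): label-flipping poisoning
# that degrades a classifier's overall test accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary classification task standing in for real training data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

def poison_labels(y, fraction, rng):
    """Flip the labels of a randomly chosen `fraction` of training samples."""
    y_poisoned = y.copy()
    n_flip = int(fraction * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # binary labels: 0 <-> 1
    return y_poisoned

# Train on increasingly poisoned labels; clean test accuracy should drop.
for fraction in (0.0, 0.1, 0.3):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, poison_labels(y_train, fraction, rng))
    acc = model.score(X_test, y_test)
    print(f"poisoned fraction={fraction:.0%}  test accuracy={acc:.3f}")
```

Targeted and backdoor attacks follow the same principle but modify features as well as labels, so that the model behaves normally except on attacker-chosen inputs.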

Details