Arvind Narayanan: ‘How to recognize AI snake oil’ (talk at MIT)
https://www.cs.princeton.edu/~arvindn/talks/MIT-STS-AI-snakeoil.pdf
SHAP (SHapley Additive exPlanations) – explaining AI models
“SHAP (SHapley Additive exPlanations) is a unified approach to explain the output of any machine learning model. SHAP connects game theory with local explanations, uniting several previous methods [1-7] and representing the only possible consistent and locally accurate additive feature attribution method based on expectations (see our papers for details and citations).”
https://github.com/slundberg/shap
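As a concrete illustration of the “locally accurate additive feature attribution” the README describes, below is a minimal sketch using the shap package with a scikit-learn model. The synthetic data, model choice, and variable names are assumptions made for illustration, not taken from the sources above.

    # Minimal sketch (assumed setup): explain a tree model with the shap package.
    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestRegressor

    # Synthetic regression data: y depends strongly on feature 0, weakly on feature 1.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    # TreeExplainer computes SHAP values efficiently for tree ensembles.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

    # Local accuracy: base value + per-feature attributions reconstruct the prediction.
    base = np.atleast_1d(explainer.expected_value)[0]  # scalar or length-1 array, depending on shap version
    print("prediction: ", model.predict(X[:1])[0])
    print("base + SHAP:", base + shap_values[0].sum())

The two printed numbers should agree, which is the additive, locally accurate property the quote refers to.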
Silo.AI Academy Webinar ‘AI Model Interpretability’ (on YouTube)
ENISA: Good Practices for Security of IoT – Secure Software Development Lifecycle
Real-time voice cloning
“SV2TTS is a three-stage deep learning framework that allows [you] to create a numerical representation of a voice from a few seconds of audio, and to use it to condition a text-to-speech model trained to generalize to new voices.”
https://github.com/CorentinJ/Real-Time-Voice-Cloning
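To make the three stages concrete, here is a hedged sketch that follows the structure of the repository’s demo_cli.py (speaker encoder → synthesizer → vocoder). The pretrained-model paths and the input file name are assumptions, and the exact API may differ between versions of the repo.

    # Hedged sketch of the SV2TTS pipeline from the Real-Time-Voice-Cloning repo.
    from pathlib import Path
    from encoder import inference as encoder
    from synthesizer.inference import Synthesizer
    from vocoder import inference as vocoder
    import soundfile as sf

    # Stage 1: speaker encoder -- a few seconds of reference audio -> fixed-size voice embedding.
    encoder.load_model(Path("encoder/saved_models/pretrained.pt"))  # assumed path
    wav = encoder.preprocess_wav(Path("reference_voice.wav"))       # hypothetical input file
    embedding = encoder.embed_utterance(wav)

    # Stage 2: synthesizer -- text + embedding -> mel spectrogram in the cloned voice.
    synthesizer = Synthesizer(Path("synthesizer/saved_models/pretrained/pretrained.pt"))  # assumed path
    specs = synthesizer.synthesize_spectrograms(["Hello, this is a cloned voice."], [embedding])

    # Stage 3: vocoder -- mel spectrogram -> audible waveform.
    vocoder.load_model(Path("vocoder/saved_models/pretrained/pretrained.pt"))  # assumed path
    waveform = vocoder.infer_waveform(specs[0])
    sf.write("cloned.wav", waveform, synthesizer.sample_rate)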