Safe deployment of Deep Neural Networks in automotive engineering
|Title||Safe deployment of Deep Neural Networks in automotive engineering|
|Summary||Evaluation of DNN factors with respect to robustness in safety critical systems.|
|References|| [[References:: [1] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," CoRR, vol. abs/1512.03385, 2015. [Online]. Available: http://arxiv.org/abs/1512.03385
 [2] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. E. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," CoRR, vol. abs/1409.4842, 2014. [Online]. Available: http://arxiv.org/abs/1409.4842
 [3] A. H. Abdelaziz, S. Watanabe, J. R. Hershey, E. Vincent, and D. Kolossa, "Uncertainty propagation through deep neural networks," in Interspeech 2015, Dresden, Germany, Sep. 2015. [Online]. Available: https://hal.inria.fr/hal-01162550]]
|Prerequisites||AI and learning system course|
|Supervisor||Cristofer Englund, Sepideh Pashami|
The overall goal of this project is to create a framework for deploying deep neural networks (DNNs) in safety critical systems. DNNs are popular and efficient, and have been used with great success to solve pattern recognition tasks in a variety of environments [1, 2]. However, deploying such technology in safety critical systems, such as vehicles, requires a deeper understanding of how and when they work as expected. This understanding makes it possible to detect weaknesses and take appropriate precautions for improved reliability and safety.
Robustness of the results produced by a DNN is important for safety critical systems. This project will investigate that robustness by perturbing the input data set in controlled ways — adding noise, removing data points, or inserting inaccurate data — and measuring how the resulting uncertainty propagates through the network [3].
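As a starting point, the noise-injection part of this evaluation could be sketched as a Monte-Carlo experiment: perturb the input with Gaussian noise at several scales and record how often the network's prediction flips and how much its outputs spread. The sketch below uses a tiny randomly weighted network purely as a stand-in for a trained DNN; all names, the noise scales, and the two robustness measures (flip rate and output standard deviation) are illustrative assumptions, not part of the project specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny two-layer network standing in for a trained DNN.
# In the actual project the weights would come from training.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def forward(x):
    """One forward pass: ReLU hidden layer, softmax output."""
    h = np.maximum(x @ W1, 0.0)
    logits = h @ W2
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def robustness_under_noise(x, sigma, n_samples=200):
    """Monte-Carlo estimate of how predictions change when Gaussian
    noise of scale sigma is added to a single input x."""
    clean_pred = forward(x).argmax(axis=-1)
    noisy = x + rng.normal(scale=sigma, size=(n_samples,) + x.shape)
    noisy_out = forward(noisy)
    # Fraction of noisy samples whose predicted class flips.
    flip_rate = np.mean(noisy_out.argmax(axis=-1) != clean_pred)
    # Spread of the outputs as a simple uncertainty proxy.
    output_std = noisy_out.std(axis=0).mean()
    return flip_rate, output_std

x = rng.normal(size=(4,))
for sigma in (0.01, 0.1, 1.0):
    flips, spread = robustness_under_noise(x, sigma)
    print(f"sigma={sigma:<4} flip_rate={flips:.2f} output_std={spread:.3f}")
```

The same loop structure extends to the other perturbations mentioned above (dropping or corrupting individual data points) by replacing the noise step with the corresponding modification of the input.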
The thesis will focus on evaluating DNN design factors with respect to robustness.