Deep neural networks (DNNs) have received tremendous attention and achieved great success in various applications, such as image and video analysis, natural language processing, recommendation systems, and drug discovery. However, inherent uncertainties stemming from different root causes remain serious hurdles for DNNs to deliver robust and trustworthy solutions to real-world problems. Failing to account for such uncertainties can lead to unnecessary risk: a self-driving autonomous car may misclassify a human on the road, and a deep learning-based medical assistant may misdiagnose cancer as a benign tumor. Uncertainty quantification has therefore become increasingly important and is attracting growing attention from both academia and industry, particularly in safety-critical decision-making applications such as autonomous driving and diagnosis systems. This wave of research at the intersection of uncertainty reasoning and quantification in data mining and machine learning has also influenced other fields of science, including computer vision, natural language processing, reinforcement learning, and social science.
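To make the idea of uncertainty quantification concrete, the sketch below illustrates one widely used technique, Monte Carlo dropout: dropout is kept active at inference time, and the spread across repeated stochastic forward passes serves as an uncertainty estimate. The network weights here are random toy values for illustration only; the workshop itself does not prescribe this or any particular method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weights for a one-hidden-layer classifier (illustration only,
# not a trained model): 4 input features, 16 hidden units, 3 classes.
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 3))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def stochastic_forward(x, dropout_p=0.5):
    """One forward pass with dropout left ON, as MC dropout requires."""
    h = np.maximum(x @ W1, 0.0)               # ReLU hidden layer
    mask = rng.random(h.shape) > dropout_p    # fresh dropout mask each call
    h = h * mask / (1.0 - dropout_p)          # inverted-dropout scaling
    return softmax(h @ W2)

def mc_dropout_predict(x, T=100):
    """Average T stochastic passes; the per-class standard deviation
    across passes is a simple measure of predictive uncertainty."""
    samples = np.stack([stochastic_forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.std(axis=0)

x = rng.normal(size=(1, 4))                   # a single toy input
mean_probs, std_probs = mc_dropout_predict(x)
```

A high standard deviation for the winning class signals that the model's prediction is unstable under dropout noise, which is exactly the kind of signal a decision-making system (e.g., in autonomous driving or diagnosis) can use to defer to a human.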