Convolutional neural networks (CNNs) are widely used in applications such as computer vision, natural language processing, and human-computer interaction. However, testing and understanding a trained model is difficult and time-consuming because its inner workings are often treated as a 'black box': the causal relationships between internal processes and prediction results are hard to trace. To support the testing and understanding of such models, we present VATUN, a user-interactive visual analytics system for analyzing a CNN-based image classification model. Our integrated system supports four tasks: (1) detecting data instances whose classification confuses the model, (2) comparing the model's outcomes while manipulating conditions of an image, (3) understanding the reasons behind a prediction by highlighting the most influential parts of the image, and (4) analyzing overall what-if scenarios when augmenting the instances of each class. Moreover, by combining multiple techniques, our system lets users analyze the behavior of the model from various perspectives. We conduct a user study of an image classification scenario with three domain experts. Our study will help reduce the time cost of testing and understanding CNN-based models in several industrial areas.
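Task (3), highlighting influential parts of an image, can be illustrated with a simple occlusion-sensitivity sketch: slide a masking patch over the image and record how much the model's confidence drops at each position. This is only a minimal stand-in for illustration (the system described above may use a different attribution technique), and the names `occlusion_map` and `toy_predict` are hypothetical:

```python
import numpy as np

def occlusion_map(image, predict, patch=4, stride=4, fill=0.0):
    """Occlusion-sensitivity sketch: occlude each patch-sized region of
    `image` and record the drop in the model's scalar class score.
    `predict` is any callable mapping an image array to a scalar score."""
    base = predict(image)
    h, w = image.shape[:2]
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill
            # A large confidence drop marks an influential region.
            heat[i, j] = base - predict(occluded)
    return heat

# Toy "model": confidence is the mean intensity of the top-left quadrant,
# so occluding that quadrant produces the largest drops in the heatmap.
def toy_predict(img):
    return img[:8, :8].mean()

img = np.ones((16, 16))
heat = occlusion_map(img, toy_predict)
```

With the toy model above, the four top-left cells of `heat` carry positive drops while the rest stay at zero, mirroring how a heatmap overlay would highlight the regions a real CNN relies on.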