Fake news detection has gained considerable interest in recent years, prompting researchers to search for models that can classify text for this purpose. While new models are being developed, researchers mostly focus on a model's accuracy; little research has addressed the explainability of Neural Network (NN) models constructed for text classification and fake news detection. When adding a level of explainability to a Neural Network model, many different aspects have to be taken into consideration. Text length, pre-processing, and complexity play an important role in achieving successful classification, and the model's architecture has to be considered as well. All of these aspects are analyzed in this thesis. In this work, an analysis of attention weights is performed to give insight into NN reasoning about texts. Visualizations are used to show how two models, a Bidirectional Long Short-Term Memory Convolutional Neural Network (BIDir-LSTM-CNN) and Bidirectional Encoder Representations from Transformers (BERT), distribute their attention while training on and classifying texts. In addition, statistical data is gathered to deepen the analysis. The analysis concludes that explainability can positively influence the decisions made while constructing a NN model for text classification and fake news detection. Although explainability is useful, it is not a definitive answer to the problem: model architects should still test and experiment with different solutions to construct effective models.
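
To illustrate the kind of attention-weight analysis described above, the following is a minimal sketch, not the thesis's actual code, showing how per-layer attention weights can be extracted from a pre-trained BERT model using the Hugging Face transformers library. The model name (bert-base-uncased), the example sentence, and the choice to average the last layer's heads are illustrative assumptions.

```python
# Minimal sketch: extracting attention weights from a pre-trained BERT model
# with the Hugging Face `transformers` library. Model name, example text, and
# the layer/head choices are illustrative assumptions.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

text = "Scientists confirm the moon is made of cheese."  # toy headline
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each of shape (batch, num_heads, seq_len, seq_len).
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
last_layer = outputs.attentions[-1][0]   # (num_heads, seq_len, seq_len)
avg_attention = last_layer.mean(dim=0)   # average over attention heads

# Attention each token receives from the [CLS] position, often used as a
# rough indicator of token importance for the classification decision.
for token, weight in zip(tokens, avg_attention[0]):
    print(f"{token:>12s}  {weight.item():.3f}")
```

The printed weights can then be rendered as a heatmap or token highlighting to produce visualizations of how the model distributes its attention over a text.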