Ageing and sexing birds requires specialist knowledge and training in which characteristics to focus on for different species. An expert can formulate an explanation for a classification using these characteristics and, additionally, identify anomalies. Some characteristics require practical training to recognise, for example the difference between moulted and non-moulted feathers, while other knowledge, such as feather taxonomy and moulting patterns, can be acquired without extensive practical training. An explanation formulated by a human for a classification stands in sharp contrast to an explanation produced by a trained neural network. Machine explanations typically answer a how-question about the inner workings of the neural network rather than a why-question grounded in domain-related characteristics that a domain expert can use. For machine-created explanations to be trustworthy, neural networks require a static use context and representative, independent and identically distributed training data. These prerequisites seldom hold in real-world settings. Related challenges include neural networks' inability to identify exemplars outside the training distribution and the difficulty of aligning internally learned representations with the characteristics used in the target domain. These questions are central in the active research field of explainable artificial intelligence (XAI), but there is a lack of hands-on experiments involving domain experts.

This work addresses the above issues with the goal of producing a prototype in which domain experts can train a tool that builds on human expert knowledge to produce useful explanations. By internalising domain expertise, we aim for a tool that can produce useful explanations and even new insights for the domain. Working together with domain experts from Ottenby Observatory, our goal is to address central XAI challenges and, at the same time, add new perspectives useful for determining the age and sex of birds.