Explainability techniques are essential for understanding machine learning models used in decision-critical settings. We explore how pattern recognition techniques can couple the requisite transparency with predictive power. Using medical data and the task of predicting the onset of sepsis, we identify the features most important to the model's predictions. We examine how influential training points and consensus feature attributions vary over the course of these models' training, and we then pose a counterfactual question to probe trained predictors in the medical domain. (Best Paper Award)