We introduce a novel framework for addressing fairness, accountability, and explainability in intelligent systems. The framework combines several tools for mitigating bias at the level of data, algorithms, and human cognition, and relies on intelligent classifiers endowed with fuzzy-grounded linguistic explainability. As a result, it facilitates an exhaustive comparison of white-, grey-, and black-box modelling techniques in combination with different strategies for handling missing values and imbalanced datasets. The proposal is evaluated on a real-world dataset from the banking-services domain, and the reported results are encouraging.
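As a rough illustration of the kind of exhaustive comparison described above, the sketch below crosses white-box and black-box classifiers with different imputation and class-weighting strategies on a toy imbalanced dataset containing missing values. This is a hypothetical scikit-learn pipeline written for illustration only, not the authors' framework; all model choices, dataset, and parameter names are assumptions.

```python
# Hypothetical sketch (not the paper's implementation): exhaustively
# comparing white-/black-box classifiers combined with strategies for
# missing values and class imbalance, in the spirit of the framework.
from itertools import product

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Toy stand-in for a banking dataset: imbalanced labels, missing values.
X = rng.normal(size=(300, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=300) > 0.8).astype(int)  # minority class
X[rng.random(X.shape) < 0.1] = np.nan                         # ~10% missing

models = {
    "white-box (decision tree)": DecisionTreeClassifier(max_depth=3,
                                                        random_state=0),
    "black-box (random forest)": RandomForestClassifier(n_estimators=50,
                                                        random_state=0),
}
imputers = {"mean": SimpleImputer(strategy="mean"),
            "median": SimpleImputer(strategy="median")}
weights = {"unweighted": None, "balanced": "balanced"}

# Cross every model with every missing-value and imbalance strategy.
results = {}
for (m_name, model), (i_name, imp), (w_name, w) in product(
        models.items(), imputers.items(), weights.items()):
    model.set_params(class_weight=w)
    pipe = Pipeline([("impute", imp), ("clf", model)])
    f1 = cross_val_score(pipe, X, y, cv=3, scoring="f1").mean()
    results[(m_name, i_name, w_name)] = f1

# Rank the strategy combinations by cross-validated F1 score.
for combo, f1 in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{combo}: F1 = {f1:.3f}")
```

Each of the eight combinations is evaluated with the same cross-validation protocol, so the ranking isolates the effect of the preprocessing and modelling choices rather than the data split.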