Knowledge Representation
The Knowledge Representation line aims to find stable, compact, and interpretable representations for data coming from input sensors across different domains. To this end, non-linear dimensionality reduction techniques such as manifold learning will be explored to improve the performance of learning techniques and cognitive architectures and to make their results more interpretable. Interpretability is intended to prevent database bias from leading cognitive systems to make decisions that are unethical or that contradict established scientific theory. Manifold learning processes will be evaluated both for their impact on the performance of cognitive architectures and through qualitative analyses that demonstrate the interpretability of the data.
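As a minimal sketch of the kind of evaluation described above, the snippet below uses Isomap (one possible manifold learning technique) from scikit-learn to embed synthetic high-dimensional data and compares a downstream classifier's performance on the raw inputs versus the learned low-dimensional representation. The dataset, labels, and classifier are illustrative placeholders, not the project's actual sensor data or architecture.

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic "sensor" data: a swiss roll embedded in 3-D, standing in
# for raw multi-domain sensor inputs (illustrative only).
X, t = make_swiss_roll(n_samples=2000, noise=0.1, random_state=0)
y = (t > np.median(t)).astype(int)  # hypothetical binary label derived from the roll parameter

# Non-linear dimensionality reduction: Isomap recovers a compact 2-D
# representation that preserves the manifold's geodesic structure.
embedding = Isomap(n_neighbors=10, n_components=2)
X_low = embedding.fit_transform(X)

# Evaluate the impact of the learned representation on a downstream learner,
# comparing raw inputs against the manifold embedding.
for name, features in [("raw sensors", X), ("Isomap embedding", X_low)]:
    X_tr, X_te, y_tr, y_te = train_test_split(features, y, test_size=0.3, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"{name}: accuracy = {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```

Beyond the quantitative comparison, the 2-D embedding can also be plotted and inspected, which is the sort of qualitative analysis of interpretability the paragraph refers to.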