
Projects

A common goal to be achieved by working together

DC (Doctoral Candidate) Individual Research Projects

DC 1 - Learning with multiple representations of data

This DC will contribute a novel methodological approach to learning with multiple representations (LMR) based on the idea of “modeling data”. The idea is to provide means for modeling and representing the original data in different ways, and to learn from these representations in parallel. Such an approach appears particularly interesting in the context of weak supervision, where the formal representation of training information is often not straightforward. A specific focus will be placed on the representation of imprecision and uncertainty in the data, and on the idea of representing data at different levels of abstraction, ranging from fine-grained numerical to coarse-grained qualitative representations. A second goal of the project is to develop ML algorithms that are able to learn from data represented at different levels of precision, and to extend them toward LMR variants that apply several such algorithms simultaneously. On the more theoretical side, the effects and potential benefits of learning with multiple representations of data on the overall performance of the learning system will be analyzed.
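As a toy illustration of learning from the same data at two levels of precision (not the methodology this DC will develop), the sketch below trains one learner on the raw numerical features and another on a coarsened, qualitative view, then fuses their predictions; the dataset, models, and averaging rule are placeholder choices.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import KBinsDiscretizer, StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Representation 1: the fine-grained numerical features.
fine_model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
fine_model.fit(X_tr, y_tr)

# Representation 2: a coarse qualitative view (each feature binned into low / medium / high).
coarse_model = make_pipeline(
    KBinsDiscretizer(n_bins=3, encode="ordinal", strategy="quantile"),
    DecisionTreeClassifier(max_depth=4, random_state=0),
)
coarse_model.fit(X_tr, y_tr)

# Naive fusion: average the per-representation class probabilities.
proba = 0.5 * fine_model.predict_proba(X_te) + 0.5 * coarse_model.predict_proba(X_te)
print(f"fused accuracy: {(proba.argmax(axis=1) == y_te).mean():.3f}")
```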

DC 2 - Performance guarantees with multiple representations

This DC will contribute to extending the current theoretical understanding of learning with multiple representations: performance guarantees for learning algorithms, the robustness of multiple-representation learning, the price and benefits of weakening the training information, and competing with multiple representations simultaneously.
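One concrete instance of “competing with multiple representations simultaneously” is online aggregation with the Hedge (exponential weights) algorithm, whose regret against the best single representation is bounded by sqrt((T/2) ln N). The sketch below uses synthetic losses purely for illustration; it is not a result of this DC.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 1000, 3                       # rounds, number of representations ("experts")
losses = rng.uniform(0, 1, (T, N))   # placeholder per-round losses in [0, 1]
losses[:, 0] *= 0.5                  # make representation 0 the best in hindsight

eta = np.sqrt(8 * np.log(N) / T)     # standard Hedge learning rate
weights = np.ones(N)
alg_loss = 0.0
for t in range(T):
    p = weights / weights.sum()           # current distribution over representations
    alg_loss += p @ losses[t]             # expected loss of the aggregate
    weights *= np.exp(-eta * losses[t])   # exponential weight update

regret = alg_loss - losses.sum(axis=0).min()
print(f"regret: {regret:.1f}  (bound: {np.sqrt(T / 2 * np.log(N)):.1f})")
```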

DC 3 - LMR via Neural Probabilistic Logic Programming

This DC will contribute to developing an expressive language for neural-symbolic models that can deal with relational symbolic representations (i.e. first-order logic) as well as with subsymbolic feature representations (e.g. images, audio, video); to developing a scalable inference algorithm based on abduction, which allows for multiple heuristics to define the optimality of the abductive explanation(s); to coupling the inference algorithm with a suitable learning method in weakly supervised settings; and to evaluating inference and learning in real-world tasks requiring integrated neural-symbolic representation and reasoning (e.g. visual question answering, robotics).
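A heavily simplified sketch of the abductive-inference idea, in the spirit of the digit-addition example often used for neural probabilistic logic programming: the classifier, inputs, and scoring heuristic below are placeholders, not this DC's system.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

def fake_digit_classifier(_image):
    """Stand-in for a neural network returning P(digit | image)."""
    p = rng.random(10)
    return p / p.sum()

p1 = fake_digit_classifier("img1")    # placeholder inputs
p2 = fake_digit_classifier("img2")
observed_sum = 7                      # weak label: only the sum of the two digits is given

# Abduction: enumerate all digit pairs that logically explain the observation,
# scored by the product of the subsymbolic probabilities (one simple
# optimality heuristic among many possible ones).
explanations = [((d1, d2), p1[d1] * p2[d2])
                for d1, d2 in product(range(10), repeat=2)
                if d1 + d2 == observed_sum]
explanations.sort(key=lambda e: e[1], reverse=True)

best, score = explanations[0]
print(f"most probable abductive explanation: digits {best} (score {score:.4f})")
print(f"probability of the observation: {sum(s for _, s in explanations):.4f}")
```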

DC 4 - LMR for supervised nonlinear dimensionality reduction

This DC will contribute to developing methods that embed information into low-dimensional vector spaces such that diverse and possibly changing objectives can be brought into focus on demand, tailored by auxiliary information such as functional properties or cognitive biases such as simplicity of the visualization and interpretability; to developing efficient technologies that compute such multiple embeddings incrementally, in a form suitable for interactive exploration; to coupling these methods with specific domain knowledge as given in weakly supervised settings; and to evaluating efficiency and suitability in real-world tasks in the medical domain that deal with different information sources, cohorts, time scales, and attention foci.
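As a point of reference only (not the embedding methods this DC will develop), the sketch below applies an off-the-shelf supervised embedding in which class labels act as the auxiliary information steering a 2-D projection.

```python
from sklearn.datasets import load_digits
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)

# The labels y play the role of auxiliary information: the projection is pulled
# toward a layout in which same-class points become nearest neighbours.
nca = make_pipeline(
    StandardScaler(),
    NeighborhoodComponentsAnalysis(n_components=2, random_state=0),
)
embedding = nca.fit_transform(X, y)   # shape (n_samples, 2), ready for plotting
print(embedding.shape)
```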

DC 5 - Class Expression Learning with Multiple Representations

This DC will contribute to developing novel ML techniques able to exploit multiple representations to accelerate class expression learning over large knowledge graphs with rich semantics expressed in Description Logics (e.g., SROIQ(D)). It will focus on knowledge graphs that change over time and address the challenge of ensuring consistency across the different representations while learning. Concurrently, it will ensure the efficiency both of the incremental solutions and of the learning itself, so that the methods developed can be used in real use cases with large amounts of data.
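A toy sketch of class expression learning, far simpler than the Description Logic setting targeted here: candidate expressions are atomic classes and their intersections over a hand-made set of class assertions, scored against positive and negative example individuals.

```python
from itertools import combinations

# Hypothetical class assertions: class name -> set of member individuals.
classes = {
    "Person":     {"ana", "bob", "carl", "dora"},
    "Researcher": {"ana", "bob", "dora"},
    "Musician":   {"bob", "carl"},
}
positives = {"bob"}                    # individuals the target concept should cover
negatives = {"ana", "carl", "dora"}    # individuals it should exclude

def accuracy(extension):
    """Fraction of labelled individuals classified correctly by a candidate."""
    correct = len(extension & positives) + len(negatives - extension)
    return correct / (len(positives) + len(negatives))

# Candidate expressions: atomic classes and their pairwise intersections.
candidates = dict(classes)
for (n1, e1), (n2, e2) in combinations(classes.items(), 2):
    candidates[f"({n1} AND {n2})"] = e1 & e2

best = max(candidates, key=lambda name: accuracy(candidates[name]))
print(f"best expression: {best}  accuracy={accuracy(candidates[best]):.2f}")
```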

DC 6 - Logic-based explanation of neural networks

This DC will contribute to developing a set of techniques that provide First-Order Logic (FOL) explanations of the decision processes of different neural network architectures. The framework will cover both the case of explaining existing black-box models and that of learning explainable-by-design neural networks. Particular attention will be paid to explainable-by-design models that incur no significant loss of performance with respect to existing state-of-the-art models. The explanation techniques will be experimentally evaluated in different application domains, such as natural language processing, computer vision, and knowledge graphs. By learning a set of explanations as FOL rules, it will be possible to use such rules to make coherent predictions on unseen data, as well as to improve the usability of AI algorithms for decision-support tasks, especially in safety-critical application domains.
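One generic (surrogate-based) strategy for reading a logical rule off a trained network is sketched below; it is an illustration only, not the FOL-explanation techniques this DC will develop, and the concept data are hypothetical.

```python
import numpy as np
from itertools import product
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical Boolean concept data: target = c1 AND NOT c3.
X = np.array(list(product([0, 1], repeat=3)))
y = ((X[:, 0] == 1) & (X[:, 2] == 0)).astype(int)

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(X, y)

# Surrogate step: explain the network's behaviour, so the shallow tree is
# fitted to the network's predictions rather than to the ground truth.
surrogate = DecisionTreeClassifier(max_depth=2).fit(X, net.predict(X))
print(export_text(surrogate, feature_names=["c1", "c2", "c3"]))
# The printed branches can be rewritten as a propositional/FOL-style rule,
# e.g. "c1 AND NOT c3 -> positive".
```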

DC 7

This DC will contribute: (1) to design a set of empirically well-grounded methods for combining Large Language Models (LLMs) and Knowledge Graphs (KGs) in both directions: using LLMs to complement the incomplete knowledge in KGs, and using KGs to improve the unsound answers that LLMs often produce; (2) to exploit KGs and LLMs in a mutually beneficial construction cycle: using KGs to specialize pre-trained LLMs to a specific domain, and using LLMs to increase the coverage of KGs.
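A highly simplified sketch of one direction of this interplay, KG-backed verification of model answers: query_llm is a hypothetical placeholder rather than a real API, and the knowledge graph is a toy triple set.

```python
KG = {
    ("Rome", "capital_of", "Italy"),
    ("Paris", "capital_of", "France"),
}

def query_llm(question: str) -> str:
    """Hypothetical stand-in for a call to a large language model."""
    return "Milan"   # a deliberately unsound answer, for illustration

def grounded_answer(relation: str, obj: str):
    candidate = query_llm(f"Which city is the {relation.replace('_', ' ')} {obj}?")
    # Verify the generated answer against the KG; prefer the KG on conflict.
    kg_answers = {s for (s, r, o) in KG if r == relation and o == obj}
    if candidate in kg_answers:
        return candidate, "confirmed by the KG"
    if kg_answers:
        return kg_answers.pop(), "unsound LLM answer corrected by the KG"
    return candidate, "not covered by the KG (a candidate fact for KG completion)"

print(grounded_answer("capital_of", "Italy"))
```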

DC 8 - Multiple representations in search and recommendation

This DC will contribute: (1) to identify and define multiple representation models of the entities involved in the tasks of Search and Recommendation; (2) to analyse the properties of different aggregation/fusion strategies over the multiple representations that have been defined, also in relation to the considered task and related contextual factors; (3) to implement and evaluate the effectiveness of search engines and recommender systems based on the defined representation models, also with reference to domain-specific applications.
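The sketch below illustrates one simple aggregation strategy, weighted late fusion of scores from a textual representation and a second (behavioural) representation; the corpus, scores, and fusion weight are placeholders, not the models this DC will define.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

items = ["wireless noise cancelling headphones",
         "wired studio headphones",
         "portable bluetooth speaker"]
query = "bluetooth headphones"

# Representation 1: textual (TF-IDF) similarity between the query and the items.
vec = TfidfVectorizer().fit(items)
text_scores = cosine_similarity(vec.transform([query]), vec.transform(items)).ravel()

# Representation 2: behavioural/collaborative evidence (placeholder values).
behaviour_scores = np.array([0.9, 0.2, 0.7])

# Simple weighted late fusion; the weight is a contextual factor to be tuned.
alpha = 0.6
fused = alpha * text_scores + (1 - alpha) * behaviour_scores
for idx in np.argsort(-fused):
    print(f"{fused[idx]:.3f}  {items[idx]}")
```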

DC 9 - LMR for Fault Isolation in Critical Infrastructure Systems

This DC will contribute to the following objectives: (1) to research and design LMR algorithms for time-varying, non-stationary environments and for limited or weak supervision; (2) to validate and further develop these LMR algorithms for fault detection and isolation in Critical Infrastructures (CIs). Specifically, the proposed LMR algorithms will be applied to the detection and isolation of faults (e.g., leakage, contamination, sensor failure) in water distribution networks, using real-world data provided by our Secondment Institution partner. Validation will also be performed on the physical KIOS CIs Testbed (e.g., the Intelligent Transportation Systems Testbed).
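As a minimal illustration of fault detection under non-stationarity (synthetic data only, not the LMR algorithms or testbed data of this DC), the sketch below tracks a slowly drifting flow signal with an adaptive baseline and raises an alarm on a persistent deviation.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(2000)
flow = 50 + 0.002 * t + rng.normal(0, 0.5, t.size)   # slowly drifting normal operation
flow[1500:] += 4.0                                    # injected leak-like offset

ALPHA, THRESH = 0.01, 2.5
baseline = np.empty_like(flow)
baseline[0] = flow[0]
for i in range(1, flow.size):
    residual = flow[i] - baseline[i - 1]
    # Adapt the baseline only while behaviour looks normal, so the fault
    # itself is not absorbed into the model of normal operation.
    baseline[i] = baseline[i - 1] + (ALPHA * residual if abs(residual) < THRESH else 0.0)

alarms = np.flatnonzero(np.abs(flow - baseline) > THRESH)
print(f"first alarm at t={alarms[0]}" if alarms.size else "no fault detected")
```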

DC 10 - Specification, traceability and evaluation of socio-ethical requirements

This DC will contribute to the following objectives: (1) to extend current ML software development methods with the means to provide continuous integration of the specification, implementation, and traceability of socio-ethical requirements and constraints; (2) to develop novel metrics for LMR algorithms that support trade-offs between their computational and ethical requirements; (3) to evaluate these contributions in concrete high-impact case studies.
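As an illustration of a metric coupling predictive and socio-ethical requirements (synthetic predictions, not the metrics this DC will design), the sketch below reports accuracy together with the demographic parity gap and combines them into a single penalised score.

```python
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, 1000)              # protected attribute (0 / 1)
y_true = rng.integers(0, 2, 1000)
y_pred = np.where(rng.random(1000) < 0.8, y_true, 1 - y_true)  # roughly 80%-accurate model

accuracy = (y_pred == y_true).mean()
positive_rates = [y_pred[group == g].mean() for g in (0, 1)]
dp_gap = abs(positive_rates[0] - positive_rates[1])   # demographic parity difference

lam = 2.0                                      # how strongly the ethical requirement binds
combined = accuracy - lam * dp_gap
print(f"accuracy={accuracy:.3f}  DP gap={dp_gap:.3f}  combined={combined:.3f}")
```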