What do we do?

Introduction.

Members

María Elena Buemi

UBA researcher.

Daniel Acevedo

CONICET researcher.

Pablo Negri

CONICET researcher.

Manuel Dubinsky

Role.

Nicolas Mastropasqua

Role.

Julieta Goria

Role.

Juan Ignacio Bustos Gorostegui

Role.

Nicolas Carrasco

Role.

Latest articles

Event-based facial microexpression analysis using Spiking Neural Networks

👤 Written by: Daniel Acevedo, Pablo Negri, Maria E. Buemi and Nicolas Mastropasqua. 📅 December 29, 2025

Conference: 2025 15th IEEE International Conference on Pattern Recognition Systems (ICPRS)

Microexpression analysis plays a key role in many applications, such as Human-Robot Interaction and Driver Monitoring Systems (DMS). However, robust and fast detection of subtle facial micro-movements in the wild remains a significant challenge for standard RGB cameras. Recently, bio-inspired sensors such as event cameras have emerged as a promising alternative due to their high temporal resolution, low latency and energy efficiency. Despite their potential, public event-based datasets focused on facial analysis are still scarce. To address this limitation, we introduce a preliminary multi-resolution and multi-modal (event-based and RGB) microexpression dataset labelled according to the Facial Action Coding System and recorded under mixed lighting conditions. Additionally, this paper explores the use of Spiking Neural Networks to detect these microexpressions and to perform facial recognition using the event data.


Feasibility of Kinship Search in Inverse Family Problem

👤 Written by: Daniel Acevedo, Pablo Negri and Julieta Goria. 📅 October 21, 2025

Venue: JAIIO (Jornadas Argentinas de Informática).

Kinship recognition from facial images is a challenging task that broadens traditional facial verification techniques by incorporating genetic and generational variations. While existing approaches, such as those explored in Recognizing Families In the Wild (RFIW), focus on verifying familial relationships in the standard temporal direction (mostly comparing parents to their younger children), our research addresses the inverse problem: identifying children by comparing adults with younger versions of their parents. This work's goal is to determine whether this inverse formulation constitutes a different problem from traditional kinship recognition, or whether it can be addressed using the same approaches. To this end, we used ArcFace [Deng et al., 2019] for facial alignment and embedding extraction. We also developed a new dataset reflecting this temporal inversion, based on images extracted from IMDb (Internet Movie Database). We evaluated the model's performance by comparing the distribution of cosine similarities on both datasets: the traditional approach (FIW) and our newly proposed inverse dataset.
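
The comparison step described above (embeddings scored by cosine similarity) can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the 512-dimensional vectors are random stand-ins for ArcFace embeddings, and the variable names are hypothetical.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings (in [-1, 1])."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Random stand-ins for 512-d ArcFace-style embeddings
rng = np.random.default_rng(0)
parent_young = rng.normal(size=512)   # young image of the parent
adult_child = rng.normal(size=512)    # adult image of the candidate child

score = cosine_similarity(parent_young, adult_child)
```

In a verification setting, a pair would be accepted as kin when `score` exceeds a threshold chosen from the similarity distributions of the two datasets.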


Integrated satellite precipitation: combining products, infrared brightness temperature and electrical activity by convolutional neural networks

👤 Written by: Pablo Negri, Sergio Hernán González, Juan Jose Ruiz, Luciano Vidal and Ezequiel Geslin. 📅 September 30, 2025

Venue: JAIIO (Jornadas Argentinas de Informática).

Precipitation monitoring is crucial for agricultural activities, since precipitation is a fundamental component of the hydrological balance and has a great impact on yields. In-situ observations through rain gauges are scarce, so they are complemented with precipitation estimates from remote sensors (i.e. satellites and meteorological radars) that increase spatial and temporal coverage. In this work we propose a convolutional neural network model with a UNet-type architecture, based on data provided by the GOES-16 satellite. In particular, we evaluate the combined use of infrared brightness temperature (which provides information on cloud-top temperature) and electrical activity (which provides information on convection intensity). Model training is performed using precipitation data estimated by the GPM satellite-borne weather radar.


Enhancing precipitation detection: A multi-sensor approach using conditional GANs and recurrent networks

👤 Written by: Pablo Negri, Daniel Acevedo, Juan Ruiz, Sergio Gonzalez, Luciano Vidal, Alejo Silvarrey and Maria Gabriela Nicora. 📅 June 14, 2025

Journal: Pattern Recognition Letters.

The advent of automatic precipitation detection with high-frequency data at very low spatial resolution (4 km) makes the satellite infrared brightness temperature (IR-BT) a promising variable. Nevertheless, this approach must confront the inherent simplicity of this variable, which does not always correlate strongly with convective precipitation, and the very low number of rain events occurring in nature, which makes this an imbalanced problem. This paper proposes a novel approach to identify rainfall that integrates the IR-BT variable with lightning activity, defined as the number of detected lightning flashes per unit of time and space. The approach utilizes a recurrent neural network to estimate a binary output and a conditional GAN (cGAN) framework, which enhances training and performance on this imbalanced problem. Inverse Dice loss, an alternative loss function, is employed to improve the convergence and results of our framework, PD-GAN. Tests have shown that integrating sensors and the proposed architecture leads to positive outcomes, including a reduction in false alarms and an enhancement in the overlap of positive events.
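
For intuition on the Dice family of losses mentioned above, here is a minimal sketch of the standard soft Dice loss on a toy rain mask. The paper's inverse Dice variant is not reproduced here; only the common baseline is shown, and the toy arrays are invented for illustration.

```python
import numpy as np

def soft_dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-6) -> float:
    """Soft Dice coefficient between a predicted rain-probability map
    and a binary rain mask."""
    inter = float(np.sum(pred * target))
    return (2.0 * inter + eps) / (float(np.sum(pred)) + float(np.sum(target)) + eps)

def dice_loss(pred: np.ndarray, target: np.ndarray) -> float:
    # Standard Dice loss: 1 - Dice. Overlap-based, so it is less
    # dominated by the abundant no-rain pixels than pixel-wise losses.
    return 1.0 - soft_dice(pred, target)

# Toy 4x4 example: predicted rain probabilities vs. a ground-truth mask
pred = np.array([[0.9, 0.8, 0.1, 0.0],
                 [0.7, 0.9, 0.0, 0.1],
                 [0.0, 0.1, 0.0, 0.0],
                 [0.1, 0.0, 0.0, 0.0]])
target = np.zeros((4, 4))
target[:2, :2] = 1.0  # rain confined to the top-left block

loss = dice_loss(pred, target)
```

Because the loss is driven by the overlap between predicted and observed rain, a good prediction on this toy case yields a small loss even though most pixels are dry.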


Combined use of radiomics and artificial neural networks for the three-dimensional automatic segmentation of glioblastoma multiforme

👤 Written by: María E. Buemi, Alexander Mulet de los Reyes, Victoria Hyde Lord, Daniel Gandía, Luis Gómez Déniz, Maikel Noriega Alemán and Cecilia Suárez. 📅 September 09, 2025

Venue: JAIIO (Jornadas Argentinas de Informática).

Glioblastoma multiforme (GBM) is the most prevalent and aggressive primary brain tumour, with the worst prognosis in adults. Currently, the automatic segmentation of this kind of tumour is being intensively studied. Here, automatic three-dimensional segmentation of the GBM is achieved together with its related subzones (active tumour, inner necrosis, and peripheral oedema). Preliminary segmentations were first defined based on the four basic magnetic resonance imaging modalities and classic image processing methods (multithreshold Otsu, Chan–Vese active contours, and morphological erosion). After an automatic gap-filling post-processing step, these preliminary segmentations were combined and corrected by a supervised artificial neural network of the multilayer perceptron type, with a hidden layer of 80 neurons, fed by 30 selected radiomic features of gray intensity and texture. Network classification has an overall accuracy of 83.9%, while the complete combined algorithm achieves average Dice similarity coefficients of 89.3%, 80.7%, 79.7%, and 66.4% for the entire region of interest, active tumour, oedema, and necrosis segmentations, respectively. These values are among the best reported in the literature, with better Hausdorff distances and lower computational costs. The results presented here show that automatic segmentation of this kind of tumour can be achieved with traditional radiomics. This has relevant clinical potential at the time of diagnosis, precision radiotherapy planning, or post-treatment response evaluation.
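
The shape of the network described above (30 radiomic features in, one hidden layer of 80 neurons) can be sketched as a single forward pass. This is only an illustration of the stated architecture: the weights are random, the activation functions are assumptions, and the four output classes are a guess based on the subzones listed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)

# Architecture matching the stated dimensions: 30 radiomic features,
# 80 hidden neurons, 4 hypothetical classes (e.g. active tumour,
# necrosis, oedema, background).
W1 = rng.normal(scale=0.1, size=(30, 80))
b1 = np.zeros(80)
W2 = rng.normal(scale=0.1, size=(80, 4))
b2 = np.zeros(4)

def forward(x: np.ndarray) -> np.ndarray:
    """One forward pass: 30 features -> 80 hidden units -> 4 class probs."""
    h = np.tanh(x @ W1 + b1)            # hidden layer (activation assumed)
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max())   # numerically stable softmax
    return e / e.sum()

probs = forward(rng.normal(size=30))    # one voxel's radiomic feature vector
```

A trained version of this classifier would assign each voxel the class with the highest probability, producing the per-subzone labels that the combined algorithm then refines.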


Automatic Classification of Bee Pollen Types

👤 Written by: María E. Buemi, Julieta Shammah and Agustín Sanguinetti. 📅 September 09, 2025

Venue: JAIIO (Jornadas Argentinas de Informática).

This study assesses the effectiveness of automatically classifying pollen types relevant to apiculture using convolutional neural networks. Using samples from ten bee-foraged plant species from Buenos Aires (Argentina), non-acetolyzed hydrated pollen specimens were prepared and photographed under an optical microscope at a medium magnification (10× objective lens). A segmentation algorithm was developed to extract individual pollen grain images, generating two independent datasets. Preliminary non-acetolyzed results with the ResNet18 network show 90% accuracy for grayscale images versus 63% for color images. This research aims to optimize the automatic identification of the floral origin of honeys from Buenos Aires using low-complexity equipment.


Exploring spatial-temporal dynamics in event-based facial micro-expression analysis

👤 Written by: Nicolas Mastropasqua, Ignacio Bugueno-Cordova, Rodrigo Verschae, Daniel Acevedo, Pablo Negri and Maria Elena Buemi. 📅 October 2025

Conference: Proceedings of the IEEE/CVF International Conference on Computer Vision

Micro-expression analysis has applications in domains such as Human-Robot Interaction and Driver Monitoring Systems. Accurately capturing subtle and fast facial movements remains difficult when relying solely on RGB cameras, due to limitations in temporal resolution and sensitivity to motion blur. Event cameras offer an alternative, with microsecond-level precision, high dynamic range, and low latency. However, public datasets featuring event-based recordings of Action Units are still scarce. In this work, we introduce a novel, preliminary multi-resolution and multi-modal micro-expression dataset recorded with synchronized RGB and event cameras under variable lighting conditions. Two baseline tasks are evaluated to explore the spatial-temporal dynamics of micro-expressions: Action Unit classification using Spiking Neural Networks (51.23% accuracy with events vs. 23.12% with RGB), and frame reconstruction using Conditional Variational Autoencoders, achieving SSIM = 0.8513 and PSNR = 26.89 dB with high-resolution event input. These promising results show that event-based data can be used for micro-expression recognition and frame reconstruction.

Latest news

In progress: March 2026

Conference

Brief description of the conference.

Detailed description and key points.

  • Point 1
  • Point 2

Finished: February 2026

Event

Brief description of what happened at the laboratory event.

How it went and the results obtained, in detail.