10th Workshop and Competition on


Affective & Behavior Analysis in-the-wild (ABAW)

in conjunction with the IEEE Computer Vision and Pattern Recognition Conference (CVPR), 2026

Wed June 3 - Sun June 7, 2026, Denver, Colorado

About ABAW

The ABAW Workshop is a premier platform highlighting the latest advancements in multimodal analysis, generation, modeling, and understanding of human affect and behavior in real-world, unconstrained environments. It emphasizes cutting-edge systems that integrate facial expressions, body movements, gestures, natural language, voice and speech to enable impactful research and practical applications. The workshop fosters interdisciplinary collaboration across fields such as computer vision, AI, human-machine interaction, psychology, robotics, ethics & healthcare. It further addresses complex challenges such as algorithmic fairness, demographic bias & data privacy, making it a vital forum for building equitable, generalizable & human-centered AI systems. By uniting experts from academia, industry & government, the workshop promotes innovation, drives knowledge exchange, and inspires new directions in affective computing, behavior modelling and understanding & human-computer interaction. Finally, the Workshop includes a Competition with 6 Challenges.

The ABAW Workshop and Competition is a continuation of the respective events held at CVPR 2025, 2024, 2023, 2022 & 2017, ICCV 2025 & 2021, ECCV 2024 & 2022, FG 2020 (a) & (b).

Organisers



General Chair




Dimitrios Kollias

Queen Mary University of London, UK
d.kollias@qmul.ac.uk


Program Chairs




Stefanos Zafeiriou

Imperial College London, UK
s.zafeiriou@imperial.ac.uk

Irene Kotsia

Cogitat Ltd, UK
irene@cogitat.io

Panagiotis Tzirakis

Hume AI, USA
panagiotis@hume.ai

Alan Cowen

Google DeepMind
alan@hume.ai

Eric Granger

École de technologie supérieure, Canada
eric.granger@etsmtl.ca

Marco Pedersoli

École de technologie supérieure, Canada
marco.pedersoli@etsmtl.ca

Simon Bacon

Concordia University, Canada
simon.bacon@concordia.ca

Data Chairs

    Alice Baird, Hume AI, USA
    Chris Gagne, Hume AI, USA
    Chunchang Shao, Queen Mary University of London, UK
    Damith Chamalke Senadeera, Queen Mary University of London, UK
    Soufiane Belharbi, École de technologie supérieure, Canada
    M. Haseeb Aslam, École de technologie supérieure, Canada
    Guanyu Hu, Queen Mary University of London, UK & Xi'an Jiaotong University, China
    Kaushal Kumar Keshlal Yadav, Queen Mary University of London, UK
    Jianian Zheng, University College London, UK

The Workshop



Call for Papers

Original high-quality contributions, in terms of databases, surveys, studies, foundation models, techniques and methodologies (either uni-modal or multi-modal; uni-task or multi-task ones) are solicited on -but are not limited to- the following topics:

    facial expression (basic, compound or other) or micro-expression analysis

    facial action unit detection

    valence-arousal estimation

    physiology-based (e.g., EEG, EDA) affect analysis

    face recognition, detection or tracking

    body recognition, detection or tracking

    gesture recognition or detection

    pose estimation or tracking

    activity recognition or tracking

    lip reading and voice understanding

    face and body characterization (e.g., behavioral understanding)

    characteristic analysis (e.g., gait, age, gender, ethnicity recognition)

    group understanding via social cues (e.g., kinship, non-blood relationships, personality)

    video, action and event understanding

    digital human modeling

    violence detection

    autonomous driving

    domain adaptation, domain generalisation, few- or zero-shot learning for the above cases

    fairness, explainability, interpretability, trustworthiness, privacy-awareness, bias mitigation and/or subgroup distribution shift analysis for the above cases

    editing, manipulation, image-to-image translation, style mixing, interpolation, inversion and semantic diffusion for all aforementioned cases



Workshop Important Dates


Paper Submission Deadline:                                                             23:59:59 AoE (Anywhere on Earth) March 18, 2026

Review decisions sent to authors; Notification of acceptance:       April 7, 2026

Camera ready version:                                                                       April 10, 2026




Submission Information

The paper format should adhere to the CVPR 2026 main conference proceedings style and paper submission guidelines. Please have a look at the Submission Guidelines Section here.

We welcome long paper submissions of between 5 and 8 pages, excluding references and supplementary material; a submission must be at least 5 pages long to be considered for publication. All submissions must be anonymous and conform to the CVPR 2026 standards for double-blind review.

All papers should be submitted using this CMT website.

All accepted manuscripts will be part of CVPR 2026 conference proceedings.

On the day of the workshop, oral presentations will be given by authors attending in person.


The Microsoft CMT service was used for managing the peer-reviewing process for this conference. This service was provided for free by Microsoft and they bore all expenses, including costs for Azure cloud services as well as for software development and support.



Workshop Contact Information

For any queries you may have regarding the Workshop, please contact d.kollias@qmul.ac.uk.

The Competition



The Competition is a continuation of the respective Competitions held at CVPR in 2025, 2024, 2023, 2022 & 2017, at ECCV in 2024 & 2022, at ICCV in 2025 & 2021 and at IEEE FG in 2020. It is split into the six Challenges described below. Teams are invited to participate in at least one of these Challenges.



How to participate

To participate, teams must register. Each team may have at most 8 members.



VA Estimation, EXPR Recognition and AU Detection Challenges

If you want to participate in any of these three Challenges, follow the registration procedure below.

The lead researcher should send an email from their official address (no personal emails will be accepted) to d.kollias@qmul.ac.uk with:

i) subject "10th ABAW Competition: Team Registration";

ii) this EULA (if the team is composed of only academics) or this EULA (if the team has at least one member coming from the industry) filled in, signed and attached;

iii) the lead researcher's official academic/industrial website; the lead researcher cannot be a student (UG/PG/Ph.D.);

iv) the emails of each team member, each one in a separate line in the body of the email;

v) the team's name;

vi) the point of contact name and email address (i.e., which team member will be the main point of contact for future communications, data access, etc.).

As a reply, you will receive access to the dataset's cropped/cropped-aligned images and annotations and other important information.



Fine-Grained VD Challenge

If you want to participate in this Challenge, follow the registration procedure below.

The lead researcher should send an email from their official address (no personal emails will be accepted) to d.kollias@qmul.ac.uk with:

i) subject "10th ABAW Competition: Team Registration";

ii) this EULA (the team needs to be composed of only academics) filled in, signed and attached;

iii) the lead researcher's official academic/industrial website; the lead researcher cannot be a student (UG/PG/Ph.D.);

iv) the emails of each team member, each one in a separate line in the body of the email;

v) the team's name;

vi) the point of contact name and email address (i.e., which team member will be the main point of contact for future communications, data access, etc.).

As a reply, you will receive access to the dataset's videos and other important information.



EMI Estimation Challenge

If you want to participate in this Challenge please email competitions@hume.ai with the following information:

i) subject "10th ABAW Competition: Team Registration"

ii) the lead researcher's name, email and official academic/industrial website; the lead researcher cannot be a student (UG/PG/Ph.D.);

iii) the names and emails of each team member, each one in a separate line in the body of the email;

iv) the team's name;

v) the point of contact name and email address (i.e., which team member will be the main point of contact for future communications, data access, etc.).

A reply to sign an EULA will be sent to all team members. When the EULA is signed by all team members a link to the data will be shared.



AH Video Recognition Challenge

To participate in this Challenge, please follow the registration procedure below:

Please fill out our form according to these steps, and submit it. It involves signing an EULA and uploading it through the same form. The form and the EULA must be completed and signed by a person holding a full-time faculty position at a university, higher education institution, or an equivalent organization. The signee cannot be a student (undergraduate, postgraduate, Ph.D., or postdoctoral).

Once the form is submitted, with the signed EULA, we will contact you to provide details for access to the BAH video dataset. The BAH dataset includes raw videos, cropped-aligned faces for each frame, video- and frame-level labels, audio transcripts with timestamps, annotators' cues, participant metadata, pre-defined data splits (training, validation and test sets), and documentation.



Competition Contact Information

For any queries you may have regarding the first 4 Challenges (VA Estimation/AU Detection/EXPR Recognition/Fine-Grained VD), please contact d.kollias@qmul.ac.uk.

For any queries you may have regarding the fifth Challenge (EMI Estimation), please contact competitions@hume.ai.

For any queries you may have regarding the sixth Challenge (AH Recognition), please contact soufiane.belharbi@gmail.com.


General Information

At the end of the Challenges, each team will have to send us:

i) a link to a Github repository where their solution/source code will be stored,

ii) a link to a pre-print version of a paper (e.g. published on arXiv) with 2-8 pages describing their proposed methodology, data used and results.

Each team will also need to upload their test set predictions on an evaluation server (details will be circulated when the test set is released).

After that, the winner of each Challenge, along with a leaderboard, will be announced.

There will be one winner per Challenge. The top-3 performing teams of each Challenge will have to contribute paper(s) describing their approach, methodology and results to our Workshop. All other teams are also able to submit paper(s) describing their solutions and final results. All accepted papers will be part of the CVPR 2026 proceedings.

The Competition's white paper (describing the Competition, the data, the baselines and results) will be ready at a later stage and will be distributed to the participating teams.



General Rules

1) Participants can contribute to any of the 6 Challenges.

2) In order to take part in any Challenge, participants will have to register as described above.

3) Any face detector, whether commercial or academic, can be used in the challenge. The paper accompanying the challenge result submission should contain clear details of the detectors/libraries used.

4) The top performing teams will have to share their solution (code, model weights, executables) with the organisers upon completion of the challenge, so that the organisers can verify the results and prevent cheating or violation of the rules.



Competition Important Dates


Call for participation announced, team registration begins, data available:           January 31, 2026

Test set release:                                                                                                             March 9, 2026

Final submission deadline (Predictions, Code and ArXiv paper):                             23:59:59 AoE (Anywhere on Earth) March 15, 2026

Winners Announcement:                                                                                               March 17, 2026

Final Paper Submission Deadline:                                                                               23:59:59 AoE (Anywhere on Earth) March 18, 2026

Review decisions sent to authors; Notification of acceptance:                                 April 7, 2026

Camera ready version:                                                                                                 April 10, 2026

Valence-Arousal (VA) Estimation Challenge

Database

For this Challenge, an augmented version of the Aff-Wild2 database will be used. This database is audiovisual (A/V), in-the-wild and in total consists of 594 videos of around 3M frames of 584 subjects annotated in terms of valence and arousal.

Rules

Only uni-task solutions will be accepted for this Challenge; this means that the teams should only develop uni-task (valence-arousal estimation task) solutions. Teams are allowed to use any -publicly or not- available pre-trained model (as long as it has not been pre-trained on Aff-Wild2). The pre-trained model can be pre-trained on any task (e.g., VA estimation, Expression Recognition, AU detection, Face Recognition). However, when refining the model and developing the methodology, teams should not use any other annotations (expressions or AUs): the methodology should be purely uni-task, using only the VA annotations. This means that teams are allowed to use other databases' VA annotations, or generated/synthetic data, or any affine transformations, or in general data augmentation techniques (e.g., MixAugment, sketched below) for increasing the size of the training dataset.
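
For illustration only, the following minimal NumPy sketch shows a generic mixup-style augmentation in the spirit of such techniques (not the exact MixAugment method): random pairs of training images and their continuous valence/arousal targets are convexly combined to create additional training samples. The function name and array shapes are hypothetical.

    import numpy as np

    def mixup_batch(images, targets, alpha=0.4, rng=None):
        # Generic mixup-style augmentation (illustrative sketch, not the official MixAugment).
        # images: float array of shape (B, H, W, C); targets: valence/arousal array of shape (B, 2).
        rng = np.random.default_rng() if rng is None else rng
        lam = rng.beta(alpha, alpha)             # mixing coefficient in (0, 1)
        perm = rng.permutation(len(images))      # random pairing of samples within the batch
        mixed_images = lam * images + (1.0 - lam) * images[perm]
        mixed_targets = lam * targets + (1.0 - lam) * targets[perm]
        return mixed_images, mixed_targets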

Performance Assessment

The performance measure (P) is the mean Concordance Correlation Coefficient (CCC) of valence and arousal:

P = (CCC_arousal + CCC_valence) / 2
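
As a concrete illustration, the minimal Python sketch below computes the CCC of each dimension and averages them; the arrays here are random placeholders, and the official evaluation script may differ in implementation details.

    import numpy as np

    def ccc(preds, labels):
        # Concordance Correlation Coefficient between two 1-D arrays.
        mean_p, mean_l = preds.mean(), labels.mean()
        var_p, var_l = preds.var(), labels.var()
        cov = np.mean((preds - mean_p) * (labels - mean_l))
        return 2.0 * cov / (var_p + var_l + (mean_p - mean_l) ** 2)

    # Placeholder per-frame predictions and annotations in [-1, 1].
    rng = np.random.default_rng(0)
    valence_pred, valence_true = rng.uniform(-1, 1, 1000), rng.uniform(-1, 1, 1000)
    arousal_pred, arousal_true = rng.uniform(-1, 1, 1000), rng.uniform(-1, 1, 1000)

    # Overall challenge metric: mean of the two per-dimension CCCs.
    P = 0.5 * (ccc(arousal_pred, arousal_true) + ccc(valence_pred, valence_true))
    print(f"P = {P:.3f}")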

Baseline Results

The baseline network is an ImageNet pre-trained ResNet-50 and its performance on the validation set is:

CCCvalence = 0.24,     CCCarousal = 0.20

P = 0.22

Expression (EXPR) Recognition Challenge

Database

For this Challenge, the Aff-Wild2 database will be used. This database is audiovisual (A/V), in-the-wild and in total consists of 548 videos of around 2.7M frames that are annotated in terms of the 6 basic expressions (i.e., anger, disgust, fear, happiness, sadness, surprise), plus the neutral state, plus a category 'other' that denotes expressions/affective states other than the 6 basic ones.

Rules

Only uni-task solutions will be accepted for this Challenge; this means that the teams should only develop uni-task (expression recognition task) solutions. Teams are allowed to use any -publicly or not- available pre-trained model (as long as it has not been pre-trained on Aff-Wild2). The pre-trained model can be pre-trained on any task (e.g., VA estimation, Expression Recognition, AU detection, Face Recognition). However, when refining the model and developing the methodology, teams should not use any other annotations (VA or AUs): the methodology should be purely uni-task, using only the EXPR annotations. This means that teams are allowed to use other databases' EXPR annotations, or generated/synthetic data (e.g. the data provided in the ECCV 2022 run of the ABAW Challenge), or any affine transformations, or in general data augmentation techniques (e.g., MixAugment) for increasing the size of the training dataset.

Performance Assessment

The performance measure (P) is the average F1 Score across all 8 categories:   ∑ F1/8
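
For clarity, the short sketch below computes this metric as scikit-learn's macro-averaged F1 over the 8 expression categories; labels and predictions are random placeholders, and the official evaluation script may differ.

    import numpy as np
    from sklearn.metrics import f1_score

    # Per-frame class indices over the 8 categories
    # (6 basic expressions + neutral + 'other'); placeholders only.
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 8, size=5000)
    y_pred = rng.integers(0, 8, size=5000)

    # Macro F1 = unweighted mean of the 8 per-class F1 scores (sum F1 / 8).
    P = f1_score(y_true, y_pred, average="macro")
    print(f"P = {P:.3f}")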

Baseline Results

The baseline network is a pre-trained VGGFACE network (with fixed convolutional weights and the MixAugment data augmentation technique) and its performance on the validation set is:

P = 0.25

Action Unit (AU) Detection Challenge

Database

For this Challenge, the Aff-Wild2 database will be used. This database is audiovisual (A/V), in-the-wild and in total consists of 547 videos of around 2.7M frames that are annotated in terms of 12 action units, namely AU1,AU2,AU4,AU6,AU7,AU10,AU12,AU15,AU23,AU24,AU25,AU26.

Rules

Only uni-task solutions will be accepted for this Challenge; this means that the teams should only develop uni-task (action unit detection task) solutions. Teams are allowed to use any -publicly or not- available pre-trained model (as long as it has not been pre-trained on Aff-Wild2). The pre-trained model can be pre-trained on any task (e.g., VA estimation, Expression Recognition, AU detection, Face Recognition). However, when refining the model and developing the methodology, teams should not use any other annotations (VA or EXPR): the methodology should be purely uni-task, using only the AU annotations. This means that teams are allowed to use other databases' AU annotations, or generated/synthetic data, or any affine transformations, or in general data augmentation techniques (e.g., MixAugment) for increasing the size of the training dataset.

Performance Assessment

The performance measure (P) is the average F1 Score across all 12 categories:   ∑ F1/12
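
The sketch below illustrates one way to compute this measure, assuming the usual convention of a binary F1 per AU averaged over the 12 AUs; the data are random placeholders, and the official evaluation script may differ.

    import numpy as np
    from sklearn.metrics import f1_score

    AUS = ["AU1", "AU2", "AU4", "AU6", "AU7", "AU10",
           "AU12", "AU15", "AU23", "AU24", "AU25", "AU26"]

    # Binary activation matrices of shape (num_frames, 12); 1 = AU present. Placeholders only.
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=(5000, 12))
    y_pred = rng.integers(0, 2, size=(5000, 12))

    # Binary F1 per AU, then the unweighted average over the 12 AUs (sum F1 / 12).
    per_au_f1 = [f1_score(y_true[:, i], y_pred[:, i]) for i in range(len(AUS))]
    P = float(np.mean(per_au_f1))
    print(f"P = {P:.3f}")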

Baseline Results

The baseline network is a pre-trained VGGFACE network (with fixed convolutional weights) and its performance on the validation set is:

P = 0.39

Fine-Grained Violence Detection (VD) Challenge

Database

For this Challenge, a part of the DVD database will be used. The DVD database is a large-scale (over 500 videos, 2.7M frames), frame-level annotated VD database with diverse environments, varying lighting conditions, multiple camera sources, complex social interactions, and rich metadata. DVD is designed to capture the complexities of real-world violent events.

Goal of the Challenge and Rules

Participants will be provided with a subset of the DVD Database and will be tasked with developing AI, machine learning, or deep learning models for fine-grained violence detection (VD), specifically at the frame level. Each frame in the DVD Database is annotated as either violent or non-violent. Participants are required to predict, for every frame, whether it depicts a violent event (label: 1) or a non-violent event (label: 0).

Teams are allowed to use any -publicly or not- available pre-trained model and any -publicly or not- available database.

Performance Assessment

The performance measure (P) is the macro F1 Score across the two categories:   ∑ F1/2

Baseline Results

The baseline network is an ImageNet pre-trained ResNet-50 and its performance on the validation set is:

P = 0.73

Emotional Mimicry Intensity (EMI) Estimation Challenge

Database

For this Challenge, the multimodal Hume-Vidmimic2 dataset is used, which consists of more than 15,000 videos totaling over 25 hours. In this dataset, every participant was tasked with imitating a 'seed' video that showcased an individual displaying a particular emotion. After the mimicry, they were asked to assess the emotional intensity of the seed video by selecting from a range of predefined emotional categories. The following emotion dimensions are targeted: 'Admiration', 'Amusement', 'Determination', 'Empathic Pain', 'Excitement', and 'Joy'. A normalized score from 0 to 1 is provided as the ground truth value.

Performance Assessment

The performance measure is the average Pearson's correlation (ρ) across the 6 emotion dimensions:   ∑ ρ/6
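
As an illustration, the sketch below computes Pearson's correlation per emotion dimension and averages the 6 values; the arrays are random placeholders, and the official evaluation script may differ.

    import numpy as np

    EMOTIONS = ["Admiration", "Amusement", "Determination",
                "Empathic Pain", "Excitement", "Joy"]

    # Predicted and ground-truth intensities in [0, 1], shape (num_videos, 6); placeholders only.
    rng = np.random.default_rng(0)
    y_true = rng.random((2000, 6))
    y_pred = rng.random((2000, 6))

    # Pearson's rho per emotion dimension, then the unweighted average (sum rho / 6).
    rhos = [np.corrcoef(y_pred[:, i], y_true[:, i])[0, 1] for i in range(len(EMOTIONS))]
    P = float(np.mean(rhos))
    print(f"P = {P:.3f}")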

Baseline Results

We established baseline results using two different feature sets.

First, we employed pre-trained Vision Transformer (ViT) features, which were further processed through a three-layer Gated Recurrent Unit (GRU) network. This approach achieved a performance score of 0.09.

Second, we utilized features extracted from Wav2Vec2, combined with a linear processing layer, which resulted in a performance score of 0.24.

Additionally, we explored a multimodal approach by averaging the predictions from both unimodal methods, leading to a combined performance score of 0.25.

Ambivalence/Hesitancy (AH) Video Recognition Challenge

Database

Upon registration for the AH video recognition challenge, teams will be granted access to a new version of the BAH dataset [1], fully annotated at the video and frame level, that was collected for multimodal recognition of A/H in videos. It contains 1,427 videos with a total duration of 10.60 hours, captured from 300 participants across Canada answering a predefined set of questions designed to elicit A/H. It is intended to mirror real-world online personalized behaviour change interventions. BAH is fully annotated by experts with timestamps that indicate where A/H occurs, together with frame- and video-level annotations of A/H cues. Speech-to-text transcripts with their timestamps, cropped and aligned faces, and participant metadata are also provided. Since A and H manifest similarly in practice, we provide a binary annotation indicating the presence or absence of both A and H, without distinction. Each participant in the dataset may have up to seven videos. The dataset is divided participant-wise into training, validation, and test sets.

For performance evaluation, participants can train their models on the BAH training set using any type of supervision and report the performance on its public test set. A second, unlabeled, private test set will be released to the teams before the end of the challenge. Teams must submit by email to the AH recognition challenge organizers a file of their per-video predictions on this private test set. They are allowed up to 5 trials within the week of the test period. We will compute the performance and the best trial will be used to rank teams and announce the winners. Teams can submit all 5 trials at once, or one trial at a time; the latter option allows us to send teams the trial performance as feedback, so they can adjust their approach, if needed, for the next trial. More details of the submission format will be communicated on the date of the test release.

Goal of the Challenge and Rules

The challenge aims at the design of innovative models that predict A/H at the video level, i.e., indicate whether or not a video contains A/H (1: presence of A/H, 0: absence of A/H). Teams are required to develop their methods to recognize A/H at the video level (binary task): given a video, can we predict whether or not it contains A/H? Different learning setups could be considered: supervised/self-supervised learning, domain adaptation and personalization, zero-/few-shot learning, etc. Standard multimodal models could be used, in addition to multimodal LLMs and other recent architectures. Teams are advised to develop solutions tailored for A/H recognition.

Teams are allowed to use any publicly available or private pre-trained model and any public or private dataset (that contains any type of annotations, e.g. valence/arousal, basic or compound emotions, action units). Other datasets for ambivalence/hesitancy, if available, could be used, in addition to the BAH dataset, but they must be disclosed in the paper.

Performance Assessment

The performance measure (P) is the average F1 score (Macro F1) at the video level across both classes (presence (1) and absence (0) of A/H) over the private test set, and will be used to rank teams. We will also report the average precision score (AP) of the positive class (1).
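
The minimal sketch below shows how the two reported quantities can be computed with scikit-learn at the video level; the labels, scores and binary predictions are random placeholders, and the official evaluation script may differ.

    import numpy as np
    from sklearn.metrics import f1_score, average_precision_score

    # Video-level ground truth (1 = A/H present, 0 = absent), a continuous score per video
    # (e.g., predicted probability of A/H) and the thresholded binary predictions. Placeholders only.
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=400)
    y_score = rng.random(400)
    y_pred = (y_score >= 0.5).astype(int)

    P = f1_score(y_true, y_pred, average="macro")      # ranking metric: macro F1 over both classes
    ap = average_precision_score(y_true, y_score)      # also reported, for the positive class
    print(f"Macro F1 = {P:.3f}, AP = {ap:.3f}")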

Baseline Results

A performance of P = 0.2827 was obtained on the BAH public test set using a baseline model: a zero-shot setup with the multimodal LLM (M-LLM) Video-LLaVA, a simple prompt, and the vision modality only (code: https://github.com/sbelharbi/zero-shot-m-llm-bah-prediction). See more details in [1]. Additionally, teams could build on top of standard multimodal models that leverage the vision, audio, and text modalities, such as the one used in [1], and adapt it from frame-level prediction to video-level prediction: https://github.com/sbelharbi/bah-dataset. Teams can explore improving standard multimodal models, temporal modeling, multimodal alignment, and multimodal LLMs with specialized parameter-efficient fine-tuning (PEFT). Domain adaptation and personalization could also be considered.

[1]: González-González M, Belharbi S, Zeeshan MO, Sharafi M, Aslam MH, Pedersoli M, Koerich AL, Bacon SL, Granger E. “BAH Dataset for Ambivalence/Hesitancy Recognition in Videos for Behavioural Change”. https://arxiv.org/pdf/2505.19328, ICLR, 2026.

[2]: Sharafi M, Belharbi S, Salem HB, Etemad A, Koerich AL, Pedersoli M, Bacon S, Granger E. “Personalized Feature Translation for Expression Recognition: An Efficient Source-Free Domain Adaptation Method”. https://arxiv.org/pdf/2508.09202. ICLR, 2026.

[3]: Zeeshan MO, Aslam MH, Belharbi S, Koerich AL, Pedersoli M, Bacon S, Granger E. “Subject-based domain adaptation for facial expression recognition”. https://arxiv.org/pdf/2312.05632, FG conference, 2024.

References


If you use the above data, you must cite all of the following papers:

    D. Kollias, et al.: "From emotions to violence: Multimodal fine-grained behavior analysis at the 9th abaw", ICCV 2025

    @inproceedings{kollias2025emotions, title={From emotions to violence: Multimodal fine-grained behavior analysis at the 9th abaw}, author={Kollias, Dimitrios and Zafeiriou, Stefanos and Kotsia, Irene and Slabaugh, Greg and Senadeera, Damith Chamalke and Zheng, Jianian and Yadav, Kaushal Kumar Keshlal and Shao, Chunchang and Hu, Guanyu}, booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision}, pages={1--12}, year={2025} }

    D. Kollias, et al.: "Advancements in Affective and Behavior Analysis: The 8th ABAW Workshop and Competition", CVPR 2025

    @inproceedings{kollias2025advancements, title={Advancements in Affective and Behavior Analysis: The 8th ABAW Workshop and Competition}, author={Kollias, Dimitrios and Tzirakis, Panagiotis and Cowen, Alan and Zafeiriou, Stefanos and Kotsia, Irene and Granger, Eric and Pedersoli, Marco and Bacon, Simon and Baird, Alice and Gagne, Chris and others}, booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference}, pages={5572--5583}, year={2025} }

    D. Kollias, et al.: "DVD: A Comprehensive Dataset for Advancing Violence Detection in Real-World Scenarios", 2025

    @article{kollias2025dvd, title={DVD: A Comprehensive Dataset for Advancing Violence Detection in Real-World Scenarios}, author={Kollias, Dimitrios and Senadeera, Damith C and Zheng, Jianian and Yadav, Kaushal KK and Slabaugh, Greg and Awais, Muhammad and Yang, Xiaoyun}, journal={arXiv preprint arXiv:2506.05372}, year={2025} }

    D. Kollias, et al.: "Behaviour4all: in-the-wild facial behaviour analysis toolkit", 2025

    @article{kollias2024behaviour4all, title={Behaviour4all: in-the-wild facial behaviour analysis toolkit}, author={Kollias, Dimitrios and Shao, Chunchang and Kaloidas, Odysseus and Patras, Ioannis}, journal={arXiv preprint arXiv:2409.17717}, year={2024} }

    D. Kollias, et al.: "7th abaw competition: Multi-task learning and compound expression recognition", ECCV 2024

    @inproceedings{kollias20247th, title={7th abaw competition: Multi-task learning and compound expression recognition}, author={Kollias, Dimitrios and Zafeiriou, Stefanos and Kotsia, Irene and Dhall, Abhinav and Ghosh, Shreya and Shao, Chunchang and Hu, Guanyu}, booktitle={European Conference on Computer Vision}, pages={31--45}, year={2024}, organization={Springer} }

    D. Kollias, et al.: "The 6th Affective Behavior Analysis in-the-wild (ABAW) Competition". CVPR, 2024

    @inproceedings{kollias20246th,title={The 6th affective behavior analysis in-the-wild (abaw) competition},author={Kollias, Dimitrios and Tzirakis, Panagiotis and Cowen, Alan and Zafeiriou, Stefanos and Kotsia, Irene and Baird, Alice and Gagne, Chris and Shao, Chunchang and Hu, Guanyu},booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},pages={4587--4598},year={2024}}

    D. Kollias, et al.: "Distribution matching for multi-task learning of classification tasks: a large-scale study on faces & beyond". AAAI, 2024

    @inproceedings{kollias2024distribution,title={Distribution matching for multi-task learning of classification tasks: a large-scale study on faces \& beyond},author={Kollias, Dimitrios and Sharmanska, Viktoriia and Zafeiriou, Stefanos},booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},volume={38},number={3},pages={2813--2821},year={2024}}

    D. Kollias, et al.: "ABAW: Valence-Arousal Estimation, Expression Recognition, Action Unit Detection & Emotional Reaction Intensity Estimation Challenges". IEEE CVPR, 2023

    @inproceedings{kollias2023abaw2, title={Abaw: Valence-arousal estimation, expression recognition, action unit detection \& emotional reaction intensity estimation challenges}, author={Kollias, Dimitrios and Tzirakis, Panagiotis and Baird, Alice and Cowen, Alan and Zafeiriou, Stefanos}, booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, pages={5888--5897}, year={2023}}

    D. Kollias: "Multi-Label Compound Expression Recognition: C-EXPR Database & Network". IEEE CVPR, 2023

    @inproceedings{kollias2023multi, title={Multi-Label Compound Expression Recognition: C-EXPR Database \& Network}, author={Kollias, Dimitrios}, booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, pages={5589--5598}, year={2023}}

    D. Kollias: "ABAW: Learning from Synthetic Data & Multi-Task Learning Challenges". ECCV, 2022

    @inproceedings{kollias2023abaw, title={ABAW: learning from synthetic data \& multi-task learning challenges}, author={Kollias, Dimitrios}, booktitle={European Conference on Computer Vision}, pages={157--172}, year={2023}, organization={Springer} }

    D. Kollias: "ABAW: Valence-Arousal Estimation, Expression Recognition, Action Unit Detection & Multi-Task Learning Challenges". IEEE CVPR, 2022

    @inproceedings{kollias2022abaw, title={Abaw: Valence-arousal estimation, expression recognition, action unit detection \& multi-task learning challenges}, author={Kollias, Dimitrios}, booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, pages={2328--2336}, year={2022} }

    D. Kollias, et al.: "Analysing Affective Behavior in the second ABAW2 Competition". ICCV, 2021

    @inproceedings{kollias2021analysing, title={Analysing affective behavior in the second abaw2 competition}, author={Kollias, Dimitrios and Zafeiriou, Stefanos}, booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision}, pages={3652--3660}, year={2021}}

    D. Kollias, S. Zafeiriou: "Affect Analysis in-the-wild: Valence-Arousal, Expressions, Action Units and a Unified Framework", 2021

    @article{kollias2021affect, title={Affect Analysis in-the-wild: Valence-Arousal, Expressions, Action Units and a Unified Framework}, author={Kollias, Dimitrios and Zafeiriou, Stefanos}, journal={arXiv preprint arXiv:2103.15792}, year={2021}}

    D. Kollias, et al.: "Distribution Matching for Heterogeneous Multi-Task Learning: a Large-scale Face Study", 2021

    @article{kollias2021distribution, title={Distribution Matching for Heterogeneous Multi-Task Learning: a Large-scale Face Study}, author={Kollias, Dimitrios and Sharmanska, Viktoriia and Zafeiriou, Stefanos}, journal={arXiv preprint arXiv:2105.03790}, year={2021} }

    D. Kollias, et al.: "Analysing Affective Behavior in the First ABAW 2020 Competition". IEEE FG, 2020

    @inproceedings{kollias2020analysing, title={Analysing Affective Behavior in the First ABAW 2020 Competition}, author={Kollias, D and Schulc, A and Hajiyev, E and Zafeiriou, S}, booktitle={2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020)(FG)}, pages={794--800}}

    D. Kollias, S. Zafeiriou: "Expression, Affect, Action Unit Recognition: Aff-Wild2, Multi-Task Learning and ArcFace". BMVC, 2019

    @article{kollias2019expression, title={Expression, Affect, Action Unit Recognition: Aff-Wild2, Multi-Task Learning and ArcFace}, author={Kollias, Dimitrios and Zafeiriou, Stefanos}, journal={arXiv preprint arXiv:1910.04855}, year={2019}}

    D. Kollias, et al.: "Deep Affect Prediction in-the-wild: Aff-Wild Database and Challenge, Deep Architectures, and Beyond". International Journal of Computer Vision (IJCV), 2019

    @article{kollias2019deep, title={Deep affect prediction in-the-wild: Aff-wild database and challenge, deep architectures, and beyond}, author={Kollias, Dimitrios and Tzirakis, Panagiotis and Nicolaou, Mihalis A and Papaioannou, Athanasios and Zhao, Guoying and Schuller, Bj{\"o}rn and Kotsia, Irene and Zafeiriou, Stefanos}, journal={International Journal of Computer Vision}, pages={1--23}, year={2019}, publisher={Springer} }

    D. Kollias, et al.: "Face Behavior a la carte: Expressions, Affect and Action Units in a Single Network", 2019

    @article{kollias2019face,title={Face Behavior a la carte: Expressions, Affect and Action Units in a Single Network}, author={Kollias, Dimitrios and Sharmanska, Viktoriia and Zafeiriou, Stefanos}, journal={arXiv preprint arXiv:1910.11111}, year={2019}}

    S. Zafeiriou, et al.: "Aff-Wild: Valence and Arousal in-the-wild Challenge". IEEE CVPR, 2017

    @inproceedings{zafeiriou2017aff, title={Aff-wild: Valence and arousal ‘in-the-wild’challenge}, author={Zafeiriou, Stefanos and Kollias, Dimitrios and Nicolaou, Mihalis A and Papaioannou, Athanasios and Zhao, Guoying and Kotsia, Irene}, booktitle={Computer Vision and Pattern Recognition Workshops (CVPRW), 2017 IEEE Conference on}, pages={1980--1987}, year={2017}, organization={IEEE} }

Sponsors


The Affective Behavior Analysis in-the-wild Workshop and Competition has been generously supported by:

    Queen Mary University of London

    Imperial College London

    Hume AI

    École de technologie supérieure

    Concordia University