Large-Scale Selfie Video Dataset (L-SVD): A Benchmark for Emotion Recognition
L-SVD is a large-scale, curated video dataset for emotion recognition. It comprises over 20,000 short video clips, each annotated with a label from a wide range of human emotions, making it a resource at the intersection of Cognitive Science, Psychology, Computer Science, and Medical Science. The dataset is designed to support research and applications across these fields.
Github Repository
HuggingFace Dataset
Papers With Code
Drawing inspiration from ImageNet, L-SVD aims to become a cornerstone resource for emotional AI. We provide the global research community with a dataset characterized by detailed labeling and uniform processing standards, ensuring high-quality video data for research and development.
Your contributions are essential to the growth and success of L-SVD. To contribute, please follow the instructions to upload your data HERE. We will review and validate the labels within a few days of submission.
Join us in advancing the fields of Machine Learning and Deep Learning! After submitting your data, please email ME with the details of your submission, including file paths, modalities, affiliations, and GitHub username. We look forward to acknowledging your valuable contributions on our homepage.
Our dataset, L-SVD, is shared via Google Drive, enabling easy access and collaboration. The dataset is released in batches, ensuring ongoing updates and expansions.
To access L-SVD, please visit L-SVD and submit a request including your contact information and affiliations. This process ensures a collaborative and secure environment for all users.
Thank you for your interest in L-SVD. Together, we can push the boundaries of emotion recognition research and development.
# Example code to load the L-SVD dataset
import emotionnet

# Load the dataset from a local path
dataset = emotionnet.load('/path/to/emotionnet')

# Iterate over the videos
for video in dataset:
    frames, emotions = video['frames'], video['emotions']
    # Insert your model training or evaluation code here
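Once the videos are loaded, a common first step before training is to check how the emotion labels are distributed across clips. The sketch below uses only the standard library; the sample records are illustrative stand-ins, not real L-SVD data, and assume each video record carries the 'frames' and 'emotions' fields shown above.

```python
from collections import Counter

# Illustrative records mimicking the per-video structure described above;
# real L-SVD videos would come from emotionnet.load(...)
videos = [
    {"frames": 120, "emotions": "happy"},
    {"frames": 90, "emotions": "sad"},
    {"frames": 150, "emotions": "happy"},
]

# Count how many clips carry each emotion label
distribution = Counter(video["emotions"] for video in videos)
print(distribution)  # Counter({'happy': 2, 'sad': 1})
```

A quick check like this helps spot class imbalance early, which often dictates sampling or loss-weighting choices during training.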
If you use L-SVD in your academic or industry research, please cite it as follows:
@misc{emotionnet2023,
  title={L-SVD: A Comprehensive Video Dataset for Emotion Recognition},
  author={Peiran L and Linbo T and Xizheng Y},
  year={2024},
  note={University of Wisconsin-Madison},
  howpublished={\url{https://github.com/PeiranLi0930/emotionnet}},
}
L-SVD is released under the BSD-3-Clause license.
For support or further inquiries, please contact us at [email protected].
We acknowledge the collective efforts of all contributors from the University of Wisconsin-Madison's Computer Science Department and the global research community. Your insights and contributions are shaping the future of emotion recognition technology.