Food Recognition Benchmark - Starter Kit

This repository is the main Food Recognition Benchmark template and starter kit. Clone the repository to compete now!

This repository contains:

  • mmdetection and detectron2 baselines for tackling this benchmark
  • Documentation on how to submit your models to the leaderboard
  • Best practices and information on how we evaluate your submissions
  • Starter code for you to get started!

NOTE: If you are resource-constrained or do not want to set everything up on your own system, you can also make your submission from inside Google Colab. Check out the beta version of the Notebook.

πŸ† About the Benchmark

The goal of this benchmark is to train models that can look at images of food items and detect the individual food items present in them. This is an ongoing, multi-round benchmark. At each round, the specific tasks and/or datasets will be updated, and each round will have its own prizes. You can participate in multiple rounds or in a single round.

The dataset has been annotated for segmentation, classification (mapping the individual food items onto an ontology of Swiss food items), and weight/volume estimation.

Table of contents

πŸ’ͺ Getting Started
πŸ‘₯ Participation
🧩 Repository Structure
πŸš€ Submission
πŸ“Ž Important Links

πŸ’ͺ Getting Started

Download Dataset

Using this repository

This repository contains the prediction codebase for mmdetection, detectron2, and random agents.

# Clone the repository
git clone https://github.com/AIcrowd/food-recognition-benchmark-starter-kit
cd food-recognition-benchmark-starter-kit

# Install dependencies
pip install -r requirements.txt

# Download the dataset, and place it in `data/images/`

# Run model locally
./run.sh

This will generate a predictions.json file in your data/ directory.
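Given the benchmark's segmentation task, the entries in predictions.json are presumably in a COCO-style results format (this is an assumption; check the generated file for the exact schema). A minimal sketch that writes such a file and sanity-checks it after reloading, with placeholder entries:

```python
import json
from collections import Counter
from pathlib import Path

# Hypothetical COCO-style result entries; the real schema may differ.
# "counts" would normally hold an RLE-encoded mask, elided here.
predictions = [
    {"image_id": 1, "category_id": 100, "score": 0.91,
     "segmentation": {"size": [480, 640], "counts": "..."}},
    {"image_id": 1, "category_id": 205, "score": 0.73,
     "segmentation": {"size": [480, 640], "counts": "..."}},
    {"image_id": 2, "category_id": 100, "score": 0.55,
     "segmentation": {"size": [480, 640], "counts": "..."}},
]

out_dir = Path("data")
out_dir.mkdir(exist_ok=True)
path = out_dir / "predictions.json"
path.write_text(json.dumps(predictions))

# Sanity check: reload and count detections per category.
loaded = json.loads(path.read_text())
per_category = Counter(p["category_id"] for p in loaded)
print(per_category[100])  # 2
```

A quick check like this catches malformed JSON or missing keys before you upload the file to the benchmark website.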

Using colab starter kit

Please refer to this notebook for quick and active submissions using Detectron2.

Please refer to this notebook for quick and active submissions using MMDetection.

Running the code locally

Refer to predict_detectron2.py for a Detectron2 submission.

Refer to predict_mmdetection.py for an MMDetection submission.

πŸ‘₯ Participation

Before we do a deep dive into submissions, check which user persona suits you best!

Quick Participation πŸƒ Active Participation πŸ‘¨β€πŸ’»
You need to upload prediction json files You need to submit code (and AIcrowd evaluators runs the code to generate predictions)
Scores are computed on 40% of the publicly released test set Scores are computed on 100% of the publicly released test set + 40% of the (unreleased) extended test set
You are not eligible for the final leaderboard (and prizes) You are eligible for the final leaderboard and prizes

The flow for active participation looks as follows:

🧩 Repository structure

Required files

| File | Description |
| --- | --- |
| aicrowd.json | A configuration file used to identify the benchmark and resources needed for evaluation |
| apt.txt | List of packages that should be installed (via apt) for your code to run |
| requirements.txt | List of Python packages that should be installed (via pip) for your code to run |
| predict.py | Entry point to your model |
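The exact keys of aicrowd.json are defined by the AIcrowd evaluator; the snippet below is only an illustrative sketch with placeholder values (the field names and identifiers here are assumptions, not the benchmark's real ones):

```json
{
  "challenge_id": "food-recognition-benchmark",
  "authors": ["your-aicrowd-username"],
  "description": "Sample food recognition model",
  "gpu": true
}
```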

Other important files

| File | Description |
| --- | --- |
| score.py | Helps you generate a score for your run locally |
| utils/ | Directory containing some useful scripts and notebooks |
| utils/requirements_detectron2.txt | A sample requirements.txt file for using detectron2 |
| utils/requirements_mmdetection.txt | A sample requirements.txt file for using mmdetection |
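For local scoring, score.py presumably runs a full COCO-style evaluation over your predictions (an assumption based on the task; see the script itself for details). As a simplified illustration of the core overlap measure behind such metrics, intersection-over-union for two bounding boxes can be sketched as:

```python
def box_iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Identical boxes overlap perfectly; disjoint boxes not at all.
print(box_iou((0, 0, 10, 10), (0, 0, 10, 10)))   # 1.0
print(box_iou((0, 0, 10, 10), (20, 20, 30, 30)))  # 0.0
```

The real evaluation works on segmentation masks rather than boxes, but the same overlap idea (typically thresholded at several IoU levels to compute average precision) applies.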

πŸš€ Submission

Quick Participation πŸƒ

As promised, we will keep it quick for you. Participating is as simple as:

  • Generate your predictions using the starter kit
  • Upload predictions.json on the benchmark website
  • Get scores, iterate, improve! πŸ’ͺ

Active Participation πŸ‘¨β€πŸ’»

  • Prepare your runtime environment
  • Make submissions by pushing your code repository
  • Get scores, more scores πŸ˜‰, iterate faster, improve faster! πŸ’ͺ

More details on active participation are available in SUBMISSION.md.

πŸ“Ž Important links

✍️ Maintainers

Thanks to our awesome contributors! ✨

