This is our solution for KDD Cup 2020. We implemented a very neat and simple neural ranking model based on siamese BERT[1], which ranked first among the solo teams and 12th among all teams on the final leaderboard.
1. Start jupyter lab or jupyter notebook.
2. Open 01PRE.ipynb, change the variable 'filename', and run all cells. In this notebook, we perform base64 decoding and use the BERT tokenizer to convert the queries into token_id lists. We also convert the data to float16 to further reduce both disk and memory usage (see the preprocessing sketch after this list).
3. Open one of the 02MODEL.ipynb notebooks and run all cells. In this notebook, we remove a sample if its number of bounding boxes is higher than 40, which is also the maximum number of bounding boxes in the test set. Then we use a 4-layer siamese light-BERT to learn (query, image) pairs sampled uniformly from the training set. The off-the-shelf framework pytorch-lightning is used (see the model sketch after this list).
4. Open 03INFER.ipynb. In this notebook, we rewrite the InferSet class to implement our local-linear-embedding-like method. The inference code is almost the same as the validation code, so we think it is unnecessary to provide our messy code :).
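For reference, here is a minimal sketch of the kind of preprocessing step 2 describes, assuming the raw data stores ROI features as base64-encoded float32 blobs; the feature dimension (2048) and the tokenizer checkpoint are illustrative assumptions, not necessarily what 01PRE.ipynb uses.

```python
import base64
import numpy as np
from transformers import BertTokenizer

# Assumption: each feature field is a base64-encoded float32 blob of shape
# (num_boxes, 2048); check the actual column layout against the raw data.
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

def decode_features(b64_str, num_boxes, dim=2048):
    """Decode one base64 feature blob and downcast it to float16 to save disk/RAM."""
    buf = base64.b64decode(b64_str)
    feats = np.frombuffer(buf, dtype=np.float32).reshape(num_boxes, dim)
    return feats.astype(np.float16)

def encode_query(query):
    """Convert a raw query string into a list of BERT token ids."""
    return tokenizer.encode(query, add_special_tokens=True)
```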
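And a rough sketch of the ranking model described in step 3, assuming a shared ("siamese") randomly initialised 4-layer BERT trunk that encodes the query tokens on one side and linearly projected ROI features on the other, scored by cosine similarity; the hidden size, loss, and learning rate are placeholder assumptions rather than the notebook's actual settings.

```python
import torch
import torch.nn as nn
import pytorch_lightning as pl
from transformers import BertConfig, BertModel

class SiameseRanker(pl.LightningModule):
    """Sketch of a siamese ranker: a shared 4-layer BERT trunk encodes the query
    tokens and the projected ROI features, and cosine similarity scores the pair."""

    def __init__(self, roi_dim=2048, hidden=312):
        super().__init__()
        # Small randomly initialised "light" BERT; hidden=312 is an assumption.
        cfg = BertConfig(num_hidden_layers=4, hidden_size=hidden,
                         num_attention_heads=4, intermediate_size=4 * hidden)
        self.encoder = BertModel(cfg)               # shared trunk (siamese)
        self.roi_proj = nn.Linear(roi_dim, hidden)  # map ROI features into the trunk's space

    def forward(self, token_ids, attn_mask, roi_feats, roi_mask):
        q = self.encoder(input_ids=token_ids,
                         attention_mask=attn_mask).last_hidden_state[:, 0]
        v = self.encoder(inputs_embeds=self.roi_proj(roi_feats),
                         attention_mask=roi_mask).last_hidden_state[:, 0]
        return torch.cosine_similarity(q, v)        # one score per (query, image) pair

    def training_step(self, batch, batch_idx):
        token_ids, attn_mask, roi_feats, roi_mask, label = batch
        score = self(token_ids, attn_mask, roi_feats, roi_mask)
        # label is a float tensor: 1 for a true pair, 0 for a uniformly sampled negative.
        loss = nn.functional.binary_cross_entropy_with_logits(score, label)
        self.log('train_loss', loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=2e-5)
```

Sharing one trunk keeps both towers in the same embedding space, so the cosine score is directly comparable across candidates.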
Local-Linear-Embedding-like Method
As mentioned in step 4, we adopt a local-linear-embedding-like method to further enhance the ROI features.
Given an ROI, we find its top-3 most similar ROIs using KNN (k-nearest neighbours), then sum them with weights 0.7, 0.2, and 0.1 to keep the input on the same numerical scale.
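A small numpy sketch of this enhancement, assuming the neighbour pool is the ROI set itself (so each ROI is its own nearest neighbour and keeps the 0.7 weight); the function and variable names are ours, not from 03INFER.ipynb.

```python
import numpy as np

def lle_like_enhance(feats, weights=(0.7, 0.2, 0.1)):
    """Replace each ROI feature with a weighted sum of its top-3 most similar
    ROI features (cosine similarity); the weights sum to 1, so the numerical
    scale of the input is preserved."""
    normed = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    sim = normed @ normed.T                          # pairwise cosine similarities
    # Top-k neighbours for every ROI; under this assumption each ROI is its own
    # nearest neighbour and therefore keeps the largest weight (0.7).
    topk = np.argsort(-sim, axis=1)[:, :len(weights)]
    w = np.asarray(weights, dtype=feats.dtype)
    return np.einsum('k,nkd->nd', w, feats[topk])    # weighted neighbour sum

# Toy usage: 5 ROIs with 2048-dimensional features.
rois = np.random.rand(5, 2048).astype(np.float32)
print(lle_like_enhance(rois).shape)                  # (5, 2048)
```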
This is a solo team which consists of:
Thanks to Weiwei Xu, who provided a 4-GPU server.
It would be worth trying SOTA image-text representation models, such as UNITER[2] or OSCAR[3].
I will graduate from Dalian University of Technology in the summer of 2021. If you can refer me to any company, please contact me at [email protected].