Title: Attacking Object Detectors via Imperceptible Patches on Background
Authors: Yuezun Li, Xian Bian, Siwei Lyu
Published: 2018
Link: https://arxiv.org/pdf/1809.05966.pdf
Summary: State-of-the-art object detection techniques use context information around an object to achieve high classification accuracy. This reliance on context makes them vulnerable to perturbations placed outside the object itself.

Daniel Etzold sketchnote: Attacking Object Detectors via Imperceptible Patches on Background

Extended summary:

An adversarial example is an image created from another image by adding small perturbations. These perturbations are invisible to humans, but they cause the classifier to assign the wrong label to the object shown in the image, while the original image is still labeled correctly.
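To make the idea concrete, here is a minimal toy sketch (not the paper's method) of how such an imperceptible perturbation is constructed: each pixel is changed by at most a tiny epsilon, typically a signed step along the gradient of the classifier's loss. All names and the epsilon value are illustrative.

```python
import numpy as np

def make_adversarial(image, gradient, epsilon=2.0 / 255.0):
    """Perturb `image` by one signed-gradient step, clipped to [0, 1].

    `gradient` stands in for the gradient of the classifier's loss
    w.r.t. the input; here it is just an array of the same shape.
    """
    perturbation = epsilon * np.sign(gradient)
    adv = np.clip(image + perturbation, 0.0, 1.0)
    return adv, perturbation

rng = np.random.default_rng(0)
x = rng.random((8, 8, 3))          # toy "image" with values in [0, 1]
g = rng.standard_normal(x.shape)   # stand-in loss gradient
x_adv, delta = make_adversarial(x, g)

# Every pixel changes by at most epsilon, so the perturbation
# is far too small for a human to notice.
print(np.abs(delta).max() <= 2.0 / 255.0)
```

With epsilon around 2/255 on a [0, 1] image, the change per pixel is below typical perceptual thresholds, which is why the adversarial image looks identical to the original.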

Previous work on creating adversarial examples added the perturbations either to the entire image or to the target object itself. In this work, the perturbations are added outside the target object, in small regions near it. The authors exploit the fact that a Region Proposal Network (RPN), a very common component of state-of-the-art detectors that generates object proposals before they are passed to a CNN for classification, considers context information outside the target when making proposals. Since RPNs are used in many detectors, the attack works against many object detectors.
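The spatial constraint described above can be sketched as a mask that confines the perturbation to a small background band around the target's bounding box, leaving the object itself untouched. This is a hypothetical illustration of the constraint, not the paper's implementation; the function name and band width are assumptions.

```python
import numpy as np

def background_band_mask(height, width, box, band=4):
    """Return a {0,1} mask covering a `band`-pixel frame around `box`.

    `box` is (y0, x0, y1, x1) in pixel coordinates. Pixels inside the
    box stay 0 so the object itself is never perturbed.
    """
    y0, x0, y1, x1 = box
    mask = np.zeros((height, width), dtype=np.float32)
    # outer rectangle: the box dilated by `band` pixels, clipped to the image
    yo0, xo0 = max(y0 - band, 0), max(x0 - band, 0)
    yo1, xo1 = min(y1 + band, height), min(x1 + band, width)
    mask[yo0:yo1, xo0:xo1] = 1.0   # fill band plus object...
    mask[y0:y1, x0:x1] = 0.0       # ...then carve the object back out
    return mask

mask = background_band_mask(32, 32, box=(10, 10, 20, 20))
print(mask[15, 15] == 0.0)  # object pixels stay untouched
print(mask[8, 15] == 1.0)   # background band carries the perturbation
```

Multiplying a perturbation by such a mask before adding it to the image guarantees that only the nearby background, the region the RPN's context reasoning depends on, is ever modified.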

Because RPNs usually generate many proposals, which are ranked so that only the top-ranked ones are passed to the next steps of the classification pipeline, the idea is to re-rank the object proposals via the perturbations: false positives are pushed toward the top of the ranking and true positives toward the bottom.
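One way to express this re-ranking objective is a loss over the proposals' objectness scores that rewards high scores on false positives and penalizes high scores on true positives. The sketch below is an illustrative stand-in, not the paper's exact loss; the function name and score values are assumptions.

```python
import numpy as np

def rerank_loss(scores, is_true_positive):
    """Attacker's re-ranking objective over RPN proposal scores.

    scores: objectness scores of the top-ranked proposals.
    is_true_positive: boolean mask, True for proposals on real objects.

    Lower is better for the attacker: minimizing this loss w.r.t. the
    image perturbation drives true-positive scores down and
    false-positive scores up, flipping the ranking.
    """
    scores = np.asarray(scores, dtype=np.float64)
    tp_mask = np.asarray(is_true_positive)
    return scores[tp_mask].mean() - scores[~tp_mask].mean()

# Before the attack, true positives outrank false positives:
scores = [0.9, 0.8, 0.3, 0.2]
tp_mask = [True, True, False, False]
print(round(rerank_loss(scores, tp_mask), 2))  # prints 0.6
```

Gradient descent on such a loss with respect to the (masked) perturbation is the standard way to realize this kind of ranking attack; as the loss shrinks, false positives climb above the true positives in the RPN's ranking.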

Different loss functions and combinations of loss functions are evaluated on VGG16, MobileNet, ResNet-50, ResNet-101 and ResNet-152. The authors show that the performance of each detector can be decreased by at least 10% and by up to 20%.