2018 DAC System Design Contest

Source codes of contest winners

FPGA platform:

GPU platform:

Contest announcement: June 21, 2017
Registration deadline: October 15, 2017 (extended from October 6). Closed.
Ranking update: second Tuesday of each month, starting February 2018
Contest closes: May 28, 2018
Award presentation: DAC 2018

Award: grand cash prize for the top three teams.


  • Jul. 20, 2018: Source codes of all contest winners are released!

  • Jul. 12, 2018: Detailed final ranking is released here.

  • May 14, 2018: The May Posting on the FPGA platform is updated.

  • May 09, 2018: Submission ranking – May Posting.

  • Apr. 10, 2018: Submission ranking on FPGA is updated – April Posting.

  • Apr. 10, 2018: Submission ranking on GPU is updated – April Posting.

  • Apr. 01, 2018: Submission guidelines are detailed here.

  • Mar. 15, 2018: FPGA Design Contest Webinar 2 Video is posted, which can be downloaded here.

  • Mar. 12, 2018: Submission ranking – March Posting.

  • Mar. 05, 2018: Some more details on the evaluation are provided (labelled in blue).

  • Feb. 19, 2018: Submission ranking – February Posting.

  • Dec. 12, 2017: FPGA Design Contest Webinar (slides)

  • Dec. 07, 2017: boat9 and group4 are removed from dataset.

  • Dec. 06, 2017: An updated version of the reference design on I/O for FPGA is released.

  • Nov. 23, 2017: Reference designs on I/O are released.

  • Oct. 17, 2017: Please check Team Summary to confirm your team is officially included in the contest.

  • Oct. 16, 2017: Contest registration is closed. 119 teams signed up for the contest.

  • Oct. 04, 2017: We posted a Q&A Session for common questions.

  • Sep. 01, 2017: Registration is open.

  • Jun. 21, 2017: Contest topic announced.


Each team is required to register at the following link: Registration Link (Closes: Oct. 15, 2017).

We will evaluate the registration and notify you within three days whether your registration is successful. The evaluation is purely a mechanism to screen out teams that are not truly interested in the contest, so that we can devote all our resources to those who are serious.


The 2018 System Design Contest features embedded system implementation of neural network based object detection for drones. Contestants will receive a training dataset provided by our industry sponsor DJI, and a hidden dataset will be used to evaluate the performance of the designs in terms of accuracy and power. Contestants will compete in two categories, FPGA and GPU, and grand cash awards will be given to the top three teams in each category. In addition, our industry sponsors Xilinx and Nvidia will provide a free design kit to a limited number of successfully registered teams (on a first-come, first-served basis). The award ceremony will be held at the 2018 IEEE/ACM Design Automation Conference.


The link to download training dataset will be provided to successfully registered teams. We expect to release the dataset to the general public at the conclusion of the contest.

Target Platforms

Reference Designs:

To standardize the input/output format and to reduce participating teams’ effort in designing I/O, please use the provided reference designs and make changes based on them. Please DO NOT change anything in the I/O part.


Contestants are required to select one of the target platforms on which to apply machine learning algorithms to the chosen application.


The contest is open to both industry and academia.


Designs are evaluated on accuracy, throughput, and energy consumption.

Formally, to apply Intersection over Union (IoU) to evaluate an object detector, we proceed as follows.

Suppose we have \(I\) registered teams (models); the dataset contains \(K\) evaluation images. Let \(IoU_{i_k}\) be the IoU score of image \(k\) \((k \le K)\) for team \(i\) \((i \le I)\). It is computed by:

\[ IoU_{i_k} = \cfrac{\text{Area of Overlap}}{\text{Area of Union}} = \cfrac{DetectionResult \cap GroundTruth}{DetectionResult \cup GroundTruth}. \]

A good example of Intersection over Union can be found here.
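The per-image IoU above can be computed directly for axis-aligned bounding boxes. Below is a minimal sketch; the `(xmin, ymin, xmax, ymax)` coordinate convention and the function name are our own assumptions, not part of the contest I/O format:

```python
def bbox_iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes.

    Each box is (xmin, ymin, xmax, ymax) -- a hypothetical convention;
    the contest's actual output format may differ.
    """
    # Width and height of the intersection rectangle (zero if no overlap)
    ix = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    iy = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = ix * iy

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Example: two 10x10 boxes overlapping in a 5x5 region,
# so IoU = 25 / (100 + 100 - 25) = 1/7
print(bbox_iou((0, 0, 10, 10), (5, 5, 15, 15)))
```

Averaging this value over all \(K\) evaluation images gives \(R_{IoU_i}\) as defined below.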

Let \(R_{IoU_i}\) be the IoU score for team \(i\). It is computed as

\[ R_{IoU_i} = \cfrac{\sum_{k=1}^K IoU_{i_k}}{K}. \]

Let \(E_i\) be the energy consumption of processing all \(K\) images for team \(i\). Let \(\bar{E_I}\) be the average energy consumption of \(I\) teams. It is computed as

\[ \bar{E_I} = \cfrac{\sum_{i=1}^I E_i}{I}. \]

Let \(ES_i\) be the energy consumption score for team \(i\). It is computed as

\[ ES_i = \max\{0, 1 + 0.2 \times \log_x \cfrac{\bar{E_I}}{E_i} \}, \]

where \(x\) is 2 for the FPGA platform and 10 for the GPU platform. Let \(TS_i\) be the total score for team \(i\), which is computed by

\[ TS_i = R_{IoU_i} \times (1 + ES_i). \]
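Putting the formulas together, the scoring can be sketched as follows. This is illustrative only: the function and variable names are ours, and the inputs are hypothetical team results, not contest data.

```python
import math

def total_scores(iou_scores, energies, x=2):
    """Compute each team's total score TS_i per the contest formulas.

    iou_scores[i] is team i's average IoU (R_IoU_i over the K images);
    energies[i] is team i's total energy E_i.
    x is the log base: 2 for the FPGA platform, 10 for the GPU platform.
    """
    mean_e = sum(energies) / len(energies)  # \bar{E_I}
    scores = []
    for r_iou, e in zip(iou_scores, energies):
        # ES_i = max{0, 1 + 0.2 * log_x(mean_E / E_i)}
        es = max(0.0, 1 + 0.2 * math.log(mean_e / e, x))
        # TS_i = R_IoU_i * (1 + ES_i)
        scores.append(r_iou * (1 + es))
    return scores

# Two hypothetical FPGA teams with equal accuracy; team 0 uses half the energy
print(total_scores([0.6, 0.6], [2.0, 4.0]))
```

Note that with equal accuracy, the lower-energy team scores higher, and a team at exactly the average energy gets \(ES_i = 1\), doubling its IoU score.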

Note: The dataset provided for participants to download contains 70% of the total dataset provided by our sponsor. The remaining 30% of the dataset is reserved for our evaluation. We will ONLY use the reserved dataset to evaluate and rank all the teams.


Each team will submit their design once each month until the final deadline, and the ranking will be updated monthly. The detailed submission guidelines follow:

In submission, please use the following XML format for the output:


Note: If a team receives the free design kit from Xilinx or Nvidia and quits the contest without reasonable efforts, we reserve the right to request it be returned to us.

Q&A Session:

Please refer to this link.


The works below address object detection in an end-to-end manner. They have simple pipelines, run in real time, and are suitable for system implementation. They also provide source code and deep learning models, and may serve as a good starting point.


Yiyu Shi, University of Notre Dame (Chair)
Jingtong Hu, University of Pittsburgh (Co-Chair)
Christopher Rowen, Cognite Ventures (DAC representative)
Bei Yu, Chinese University of Hong Kong (Publicity)

Address any questions or comments to Yiyu Shi (YSHI4 AT ND DOT EDU).

Sponsored by: