Game Mechanics
How do I register for the contest?
Complete the registration form here. You will receive an automated reply confirming your registration. Design artifacts will be sent out Friday around 10AM PDT.
What is the submission deadline?
Results/findings must be submitted by Saturday, August 12th, at 6PM PDT to be considered part of the contest. Late submissions may be considered at the discretion of the judges.
How do I submit results?
Threat findings will be submitted via this Google Form. The form covers a single finding and should be re-used for each finding you identify. We encourage you to submit findings as you go; this allows us to pre-review findings throughout the contest and improves the judging experience. The form will email you a copy of each submission so you have a record of your findings.
How will submissions be judged?
Submissions will be judged on overall quality, using the following criteria:
Things that we really care about (but not limited to):
- Good documentation of threats.
- The total count of plausible threats.
- Results that are more actionable for a development team.
- Identification of discrepancies between diagrams and their associated flow descriptions.
Things to avoid:
- Duplicate findings that differ only in location. For example, if you discover a vulnerability that applies to multiple locations in the system, cite all the locations in a single finding; repeating a finding over and over makes the judges’ job more difficult.
- Repeating the same finding multiple times throughout the document, possibly with slightly different wording.
- Findings that require a particular tool to read (e.g., a 100,000-line file that can only be opened by an application the judges might not have); these are unlikely to be evaluated by the judges and may invalidate your submission.
The goal of this event is to test your threat modeling skills: to identify threats in the provided design and document them using the recommended format. While tools exist to model applications and identify threats, using them is against the spirit of the event. We want to see how you have internalized the design and what threats you are able to identify. How such submissions are handled is at the judges’ discretion.
Wow, you are being really vague about judging.
That is by design (pun intended!). Threat modeling enables engineering team members - software engineers, quality assurance, managers, and customer support - to make informed decisions about their system’s security and privacy. The better your submission supports those decisions, the more likely you are to win.
We want to see different threat modelers’ approaches, what assumptions they make, and how they structure their results for development teams.
What if I only find one or two things?
Submit them! Your findings may be better documented or more interesting than someone who finds a litany of issues. Besides, you have no chance to win if you don’t submit findings!
Can I use a threat modeling tool?
We request that you don’t. The goal of this event is for you to identify and document threats in the provided design yourself. While tools exist to model applications and identify threats, using them is against the spirit of the event. How such submissions are handled is at the judges’ discretion.
Can I use an AI to identify issues?
We request that you don’t. First, the design is DCNTTM property and is not authorized for sharing with AI systems. Second, the competition is for humans, not machines.
How many contestants are allowed to register/submit results?
We are not putting any restrictions on the number of registrations or submissions.
Our ability to process all submissions will depend greatly on the number of results we receive, the number of threats in those results, how close to the contest deadline results are submitted, and so on. If we receive a small number of results, this is unlikely to be an issue; if we receive a large number, we may need to be more judicious when reviewing all valid submissions before results are due to DEF CON Contest & Events (e.g., prioritizing well-documented, easy-to-understand submissions that follow the recommended format).