Contents

Game Mechanics

How do I register for the contest?

Complete the registration form here. You will receive an automated reply confirming your registration. Design artifacts will be sent out Friday around 10AM PDT.

What is the submission deadline?

Results/findings must be submitted by Saturday August 13th at 6PM PDT to be considered as part of the contest. Late submissions may be considered at the discretion of the judges.

How do I submit results?

Go to our submission page for details on how to submit results.

How should results be structured?

We request that all submissions to be judged be in the JSON format described in this FAQ (see Example JSON Format and JSON Schema, below). This allows us to partially automate the submission intake process, maximizing the number of submissions we're able to handle.

If you create any supporting files in order to produce your list of findings, we encourage you to submit those as well. Any such supporting files may be considered by the judges when reviewing your submission.

You are welcome to submit non-JSON-formatted results, but they might not be accepted by the judges. We only commit to judging results in the defined format; if we are able, we may also judge submissions in other formats. This will depend greatly on the number of submissions received and the number of threats within those submissions requiring review.

If you submit findings in the requested JSON format but you structure individual findings differently before converting them to our JSON format, please submit your raw format as well. We would love to collect differences between modelers and share our learnings with the greater Threat Modeling community.

What is the JSON format?

See below for an example of a properly formatted submission, and a JSON schema for submissions. Only the Threat Name/Summary and Threat Description are required fields. The schema allows for any number of custom fields in each threat.

You can use a site like https://jsonformatter.curiousconcept.com/ to validate your JSON before submitting.

Example JSON Format

{ 
 "Name":"Your name, should match the name used when registering.", 
 "findings":[ 
  { 
   "Id": 1, 
   "ThreatName" : "Name of a threat, should be a few words, no more than a sentence.", 
   "ThreatDescription" : "This should be a plausible scenario for the threat.",
   "CustomFields" : [ 
    { 
     "Name" : "CustomField1", 
     "FieldDescription" : "Description for this custom field.", 
     "Value" : "Text Value for CustomField1" 
    }, 
    { 
     "Name" : "CustomField2", 
     "FieldDescription" : "Description for this custom field.", 
     "Value" : "Text Value for CustomField2" 
    }
   ]
  }
 ] 
} 
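If you keep your findings in your own working structure, a small script can convert them into this format before submitting. Below is a minimal sketch (not an official contest tool), assuming your raw findings live in a Python list of dicts; the field names (title, scenario) and threat text are placeholders, and CustomFields is left empty so as not to suggest specific custom fields.

import json

# Hypothetical raw findings in your own working format (field names are
# made up for illustration only).
raw_findings = [
    {
        "title": "Example threat name (placeholder)",
        "scenario": "Example plausible scenario for the threat (placeholder).",
    },
]

submission = {
    # Should match the name used when registering.
    "Name": "Your Name",
    "findings": [],
}

for i, raw in enumerate(raw_findings, start=1):
    submission["findings"].append({
        "Id": i,
        "ThreatName": raw["title"],
        "ThreatDescription": raw["scenario"],
        # Document the threat however you like here; left empty so as not to
        # suggest specific custom fields.
        "CustomFields": [],
    })

with open("submission.json", "w") as f:
    json.dump(submission, f, indent=1)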

JSON Schema

{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "type": "object",
  "properties": {
    "Name": {
      "type": "string"
    },
    "findings": {
      "type": "array",
      "items": [
        {
          "type": "object",
          "properties": {
            "Id": {
              "type": "integer"
            },
            "ThreatName": {
              "type": "string"
            },
            "ThreatDescription": {
              "type": "string"
            },
            "CustomFields": {
              "type": "array",
              "items": [
                {
                  "type": "object",
                  "properties": {
                    "Name": {
                      "type": "string"
                    },
                    "FieldDescription": {
                      "type": "string"
                    },
                    "Value": {
                      "type": "string"
                    }
                  },
                  "required": [
                    "Name",
                    "FieldDescription",
                    "Value"
                  ]
                },
                {
                  "type": "object",
                  "properties": {
                    "Name": {
                      "type": "string"
                    },
                    "FieldDescription": {
                      "type": "string"
                    },
                    "Value": {
                      "type": "string"
                    }
                  },
                  "required": [
                    "Name",
                    "FieldDescription",
                    "Value"
                  ]
                }
              ]
            }
          },
          "required": [
            "Id",
            "ThreatName",
            "ThreatDescription",
            "CustomFields"
          ]
        }
      ]
    }
  },
  "required": [
    "Name",
    "findings"
  ]
}
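Beyond the online validator mentioned above, you can also validate locally against this schema. Below is a minimal sketch (not an official contest tool) using the third-party jsonschema package, assuming the schema is saved as schema.json and your submission as submission.json.

import json

# Requires the third-party "jsonschema" package: pip install jsonschema
from jsonschema import ValidationError, validate

with open("schema.json") as f:
    schema = json.load(f)

with open("submission.json") as f:
    submission = json.load(f)

try:
    validate(instance=submission, schema=schema)
    print("Submission is valid against the contest schema.")
except ValidationError as err:
    # Report the first problem found and the path to the offending field.
    print(f"Invalid submission: {err.message} (at {list(err.absolute_path)})")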

What are these CustomFields exactly?

We don’t know - you tell us ;) The contest includes two parts - identifying threats and documenting them. CustomFields allow you to document the threat however you want, using whatever terminology you would like to use. If you use CustomFields, we ask that you provide the judges some details to help them interpret your findings as you intend.

Use Name to define a name for your field, use FieldDescription to define what the field is and how you will use it, and use Value to document the value you are assigning that CustomField for the threat in question.

Can you give examples of a CustomField?

We could give you some explicit examples, but then everyone would include them, and that would defeat some of the objectives for this event.

How will submissions be judged?

Submissions will be judged on overall quality, using the following criteria:

Things that we really care about (but not limited to):

  • Good documentation of threats.
  • The total count of plausible threats.
  • Results that are more actionable for a development team. To make results more actionable, you may need to be creative and add CustomFields to your threats to include additional data.
  • Identification of discrepancies between diagrams and their associated flow descriptions.

Things to avoid:

  • Duplicate findings in everything but location. For example, if you discover a vulnerability that applies to multiple locations in the system, cite all the locations as a single finding; repeating a finding over and over will make the judges’ job more difficult.
  • Duplicating the same finding multiple times throughout the document, possibly with slightly different words.
  • Findings that require a particular tool to be able to read (e.g. a 100,000-line file that can only be read by a particular application that the judges might not have); these are unlikely to be evaluated by the judges and may invalidate your submission.

The goal of this event is to test your threat modeling skills - for you to identify threats in the provided design and document them using the recommended format. While there are tools that model applications and identify threats, using them is against the spirit of the event. We want to see how you have internalized the design and what threats you are able to identify. How such submissions are handled is at the judges' discretion.

Wow, you are being really vague about judging.

That is by design (pun intended!). Threat modeling enables engineering team members - software engineers, quality assurance, managers, and customer support - to make informed decisions about their system’s security and privacy. The better your submission supports this, the more likely you are to win.

We want to see different threat modelers’ approaches, what assumptions they make, and how they structure their results for development teams.

How many contestants are allowed to register/submit results?

We are not putting any restrictions on the number of registrations or submissions.

Our ability to process all submissions will depend greatly on the number of results we receive, the number of threats in those results, how close to the contest deadline results are submitted, etc. If we receive a small number of results, this is not likely to be an issue; but if we receive a large number, we may need to be more judicious when reviewing all valid submissions before results are due to DEF CON Contest & Events (e.g. only reviewing well-documented, easy-to-understand submissions that follow the recommended format).