Categories: Programming, Security
Overview
Here are some notes on two books (both coincidentally from Wiley) related to threat modelling.
See this site's article on threat modelling for a more detailed presentation of the topic, and of STRIDE in particular.
Threat Modeling: Designing for Security
This book is the classic presentation of the STRIDE methodology.
Details:
- Author: Shostack
- Publisher: Wiley & Sons, 2014
- Threat modelling methodology: STRIDE
Quick summary:
- An opinionated guide to threat modelling (finding security holes in a system).
- Generally very good. Entertainingly written. Proposes a pragmatic approach.
- Still mostly applicable despite the age of the book (2014).
Some chapters, particularly those in the middle, drift into academic and theoretical discussions about “what might be available some day”, “what would be cool if it were possible”, etc. There is also a significant amount of content which could be described as “a summary of the literature in the field of security, with references to source materials” - i.e. a starting point for research into security principles rather than an immediate “how to”.
The frequency of “maybe”, “perhaps”, “on the other hand” and similar expressions does increase through the book. However, for those just wanting to “put threat modelling into practice tomorrow”, there is sufficient concrete advice and recommendation to make reading this book worthwhile. In particular, the early chapters are the more practical ones.
It isn’t always clear who the audience for the book is: a senior developer/architect who is interested in analysing a system, or someone looking to make a career change into threat modelling.
Risk Centric Threat Modeling: Process for Attack Simulation and Threat Analysis
This book is from two security consultants who work with PASTA.
Details:
- Authors: UcedaVelez, Morana
- Publisher: Wiley & Sons, 2015
- Threat modelling methodology: PASTA
Quick summary:
- Lots of words, little usable content.
- Chapters 1 through 5 are completely vague: a buzzword-filled word-cloud that does little except convey a sense that “security stuff is hard, and requires a process-heavy solution involving lots of consultants, lots of meetings, and lots of reports”.
The start of chapter 6 sums it up: PASTA is an approach whose results can be “socialized with senior executives”. This book seems to be aimed at people looking to set up a security team with broad responsibilities within a very large and bureaucratic organisation - exactly the target audience who do not need a book of this sort, as they should already be properly qualified. Even then, a book so full of vague verbosity (and grammatical errors) is not helpful to any target audience.
If you’re looking for a way to secure a specific IT system, avoid this book. Try Shostack’s “Threat Modeling: Designing for Security” instead.
Risk-management = “defense in depth” for assets; provide multiple layers of protection through which an attacker must pass.
This book suggests learning from military threat modelling, in which the capabilities and motives of an adversary are analysed. I am not sure this really applies to software. Shostack suggests this approach does not effectively lead to possible mitigations, which I find convincing. Software is a special case: it is stationary and passive (waits for attack), like a fortress. Software-based systems can generally also assume that the enemy has competence and reasonable computing power (both are cheap and plentiful for internet-based attackers). For specific vulnerabilities, a cost/benefit analysis can then include an estimate of the motives/investment an attacker might apply (rather than considering this first).
Considering “does the opponent have aircraft” and then “only if so, look for aircraft-based vulnerabilities” seems flawed and fragile in the face of incomplete knowledge of a dynamic adversary. It is better to identify aircraft-based vulnerabilities first, and then prioritise the mitigations based upon knowledge of the adversary’s current capabilities, rather than the reverse.
An enemy capability that cannot be applied against you is irrelevant - better to concentrate on what you have and can control. Knowledge of “attack patterns” is possibly useful - it is hard to find a vulnerability when you don’t know what attacks are possible. Focusing on the attacker, however, is less productive.
In the end, we don’t care what the motives or skillsets of attackers are - our goal is a system that is secure against all reasonable attacks. This book is, however, obsessed with motives (reason unknown).
This book suggests basing analysis on “risks”, i.e. (asset, motive) => one or more “trust boundaries” or “component types” through which that asset might be reachable. As Shostack points out, that leads only very weakly to mitigations. And such component types might not exist in the system.
It also suggests creating an “attack tree” for each identified asset.
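To make the attack-tree idea concrete, here is a minimal sketch (mine, not from the book) of how an attack tree for a single asset might be represented and walked. The asset, node descriptions and structure are purely hypothetical.

```python
# Minimal sketch (not from the book) of an attack tree for one asset:
# the root is the attacker's goal, each child is an alternative way of
# achieving its parent (OR semantics; real attack trees also use AND nodes).
from dataclasses import dataclass, field
from typing import List

@dataclass
class AttackNode:
    description: str
    children: List["AttackNode"] = field(default_factory=list)

def attack_paths(node, prefix=()):
    """Walk root-to-leaf paths, i.e. concrete attack scenarios to review."""
    prefix = prefix + (node.description,)
    if not node.children:
        yield prefix
    for child in node.children:
        yield from attack_paths(child, prefix)

# Hypothetical asset: "customer database"
tree = AttackNode("read customer records", [
    AttackNode("SQL injection via the web front-end"),
    AttackNode("obtain DB credentials", [
        AttackNode("phish an administrator"),
        AttackNode("read credentials from an unprotected config file"),
    ]),
])

for scenario in attack_paths(tree):
    print(" -> ".join(scenario))
```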
According to the book, PASTA is a 7-step framework:
- Define Objectives
- gather any business requirements docs that exist
- gather security requirements (company internal rules, legal requirements)
- gather output of previous assessment for this app (if any), or similar apps (if available)
- figure out whether classified or sensitive data is present in the system
- output: an executive-level business impact report which summarizes the worst case if this system loses data or is unavailable
- output: raise concerns if inconsistent or duplicated business requirements are found
- Define technical scope
- identify datastores
- identify app components
- identify actors (users, apps) which interact with the system
- identify accounts used to perform operations within the system
- identify network protocols, system services, third-party interfaces, auth-servers, libraries, plugins, data formats, ….
- categorise each of the above so that attacks on that “type of component” can be investigated later (exactly which categories should exist is an unsolved problem)
- apply “blind threat modeling”: without having identified a specific threat, check for a “standard hardening procedure” for each of the identified asset types above and ensure it has been applied (eg standard hardening for a webserver or database)
- output: a list of assets (and possibly a list of tasks to do standard hardening on standard-typed components)
- Application decomposition
- create a dataflow diagram of the system (if one does not exist)
- walk through all business-requirements usecases to ensure the dataflow diagram is complete
- define trust boundaries on the DFD
- output: a dataflow diagram
- Threat analysis
- review “overall threat scenario” for similar apps (research, feeds of up-to-date info)
- review existing company security logs and incident reports
- create list of threats (loss of data, loss of service), attacker types and the assets they threaten
- output: a list of the most likely attacks (??!)
- output: for each asset
- list the kinds of attackers and their motivations (??!), and the probability of success (!!??!!) (not at all clear how a sensible “probability of success” could be assigned here…)
- start an “attack tree” for each asset
- Vulnerability/Weakness mapping
- review known vulnerabilities for similar systems
- look for design flaws in the overall system, eg poor authentication or logging, unencrypted data (!!=> WTF??!)
- map threats to vulnerabilities (add to attack tree started in 4)
- starting with the threat-list generated in (4), identify mechanisms (techniques) through which that could possibly be achieved (eg “sniff network”), then link the mechanisms to vulnerabilities such as “unencrypted network traffic” (plausible point of entry). Vulnerabilities can be taken from “libraries” such as CVE and CWE (CVE = Common Vulnerabilities and Exposures, CWE = Common Weakness Enumeration). Many other sources of vulnerabilities exist. SK note: yes, for bulletproof security, a wide range of possibilities needs to be considered. But most errors, including the Target one, are far more basic. A hugely complex procedure is the enemy of adequate security.
- provide contextual risk analysis
- link assets to vulnerabilities
- evaluate likelihood of threat based on identified vulnerabilities
- select most likely threats for actual testing
- Attack modeling
- determine the probability for each vulnerability to be attacked (??!)
- conduct tests
- output: complete attack tree
- Risk and impact analysis
- calculate overall risk of each threat
- identify countermeasures (aka mitigations) for identified vulnerabilities linked to each threat
- calculate residual risk, taking mitigations into consideration (a sketch of this arithmetic follows the list)
- recommend strategy
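The last two steps boil down to simple arithmetic once likelihood, impact and mitigation effectiveness have been estimated (the estimation being the hard part). Below is a rough sketch of the kind of calculation implied, assuming a likelihood × impact model with a per-mitigation effectiveness factor; these are my assumptions, not formulas quoted from the book, and the numbers are made up.

```python
# Rough sketch of the risk arithmetic implied by the last two PASTA steps
# (assumptions mine, not formulas quoted from the book):
# risk = likelihood * impact; residual risk discounts the likelihood by the
# effectiveness of each planned mitigation.
def risk(likelihood, impact):
    """likelihood in [0, 1]; impact on some agreed scale, e.g. 1..10."""
    return likelihood * impact

def residual_risk(likelihood, impact, mitigation_effectiveness):
    """Each mitigation (effectiveness in [0, 1]) reduces the remaining likelihood."""
    remaining = likelihood
    for eff in mitigation_effectiveness:
        remaining *= (1.0 - eff)
    return remaining * impact

# Hypothetical threat: customer data read via SQL injection
print(risk(0.5, 8))                          # before mitigations: 4.0
print(residual_risk(0.5, 8, [0.75, 0.5]))    # with two mitigations applied: 0.5
```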
The language is overly verbose, with complicated (and often grammatically dubious) sentence structures, and is full of assertions unsupported by references or evidence of any sort. Buzzword bingo.
Talk of “threat intelligence” is partly an acknowledgement that checking every known threat against a system would take too long; it is necessary to somehow check only “the most likely” threats. One way is to guess at attacker motivation and capabilities, and see what attacks they would use. Or just check “the most common”. However, IMO competent devs/architects will have a good intuitive feel for which threats are relevant to each component without “comprehensive up-to-date threat libraries” or “threat intelligence”.
The book considers review of business requirements to be in-scope, in order to detect “forgotten” or “duplicated” parts of the system. This seems irrelevant:
- if the system is new, that won’t happen
- if the system already exists but has good architectural docs, that won’t happen
- if the system is old and undocumented, and threat modelling is applied (far too late) to it, then it might happen. But in that case, the workshop members will be aware of the situation, and will just need to work a little harder to identify all components of the system. It will still be easier to find such components from the implementation side than from the business requirements side.
- and if the system is old and undocumented on the architectural side, it is very unlikely that there is good business requirements documentation either.
The book also considers “gathering compliance requirements” as a step. Yes, this is needed - but it is unlikely to be forgotten in a STRIDE workshop approach.
A good point is made: modelling should be “collaborative”, not “adversarial” - include the designers and implementers of the product in the process, as they know the system best. Publish results as a team (one which includes the designers and implementers). This is also a good learning process for the team members. A “security audit” by an external team is far less productive, and more punitive. Even “testers as security audit” can lead to this kind of team dynamic.