Anyone care to speculate why the NSA would work so hard to subvert crypto standards and then allow them to be implemented with a flaw that ensures they are never used? (at least for SSL)
This is an unusual bug report for an unusual situation. I'm using it as an opportunity to point out some considerations that have not been widely reported.

Stephen Checkoway and Matt Green of the Johns Hopkins University Information Security Institute discovered a fatal bug in the Dual EC DRBG implementation in the OpenSSL FIPS Object Module v2.0. This bug is fatal in the sense that it prevents all use of the Dual EC DRBG algorithm. Note that the bug is present in the Dual EC DRBG only; no other DRBG types are affected. The nature of the bug shows that no one has been using the OpenSSL Dual EC DRBG.

Given the current status of Dual EC DRBG (now disowned by the NIST CMVP and pretty much toxic for any purpose) we do not plan to correct the bug. A FIPS 140-2 validated module cannot be changed without considerable expense and effort, and we have recently commenced the process of removing the Dual EC DRBG code entirely from the formally validated module.
When a PRNG is in free running mode it has to continuously check that each block of output doesn't match the previous one (the so-called "continuous PRNG test"). If there is no previous block (as is the case on the very first call) then a block has to be generated, stored as the "previous block", and discarded. The output of the PRNG that the application sees is the *next* block, which is then compared with the previous block.

It's this discarding step where the bug occurs: when the discard occurs the data must not be output, but the Dual EC DRBG state must still be updated, and that state update isn't done. In the case of no additional input this has no effect, but additional input is used by the "FIPS capable" OpenSSL. Note that additional input does *not* effectively defeat the backdoor vulnerability.
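The discard-and-compare flow described above can be sketched with a toy generator. This is a hypothetical simplification (a counter-based stand-in, not the actual Dual EC DRBG or the FIPS module code); the `buggy_discard` flag models the effect of the missing state update, which in this simplified form makes the first visible block regenerate the discarded baseline and trip the continuous test:

```c
#include <stdint.h>
#include <string.h>

#define BLOCKLEN 16

/* Toy DRBG: a 64-bit LCG state stands in for the real Dual EC state. */
typedef struct {
    uint64_t state;
    uint8_t  last_block[BLOCKLEN];
    int      lb_valid;
} TOY_DRBG;

static void gen_block(TOY_DRBG *d, uint8_t out[BLOCKLEN])
{
    d->state = d->state * 6364136223846793005ULL + 1442695040888963407ULL;
    for (int i = 0; i < BLOCKLEN; i++)
        out[i] = (uint8_t)(d->state >> (8 * (i % 8)));
}

/* Generate one visible block. On the very first call a block is generated,
 * stored as the "previous block" baseline, and discarded. If buggy_discard
 * is set, the state advance is skipped during the discard (modeling the
 * missing state update), so the next block regenerates the baseline and
 * the continuous test fails. Returns 0 on success, -1 on "stuck". */
static int toy_generate(TOY_DRBG *d, uint8_t out[BLOCKLEN], int buggy_discard)
{
    uint8_t blk[BLOCKLEN];

    if (!d->lb_valid) {
        uint64_t saved = d->state;
        gen_block(d, d->last_block);   /* discarded baseline block */
        if (buggy_discard)
            d->state = saved;          /* BUG: state update not done */
        d->lb_valid = 1;
    }
    gen_block(d, blk);
    if (memcmp(blk, d->last_block, BLOCKLEN) == 0)
        return -1;                     /* continuous PRNG test failure */
    memcpy(d->last_block, blk, BLOCKLEN);
    memcpy(out, blk, BLOCKLEN);
    return 0;
}
```

With the correct discard the first call and all subsequent calls succeed; with the buggy discard the very first call reports a stuck error, which is consistent with the observed behavior.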
We have no plans to fix this bug, as NIST has disowned Dual EC DRBG in an official NIST Recommendation, and use of Dual EC DRBG is already disabled in upcoming OpenSSL releases. Even if we wanted to fix it, our options are severely constrained by the fact that the CMVP process forbids modifications of any kind (even to address severe vulnerabilities) without the substantial time and expense of formal retesting and review.
diff --git a/fips/rand/fips_drbg_ec.c b/fips/rand/fips_drbg_ec.c
index 6be6534..270cfbb 100644
--- a/fips/rand/fips_drbg_ec.c
+++ b/fips/rand/fips_drbg_ec.c
@@ -328,6 +328,7 @@ static int drbg_ec_generate(DRBG_CTX *dctx,
 	if (!bn2binpad(dctx->lb, dctx->blocklength, r))
 	dctx->lb_valid = 1;
+	t = s;
 	if (outlen < dctx->blocklength)
This patch is of academic interest only, as *any* modification to the official FIPS module source code distribution means that the result isn't validated and is not suitable for any context requiring a FIPS 140-2 validated module.
The OpenSSL FIPS module is commonly used as the basis for rebranded proprietary validations (we call these "private label" validations). Any such private label validations will have this same bug, and thus an assurance that Dual EC DRBG is not being used, *unless* the vendor detected and corrected the bug beforehand without notifying us, or removed the additional input supplied by the "FIPS capable" OpenSSL, which would eliminate fork protection (we have also determined that a workaround in the "FIPS capable" OpenSSL that retains fork protection is possible, but we don't plan to implement it).
First enable the Dual EC DRBG as default in the "FIPS capable" OpenSSL 1.0.1:
./config fips -DOPENSSL_DRBG_DEFAULT_TYPE=0x19f02a0 \
Note this rather complex incantation demonstrates that one cannot accidentally enable the Dual EC DRBG as the default. The bug is then manifested by:

OPENSSL_FIPS=1 apps/openssl sha1 README

which will exhibit Dual EC DRBG "stuck" errors. Apply the above patch to the FIPS module, rebuild and reinstall the module, recompile the FIPS capable OpenSSL, and the bug will no longer be present.

Why did we implement Dual EC DRBG in the first place? It was requested by a sponsor as one of several deliverables. The reasoning at the time (my reasoning and call as the project manager) was that we would implement any algorithm based on official published standards. SP800-90A is a more or less mandatory part of FIPS 140-2 for any module of non-trivial complexity. FIPS 140-2 validations are expensive and difficult, taking on average a year to complete, and we have to wait years between validations.
So, there is an incentive to pack as much as possible into each validation and our sponsors (dozens of them) had a long list of requirements they were willing to fund. We knew at the time (this was the pre-Snowden era) that Dual EC DRBG had a dubious reputation, but it was part of an official standard (one of the four DRBGs in SP800-90A) and OpenSSL is after all a comprehensive cryptographic library and toolkit. As such it implements many algorithms of varying strength and utility, from worthless to robust. We of course did not enable Dual EC DRBG by default, and the discovery of this bug demonstrates that no one has even attempted to use it.
The client requirement was simply "Implement all of SP800-90A". Our code was implemented solely from that standard.
No. SP800-90A allows implementers either to use a set of compromised points or to generate their own. What almost all commentators have missed is that, hidden away in the small print (and subsequently confirmed by our specific query), if you want to be FIPS 140-2 compliant you MUST use the compromised points. We did specifically ask the accredited test lab if we had any discretion at all in the choice of points (the written standard isn't entirely clear), and were told that we were required to use the compromised points. Several official statements, including the NIST recommendation, don't mention this at all and give the impression that alternative uncompromised points can be generated and used.
Not only the original validation (#1747) but many subsequent validations and platforms have successfully passed the CAVP algorithm tests ... several hundred times now. That's a lot of fail. In test mode the implementation works fine both with and without additional data. In free running mode the bug is triggered by additional data on the first call, which is done automatically by the "FIPS capable" OpenSSL. Frankly the FIPS 140-2 validation testing isn't very useful for catching "real world" problems.
Outside of test mode the first PRNG block generated is discarded and so the output would not agree with the algorithm tests. So in the artificial environment of the FIPS algorithm tests we did have to use the test mode. There are several ways to implement the continuous PRNG test. These were discussed with test labs quite extensively as we had prior unfortunate experiences with a continuous PRNG implementation in an earlier validation that resulted in effective revocation of that validation. In principle we could have tried a new and better approach, but the CMVP process abhors novelty of any kind so we were strongly motivated to stick with what has been accepted in the past.
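The test-mode distinction above can be illustrated with a similar toy stand-in (hypothetical, not the module's code): in free running mode the first generated block is consumed as the comparison baseline, so the visible output is offset by one block relative to what a known-answer test expects, which is why the CAVP algorithm tests have to exercise a test mode that skips the discard:

```c
#include <stdint.h>

/* Toy block generator: an LCG standing in for the real DRBG. */
static uint64_t next_block(uint64_t *state)
{
    *state = *state * 6364136223846793005ULL + 1442695040888963407ULL;
    return *state;
}

/* In test mode every generated block is returned, matching known-answer
 * vectors. In free running mode the first block is generated and discarded
 * as the continuous-test baseline, so the first visible output is the
 * second generated block. */
static uint64_t generate(uint64_t *state, int *primed, int test_mode)
{
    if (!test_mode && !*primed) {
        (void)next_block(state);   /* discarded baseline block */
        *primed = 1;
    }
    return next_block(state);
}
```

Starting two instances from the same seed, the first free-running output equals the *second* test-mode output, never the first, so an algorithm test run without test mode could never match its expected vectors.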
NIST "SUPPLEMENTAL ITL BULLETIN FOR SEPTEMBER 2013" (http://csrc.nist.gov/publications/nistbul/itlbul2013_09_supplemental.pdf)
 The Cryptographic Module Validation Program, one of the two bureaucracies responsible for FIPS 140-2 validations.
 Matt Green, private communication.
 The value 0x19f02a0 is the DRBG type parameter documented in Section 6.1 of the OpenSSL FIPS Object Module User Guide (http://www.openssl.org/docs/fips/UserGuide-2.0.pdf); it selects the Dual EC DRBG using P-256 with SHA-256.
 The Cryptographic Algorithm Validation Program, one of the two bureaucracies responsible for FIPS 140-2 validations.