OpenSSL verify a certificate chain (chain verification and validation) using the “verify” command

In addition to verifying the chain through the "s_client" command demonstrated earlier in the series, one can also use the "verify" command to do the same. This is easier when the certificate chain is not already installed on a web server (in that case we can use the verify options of the "s_client" command) or when the chain is for client certificates.

In the following example, we have an end-entity client certificate (PEM encoded) in 1.pem and the intermediate CA certificate in 2.pem. The root self-signed CA certificate is in 3.pem. We are verifying the end-entity certificate (1.pem) against the intermediate CA certificate (2.pem).


$ openssl verify -verbose -purpose sslclient -CAfile 2.pem 1.pem
1.pem: C = US, CN = XXX, O = YYY
error 20 at 0 depth lookup:unable to get local issuer certificate

To delve deeper into the failure, we add the "-issuer_checks" option to display all the checks that take place. We then notice that the intermediate certificate does not have the "Certificate Signing" (keyCertSign) bit set, so the verification fails. We need to request an intermediate CA certificate with the right key usage bits. Please see "keyCertSign" in RFC 5280.

$ openssl verify -verbose -issuer_checks -purpose sslclient -CAfile 2.pem 1.pem
1.pem: C = US, CN = XXX, O = YYY
error 29 at 0 depth lookup:subject issuer mismatch
...
error 32 at 0 depth lookup:key usage does not include certificate signing
.....
....
error 20 at 0 depth lookup:unable to get local issuer certificate
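
Assuming a corrected intermediate with the keyCertSign bit set, the entire chain up to the self-signed root (3.pem) could then be verified in one invocation by supplying the intermediate as an untrusted certificate:

openssl verify -verbose -purpose sslclient -CAfile 3.pem -untrusted 2.pem 1.pem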

 

The heap data structure

A sample implementation of the heap data structure is at:

https://github.com/Khanna111/DS/tree/master/Heap/src/com/khanna111

You will find two classes that implement the "siftDown" approach, which builds a heap from an input array of "n" elements in O(n) time.

For details on the "siftDown" approach and why the complexity is O(n), please refer to:

http://en.wikipedia.org/wiki/Heapsort
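
For illustration, here is a minimal sketch of the bottom-up ("siftDown") construction of a max-heap; this is not the code from the repository above, just an outline of the technique:

public class MaxHeap {

    // Builds a max-heap in place in O(n) by sifting down from the last parent node.
    public static void buildHeap(int[] a) {
        for (int i = a.length / 2 - 1; i >= 0; i--) {
            siftDown(a, i, a.length);
        }
    }

    // Moves the element at index i down until the heap property holds again.
    private static void siftDown(int[] a, int i, int n) {
        while (2 * i + 1 < n) {
            int child = 2 * i + 1;                        // left child
            if (child + 1 < n && a[child + 1] > a[child]) {
                child++;                                  // right child is larger
            }
            if (a[i] >= a[child]) {
                break;                                    // heap property satisfied
            }
            int tmp = a[i];                               // swap parent and larger child
            a[i] = a[child];
            a[child] = tmp;
            i = child;
        }
    }
}

Starting the loop at the last non-leaf node and sifting each element down is what keeps the overall construction at O(n) rather than O(n log n).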

X509 certificate and keyUsage

The keyUsage extension, as delineated in RFC 5280, specifies the purpose of the key (public key) contained in the certificate.

For instance:

  1. “keyEncipherment” implies that the public key is used to encrypt private or secret keys.
  2. "digitalSignature" implies that the public key can be used to validate digital signatures.
  3. "keyAgreement" implies that the public key is used for key agreement, as in the DH case. The key agreement algorithm could be ECDH (Elliptic Curve DH), where the public key of the end-entity certificate is an ECDH public key. The certificate could be signed by any normal CA, for example with its ECDSA or RSA private key. So in the case of an ECC certificate, or any certificate containing an ECC public key, one would find the same ECC public key being utilized for key agreement in the ECDH (not ECDHE) case. Note that ECDHE does not require this keyUsage bit to be set.

For the other bits in the keyUsage extension, please see the RFC.
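
To check which keyUsage bits a given certificate carries (the file name below is just a placeholder), the extension can be inspected with:

openssl x509 -in client.pem -noout -text | grep -A 1 "X509v3 Key Usage"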

 

What is the encoding of the SSL certificates on the wire and how is the certificate chain configured?

It is DER, and it follows the RFC for TLS v1.2. I opened up Wireshark, exported the raw bytes for one of the certificates in the chain transmitted by the server in the SSL / TLS "Certificate" message, decoded it, and validated the DER encoding. This was on an HTTPS connection to an Apache web server.
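
To repeat that decoding step, the raw bytes exported from Wireshark (the file name below is only an example) can be decoded with:

openssl x509 -inform DER -in exported_cert.der -noout -text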

The other question that web server administrators and writers of server certificate verification code need answered is the order of the certificates in the chain sent back by the web server. The RFC provides details on that as well: the sender's certificate must come first, followed by the certificate that certifies it, and so on; an illustration follows below.
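
As an illustration (the file names are placeholders, not from an actual installation), the chained certificate file configured on a web server would be assembled with the sender's certificate first:

# server certificate first, then the intermediate(s) that certify it
cat server.crt intermediate.crt > chain.crt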

Nginx 1.2.x with Elliptic Curve Cryptography (ECC) support – installation on Linux (./configure options and build for SSL / TLS support to enable HTTPS)

As of this writing, and as far as I know, the pre-compiled nginx binaries for the various platforms (RedHat / CentOS or other Linux variants) do not come with ECC support, so you would not be able to utilize ECC-based certificates (ECDHE key exchange or ECDSA authentication). The solution is to compile the Nginx source code against an OpenSSL version that has ECC support, such as OpenSSL 1.0.1c or 1.0.1e. As of this writing, 1.0.1c has a vulnerability (please see the OpenSSL web site for more details) and 1.0.1e is the recommended version.

Compared with compiling the Apache HTTPD web server, I found building Nginx to be simpler: one only needs to specify the OpenSSL source location, and the Nginx build process takes care of building and linking against it. If you are only interested in building OpenSSL from source with ECC support, refer to this post.

After downloading Nginx source, run the following to check the options for configure:

./configure --help

This lists all the options that determine which modules to enable or disable, as well as the locations of dependencies such as OpenSSL if they are not in the obvious places.

Since we have downloaded the OpenSSL source (1.0.1x) with ECC support into a different folder, we need to specify its location, so the configure invocation becomes:

./configure --prefix=/app/installs/nginx --with-http_ssl_module --with-openssl=/app/source/openssl/openssl-1.0.1c

This means that nginx will be installed at "/app/installs/nginx" with the module that adds SSL / TLS support, and the location of the OpenSSL source is specified as well (this is where the OpenSSL source was extracted).

Thereafter run the following commands:

make

Switch to root (if not already) and run:

make install

Uncomment the HTTPS / SSL sections in the Nginx configuration file, specify the certificates (an illustrative server block is shown below), and you are all set.
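
For reference, a minimal server block could look something like the following; the paths and server name are placeholders, not values from an actual installation:

server {
    listen              443 ssl;
    server_name         example.com;
    # server certificate followed by the intermediates, then the private key
    ssl_certificate     /app/installs/nginx/conf/chain.pem;
    ssl_certificate_key /app/installs/nginx/conf/server.key;
}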

To check the options for the nginx command line:

nginx -h

To start nginx:

nginx

If you get errors about PCRE at the configure stage or later (error messages replicated below): if you have previously installed PCRE, update the LD_LIBRARY_PATH environment variable to include the library; if you do not have it installed, there is a section on this blog on installing PCRE, where all one has to do is download the source and install it. Another approach is to install the PCRE development libraries. Both of these approaches are outlined below.
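
For the library-path route, assuming the PCRE shared library landed under /usr/local/lib (adjust for your installation):

export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH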

Error Message 1 (at configure time):


./configure: error: the HTTP rewrite module requires the PCRE library.
You can either disable the module by using --without-http_rewrite_module
option, or install the PCRE library into the system, or build the PCRE library
statically from the source with nginx by using --with-pcre= option.

Error Message 2 (later at run time):


nginx: error while loading shared libraries: libpcre.so.1: cannot open shared object file: No such file or directory

Solution 1:

While configuring nginx, one can specify the location of the PCRE source (8.31 is the version that I used; it can be downloaded from the PCRE website) at the configure step:

./configure ..... ..... ... --with-pcre=/app/source/pcre/pcre-8.31

And repeat the “make, make install” steps as outlined earlier.

Solution 2:

Alternatively, if PCRE is available through the package manager, install the development libraries so Nginx can build against the system copy:

$ yum search pcre
Matched: pcre ==
opensips-regex.x86_64 : RegExp via PCRE library
pcre.i386 : Perl-compatible regular expression library
pcre.x86_64 : Perl-compatible regular expression library
pcre-devel.i386 : Development files for pcre
pcre-devel.x86_64 : Development files for pcre

Then install the development version:

$ yum install pcre-devel

Time to reconfigure and install Nginx:


$ ./configure ...... [same arguments as above]

A successful “./configure” would have something akin to this output:

$ ./configure .........
checking for OS
+ Linux 2.6.18-128.1.6.el5 x86_64
checking for C compiler ... found
+ using GNU C compiler
+ gcc version: 4.1.2 20080704 (Red Hat 4.1.2-44)
checking for gcc -pipe switch ... found
checking for gcc builtin atomic operations ... found
checking for C99 variadic macros ... found
checking for gcc variadic macros ... found
checking for unistd.h ... found
checking for inttypes.h ... found
checking for limits.h ... found
checking for sys/filio.h ... not found
checking for sys/param.h ... found
checking for sys/mount.h ... found
checking for sys/statvfs.h ... found
checking for crypt.h ... found
checking for Linux specific features
checking for epoll ... found
checking for sendfile() ... found
checking for sendfile64() ... found
checking for sys/prctl.h ... found
checking for prctl(PR_SET_DUMPABLE) ... found
checking for sched_setaffinity() ... found
checking for crypt_r() ... found
checking for sys/vfs.h ... found
checking for nobody group ... found
checking for poll() ... found
checking for /dev/poll ... not found
checking for kqueue ... not found
checking for crypt() ... not found
checking for crypt() in libcrypt ... found
checking for F_READAHEAD ... not found
checking for posix_fadvise() ... found
checking for O_DIRECT ... found
checking for F_NOCACHE ... not found
checking for directio() ... not found
checking for statfs() ... found
checking for statvfs() ... found
checking for dlopen() ... not found
checking for dlopen() in libdl ... found
checking for sched_yield() ... found
checking for SO_SETFIB ... not found
checking for SO_ACCEPTFILTER ... not found
checking for TCP_DEFER_ACCEPT ... found
checking for TCP_KEEPIDLE, TCP_KEEPINTVL, TCP_KEEPCNT ... found
checking for TCP_INFO ... not found
checking for accept4() ... not found
checking for int size ... 4 bytes
checking for long size ... 8 bytes
checking for long long size ... 8 bytes
checking for void * size ... 8 bytes
checking for uint64_t ... found
checking for sig_atomic_t ... found
checking for sig_atomic_t size ... 4 bytes
checking for socklen_t ... found
checking for in_addr_t ... found
checking for in_port_t ... found
checking for rlim_t ... found
checking for uintptr_t ... uintptr_t found
checking for system byte ordering ... little endian
checking for size_t size ... 8 bytes
checking for off_t size ... 8 bytes
checking for time_t size ... 8 bytes
checking for setproctitle() ... not found
checking for pread() ... found
checking for pwrite() ... found
checking for sys_nerr ... found
checking for localtime_r() ... found
checking for posix_memalign() ... found
checking for memalign() ... found
checking for mmap(MAP_ANON|MAP_SHARED) ... found
checking for mmap("/dev/zero", MAP_SHARED) ... found
checking for System V shared memory ... found
checking for POSIX semaphores ... not found
checking for POSIX semaphores in libpthread ... found
checking for struct msghdr.msg_control ... found
checking for ioctl(FIONBIO) ... found
checking for struct tm.tm_gmtoff ... found
checking for struct dirent.d_namlen ... not found
checking for struct dirent.d_type ... found
checking for sysconf(_SC_NPROCESSORS_ONLN) ... found
checking for openat(), fstatat() ... found
checking for getaddrinfo() ... found
checking for PCRE library ... found
checking for PCRE JIT support ... not found
checking for OpenSSL library ... found
checking for zlib library ... found
creating objs/Makefile

Configuration summary
+ using system PCRE library
+ using system OpenSSL library [or the source location]
+ md5: using OpenSSL library
+ sha1: using OpenSSL library
+ using system zlib library

nginx path prefix: "/app/….."
nginx binary file: "/app/…."
nginx configuration prefix: "/app/….."
nginx configuration file: "/app/….."
nginx pid file: "/app/…/nginx.pid"
nginx error log file: "/app/…./logs/error.log"
nginx http access log file: "/app/…../logs/access.log"
nginx http client request body temporary files: "client_body_temp"
nginx http proxy temporary files: "proxy_temp"
nginx http fastcgi temporary files: "fastcgi_temp"
nginx http uwsgi temporary files: "uwsgi_temp"
nginx http scgi temporary files: "scgi_temp"

And then proceed with the make and make install.

JMeter (Java) and DNS and SSL and CRL and OCSP

While utilizing JMeter for some load testing of a web service over HTTPS, I wanted to confirm the external invocations being made by the program for OCSP, CRLs, and so on. The easiest way is to utilize the "strace" command to display the network-related system calls:

strace -f -s 1024 -e trace=network ./jmeter.sh

[pid  7361] connect(86, {sa_family=AF_INET6, sin6_port=htons(443), inet_pton(AF_INET6, "::ffff:10.0.0.xx", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = 0
[pid  7361] getsockname(86, {sa_family=AF_INET6, sin6_port=htons(35606), inet_pton(AF_INET6, "::ffff:10.0.0.xx", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, [28]) = 0
[pid  7361] socket(PF_INET, SOCK_DGRAM|SOCK_NONBLOCK, IPPROTO_IP) = 87
[pid  7361] connect(87, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("10.0.0.xx")}, 16) = 0
[pid  7361] sendto(87, "\226q\1\0\0\1\0\0\0\0\0\0\00274\0010\0010\00210\7in-addr\4arpa\0\0\f\0\1", 40, MSG_NOSIGNAL, NULL, 0) = 40

The snippet above shows a DNS call to port 53 of the name server (the connect to sin_port=htons(53)).

There are no OCSP calls being made either; by default all of that is disabled. To allow OCSP calls and CRL checking, one needs to set the appropriate system and security properties. Please see: https://blogs.oracle.com/xuelei/entry/enable_ocsp_checking

A snippet to enable OCSP and CRL is:
// params is an instance of java.security.cert.PKIXParameters
params.setRevocationEnabled(true);                    // turn revocation checking on
Security.setProperty("ocsp.enable", "true");          // java.security.Security: enable OCSP
// fetch CRLs from the CRL distribution points listed in the certificate
System.setProperty("com.sun.security.enableCRLDP", "true");

RFC 5077: TLS Session Resumption without Server-Side State

If you view the output of ssldump and see evidence of SSL session resumption, especially when a session cache is not configured on the server, you might be perplexed. I was, and after a little investigation I was able to attribute it to the implementation of RFC 5077.

Essentially, the client sends an empty SessionTicket extension to the server (in the ClientHello message) and the server responds with an empty one (in the ServerHello message) if it supports this kind of resumption. Later on, after the computation of the "MasterSecret", the server encrypts it along with the other session state, such as the cipher suite, into a "SessionTicket" and returns it to the client in the "NewSessionTicket" message, right before the ChangeCipherSpec message.

The following is a screen shot of a "NewSessionTicket" message / packet from the server to the client, captured in Wireshark.
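
One quick way to observe ticket-based resumption from the command line is s_client's session save / load options (the host is a placeholder); the second run should report a reused session if the server honors the ticket:

openssl s_client -connect host:443 -sess_out ticket.pem < /dev/null
openssl s_client -connect host:443 -sess_in ticket.pem < /dev/null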

OpenSSL's s_time command, a simple and short tutorial – CPU user time versus real time

A succinct tutorial on s_time and the interpretation of its results

One can install OpenSSL and do a quick check of the performance of a remote server. The s_time invocation attempts to make as many connections as possible within a specified period of time. The default period is 30 seconds, but one can override that with the appropriate option ("-time"). With s_time, we can get the number of connections per second for full handshakes as well as resumed handshakes. For details on what a "handshake" entails, one can refer to other texts on the web, such as the Wikipedia page on "Secure Sockets Layer", which has a succinct explanation of the different flavors of handshakes, including "resumed" handshakes. Please see the references section below for the link.

The key facet that I would like to emphasize is that this command does not hit the server over concurrent connections; it is sequential, and it reports the total time that X connections took within the specified period (default is 30 seconds). For instance, we infer from the run below that for "new" connections, 107 connections were made and the total time expended in those connections was 1.20 seconds (CPU user time). The test was run for around 30 seconds.

openssl s_time  -cipher 'RSA' -connect host:443 -CAfile chain.pem -www /

Collecting connection statistics for 30 seconds
Collecting connection statistics for 30 seconds
ttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttt

107 connections in 1.20s; 89.17 connections/user sec, bytes read 44298
107 connections in 31 real seconds, 414 bytes read per connection

Now timing with session id reuse.
starting
trrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr

126 connections in 0.07s; 1800.00 connections/user sec, bytes read 52164
126 connections in 31 real seconds, 414 bytes read per connection

 

From the snippet above, one can also see that in the "reuse" (session resumption) case, the number of connections increased to 126, which extrapolates to 1800 connections per (CPU user) second. Note that for the rest of the 31 seconds, the program was busy with network I/O and so on.

Also note that if an SSL session cache is not set up on the server, then s_time will display the same results as for "new" connections. This command does not support RFC 5077 (TLS Session Resumption without Server-Side State).
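
To isolate the two cases, s_time also accepts the -new and -reuse flags, and the duration can be changed with -time; for example (the host is a placeholder):

openssl s_time -connect host:443 -time 60 -new -www /
openssl s_time -connect host:443 -time 60 -reuse -www /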

References

  • http://en.wikipedia.org/wiki/Secure_Sockets_Layer [Provides information on SSL / TLS handshakes]
  • http://tools.ietf.org/html/rfc5077 [RFC on TLS Session Resumption without Server-Side State]

Amazon Web Service (AWS) and VNC

Today I had a minor issue wherein I was not able to access a VNC desktop on a RedHat or CentOS Linux instance on AWS (EC2), even though I had defined a Security Group allowing all TCP traffic and halted the AWS instance's firewalls.

Thereafter I searched the web, assuming (incorrectly) it to be a glitch at AWS. There were some posts on the AWS forum describing something similar, but there was a common refrain that struck me: an instance owner could not access VNC on port 5901 (5900 to 5902, etc.) but Amazon Support could.

That got me thinking about local firewalls, and it turns out there was a local network firewall blocking connections to external IPs (such as my Amazon instance) on these ports. To confirm, I quickly created a server on ports 80 and 443 on the AWS instance and was able to access those seamlessly.

Thereafter I created an SSH tunnel into the remote machine over port 22 (which is open in both the local and remote firewalls):


ssh -i amazonKey.pem -f root@XXX.compute-1.amazonaws.com -L 5901:XXX.compute-1.amazonaws.com:5901 -N

Here we are ssh'ing into the Amazon instance, opening local port 5901, and forwarding all traffic on it to the remote Amazon instance's VNC server listening on port 5901.
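
With the tunnel up, the VNC client is pointed at the local end of the tunnel (display 1 corresponds to port 5901):

vncviewer localhost:1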

One could also disable or stop the remote Amazon instance's "iptables" and "ip6tables" firewalls:

service iptables stop
service ip6tables stop

Certificate chain validation, explicit "cipher-suite" specification, and "curl_loader"

If the "curl_loader" tool is used to load test a website that is served over HTTPS (TLS / SSL) and certificate chain verification is required, then you need to update the source and recompile. Note that curl_loader utilizes libcurl, which is generally backed by the OpenSSL API. The steps:

  1. Open the loader.c file
  2. Search for “SSL_VERIFY_PEER” in loader.c file
  3. Replace it with the following code:

    curl_easy_setopt (handle, CURLOPT_SSL_VERIFYPEER, 1L);
    curl_easy_setopt (handle, CURLOPT_SSL_VERIFYHOST, 2L);
    // this is the location of the file that holds the CA certificate(s) trusted by the underlying curl protocol stack
    curl_easy_setopt (handle, CURLOPT_CAINFO, "chain.pem");
    // specify the cipher-suite via an environment variable
    char *cipherString = getenv("CIPHER_STRING");
    curl_easy_setopt (handle, CURLOPT_SSL_CIPHER_LIST, cipherString);

And that is it.
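
As a usage sketch (the cipher name and configuration file below are placeholders, not from an actual setup), export the cipher string before launching curl_loader:

export CIPHER_STRING="ECDHE-RSA-AES128-SHA"
./curl-loader -f my_load_test.conf   # invocation shown for illustration; adjust to your curl_loader setup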