HTTP and application security
Encrypt everything
There is no longer any excuse not to encrypt everything by default. A long time ago there was—maybe—but not any longer. The first barrier fell with the increase of CPU power, which removed encryption as a computational bottleneck. More recently, several things happened to make encryption widely adopted. First, there was the rise of Let’s Encrypt, which started to offer free certificates and automated issuance. Second, browsers started to mark plaintext content as insecure and search engines started to favor encrypted content.
Mixed content is the name we use to refer to web pages that are themselves encrypted but rely on resources that are not. For example, an HTML page could be fetching audio or video files without encryption. The original excuse—that heavy content can’t be delivered encrypted—no longer applies, and today we need to deal with the legacy. Browsers have been restricting mixed content for a while. The long-term direction is not only that all content within a page must be encrypted, but also that the related actions (e.g., downloads) must be as well.
Secure cookies
In HTTP, cookies are a weak link and need additional attention. You could have a web site that is 100% encrypted and yet remains insecure because of how its cookies are configured. Browsers have been working hard to eliminate this problem, but they’ll need your help.
Mark cookies secure
Depending on the user agent, cookies may by default span both HTTP and HTTPS contexts, which is why they need to be explicitly marked as secure to disable transmission over insecure channels.
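For example, a session cookie (the name and value here are just placeholders) might be issued with the Secure attribute like this:
Set-Cookie: SID=31d4d96e407aad42; Secure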
Mark cookies as HttpOnly
If a web site uses cookies that need not be accessed from JavaScript, they should be marked as HttpOnly. This is a defense-in-depth technique that aims to minimize the attack surface.
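Continuing the earlier placeholder example, a cookie that only the server needs to read would carry both attributes:
Set-Cookie: SID=31d4d96e407aad42; Secure; HttpOnly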
Use cookie name prefixes
Cookie prefixes are a new security measure that is now supported by browsers and being added to the main cookie specification (RFC 6265bis). Cookies whose names start with the __Host- and __Secure- prefixes are given special powers that address a variety of problems that existed for years. All cookies should be transitioned to use these prefixes.
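For example, a session cookie adopting the __Host- prefix (again with a placeholder name and value) might be issued as follows; to be accepted by the browser, it must carry the Secure attribute, use a path of /, and omit the Domain attribute:
Set-Cookie: __Host-SID=31d4d96e407aad42; Secure; HttpOnly; Path=/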
For best results, consider adding cryptographic integrity validation or even encryption to your cookies. These techniques are useful with cookies that include application data. Encryption can help if the data inadvertently includes something that the user shouldn’t be able to learn. Integrity validation will prevent tampering. With these kinds of cookies, it’s also a good practice to bind them to the context in which they were issued—for example, to the user account to which they were issued.
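As an illustration only, a hypothetical application cookie protected with an HMAC might carry the authentication tag alongside the payload; the format shown here is made up, and real implementations vary:
Set-Cookie: __Host-prefs=&lt;payload&gt;.&lt;hmac-tag&gt;; Secure; HttpOnly; Path=/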
Use strict transport security
For proper security of the transport layer, you must indicate your preference for encrypted content. HTTP Strict Transport Security (HSTS) is a standard that allows web sites to request strict handling of encryption. Web sites signal their policies via an HTTP response header for enforcement in compliant browsers. Once HSTS is deployed, compliant browsers will switch to always using TLS when communicating with the web site. This addresses a number of issues that are otherwise difficult to enforce: (1) users who have plaintext bookmarks and follow plaintext links, (2) insecure cookies, (3) HTTPS stripping attacks, and (4) mixed-content issues within the same site.
In addition, and perhaps more importantly, HSTS fixes handling of invalid certificates. Without HSTS, when browsers encounter invalid certificates, they allow their users to proceed to the site. Many users can’t differentiate between attacks and configuration issues and decide to proceed, which makes them susceptible to active network attacks. With HSTS, certificate validation failures are final and can’t be bypassed. That brings TLS back to how it should have been implemented in the first place.
All web sites should deploy HSTS to fix legacy browser issues in how encryption is handled. In fact, deploying HSTS is probably the single most important improvement you can make. The following configuration enables HSTS on the current domain and all subdomains, with a policy duration of one full year:
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
For best results, consider adding your properties to the HSTS preload list. With that, browsers and other clients ship with an embedded list of HSTS sites, which means that encryption is enforced even on the first visit to those sites.
Warning
Unless you have full control over your infrastructure, it’s best to deploy HSTS incrementally, starting with a short policy duration (e.g., 300 seconds) and no preloading. The fact that HSTS has a memory effect, combined with its potential effect on subdomains, can lead to problems in complex environments. With incremental deployments, problems are discovered while they’re still easy to fix. Request preloading as the last deployment step, and only after you have activated a sufficiently long policy duration.
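For example, a cautious first deployment might use a policy of only five minutes, with no subdomain coverage and no preloading:
Strict-Transport-Security: max-age=300
Once you’re confident that nothing breaks, extend the duration, add includeSubDomains if appropriate, and only then add the preload directive and submit your site to the preload list.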
HSTS is not the only technology that can help with enforcing encryption. Although much more recent and with a lot of catching up to do, there are also the HTTPS DNS resource records, which build on the DNS infrastructure to carry various metadata, including signaling of support for encryption. In the SMTP space, there is MTA Strict Transport Security (MTA-STS), which enforces encryption for transmission of email messages.
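As a rough sketch, assuming the placeholder names example.com and mx.example.com, the DNS side might carry records along these lines, the first advertising HTTPS support for the web site and the second announcing the presence of an MTA-STS policy:
example.com. 3600 IN HTTPS 1 . alpn="h2,h3"
_mta-sts.example.com. 3600 IN TXT "v=STSv1; id=20250101"
The corresponding MTA-STS policy, served over HTTPS from the well-known location https://mta-sts.example.com/.well-known/mta-sts.txt, might then read:
version: STSv1
mode: enforce
mx: mx.example.com
max_age: 604800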
Deploy content security policy
Content Security Policy (CSP) is a mechanism that enables web sites to control how resources embedded in HTML pages are retrieved. As with HSTS, web sites signal their policies via an HTTP response header for enforcement in compliant browsers. Although CSP was originally designed primarily as a way of combating XSS, it has an important application for web site encryption: it can be used to prevent third-party mixed content by upgrading any plaintext links that might be present in the page, via the following directive:
Content-Security-Policy: upgrade-insecure-requests
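The directive can be combined with others in the same policy; for example, a hypothetical site that also restricts resource loading to its own origin might send:
Content-Security-Policy: default-src 'self'; upgrade-insecure-requests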
Disable caching
Encryption at the network level prevents both passive and active network attacks, but TLS doesn’t actually provide full end-to-end encryption; both sides involved in the communication have access to the plaintext. Caching is commonly used with HTTP to improve performance, so, for example, browsers may choose to store plaintext data in persistent storage. Intermediate proxy services (e.g., content delivery networks) may not only cache sensitive data but, when misconfigured, even serve it to other users.
With the increase of cloud-based application delivery platforms and content delivery networks, it’s never been more important to very carefully mark all sensitive content as private. The most secure option is to indicate that the content is private and that it must not be cached:
Cache-Control: private, no-store
With this setting, neither intermediate devices nor browsers will be allowed to cache the served content.
Be aware of issues with HTTP compression
In 2012, the CRIME attack showed how data compression can be used to compromise network encryption, and TLS in particular. This discovery eventually led to the removal of compression from TLS. The following year, the TIME and BREACH attack variations focused on retrieving secrets from compressed HTTP response content. Unlike TLS compression, disabling HTTP compression would have a huge performance and financial impact, so the world decided to leave it on and to let the security issues linger.
TIME and BREACH attacks can target any sensitive data embedded in an HTML page, which is why there isn’t a generic mitigation technique. In practice, most attacks would target CSRF tokens, which would give attackers the ability to carry out some activity on a web site under the identity of the attacked user. For best security, ensure that CSRF tokens are masked. In addition, web sites should generally be looking at adopting same-site cookies, another recent security measure designed to improve cookie security, this time against CSRF attacks.
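For example, a session cookie (placeholder name and value) that combines the earlier attributes with same-site restrictions might be issued like this:
Set-Cookie: __Host-SID=31d4d96e407aad42; Secure; HttpOnly; Path=/; SameSite=Lax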
Understand and acknowledge third-party trust
When everything else is properly configured and secured, we still can’t escape the fact that many web sites rely on services provided by third parties. It could be that some JavaScript libraries are hosted on a content delivery network or that ads are supplied by an ad delivery network or that there are genuine services (e.g., chat widgets) supplied by others.
These third parties are effectively a backdoor that can be used to break your web site. The bigger the service, the more attractive it is. For example, Google Analytics is known to provide its service to half the Internet; what if its code is compromised?
This is not an easy problem to solve. Although it would be ideal to self-host all resources and have full control over everything, in practice that’s not quite possible because we don’t have infinite budgets to do everything ourselves. What we should do, however, is evaluate every third-party dependency from a security perspective and ask ourselves if keeping it is worth the risk.
A technology called Subresource Integrity (SRI) can be used to secure resources that are hosted by third parties and that don’t change. SRI works by embedding cryptographic hashes of included references, which browsers check every time the resource is retrieved.
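For example, a script hosted on a third-party CDN might be referenced with an integrity attribute that carries the expected hash of its contents; the URL and the hash value here are placeholders:
<script src="https://cdn.example.com/library.js" integrity="sha384-&lt;base64-hash&gt;" crossorigin="anonymous"></script>
If the retrieved file no longer matches the embedded hash, the browser refuses to execute it.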