With the modern web, the 'Internet of Things' and more and more essential business moving online, the web is becoming a bigger and bigger target for attackers. Gone are the days when you could get away without worrying about things such as security and HTTPS. You may also find that you are under legal obligations to protect your users' data correctly.
You might expect that with all of this focus on security, especially in light of the many recent massive hacks and data breaches, security would be a priority for all developers; but that isn't the case. A lot of developers try to avoid it, and if you are coming to Vapor from iOS it may be something you have never really considered, especially since on iOS a lot of things are taken care of for you.
In this post, I'll go over some of the basics of what you can do to secure your Vapor applications, your users' data and your users themselves. Much of what I talk about can be seen as common sense, but if you haven't had any experience of it before then it may not be obvious.
First off however are some caveats! No system will ever be 100% secure and safe. You could spend millions and employ hundreds of people, but any large system will always be vulnerable, and if someone is determined enough and they have enough resource and time, they will be able to find a way in.
Having said that, you should know what your threat vector is and who your attackers are. If you are a small startup just storing a username and password for a few hundred users then obviously your concerns are completely different to a multinational corporation or government with millions of users and their personal data or confidential information worried about nation-state attackers! The amount of time and money you need to invest should reflect this.
Security is like an onion - it consists of lots of layers. There is no silver bullet that is a single thing you can do to protect yourselves, but there are multitude of things that can be done, each adding a layer that an attacker has to get through and making it more difficult. In an ideal world we would all have perfectly secure systems and have lots of time to spend on it, but the real world isn't like that. Your number of layers should reflect your size and threat vector (and if you are a company of any reasonable size you should be employing dedicated people to work on this full time!).
The world of security and hackers is a constant game of cat and mouse and I can't promise anything in this post will protect you completely, but it should give you some basics to get you started, based off my experiences of being on both sides of the fence.
This first thing to talk about when it comes to security is passwords. Most systems will have some sort of login so that you can identify and customise your site for your different users. To do this your user will login with a username and some form of password. Note that delegated authentication (such as handing off to Facebook or Google) is out of scope for this post but you still need to make sure you handle tokens securely.
I see lots of questions on the Slack channel about passwords, with people coming up with their own ways of managing them. The only response to that should be:

Rolling your own mechanism for storing or hashing passwords should be avoided like the plague. For Vapor there is one, and only one, very simple solution, which is to use BCrypt. It is built in to Vapor, easy to use and battle tested. The BCrypt algorithm has been around for nearly 20 years and has been tested and hardened by people who spend their lives on this. It is slow (which is a good thing! It means that it can't easily be brute forced) and, if used properly, secure. Do not try and configure your own salt or use it in any other way; just hash your passwords with BCrypt's standard API.
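In Vapor 2 this is only a couple of lines. A minimal sketch, assuming the BCrypt module bundled with Vapor 2 (the Hash.make/Hash.verify API from the vapor/bcrypt package of that era; check the docs for your exact version):

```swift
import BCrypt

// Hash the plaintext password as soon as you receive it.
// BCrypt generates and embeds a random salt for you - never supply your own.
let digest = try BCrypt.Hash.make(message: "the-users-password")

// Store `digest` in the database. Later, to check a login attempt:
let matches = try BCrypt.Hash.verify(message: "the-users-password", matches: digest)
```

Note that the salt and cost factor are encoded inside the digest itself, which is why verifying needs nothing more than the stored digest and the candidate password.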
You should also not save the password anywhere before hashing it. As soon as you have received the password, you should hash it. It should certainly not be saved in a cache anywhere, and you also shouldn't log the password, for instance in the case of an incorrect login request. The only time a password should be saved anywhere is after it has been hashed. That may seem fairly obvious, but you would be surprised at the number of sites that still store passwords in plaintext! If a site can email you your password when you have forgotten it, they are doing it wrong. Likewise, if you come across a site that does this, you should stop using it!
You also shouldn't give any hints to users if they attempt to log in incorrectly. If your site contains a form with a username and a password, then any issue with the login should only show "Your username or password was incorrect". If you tell an attacker that the username was correct and only the password was wrong, they will know they have a valid username they can attempt to brute force or use elsewhere. If a user has forgotten their password, you should send them a token through an external channel, such as email, which they can use to reset their password. You shouldn't allow them to reset their password there and then, in case it is an attacker trying to break in with a known username. This also has the added benefit that a user will be alerted if someone tries to reset their password.
Finally you should consider rate limiting login attempts to prevent accounts from being brute forced. So if a user has attempted to log into their account with incorrect details more than 5 times, maybe you should lock their account for a minute. BCrypt also helps with this since it should take somewhere around 250ms for a password to be hashed. If you have a few million password possibilities then it becomes unviable to try them all if it takes so long to attempt each possible password.
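A minimal in-memory sketch of the lockout idea (all names here are hypothetical, not a Vapor API; a production version would need thread safety and should keep its state in Redis or the database so it survives restarts and works across multiple servers):

```swift
import Foundation

/// Tracks failed login attempts per username and locks accounts temporarily.
final class LoginThrottle {
    private var failures: [String: (count: Int, lockedUntil: Date?)] = [:]
    private let maxAttempts = 5
    private let lockoutDuration: TimeInterval = 60 // lock for one minute

    /// Check this before even attempting to verify the password.
    func isLocked(_ username: String) -> Bool {
        guard let lockedUntil = failures[username]?.lockedUntil else { return false }
        return lockedUntil > Date()
    }

    /// Call after a failed login; locks the account after too many failures.
    func recordFailure(for username: String) {
        var entry = failures[username] ?? (count: 0, lockedUntil: nil)
        entry.count += 1
        if entry.count >= maxAttempts {
            entry.lockedUntil = Date().addingTimeInterval(lockoutDuration)
            entry.count = 0
        }
        failures[username] = entry
    }

    /// Call after a successful login to clear the counter.
    func recordSuccess(for username: String) {
        failures[username] = nil
    }
}
```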
Authorisation and Authentication
When talking about passwords, it's important to touch on authentication and authorisation and to explain the difference. Authentication is about verifying that a user is who they say they are - this is normally done with a username and password, for instance. Authorisation is about verifying that the user is allowed to do what they are trying to do. For instance, if someone goes to a bank to withdraw some money, the bank clerk would first verify that the person is who they say they are (authentication) and then verify that they are allowed to withdraw money from the account in question (authorisation).
Most apps will have some form of authorisation, whether it be something like Instagram with private accounts, where only certain users are allowed to view a private profile, or something like GitHub, where certain users have admin permissions on a repository. You should test that your authorisation flows work as expected, to make sure that users can't perform any actions they shouldn't be able to.
Another layer of security is a second password check for important actions. Think about trying to transfer money when banking online, or entering the admin page for a repository on GitHub - you will typically be asked to re-verify your password, despite the fact that you are already logged in. This is a great way to protect important actions if a user's cookie or login token has been compromised.
Finally, one thing that should be obvious but still happens - don't do authorisation via queries or cookies. What I mean by this is: don't have a flag in a cookie or the page query that decides whether a user is an admin! I've seen this many times, and all it requires is for a user to edit their cookie or add the query to the URL and they can make themselves an admin!
The next big topic for security is HTTPS, and in 2017 there is no excuse for not deploying HTTPS on your sites. Let's Encrypt has been instrumental in making this easy by making certificates free. Services such as Vapor Cloud make it easy to add to your site: you run one command and then you don't have to worry about it ever again. Even massive, complicated sites such as the Guardian and Stack Overflow have managed to deploy it across their sites.
HTTPS is great and should be used wherever possible, but it isn't going to matter much if an attacker can force a user to visit your website over HTTP, with a man-in-the-middle downgrade attack for instance. HSTS (HTTP Strict Transport Security) is a way to help prevent this. It is set as a header on every response from your website and is read by the browser. If the browser sees this header, then every future request must be made over HTTPS, even if the user explicitly navigates to an http:// address. This also speeds up your site, since it removes the initial 301 redirect to the HTTPS site for any HTTP request.
However, there is still another issue with this - what happens if the user has never visited your site before? An attacker will be able to intercept the initial response and remove the HSTS header, and the user's browser will never know to force requests over HTTPS. There is a way around this too. If you set the HSTS cache age to be at least a year, set the includeSubDomains flag to ensure that all subdomains of your site will be served over HTTPS, and set the preload flag on the HSTS header, you can submit your site to the HSTS Preload List. This makes its way into Chrome's source code (and later the other major browsers) and tells the browser that your domain must be served over HTTPS, even if your user has never visited before.
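Putting those three pieces together, the header you send looks like this (31536000 seconds is one year):

```
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
```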
One warning with this though: you must ensure that you are happy for every single domain of your site to be served over HTTPS. For new apps and sites this shouldn't really be an issue, but if you have any legacy sites running on subdomains then you need to ensure that these will work over HTTPS, otherwise your users won't be able to visit them!
Now that you can ensure that your site is served over HTTPS, how do you ensure that someone isn't using a rogue SSL certificate to pretend to be you? (Note - when I refer to SSL I mean TLS.) This shouldn't really ever happen, but it can if a Certificate Authority is compromised, your DNS records are compromised, or a CA doesn't do its job properly. So you may want to try and protect against this.
Things are different between mobile (and desktop) apps and the web here. On mobile, it is actually a lot simpler - you simply pin against an expected certificate. So when your mobile app makes a request to https://api.brokenhands.io for example, you verify that the certificate presented for api.brokenhands.io is the one you are expecting; if not, the connection fails. The Alamofire library, which is popular in Swift iOS apps, makes this easy to do. Then when a new certificate is issued, you simply update your app with the new certificates. (You may also pin against certificates higher up the chain, or pin against the Certificate Signing Request so that you don't have to update every 90 days or have any downtime, but I'll let better people explain how to do that!)
Up until a couple of weeks ago I would have said you can pin your certificates on the web with HPKP (HTTP Public Key Pinning), but it looks like that soon won't be an option. HPKP never really took off, as it was difficult to implement and maintain, and the consequences of getting it wrong could be disastrous. There was also the possibility of having your site held to ransom.
However there are ways to protect against rogue certificates without HPKP, using a combination of CAA, CT and OCSP stapling.
CAA, CT and OCSP Stapling
Certificate Authority Authorisation, or CAA, ensures that only the Certificate Authorities you want to be able to issue certificates for your domain are allowed to. It is a DNS record that you set, and whenever a CA receives a certificate request, it checks the record on the domain to see if it is allowed to issue certificates for it. This gives you control over which CAs issue certificates for your domains.
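For example, a CAA record that only allows Let's Encrypt to issue certificates for a domain looks like this (the domain is illustrative):

```
example.com.  IN  CAA  0 issue "letsencrypt.org"
```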
Certificate Transparency is a way of ensuring that all certificates issued for your domain are announced publicly. You can sign up to a certificate transparency monitor (such as Facebook's) and be notified when certificates are issued for your domain. You can even require that your certificates are added to a transparency log with the Expect-CT header.
With these two, you should now know if a rogue certificate is issued for your site. But how do you go about revoking a bad certificate? It turns out that certificate revocation is actually really difficult. There are a number of mechanisms, but most of them involve compromises. One original way was to keep a list of all revoked certificates. Obviously this becomes problematic when you have huge numbers of certificates, and the revocation list becomes unmanageably large and not something you want to download. Another way is to check with the CA for each certificate to see if it is still valid. However, this has privacy implications (since a third party can now build up a picture of your browsing habits), and what do you do when the CA's check site is down or overloaded? You don't want to block the request, because then the sites become a single point of failure. But if you assume the certificate is valid when you don't get a response, then an attacker can simply block the response.
Currently there isn't actually a nice way to protect yourself against this, but it is being worked on. The solution that is emerging is OCSP Stapling (Online Certificate Status Protocol). With this, the host server will check the CA at a regular interval to ensure that the certificate is still valid and get a staple. When a user makes a connection, the server can return this signed staple which certifies that this certificate is still valid. It also means that all the work for checking the status and maintaining the status is done by the server and not the client.
However, an attacker obviously isn't going to provide a staple, so what we can do is tell the CA that our certificates must be stapled when we make the certificate signing request. So when the browser requests the certificate, it knows it must be sent with a staple. Finally, you can also set an Expect-Staple header that is sent with each response, telling the browser to expect a staple with the certificate or reject the connection. These are not yet widely supported in browsers, but support will grow in time.
That was a really quick overview of how it works, but for a much better and more detailed explanation, take a look at Scott Helme's blog.
Content Security Policy
Content Security Policy, or CSP, is one of the best tools you can use to protect the users of your site. It is a simple tool that can help protect you against cross-site scripting attacks (XSS), downgrade attacks and insecure loading of third-party assets, and it can even help you spot issues when deploying and running your site. Some highlights:
- you can set a URL to send violation reports to, so if there are any issues, your users will do the testing for you!
- there is a 'report only' mode, where it will report violations but won't block them. This is really useful for big legacy sites with a lot of mixed content, for finding out what would break if you were to deploy it, without actually breaking your site.
- you can whitelist which domains you are allowed to submit forms to
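A simple policy covering those points might look like the following (the report URL is a placeholder; swap Content-Security-Policy for Content-Security-Policy-Report-Only to get report-only mode):

```
Content-Security-Policy: default-src 'self'; form-action 'self'; report-uri https://example.com/csp-reports
```

Here default-src 'self' restricts all assets to your own domain, form-action 'self' whitelists form submissions, and report-uri is where violation reports are sent.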
Once you have locked down your external assets to a select few domains, the chances are that you will be loading something like Bootstrap or jQuery (it is still popular!) from the official CDNs. Even if you have your own CDN, what happens if it becomes compromised? You want to make sure you know what scripts and stylesheets you are using, so that if someone does manage to compromise the Bootstrap CDN for example, or even your own CDN, they won't be able to change the scripts you are loading. For this, you can use something called Subresource Integrity. With SRI you embed an integrity attribute into the <script> or <link> tag where you load the asset, containing a hash of the asset. The browser will then check the asset it downloads and, if the hash doesn't match (i.e. it is not the one you are expecting), it will refuse to load the asset. This ensures that you only load the scripts and stylesheets you know are safe. Finally, you can even enforce the use of SRI with CSP, with the require-sri-for directive. Pretty neat!
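In practice an SRI-protected asset looks like this (the URL and hash are placeholders; tools exist that will compute the real sha384 value of the file for you):

```html
<link rel="stylesheet"
      href="https://cdn.example.com/bootstrap.min.css"
      integrity="sha384-BASE64-HASH-OF-THE-FILE-GOES-HERE"
      crossorigin="anonymous">
```

If the CDN serves even one byte differently from the file you hashed, the browser drops the asset instead of running it.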
The final part to mention about CSP mainly applies to large legacy sites that have http:// links to content throughout. Changing all of these can be impossible for very large sites. You may also find that you are linking to third-party assets via plugins, such as a comment plugin. If the assets those third parties load aren't all served over HTTPS, you may find yourself with mixed content warnings, and modern browsers may even refuse to show a nice padlock. You can actually get around this with a single CSP directive, upgrade-insecure-requests. This will upgrade all HTTP requests to HTTPS, and you don't have to do anything else!
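So a legacy site littered with http:// links can often be fixed with nothing more than:

```
Content-Security-Policy: upgrade-insecure-requests
```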
There are some really interesting articles out on the internet about the use of CSP and how it helps sites migrate to HTTPS. The Guardian used CSP to slowly migrate all their content over to HTTPS. Stack Overflow had probably one of the most complicated deployments of HTTPS, due to the number of subdomains. Finally, GitHub, which probably has the best commercial deployment of CSP out there - they have to allow users to upload code and images, all of which is sanitised and served from a single domain - has done some really interesting investigations into how far you can take CSP and the attacks that can be used to try and subvert it.
We have to take some time to talk about templating languages. Templating languages, such as Leaf, are awesome because they allow us to create personalised and dynamic sites without having to repeat code. However, as Uncle Ben once said, "with great power comes great responsibility". No templating language is completely secure, and all of the major ones have been hacked. The only reason the smaller ones haven't been is that they are not popular enough yet. Leaf will be no exception - if someone spends enough time on it, they will find a way to break it.
The most obvious rule is: do not pass the user's password to the template. Vapor 2 makes this a lot easier, since makeRow() now takes over responsibility for the database representation, but if you have Vapor 1 code lying around, you must not include the password in the representation you pass to the view.
The other point to make is that you absolutely must not pass the entire request into the template. Doing this is tempting, as you get access to things like the storage on the request, but the request contains a lot of very sensitive information. For instance, every request will contain all of the cookies for that request, including session cookies that identify a user. The request may also contain authentication headers - again, not something you want to be stolen!
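A sketch of the safe pattern in Vapor 2 with Leaf: hand the template only the specific values it needs, never the whole request or the raw user model. This assumes a Vapor 2 droplet called drop and a User model with these properties (the route and field names are illustrative):

```swift
import Vapor

drop.get("profile") { req in
    // Authenticate the user as usual.
    let user = try req.auth.assertAuthenticated(User.self)

    // Pass only the fields the page actually renders -
    // never the request itself, and never the password hash.
    return try drop.view.make("profile", [
        "username": user.username,
        "email": user.email
    ])
}
```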
Cross Site Request Forgery
Cross Site Request Forgery, or CSRF, is a way of getting a website to execute malicious commands on behalf of a user. Whilst XSS (Cross Site Scripting) exploits the site that the user is currently on, CSRF is about exploiting a different site. It works by sending requests to a site that a user has already visited to make it do something. Let's say we have a banking website. In order to transfer money, you fill out a web form and submit it to the website with the amount of money to transfer and an account number to transfer the money to. Because this is a web form, the web application will accept a POST request with those fields. By default, the application will probably accept that POST from anywhere, because without any additional work there is no way to tell whether the POST came from the banking website or from an attacker's website.
The easy way to protect against this is to use a token which you provide in the response to the GET request, either through a header or embedded in the form. This token must then be returned along with the POST request and checked to see if it is valid. You can use a session to ensure that the user submitting the token with the POST request is the same user that was provided with that token. If the token does not match, then you reject the request.
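A sketch of the idea in plain Swift (the helper names are hypothetical, and the session-storage calls are left as comments because the exact session API depends on your Vapor version):

```swift
import Foundation

// On the GET that renders the form: create a random token, store it in the
// user's session and embed it in the form as a hidden field.
func generateCSRFToken() -> String {
    // 32 bytes of randomness, hex-encoded.
    let bytes = (0..<32).map { _ in UInt8.random(in: .min ... .max) }
    return bytes.map { String(format: "%02x", $0) }.joined()
}

// On the POST: reject the request unless the submitted token matches the
// one stored in that user's session.
func csrfTokenIsValid(submitted: String?, storedInSession: String?) -> Bool {
    guard let submitted = submitted, let stored = storedInSession else {
        return false
    }
    return submitted == stored
}
```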
There is a package in Vapor Community which provides an easy way to integrate CSRF protection into your API, but it is fairly easy to implement on your own as well. I integrated it into Vapor OAuth to prevent rogue applications from authorising themselves.
There are a couple of easy things we can do to make sure our cookies can't be stolen easily. When your server returns a response to a request, if you want to create a cookie it will add a Set-Cookie header with the cookie details. As well as being able to set the usual things such as values and expiry dates, there are a couple of useful flags we should set:
- secure - setting this flag ensures that the browser will only send the cookie over HTTPS. If a user makes a request to your site over HTTP, then any cookies with this flag set will not be sent. This should certainly be set on any login or session cookies, but given your site should all be served over HTTPS now(!) you should set this on all your cookies
- httpOnly - setting this flag ensures that the cookie cannot be read from JavaScript, so even if an attacker manages to inject a script into your site, they can't use it to steal the cookie
Both of these flags are supported by Vapor and should (probably) be set by default on all of your cookies.
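On the wire, a cookie with both flags set looks like this (name and value are placeholders):

```
Set-Cookie: session=abc123def456; Path=/; Secure; HttpOnly
```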
In recent years, several advances have been made to further harden cookies. There are a couple more things you can set on your cookies to make them even more secure.
The first is the Same-Site flag, which is another great mechanism for preventing CSRF attacks. In essence, it stops the browser from sending the cookie unless the request has come from the same site. So if you are logged in to your banking website, your login cookie will only be sent with requests made from the banking site itself. If a malicious website submits a form POST to transfer money to another account, the cookie will not be sent and the request can be rejected.
There are two settings for Same-Site: strict and lax. Strict means that no cookies will be sent unless the user is navigating within the site itself. This is good for applications that need the security, but can be a pain for users, as it will look like they have been logged out every time they first visit your site. Some sites use a simple cookie containing a 'logged in' flag, so the application knows to make it look like the user is logged in, plus another cookie that is required to actually do anything. The lax option means that the cookie will only be sent with safe HTTP methods: any POST calls won't have the cookie attached unless they have come from the same site. This can strike a good balance between usability and security.
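On the wire, this is just one more attribute on the cookie (name and value are placeholders):

```
Set-Cookie: session=abc123def456; Secure; HttpOnly; SameSite=Lax
```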
I submitted a PR a while back to add support in Vapor for Same Site and you should use this to help protect your users.
You can also use Cookie Prefixes to ensure that cookies are set correctly and add restrictions to how cookies can be set. This is still an early draft and Vapor doesn't yet support it but here is a great explanation on how they work.
Anything that you are saving into the database that comes from a client needs to be treated with caution. If you are using Fluent for your database interactions, you really have to go out of your way to make yourself susceptible to SQL injection attacks, but if you start dealing with raw queries, ensure that you use prepared statements.
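The difference in a sketch (the actual raw-query call is left as a comment because it depends on which Fluent database driver you are using):

```swift
// Pretend this came straight from a form field.
let userInput = "alice@example.com"

// UNSAFE - the value is concatenated straight into the SQL, so input like
// "' OR '1'='1" becomes part of the query itself:
let unsafeQuery = "SELECT * FROM users WHERE email = '\(userInput)'"

// SAFE - the value is sent as a bound parameter and escaped by the driver:
let safeQuery = "SELECT * FROM users WHERE email = ?"
// try database.raw(safeQuery, [userInput])
```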
Likewise, if you allow users to upload content that will later be rendered, ensure that it is escaped when you pass it to Leaf or whatever you are using. Leaf will escape input by default, but if you are using custom tags you need to ensure you are careful with what gets passed in.
If you are using parsing libraries on the server, ensure that these are sandboxed - parsing libraries are notoriously vulnerable and can be a great way to execute code on a server. Finally, if you have to call out to external terminal commands, be really careful about what you pass to those commands. It should not be possible to add a ; to your input and get remote code execution on a server, but this is a common attack vector!
Finally, you need to make sure your server is configured properly. Whatever you do, do not run your app as root or with sudo. The user running your Vapor app should be a locked-down account that can do nothing but receive requests and return responses on a non-privileged port. The server should only expose the ports that are required (usually 80 and 443) and that is it. You certainly shouldn't expose port 22 (SSH) to anything other than your own IP. AWS is really good at ensuring that your servers are locked down by default, but do make sure. Even better is to lock everything down on a VPN and just expose the load balancer to the public internet.
Any applications (and user accounts!) you are using should not be left with a default password! Databases should be locked down as well. If you are setting up your own network, the databases should only be accessible from within the local network, by the application servers, and not by anyone else. You should also make sure they have passwords set! MongoDB especially has picked up a reputation for being left open by default and getting hacked. Talking of databases, it is popular to use something like S3 to store your backups - make sure that these backups can only be accessed with credentials! You can have the best security in the world on your application and databases, but it won't matter at all if your backups are lying open in an S3 bucket! This has become such a problem that AWS has released tools to try and protect against it.
When creating your models in the database, usually you will let Fluent auto-assign the ID. Consider whether you should use an auto-incrementing ID, as this makes IDs easy for attackers to guess. Taking the example of trying to transfer money out of bank accounts, it is easy to start at account 1 and just increment the number. If you have to guess a UUID (something like b3656db6-e6f1-4219-bef3-3e414cceda73) then this becomes impractical.
Finally, make sure you keep your systems up to date! If you are managing your own infrastructure, you need to keep your servers patched. You don't want a WannaCry episode. You also shouldn't advertise what software you are running on your servers. By default, Nginx and Apache will return a Server header in the response, which can contain version numbers as well. This makes it trivial to see if you are running vulnerable software, so it should be hidden.
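In Nginx, for example, hiding the version number is a one-line change in your nginx.conf (Apache has the equivalent ServerTokens and ServerSignature directives):

```nginx
# Stops Nginx from advertising its version number in the Server header
# and on error pages. The header still says "nginx", just not which version.
server_tokens off;
```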
That was a bit of a whirlwind tour of some of the things you can do to add security to your Vapor applications. It can be complicated to do and understand everything, but there are some useful links to check out which will help. The first is www.owasp.org - they have a ton of really good guides on best practices for all things web application. Things like password reset flows and their cheat sheets are really useful to reference when developing your system.
Also take a look at www.securityheaders.io - it provides an easy way to see if you are catching the low-hanging fruit to protect your users. Vapor Security Headers provides an easy way to implement all of these headers.
Security is hard. From a developer's point of view you have to get everything right, and even if you do, there will still be things you rely on that may be vulnerable. There is never a single thing that will make your systems secure; there are many things you should do to build up the layers of security. As I mentioned at the start, know your attackers - if you are a small site then most of what I mentioned may be enough! If you are a Google or a government then you should be employing people full time to do penetration testing. But security should certainly be thought about and designed in from the beginning, not as an afterthought.
As ever, leave questions in the comments!