
A scalable server setup using Laravel Forge and Envoyer

New projects often start small, and a single server taking care of everything is just fine. However, as successful applications grow over time, a more scalable setup may be required. In such a setup, you use dedicated servers for the application itself, the database, caching and queues, and a load balancer to divide traffic between the application servers. Each server can then be optimized for its own needs.

We typically build our applications in Laravel, using Forge to manage a Digital Ocean server and Envoyer to manage the deployments. Recently, we made the switch from a single-server setup to a more scalable, multi-server setup in one of our projects. In this post, we describe how we set this up using Laravel Forge and Envoyer.

Setup

The setup we like to create has the following components:

  • a load balancer to divide traffic between two application servers.
  • a queue server to handle heavy jobs.
  • a database server to which the application servers and queue server have access.
  • a caching server to handle the cache and sessions.

When changing the server setup, we have to take special care of the following aspects of the application:

  • The database of the application will have its own server. The application servers should be able to access it. 
  • The application has cronjobs/scheduled tasks. Some of them should only run on one server.
  • The application makes use of jobs/queues. Because these are heavy tasks, we run them on a separate server. This server is executing code, so the codebase should also be deployed to this server. 
  • Because there will be multiple application servers, the handling of caching and sessions should be on a dedicated server that is accessible to all application servers.
  • File uploads can no longer be stored on a single application server. There will be multiple application servers and app server 1 will have no access to the files uploaded to app server 2. Instead, we use an external service (Rackspace) to store the uploads.
  • Storing logs on the application server itself (using the default ‘daily’ config option of the framework) may not be useful, since there will be logs on all application servers. An external logging service may be better. We use Papertrail for this.

As you can see, the switch from a single to a scalable, multi-server setup puts some requirements on your code base.

For the remaining part of this post, we assume that you already have an application running on a single server, and you want to switch to a more scalable, multi-server setup (you’re not starting from scratch). 

Database

The database of the application will be on its own server, optimized for database tasks. The following steps are required to achieve this:

  • Create a new server in Laravel Forge. Make sure the server is in the same region as your application server. The server doesn’t need PHP, but you have to choose a version. You can pick whatever you like, because we will disable it after provisioning. Give the server a clear name, e.g. your-project-db.
    Next, log in to the server and run the following commands to stop PHP: sudo systemctl stop php7.x-fpm followed by sudo systemctl disable php7.x-fpm. In these commands, specify the correct PHP version.
  • The application server needs access to this database server. In Forge, go to the ‘Network’ tab of the application server. Add the newly created database server to the ‘can connect to’ list. 
  • Create a new database on the database server and copy the database content from the application server to the database server, e.g. using Sequel Pro.
  • Open the .env file of the project on your application server. Here you have to change the database credentials: switch the DB_HOST variable to the private IP of the database server, and update the database name, username and password. 
  • Now, the application server will connect to the database on the dedicated database server. MySQL can be disabled on the application server.
(Screenshot: the Network tab of a server in Laravel Forge)
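After these steps, the database-related part of the .env file on the application server looks roughly like this (the private IP, database name and credentials below are placeholders):

```ini
DB_CONNECTION=mysql
DB_HOST=10.0.0.2        # private IP of your-project-db
DB_PORT=3306
DB_DATABASE=your_project
DB_USERNAME=forge
DB_PASSWORD=secret
```

Run php artisan config:clear (or simply redeploy) so the application picks up the new values.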

File storage

When your application has file uploads from users, they can no longer be stored on the application server. There will be multiple application servers, and file uploads stored on one application server won’t be accessible to another server. To avoid this problem, you can make use of external file storage, like Amazon S3, Rackspace or Dropbox. We like to use Rackspace, but you can use any file storage system you like. Laravel has some filesystems enabled by default, but you can easily add your own. Make sure your application uses external file storage before you continue with the remaining steps of the setup. 
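As an illustration, a Rackspace disk in config/filesystems.php looked roughly like this in the Laravel 5 era (it requires the league/flysystem-rackspace package; the env variable names are placeholders, and later Laravel versions dropped the built-in Rackspace driver, so check the documentation for your version):

```php
// config/filesystems.php — hypothetical Rackspace disk (Laravel 5-era)
'disks' => [
    'rackspace' => [
        'driver'    => 'rackspace',
        'username'  => env('RACKSPACE_USERNAME'),
        'key'       => env('RACKSPACE_KEY'),
        'container' => env('RACKSPACE_CONTAINER'),
        'endpoint'  => 'https://identity.api.rackspacecloud.com/v2.0/',
        'region'    => env('RACKSPACE_REGION'),
    ],
],
```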

Logging

In this multi-server setup, we don’t want to store the error log on the application servers anymore. Instead, we started using Papertrail for logging. Configure Papertrail as described in the Laravel documentation.
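Assuming a recent Laravel version with the default papertrail channel in config/logging.php, the configuration boils down to a few .env settings (the URL and port below are placeholders from your Papertrail account):

```ini
LOG_CHANNEL=papertrail
PAPERTRAIL_URL=logs2.papertrailapp.com
PAPERTRAIL_PORT=12345
```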

Caching and Session server

In a setup with multiple application servers, sessions and cache should no longer be stored on one of the application servers. If a user first visits your website on app server 1 and their session is stored on that server, the session isn’t available when they return at a later moment on app server 2. A similar situation may occur for the cache: if you cache the results of an API call on the first application server, you have to cache it on the second app server as well.

These situations can be avoided by using an external server for sessions and cache. You may use a separate server for caching and another one for the sessions, but we chose to use one server for both. We use Redis as the driver for both sessions and caching; more on this can be found in the Laravel documentation for sessions and caching.

Use the following steps to set this up:

  • Install the predis/predis package in your application
  • Go to Laravel Forge and create a new server and give it a clear name like ‘your-project-cache’. A caching server may use a lot of RAM, so take this into account. Make sure it’s in the same region as your application server. This server doesn’t need to receive code deployments. 
  • The application server needs access to the caching server. In Forge, go to the application server and move to the ‘Network’ tab. Add the newly created caching server to the ‘can connect to’ list (like before for the database server). 
  • Open the .env file, set CACHE_DRIVER to redis and set REDIS_HOST to the private IP of the cache server. 
  • Check the ‘Session Database Connection’ section of the config/session.php file. It specifies the connection used for the sessions. This connection should be present in the config/database.php file. If not, add an entry to the ‘redis’ block.
  • In the .env file, set SESSION_DRIVER to redis. 
(Screenshot: the Redis database connection in config/database.php)
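The cache- and session-related part of the .env file then looks roughly like this (the private IP is a placeholder):

```ini
CACHE_DRIVER=redis
SESSION_DRIVER=redis
REDIS_HOST=10.0.0.3     # private IP of your-project-cache
REDIS_PASSWORD=null
REDIS_PORT=6379
```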

Cronjobs/tasks

Your application may use scheduled tasks, for example to send invoices at the beginning of a new month. If your code runs on multiple application servers, the invoices should still only be sent once. To achieve this, Laravel has the onOneServer() method. Add this method to such scheduled tasks, e.g.:

$schedule->command('invoices:send')->monthlyOn(1, '20:00')->onOneServer();

This method only works when you use redis or memcached as the caching driver and all application servers communicate with the same central caching server, but if you follow the steps in this post, this will be the case.

Queues

Queues often contain jobs that consume a lot of RAM, so it’s better to run them on a dedicated server. We use Laravel Horizon to manage our queues; in a previous post we described how to set this up using Laravel Forge and Envoyer.

To run the queues on an external server, follow these steps:

  • In Forge, create a new server in the same region as the application servers. Give it a clear name like ‘your-project-queue’. This server will receive code deployments, because it will run jobs in which the code of the application is executed. 
  • Next, create a site on this server. Use the same domain as used on the application server. 
  • The application servers should have access to the queue server and the queue server should have access to the database server. Go to the network tab and set the ‘can connect to’ checkboxes accordingly. 
  • Because the queue server needs the application code as well, we have to make some modifications in Envoyer. In Envoyer, go to the Servers tab and add the queue server. Open the environment file, and let the .env also sync to the new queue server. Next, check your deployment hooks. Some hooks should only be executed on the application server (e.g. a hook that runs migrations), while others only need to run on the queue server (e.g. the horizon:terminate command to restart Horizon after deployment).
  • Push the deploy button. Now your code will be deployed to both servers. 
  • In Forge, go to the Daemons tab of the queue server and add the php artisan horizon command, so that supervisor will keep this process running on the queue server. You can remove this daemon from the application server, because that server will no longer handle the queues. 
  • In the .env file, set QUEUE_DRIVER to redis (QUEUE_CONNECTION in newer Laravel versions) and set REDIS_HOST to the private IP of the server that runs Redis for your queues. In our setup that is the queue server itself; if you reuse the Redis instance on the caching server instead, point REDIS_HOST there. 
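The queue-related part of the .env file might then look like this (the private IP is a placeholder; point it at whichever server actually runs Redis for your queues):

```ini
QUEUE_DRIVER=redis      # called QUEUE_CONNECTION in Laravel 5.7 and later
REDIS_HOST=10.0.0.4     # private IP of your-project-queue
```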

Extra application servers

Now, the application is ready for a load balancer to divide traffic between multiple application servers. The next step is to create additional application servers.

Go to Laravel Forge and create a server in the same region and with the same server provider as your original application server. Add the same sites to this new application server. If you made any customizations to the original application server or the sites on it, you should also apply them to the new application server. The new application server should also have access to the database, caching and queue servers; you can set this in the Network tab of Forge.

Next, go to the corresponding project in Envoyer and navigate to the Servers tab. Here, you add the newly created application server. Use the ‘connect’ link to check that Envoyer has access to this server, then click the ‘environment’ button. Here you can select the servers to which the .env file should be synced. Add the new application server to this list.

You may have custom deployment hooks. For each hook, you can select the servers on which the hook should run. Check all deployment hooks and make sure they run on the correct server. For example, a command to run migrations should run on only one application server, a command to clear cache after deployment should run on all application servers and a command to restart horizon should only run on the queue server.

Now, everything is set and you can push the deploy button. Note that we now have a setup with two application servers, but you can add as many servers as you like.

Load balancer

The last step for this scalable setup is to create a load balancer and start separating the traffic.

In Forge, create a new server in the same region and with the same server provider, and select the ‘provision as load balancer’ checkbox. Once this server is provisioned, add your site. Use the same domains as on the application servers.

To make sure that everything keeps working correctly, you have to add the private IP of the load balancer to the $proxies array in the TrustedProxies middleware of your application. After applying this change, redeploy the application code via Envoyer.
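As a sketch, assuming a Laravel 5.5+ application using the fideloper/proxy package, the middleware could look like this (the IP address is a placeholder for the load balancer’s private IP):

```php
<?php
// app/Http/Middleware/TrustProxies.php
namespace App\Http\Middleware;

use Fideloper\Proxy\TrustProxies as Middleware;
use Illuminate\Http\Request;

class TrustProxies extends Middleware
{
    // Private IP of the load balancer (placeholder)
    protected $proxies = ['10.0.0.10'];

    // Trust the X-Forwarded-* headers set by the load balancer
    protected $headers = Request::HEADER_X_FORWARDED_ALL;
}
```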

Now, go to the load balancer in Forge. For each site, select the servers to which the traffic should be routed: all application servers.

In the previous setup with one server taking care of everything, a DNS record connects the domain name to this one server. In this scalable setup, we have to change this. Go to your DNS management and let the domain point to the public ip of the load balancer.

You likely want to serve your website via https. To do this, install an SSL certificate on the load balancer. In Forge, go to the SSL tab of the site on the load balancer and install a Let’s Encrypt certificate or a certificate you purchased yourself. 

File uploads

In the old setup with a single application server, the maximum size of file uploads was set for this server via Forge. Now, in this scalable setup, we have to make an additional change on the load balancer. By default, the load balancer only accepts files up to 1 MB, no matter what setting you have for the application servers. To change this, SSH into the load balancer and open:

/etc/nginx/nginx.conf

Add the following line to the http block, where 30M is the maximum file upload size (change this to your needs):

client_max_body_size 30M;
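For context, the directive goes inside the existing http block:

```nginx
http {
    # ... existing settings ...
    client_max_body_size 30M;
}
```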

Next, test the configuration and restart nginx: sudo nginx -t && sudo systemctl restart nginx.

Concluding remarks

This was the last step in creating a scalable server setup for your website or application. For the users of your website, nothing should have changed. However, behind the scenes you have a lot more flexibility to add resources or perform maintenance on parts of the system when needed.


Leonie Derendorp — web developer and co-owner of PLint-sites in Sittard, The Netherlands. I love to create complex web applications using Laravel!

8 thoughts on “A scalable server setup using Laravel Forge and Envoyer”

Leonie Derendorp (post author):
Thanks! No, it’s not necessary to use Envoyer; you can manage deployments with Forge only. Envoyer has some nice features like zero-downtime deployment, and that’s why we like to use it.

Seb:
Great article! I’m just confused by the last step of Queues: “In the .env file, set QUEUE_DRIVER to redis and set the REDIS_HOST to the private IP of the queue server.” Shouldn’t the REDIS_HOST be the private IP of the cache server instead of the queue server?

Leonie Derendorp (post author):
Hi Seb, it depends a bit on the details of your setup, but I believe it should be the private IP of the server that handles your queues.

Usama:
Did you move all your existing storage data to the cloud, or only the upcoming data? If you did move it, how did you do it?

Leonie Derendorp (post author):
I moved all storage data, including the existing data. I downloaded all data overnight and uploaded it manually on another night. In the past we also used Mountain Duck to move data, but that also took a lot of time, so doing it manually was easier this time.
