How to achieve zero-downtime deployment?

Most websites need to be up 24 hours a day, so we need a way to deploy changes without taking the site down. The solution is to run more than one application server behind a load balancer. The application runs on one of the servers, and when you need to make changes, you deploy the new version to the other server. A user who is currently on the website keeps using the old version, but the next time they connect, the load balancer directs them to the server running the new version. The old version keeps running as long as it still has active sessions; once all sessions are on the new server, you can take the old version down and keep that server free for the next deployment.
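As a sketch of the traffic switch described above, here is a minimal, hypothetical nginx load-balancer configuration for two such application servers; the host names and ports are assumptions:

```nginx
# Two application servers behind one load balancer. The server marked
# "down" receives no new connections; to release a new version, deploy
# it to the idle server, then swap the "down" marker and reload nginx.
upstream app_servers {
    server app-old.internal:8080 down;  # previous version: drained
    server app-new.internal:8080;       # new version: gets all new traffic
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;
    }
}
```

Note that this fragment only shows the switch for new connections; keeping existing sessions on the old server until they end additionally requires session affinity (sticky sessions) on the load balancer.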

Deploying a Haskell web application to Heroku

In order to work with Heroku, we need our project in a Git repository (source control), a Heroku account, the Heroku Toolbelt, and the Heroku Haskell buildpack. The following steps outline the process:

Step 1: Log in or sign up for a Heroku account

You can sign up on the Heroku website. It's free for one instance, which is enough for testing purposes.

Step 2: Install the Toolbelt

The Heroku Toolbelt is a command-line tool for interacting with the Heroku platform. It can be downloaded from the Heroku website.

Step 3: Login

In order to use Heroku, you must log in with the following command:

heroku login

...and provide your credentials when prompted.

Step 4: Create an app at Heroku with the proper buildpack

Create an app instance with the proper buildpack using the following commands:

echo 'web: cabal run -- -p $PORT' > Procfile
heroku create --stack=cedar --buildpack <haskell-buildpack-url>
git push heroku master

The first deployment is very slow, because all the dependencies have to be installed on your Heroku instance, so be patient the first time around.

Sometimes you need to raise the 15-minute build time limit for this:

heroku plugins:install <plugin-url>
heroku build -r -b <buildpack-url>

Step 5: Deploy

Now you can deploy every time you push to master:

git push heroku master

Your web application/web service is now available at https://[appname].herokuapp.com


Deploying a Haskell web application to the Cloud

We want to expose logic written in the functional programming language Haskell as a web service and deploy it publicly. For this we want to use a PaaS (Platform-as-a-Service) vendor, i.e. a 'cloud hoster', in order to publish the web service with minimal configuration.

How to remote deploy a Web service to Apache Tomcat from within a Java program

The deployment of servlets to a Servlet container like Tomcat is usually a simple task: just copy the servlet to a specific target directory, and the container will hot-deploy it. The task proposed here is more complex: the goal is the programmatic remote deployment of a Web service. In other words, an already developed Web service is to be deployed from a PC to a remote server running a Tomcat Servlet container. This task has three major requirements:

- First, the deployment has to be remote: the Web service is to be deployed onto a different machine.
- Second, it is to be accomplished programmatically, which means developing a software component (for example in Java) that carries out the task.
- Third, the deployment has to be "hot": the deployed Web service has to be running after a short time, without restarting the server or its Servlet container.

Beyond that, it is interesting to know how a Servlet container like Tomcat can be used for Web service deployment in the first place. In short, the goal is an application that can "hot deploy" a Servlet containing one or more Web services onto a remote machine which is only known to run an Apache Tomcat Servlet container. The process could be called "programmatic remote hot deployment of a Web service to an Apache Tomcat Servlet container".
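One way to meet all three requirements is Tomcat's Manager web application, which accepts a WAR file over HTTP and hot-deploys it. The following sketch uses the Manager's text interface (available under /manager/text on Tomcat 7 and later; Tomcat 6 uses /manager instead); the host, port, credentials, context path, and WAR file name are assumptions, and the Tomcat user must have the manager-script role on the target server:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Base64;

public class TomcatDeployer {

    // Builds the Manager "text" deploy URL (Tomcat 7+; on Tomcat 6 the
    // prefix is /manager instead of /manager/text).
    static String deployUrl(String host, int port, String contextPath) {
        return "http://" + host + ":" + port
                + "/manager/text/deploy?path=" + contextPath + "&update=true";
    }

    // PUTs the WAR file to the Manager, which hot-deploys it under
    // contextPath on the remote container, without a server restart.
    static void deploy(String host, int port, String contextPath,
                       Path war, String user, String password) throws IOException {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(deployUrl(host, port, contextPath)).openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        String auth = Base64.getEncoder().encodeToString(
                (user + ":" + password).getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + auth);
        try (OutputStream out = conn.getOutputStream()) {
            Files.copy(war, out); // stream the WAR as the request body
        }
        // On success the Manager answers e.g.
        // "OK - Deployed application at context path /myservice"
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            System.out.println(in.readLine());
        }
    }

    public static void main(String[] args) throws IOException {
        // Host, port, context path and WAR name are placeholders.
        System.out.println(deployUrl("example.org", 8080, "/myservice"));
        // deploy("example.org", 8080, "/myservice",
        //        Paths.get("myservice.war"), "admin", "secret");
    }
}
```

Because the Manager itself is just a Web app inside the container, this satisfies all three requirements at once: the call is remote, it is made from ordinary Java code, and the container hot-deploys the WAR without a restart.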

Correctly configuring a Jetty Java Servlet container to be used through an Apache Web server via mod_jk

When deploying a JVM-based Web app, so-called Java Servlet containers (resp. application servers) are usually used for the production system/environment. Probably the most popular and common Java server in this field is Apache Tomcat, alongside even more feature-rich ones like JBoss or GlassFish. Apart from those there is also Jetty, which can be seen as a somewhat lightweight alternative; there are, however, some subtle differences to take into consideration when configuring it as opposed to Tomcat. The usual way to set up such a production system for a Java Web app is to let the Servlet container serve the Web app and put an Apache Web server in front of it, which accepts the requests, hands them over to the container instance (e.g. Jetty or Tomcat), and then receives its responses (to put it in a somewhat simplified way). Usually this is done via Apache's mod_jk module, which lets the Web server and the app server communicate through the AJP13 protocol. What should be described and explained now is how to set up such a Java Web app production system ready for deployment in detail (mainly from a configuration perspective), with the main focus on the differences between Jetty and Apache Tomcat that have to be taken into account.
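As a minimal sketch of the mod_jk wiring on the Apache side (the worker name, file paths, and the /myapp context are assumptions), two pieces of configuration are needed: a worker definition and a mount:

```
# workers.properties -- defines the AJP13 worker that Apache forwards to
worker.list=jetty
worker.jetty.type=ajp13
worker.jetty.host=localhost
worker.jetty.port=8009   # must match the container's AJP connector port

# httpd.conf -- load mod_jk and map the Web app's URL space to the worker
LoadModule jk_module modules/mod_jk.so
JkWorkersFile conf/workers.properties
JkLogFile    logs/mod_jk.log
JkMount      /myapp/* jetty
```

The container side is where one of the subtle differences shows up: Tomcat ships with an AJP connector that only has to be enabled in server.xml, whereas in Jetty (7/8) the AJP connector has to be added explicitly, e.g. by starting Jetty with the jetty-ajp.xml configuration, which sets up an Ajp13SocketConnector listening on port 8009.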