Upgrade Docker Mac



By default, when the Docker daemon terminates, it shuts down running containers. You can configure the daemon so that containers remain running if the daemon becomes unavailable. This functionality is called live restore. The live restore option helps reduce container downtime due to daemon crashes, planned outages, or upgrades.

Note

Live restore is not supported on Windows containers, but it does work for Linux containers running on Docker Desktop for Windows.

Enable live restore

There are two ways to enable the live restore setting to keep containers alive when the daemon becomes unavailable. Only do one of the following.

  • Add the configuration to the daemon configuration file. On Linux, this defaults to /etc/docker/daemon.json. On Docker Desktop for Mac or Docker Desktop for Windows, select the Docker icon from the task bar, then click Preferences -> Daemon -> Advanced.

    • Use the following JSON to enable live-restore.
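
      In a minimal configuration file, assuming no other daemon options are needed, the entire content would be:

          {
            "live-restore": true
          }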

    • Restart the Docker daemon. On Linux, you can avoid a restart (and avoid any downtime for your containers) by reloading the Docker daemon. If you use systemd, then use the command systemctl reload docker. Otherwise, send a SIGHUP signal to the dockerd process (see the example after this list).

  • If you prefer, you can start the dockerd process manually with the --live-restore flag. This approach is not recommended because it does not set up the environment that systemd or another process manager would use when starting the Docker process. This can cause unexpected behavior.
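
The following is a minimal sketch of reloading the daemon and confirming the setting on a systemd-based Linux host; it assumes a default installation and sudo access:

    sudo systemctl reload docker              # reload the configuration without a restart, keeping containers up
    # without systemd, signal the daemon directly instead:
    sudo kill -SIGHUP "$(pidof dockerd)"
    docker info | grep -i "live restore"      # should report: Live Restore Enabled: true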

Live restore during upgrades

Live restore allows you to keep containers running across Docker daemon updates, but it is only supported when installing patch releases (YY.MM.x), not for major (YY.MM) daemon upgrades.

If you skip releases during an upgrade, the daemon may not restore its connection to the containers. If the daemon can’t restore the connection, it cannot manage the running containers and you must stop them manually.

Live restore upon restart

The live restore option only works to restore containers if the daemon options, such as bridge IP addresses and graph driver, did not change. If any of these daemon-level configuration options have changed, the live restore may not work and you may need to manually stop the containers.

Impact of live restore on running containers

If the daemon is down for a long time, running containers may fill up the FIFO log the daemon normally reads. A full log blocks containers from logging more data. The default buffer size is 64K. If the buffers fill, you must restart the Docker daemon to flush them.

On Linux, you can modify the kernel’s buffer size by changing /proc/sys/fs/pipe-max-size. You cannot modify the buffer size on Docker Desktop for Mac or Docker Desktop for Windows.
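
For example, on Linux you could inspect the current limit and raise it with sysctl; the value below is only an illustration, so choose one appropriate for your workload:

    cat /proc/sys/fs/pipe-max-size              # current maximum pipe size, in bytes
    sudo sysctl -w fs.pipe-max-size=4194304     # example: raise the limit to 4 MiB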

Live restore and swarm mode

The live restore option only pertains to standalone containers, and not to swarm services. Swarm services are managed by swarm managers. If swarm managers are not available, swarm services continue to run on worker nodes but cannot be managed until enough swarm managers are available to maintain a quorum.

The following sections help you to understand what containerized deployment is, and the deployment options available for Content Services when using containers.

Deployment concepts

In addition to the standard deployment methods for non-containerized deployment, Alfresco provides Content Services packaged in the form of Docker containers, for companies who choose to use containerized and orchestrated deployment tools. While this is a much more advanced approach to deployment, it is expected that customers who choose this approach have the necessary skills to manage its complexity.

You can start Content Services from a number of Docker images. These images are available in the Docker Hub and Quay repositories. However, starting individual Docker containers based on these images, and configuring them to work together can be complicated. To make things easier, a Docker Compose file is available to quickly start Content Services when you need to test something or work on a proof-of-concept (PoC).

There are also Helm charts available to deploy Content Services in a Kubernetes cluster, for example, on Amazon Web Services (AWS). These charts are deployment templates that can be used as the basis for your specific deployment needs. The Helm charts are undergoing continual development and improvement and should not be used “as-is” for a production deployment, but should help you save time and effort deploying Content Services for your organization.

The following is a list of concepts and technologies that you’ll need to understand as part of deploying and using Content Services. If you know all about Docker, then you can skip this part.

Virtual Machine Monitor (Hypervisor)

A Hypervisor is used to run other OS instances on your local host machine. Typically it’s used to run a different OS on your machine, such as Windows on a Mac. When you run another OS on your host it is called a guest OS, and it runs in a Virtual Machine (VM).

Image

An image is a number of layers that can be used to instantiate a container. This could be, for example, Java and Apache Tomcat. You can find all kinds of Docker images on the public repository Docker Hub. There are also private image repositories (for things like commercial enterprise images), such as the one Alfresco uses called Quay.

Container

An instance of an image is called a container. If you start this image, you have a running container of this image. You can have many running containers of the same image.
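
As a simple illustration (the image and container names here are arbitrary examples, not part of Content Services), you could start two containers from the same image:

    docker run -d --name web1 nginx:alpine
    docker run -d --name web2 nginx:alpine
    docker ps        # lists both containers, each an instance of the same image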

Docker

Docker is one of the most popular container platforms. Docker provides functionality for deploying and running applications in containers based on images.

Docker Compose

When you have many containers making up your solution, such as with Content Services, and you need to configure each individual container so that they all work well together, then you need a tool for this. Docker Compose is such a tool for defining and running multi-container Docker applications locally. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.
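
As a rough sketch of the idea (this is an illustrative file, not the Content Services Compose file), a docker-compose.yml describes the services, and a single command starts them all:

    # docker-compose.yml - illustrative example only
    version: "3"
    services:
      web:
        image: nginx:alpine
        ports:
          - "8080:80"

You would then bring the services up with docker-compose up -d and stop them again with docker-compose down.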

Dockerfile

A Dockerfile is a script containing a successive series of instructions, directions, and commands which are run to form a new Docker image. Each command translates to a new layer in the image, forming the end product. The Dockerfile replaces the process of doing everything manually and repeatedly. When a Dockerfile finishes building, the end result is a new image, which you can use to start a new Docker container.
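
For instance, a minimal Dockerfile might look like the following; the base image, file names, and tag are placeholders for illustration only:

    # Hypothetical example: package a WAR file on top of a Tomcat base image.
    FROM tomcat:9-jre11
    COPY myapp.war /usr/local/tomcat/webapps/
    EXPOSE 8080

Building it with docker build -t myorg/myapp:1.0 . produces a new image that you can then run as a container.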

Difference between containers and virtual machines

It’s important to understand the difference between using containers and using VMs. A good comparison can be found on the Docker site, under What is a Container.

The main difference is that when you run a container, you are not starting a complete new OS instance. This makes containers much more lightweight and quicker to start. A container also takes up much less space on your hard-disk as it doesn’t have to ship the whole OS.

Alfresco Docker images

The public Alfresco Docker images are available in the Docker Hub registry. There are also private Enterprise-only images in the Quay.io registry. Access to these can be requested from Hyland Community.

Go to Docker Hub to see a list of images belonging to the alfresco user or, alternatively, search for alfresco from the Docker Hub home page.
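
If you prefer the command line, the Docker CLI can also query Docker Hub:

    docker search alfresco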

The following Docker images relate to Content Services:

  • alfresco/alfresco-content-repository - the repository app (i.e. alfresco.war) running on Apache Tomcat
  • alfresco/alfresco-share - the Share web interface (i.e. share.war) running on Apache Tomcat
  • alfresco/alfresco-search-services - the Solr 6 based search service running on Jetty
  • alfresco/alfresco-activemq - the Alfresco ActiveMQ image
  • alfresco/alfresco-acs-ngnix

There are also other supporting features available, such as Docker images for image and document transformation:

  • alfresco/alfresco-imagemagick
  • alfresco/alfresco-libreoffice
  • alfresco/alfresco-pdf-renderer
  • alfresco/alfresco-tika
  • alfresco/alfresco-transform-misc
  • alfresco/alfresco-transform-core-aio

Content Services provides a number of content transforms, but also allows custom transforms to be added. It’s possible to create custom transforms that run in separate processes from the repository, known as Transform Engines (i.e. T-Engines). The same engines may be used in the Community and Enterprise Editions of Content Services. They may be directly connected to the repository as Local Transforms. Note that in the Enterprise Edition, the default option is to use them as part of Alfresco Transform Service, which provides more balanced throughput and scalability improvements.

See Custom Transforms and Renditions for more.

Note: The core Transform Engine images can be used in Content Services. The open-sourced code for the Transform Engines is available in the Alfresco/alfresco-transform-core GitHub project.

From Content Services 6.2.1, you can replace the five separate T-Engines with a single all-in-one Transform Core Engine that performs all the core transforms (i.e. alfresco/alfresco-transform-core-aio). Note that the all-in-one core T-Engine is the default option for the Docker Compose deployment, however Helm deployments continue to use the five separate T-Engines in order to provide balanced throughput and scalability improvements.

To build the alfresco/alfresco-content-repository image, Alfresco uses the Alfresco/acs-packaging GitHub project. This project doesn’t include any deployment templates. The Alfresco/acs-deployment GitHub project contains deployment templates and instructions. It includes a Docker Compose script that’s used to launch a demo, test, or PoC of Content Services. You can customize this script, if you like, in order to run with different versions than those set by default (which are usually the latest versions).
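
As a hypothetical illustration of such a customization, you might pin a specific repository image tag in the Compose file; the service name and tag placeholder below are not the project defaults:

    services:
      alfresco:
        image: alfresco/alfresco-content-repository:<desired-version>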

What’s deployed in Content Services

When you deploy Content Services, a number of containers are started.

  • Alfresco repository with:
    • Alfresco Share Services AMP
    • Alfresco Office Services (AOS) AMP
    • Alfresco vti-bin war - which helps with AOS integration
    • Alfresco Google Docs Integration repository AMP
  • Alfresco Share with:
    • Alfresco Google Docs Integration Share AMP
  • Alfresco Search Services (Solr 6)
  • A PostgreSQL database

GitHub projects

Below are links to various GitHub projects that are used to deploy Content Services, build the repository artifacts, or provide supporting services.

Deployment project

The deployment project contains the Docker Compose file to start up a Content Services environment locally. You’ll find the relevant files in the docker-compose folder. To look at the project in more detail, just browse to:

  • https://github.com/Alfresco/acs-deployment for Enterprise deployment

If you’re interested in the Helm charts to deploy Content Services with Kubernetes, you’ll find the relevant files in the helm/alfresco-content-services folder.

Packaging project

The packaging project is used to build the repository artifacts, such as the Docker image for the repository. To look at the project in more detail, just browse to:

  • https://github.com/Alfresco/acs-packaging for Enterprise packaging

Other projects

Note that the Docker files for Alfresco Share, Alfresco Search Services, and other services are in their own projects:

  • Alfresco Share: https://github.com/Alfresco/share/tree/alfresco-share-parent-7.0.0
  • Alfresco Search Services: https://github.com/Alfresco/SearchServices
  • Alfresco Content Services Nginx Proxy: https://github.com/Alfresco/acs-ingress

Prerequisites

There are a number of software requirements for installing (or deploying) Content Services when using containerized deployment.

Note that the VERSIONS.md file in GitHub lists the supported versions.

Note: The images downloaded directly from Docker Hub, or Quay.io are for a limited trial of the Enterprise version of Content Services that goes into read-only mode after 2 days. For a longer (30-day) trial, get the Alfresco Content Services Download Trial by following the steps in Deploy using Docker Compose.

Note: Alfresco customers can request Quay.io credentials by logging a ticket at Alfresco Support. These credentials are required to pull private (Enterprise-only) Docker images from Quay.io.
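
Once you have the credentials, logging in before pulling the private images typically looks like this:

    docker login quay.io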

You can review the requirements for your chosen deployment method below.

Helm charts

To deploy Content Services using Helm charts, you need to install the following software:

  • AWS CLI - the command line interface for Amazon Web Services.
  • Kubectl - the command line tool for Kubernetes.
  • Helm - the tool for installing and managing Kubernetes applications.
    • There are Helm charts that allow you to deploy Content Services in a Kubernetes cluster, for example, on AWS.
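
A quick way to confirm that these tools are installed and on your PATH is to check their versions:

    aws --version
    kubectl version --client
    helm version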

See Install using Helm for more.

Docker Compose (recommended for evaluations only)

  • Docker (latest stable version)
    • This allows you to run Docker images and docker-compose on a single computer.
  • Docker Compose
    • Docker Compose is included as part of some Docker installers. If it’s not part of your installation, then install it separately after you’ve installed Docker.
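
You can verify that both are available by checking their versions:

    docker --version
    docker-compose --version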

Note: Check the prerequisites for your operating system, for Docker and Docker Compose, using the links provided.

See Install using Docker Compose for more.