Policy Driven Networking

Use Apcera policy-driven networking to know and control exactly which workloads can connect to network resources, and how.


Part 1: Controlling Network Egress with Policy

In the first part of this tutorial you learn how to control network egress from a container using policy.

1. Create a capsule

In the web console for your cluster, select New > Capsule.


Click Ubuntu 14.04 to select the Ubuntu 14.04 package.


Enter a Name for the capsule, such as “network-egress”.

Use the default namespace.

For the purposes of this tutorial, leave the Allow Egress option unchecked.

Click Submit to create the capsule.


In the Job Details screen, verify that your capsule is running.

2. Connect to the capsule using APC

Target your cluster and log in using APC:

apc target https://<sub-domain-name>.apcera-platform.io

apc login --basic

Run the following command to connect to the capsule via SSH as the root user for that container.

apc capsule connect network-egress

root@ip-169-254-0-3:/root#

Attempt to update the list of OS packages using the following command:

apt-get update

0% [Connecting to archive.ubuntu.com (91.189.91.23)]

The update hangs because, by default, the capsule is not permitted to connect to any outside network. This demonstrates the isolation context of Apcera containers.

3. Enable egress for the capsule

In the web console, select Services and locate the outside service.


Apcera provides the service service::/apcera::outside to bind a container to the outside network.

Access to this service is controlled by policy. The following policy is provided out of the box so that you can bind to such services:

service::/apcera {
  { permit read, bind }
}

To allow network egress to the capsule, in the web console, select the network-egress capsule from the Capsules > Capsule List screen.

Check the “Allow Egress” option.


Select Yes to restart the container when prompted.


Allow Egress is a shortcut for binding to the outside network. It is a developer convenience and requires policy on the service::/apcera::outside resource.
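For example, rather than granting read and bind on everything under service::/apcera, the grant could be scoped to just the outside service. The following is a sketch that mirrors the permit syntax shown above; verify the exact realm name against your cluster's policy documents before using it:

service::/apcera::outside {
  { permit read, bind }
}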

SSH back into the capsule and run the apt-get update command again.

apc capsule connect network-egress

apt-get update

This time the update succeeds because you have enabled egress to the outside network.

Exit the SSH session.

exit

4. Snapshot the capsule using APC

A snapshot saves the current state of a capsule as a package. We create a snapshot here to further demonstrate how the --allow-egress setting behaves.

apc capsule snapshot network-egress --name network-egress-snapshot

Create a new capsule using the snapshot.

apc capsule create network-egress-test -p network-egress-snapshot

Connect to the new capsule you created.

apc capsule connect network-egress-test

Attempt to update the list of OS packages:

apt-get update

Note that the apt-get operation does not succeed. Even though you granted egress to the capsule and snapshotted it, for security reasons the Allow Egress setting is not preserved in snapshots.

In the web console, grant egress again to the capsule by checking the Allow Egress box and restarting the container.


SSH into the container.

apc capsule connect network-egress-test

You can now successfully run apt-get because network egress is enabled.

apt-get update

Part 2: Controlling SSH Access with Policy

This part of the tutorial demonstrates how to use policy to control SSH access to containers.

SSH access requires both a policy grant on the job resource and the Allow SSH setting enabled for the job. By default, SSH is enabled for capsules, since their purpose is to have software installed into them. For an app, you must enable it during app creation or update. Note that some Docker jobs may not allow SSH access.
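As a sketch of the job-level grant, a policy like the following would permit SSH on jobs in a given namespace. It follows the permit syntax used elsewhere in this tutorial; the /sandbox namespace is illustrative, not part of this tutorial:

job::/sandbox {
  { permit ssh }
}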

1: Create a capsule and connect using SSH

In the web console for your cluster, select New > Capsule.

Follow the same procedure that you did in Part 1 to create a capsule.

Enter a Name for the capsule, such as “network-ssh”.

Once you have created the capsule, in the Capsules List screen, verify that your capsule is running.

Select the capsule you created from the Capsules > Capsule List screen.

Note that “Allow SSH” is enabled by default. Let’s test this.

Using your APC client, run the apc job list command to list the jobs. You should see the capsule.

Connect to the capsule using SSH:

apc capsule connect network-ssh

root@ip-169-254-0-31:/root#

You should be dropped into the container as root.

Exit the container:

exit

You cannot disable SSH access for a capsule since you need to access it to install software. We will explore how to disable SSH access for an app next. Note that for security purposes you must use APC to SSH into the container; you cannot use another tool to do this.

2: Disable SSH access for an app

Create a sample app.

For example:

cd /sample-apps/example-go
apc app create my-go-app -dr --start --batch

In the web console, select the my-go-app job.

Disable SSH access by unchecking the option in the web console. When the job restarts, SSH access is removed.

Run apc app connect my-go-app. You get the error message "Unable to find a valid instance to connect to. Ensure SSH is enabled for the job."

Re-enable SSH access and restart the job.

Verify that you can access the app using SSH.

apc app connect my-go-app

root@ip-169-254-0-31:/root#

You should be dropped into the container as root.

Exit the container:

exit

3: Disallow SSH using policy

As a member of the admin role, you have “all” permissions on jobs in the system. All permissions on the job resource include an explicit SSH grant (permit ssh). In addition to enabling SSH at the job level, SSH must be allowed by policy. In this step you will explore how this works.

In the web console, select the Policy tab.

Select the policy named systemDataTableDefaultPermissions from the Policy > Policy List screen. (This policy defines a policy variable that specifies the default permissions for users in the admin role.)

Click Edit Policy to edit the policy. (For the purposes of this tutorial it is OK to edit this policy as long as you revert the changes you make when you are done with the tutorial.)

Copy the entire policy line for the job::/ realm, paste the copy directly below it, and comment out the original job::/ line:

on variables::/ {
  system policy variable {
    DefaultPermissions (resource, from, to) {
      {"audit::/",       "all", [read]}
      {"cluster::/",     "all", [read, update]}
      {"gateway::/",     "all", [use, promote]}
      // {"job::/",         "all", [create, read, update, delete, start, stop, map, ssh, link, promote, bind, join]}
      {"job::/",         "all", [create, read, update, delete, start, stop, map, ssh, link, promote, bind, join]}
      {"network::/",     "all", [create, join, read, delete]}
      {"package::/",     "all", [create, read, update, delete, use]}
      {"policy::/",      "all", [read, update]}
      {"policydoc::/",   "all", [create, read, update, delete]}
      {"principal::/",   "all", [create, read, update, delete]}
      {"provider::/",    "all", [create, read, update, delete]}
      {"route::/",       "all", [map]}
      {"sempiperule::/", "all", [create, read, delete]}
      {"service::/",     "all", [create, read, update, delete, bind]}
      {"stagpipe::/",    "all", [create, read, update, delete, use]}
    }
  }
}

In the policy line you copied, remove the explicit SSH permission grant.

on variables::/ {
  system policy variable {
    DefaultPermissions (resource, from, to) {
      {"audit::/",       "all", [read]}
      {"cluster::/",     "all", [read, update]}
      {"gateway::/",     "all", [use, promote]}
      //{"job::/",         "all", [create, read, update, delete, start, stop, map, ssh, link, promote, bind, join]}
      {"job::/",         "all", [create, read, update, delete, start, stop, map, link, promote, bind, join]}
      {"network::/",     "all", [create, join, read, delete]}
      {"package::/",     "all", [create, read, update, delete, use]}
      {"policy::/",      "all", [read, update]}
      {"policydoc::/",   "all", [create, read, update, delete]}
      {"principal::/",   "all", [create, read, update, delete]}
      {"provider::/",    "all", [create, read, update, delete]}
      {"route::/",       "all", [map]}
      {"sempiperule::/", "all", [create, read, delete]}
      {"service::/",     "all", [create, read, update, delete, bind]}
      {"stagpipe::/",    "all", [create, read, update, delete, use]}
    }
  }
}

Click Apply Changes to save your policy changes.

Now try to SSH into the workload again.

apc app connect my-go-app

This time you receive a policy error indicating a missing "permit ssh" claim:

Error: Not allowed by policy: missing claim "permit ssh"

This demonstrates how you can use Apcera policy to automate and control SSH access to application containers.
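Rather than toggling the cluster-wide admin grant, a more targeted approach is to leave ssh out of DefaultPermissions and grant it only on the namespaces that need it. For example (a sketch using a hypothetical /dev namespace, following the permit syntax used earlier in this tutorial):

job::/dev {
  { permit ssh }
}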

Re-enable SSH access to jobs by removing the line you added and uncommenting the original job::/ line.

on variables::/ {
  system policy variable {
    DefaultPermissions (resource, from, to) {
      {"audit::/",       "all", [read]}
      {"cluster::/",     "all", [read, update]}
      {"gateway::/",     "all", [use, promote]}
      {"job::/",         "all", [create, read, update, delete, start, stop, map, ssh, link, promote, bind, join]}
      {"network::/",     "all", [create, join, read, delete]}
      {"package::/",     "all", [create, read, update, delete, use]}
      {"policy::/",      "all", [read, update]}
      {"policydoc::/",   "all", [create, read, update, delete]}
      {"principal::/",   "all", [create, read, update, delete]}
      {"provider::/",    "all", [create, read, update, delete]}
      {"route::/",       "all", [map]}
      {"sempiperule::/", "all", [create, read, delete]}
      {"service::/",     "all", [create, read, update, delete, bind]}
      {"stagpipe::/",    "all", [create, read, update, delete, use]}
    }
  }
}

Now try to SSH into the container again. You should be able to access it.