# Install Fileserver

Installation instructions for the Psono fileserver

# Preamble

At this point we assume that you already have a Psono server running and ready to accept connections. Besides file repositories, file servers are the second way to store and share files. File servers cover various use cases and are the preferred way if one of the following is required:

  • You need a pure on-premise solution
  • You don't want users to upload anything (even if it's encrypted) to a cloud provider
  • You want to be prepared for advanced features (e.g. AV scanning before encryption)
  • You want to provide a "default" storage to all users without forcing them to configure anything

Before you actually start setting up your file server(s), you have to think about the dimensions and size of your installation, the required throughput, the required bandwidth and failover / availability constraints. Each company has its own requirements. The file server component was designed with these requirements in mind and is extremely powerful and flexible, supporting:

  • HA
  • Failover
  • Active / active replication
  • Read / write master and read only followers
  • IP based request routing / site affinity

The fileserver itself supports multiple storage backends allowing you to share storage across installations. Supported storage backends are:

  • Local storage
  • Amazon S3
  • Apache Libcloud
  • Azure Storage
  • Digital Ocean
  • FTP
  • Google Cloud Storage
  • SFTP

If you use local storage, you can use simple mechanisms like rsync or more advanced ones like glusterfs to share storage.

Each file is split into chunks, encrypted and then stored on a so-called "shard". The information about which chunk is stored on which shard is kept in the Postgres database. If a user later wants to download a file, the client loops through all the chunks, checks which fileserver announces the shard (and allows this particular user to access it), downloads each chunk, decrypts it locally and merges it with the other decrypted chunks.
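
This flow can be mimicked with standard command line tools. The sketch below splits a demo file into chunks and encrypts each one individually before it would be sent to a shard; chunk size, paths and the openssl cipher are illustrative assumptions only, not what the Psono client actually uses internally:

```shell
# Create a 5 KB demo file (stand-in for a user's upload)
dd if=/dev/urandom of=/tmp/demo.bin bs=1024 count=5 2>/dev/null

# Split it into 2 KB chunks: /tmp/chunk.aa, /tmp/chunk.ab, /tmp/chunk.ac
split -b 2048 /tmp/demo.bin /tmp/chunk.

# Encrypt each chunk individually (illustrative cipher and passphrase)
for c in /tmp/chunk.a?; do
  openssl enc -aes-256-cbc -pbkdf2 -pass pass:demo-key -in "$c" -out "$c.enc"
done
```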

Clusters are collections of fileservers that restrict access rights and policies per shard for their member fileservers. With them it is possible to restrict fileservers, so that a fileserver (A) in cluster (A) is only allowed to offer a shard (X) for reads, while a fileserver (B) in cluster (B) is allowed to offer the same shard (X) for writes as well. Typical use cases are scenarios with "exposed" (maybe even external) fileservers that allow public third parties only to upload something, while preventing all downloads for them. Another use case: you have an office in Tokyo and another one in New York. People in New York would upload and download to a fileserver in New York, yet would also be able to download files stored on the fileserver in Tokyo. To speed things up you could even implement a one-way sync, periodically syncing all files from Tokyo to New York and giving users in New York faster access to those files.

# Cluster and Shard configuration

Once you know how to structure your shards and clusters, you can begin with the configuration on the server.


All commands in this part are executed on the server, not on the fileserver!

  1. Enable Fileserver API

    Fileservers will "announce" themselves to a server. That server does not necessarily need to be the same server(s) that clients are accessing and can be a separate instance. You have to adjust the configuration of those servers as shown below:
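
    A sketch of the relevant entry in the server's settings.yaml, assuming the FILESERVER_HANDLER_ENABLED flag of recent Psono server versions (verify the exact name against the documentation of your server version):

    ```yaml
    # settings.yaml of the Psono server (not the fileserver)
    # assumption: flag name as used by recent Psono server versions
    FILESERVER_HANDLER_ENABLED: True
    ```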



    Don't forget to restart the server afterward.

  2. Configure Content-Security-Policy

    You need to modify your server's Content-Security-Policy, which instructs your users' browsers to allow connections from the origin of your webclient to the fileserver. Add https://fs01.example.com to the connect-src directive. The corresponding part should look like this:

    add_header Content-Security-Policy "default-src 'none';  manifest-src 'self'; connect-src 'self' https://fs01.example.com https://static.psono.com https://api.pwnedpasswords.com https://storage.googleapis.com https://*.digitaloceanspaces.com https://*.blob.core.windows.net https://*.s3.amazonaws.com; font-src 'self'; img-src 'self' data:; script-src 'self'; style-src 'self' 'unsafe-inline'; object-src 'self'; child-src 'self'";


    Don't forget to restart nginx afterward.

  3. Create clusters

    As a next step create the clusters that you want to have on the server with the following management command:

    docker run --rm \
      -v /opt/docker/psono/settings.yaml:/root/.psono_server/settings.yaml \
      -ti psono/psono-combo:latest python3 ./psono/manage.py fsclustercreate "Default Cluster"

    This command returns the cluster id. Note it down as you will require it later.

  4. Create shards

    As a next step create the shard (which will later be mapped to one storage backend) with the following management command:

    docker run --rm \
      -v /opt/docker/psono/settings.yaml:/root/.psono_server/settings.yaml \
      -ti psono/psono-combo:latest python3 ./psono/manage.py fsshardcreate "Some Title" "Some description"

    The title will later be visible to the user if multiple shards are available, so the user can pick one. Note down the shard id as you need it later.

  5. Link the shards and clusters

    Now link the shard and the clusters together, allowing fileservers of this cluster to announce these shards:

    docker run --rm \
      -v /opt/docker/psono/settings.yaml:/root/.psono_server/settings.yaml \
      -ti psono/psono-combo:latest python3 ./psono/manage.py fsshardlink YOUR_CLUSTER_ID YOUR_SHARD_ID rw
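
    If you run more than one cluster, the same shard can be linked to each cluster with different permissions, as described in the cluster scenarios above. A sketch, assuming a second cluster id and assuming the permission argument also accepts `ro` for read-only alongside `rw` (verify the accepted values against your server version):

    ```shell
    # Hypothetical second cluster that may only read the shard
    docker run --rm \
      -v /opt/docker/psono/settings.yaml:/root/.psono_server/settings.yaml \
      -ti psono/psono-combo:latest python3 ./psono/manage.py fsshardlink YOUR_SECOND_CLUSTER_ID YOUR_SHARD_ID ro
    ```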
  6. Generate the configuration for a fileserver

    The following command will generate and show you the configuration of a fileserver:

    docker run --rm \
      -v /opt/docker/psono/settings.yaml:/root/.psono_server/settings.yaml \
      -ti psono/psono-combo:latest python3 ./psono/manage.py fsclustershowconfig YOUR_CLUSTER_ID

    The shown configuration will list a couple of secrets and a "dummy" configuration for the shard, configuring it to store data locally in e.g. /opt/psono-shard/...

    The line looks similar to this one:

    SHARDS: [{shard_id: a8a1176c-bad8-4b84-b45e-1084b3a48e7d, read: True, write: True, delete: True, engine: {class: 'local', kwargs: {location: '/opt/psono-shard/a8a1176c-bad8-4b84-b45e-1084b3a48e7d'}}}]

    You can adjust it to your needs and point it to a different storage provider or directory. For AWS it could look like this:

    SHARDS: [{shard_id: a8a1176c-bad8-4b84-b45e-1084b3a48e7d, read: True, write: True, delete: True, engine: {class: 'amazon_s3', kwargs: {access_key: 'YOUR_ACCESS_KEY', secret_key: 'YOUR_SECRET_KEY', bucket_name: 'YOUR_BUCKET_NAME', }}}]

    or like this for GCS:

    SHARDS: [{shard_id: a8a1176c-bad8-4b84-b45e-1084b3a48e7d, read: True, write: True, delete: True, engine: {class: 'google_cloud', kwargs: {project_id: 'YOUR_PROJECT_ID', bucket_name: 'YOUR_BUCKET_NAME', credentials: '/path/to/credentials.json'}}}]


    If you use docker, the provided path to your credentials.json is inside the docker container's file system.

    or like this for Azure:

    SHARDS: [{shard_id: a8a1176c-bad8-4b84-b45e-1084b3a48e7d, read: True, write: True, delete: True, engine: {class: 'azure', kwargs: {account_name: 'YOUR_ACCOUNT_NAME', account_key: 'YOUR_ACCOUNT_KEY', azure_container: 'YOUR_AZURE_CONTAINER'}}}]

# Installation

  1. Create a settings.yaml in e.g. /opt/docker/psonofileserver/ with the following content

    # generate the following parameters as described above
    PRIVATE_KEY: '02...0b'
    PUBLIC_KEY: '02...0b'
    SERVER_URL: 'https://example.com/server'
    SERVER_PUBLIC_KEY: '02...0b'
    CLUSTER_ID: '02...0b'
    CLUSTER_PRIVATE_KEY: '02...0b'
    SHARDS: [{shard_id: a8a1176c-bad8-4b84-b45e-1084b3a48e7d, read: True, write: True, delete: True, engine: {class: 'local', kwargs: {location: '/opt/psono-shard/a8a1176c-bad8-4b84-b45e-1084b3a48e7d'}}}]
    # Disable SSL verification for the SERVER_URL (don't use this in production)
    # Keep DEBUG set to False in production
    DEBUG: False
    # Adjust this according to Django Documentation https://docs.djangoproject.com/en/2.2/ref/settings/
    ALLOWED_HOSTS: ['*']
    # Should be the full path to your fileserver. This is the path that will later be used by the client to transfer files.
    HOST_URL: 'https://fs01.example.com/fileserver'

    Update credentials / secrets / paths as described in the comments.


    Make sure that you added the fileserver's authority (e.g. https://fs01.example.com) to the "connect-src" of the Content-Security-Policy of the server's reverse proxy.

  2. Create local storage location

    If you are using local storage, create a local folder that you can then mount (in the next command) to the folder that you specified in your SHARDS config:

    mkdir /opt/psono-shard
  3. Test the configuration

    docker run --rm \
        -v /opt/docker/psonofileserver/settings.yaml:/root/.psono_fileserver/settings.yaml \
        -v /opt/psono-shard:/opt/psono-shard \
        -ti psono/psono-fileserver:latest python3 ./psono/manage.py testconfig
  4. Run the Psono server container and expose the server port

    docker run --name psono-fileserver \
        --sysctl net.core.somaxconn=65535 \
        -v /opt/docker/psonofileserver/settings.yaml:/root/.psono_fileserver/settings.yaml \
        -v /opt/psono-shard:/opt/psono-shard \
        -d --restart=unless-stopped -p 10300:80 psono/psono-fileserver:latest

    This will start the Psono fileserver on port 10300. If you now open http://your-ip:10300/info/ you should see something like this:

    {"info":"{\"version\": \"....}

    If you don't, please make sure no firewall is blocking your request.

    The fileserver will announce itself now to the server and the server will propagate the information to the connected clients.


    Test that /opt/psono-shard actually contains your data after uploading the first file. Otherwise your config might be wrong and data might be stuck inside your docker container, where it will be lost if you ever replace the container.

  5. Setup Reverse Proxy

    To run the Psono fileserver in production, a reverse proxy is needed to handle the SSL offloading. Follow the guide in the "Installation Fileserver Reverse Proxy" section below.

# Installation Fileserver Reverse Proxy

To run the Psono file server in production, a reverse proxy is needed to handle the SSL offloading. This is usually a different reverse proxy from the one for the webclient / server. If you only have a small installation and want to host everything on one server, you can adjust the reverse proxy config accordingly and reuse the existing one.

  1. Install Nginx

    sudo apt-get install nginx
  2. Create nginx config

    Create fs01.example.com.conf in /etc/nginx/sites-available with the following content:

    server {
        listen 80;
        server_name fs01.example.com;
        return 301 https://$host$request_uri;
    }
    server {
        listen 443 ssl http2;
        server_name fs01.example.com;
        ssl_protocols TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_session_cache shared:SSL:10m;
        ssl_session_tickets off;
        ssl_stapling on;
        ssl_stapling_verify on;
        ssl_session_timeout 1d;
        # Replace with the DNS resolver of your environment
        resolver 8.8.8.8 valid=300s;
        resolver_timeout 5s;
        # Comment this in if you know what you are doing
        # add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
        add_header Referrer-Policy same-origin;
        add_header X-Frame-Options DENY;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";
        ssl_certificate /etc/ssl/fullchain.pem;
        ssl_certificate_key /etc/ssl/privkey.pem;
        root /var/www/html;
        location /fileserver {
            rewrite ^/fileserver/(.*) /$1 break;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_hide_header Access-Control-Allow-Origin;
            # Replace psono.example.com with the domain that your webclient runs on
            add_header Access-Control-Allow-Origin "https://psono.example.com" always;
            add_header Last-Modified $date_gmt;
            add_header Pragma "no-cache";
            add_header Cache-Control "private, max-age=0, no-cache, no-store";
            if_modified_since off;
            expires off;
            etag off;
            proxy_pass http://localhost:10300;
            # The big traffic will be encrypted chunks, so using gzip here causes only server load
            gzip off;
            # To allow the 128 MB chunks
            client_max_body_size 256m;
        }
    }
  3. Enable nginx config

    ln -s /etc/nginx/sites-available/fs01.example.com.conf /etc/nginx/sites-enabled/
  4. Test nginx config

    sudo nginx -t
  5. Restart nginx

    sudo service nginx restart

    If you open https://fs01.example.com/fileserver/info/ in your browser, you should see the following:

    {"info":"{\"version\": \"....}

# Note: Installation behind Firewall

If you have put your installation behind a firewall, you have to whitelist some ports / adjust some settings so that all features work:

  • Incoming TCP connection (usually 443) from clients to fileservers
  • Outgoing TCP connection (usually 443) from fileservers to servers
  • Outgoing TCP / UDP 123 connection to time.google.com: Psono requires a synced time for various reasons (Google Authenticator, YubiKey, throttling, replay protection, ...). Therefore it has a healthcheck to compare the local time to a time server (by default time.google.com). You can specify your own time server in the settings.yaml with the "TIME_SERVER" parameter. If you are confident that your server always has the correct time, you can also disable the healthcheck with "HEALTHCHECK_TIME_SYNC_ENABLED: False".
  • (optional) Outgoing TCP connection to the configured storage providers (if you use GCS or AWS or whatever instead of local storage)
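
The two time-related parameters mentioned above go into the fileserver's settings.yaml; ntp.example.com is a placeholder for your own time server:

```yaml
# Use your own time server instead of the default time.google.com
TIME_SERVER: 'ntp.example.com'
# Or, if the host clock is reliably synced, disable the check entirely
HEALTHCHECK_TIME_SYNC_ENABLED: False
```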