Installation instructions for the Psono fileserver

Preamble

At this point we assume that you already have a Psono server running and ready to accept connections. Besides file repositories, file servers are the second way to store and share files. File servers cover various use cases and are the preferred choice if one of the following is required:

  • You need a pure on-premise solution
  • You don’t want users to upload anything (even if it’s encrypted) to a cloud provider
  • You want to be prepared for advanced features (e.g. AV scanning before encryption)
  • You want to provide a “default” storage to all users without forcing users to configure anything

Before you actually start setting up your file server(s), think about the dimensions and size of your installation, the required throughput, the required bandwidth, and your failover / availability constraints. Every company has its own requirements. The file server component was designed with these requirements in mind and is extremely powerful and flexible, supporting:

  • HA
  • Failover
  • Active / active replication
  • Read / write master and read only followers
  • IP based request routing / site affinity

The fileserver itself supports multiple storage backends allowing you to share storage across installations. Supported storage backends are:

  • Local storage
  • Amazon S3
  • Apache Libcloud
  • Azure Storage
  • DropBox
  • FTP
  • Google Cloud Storage
  • SFTP

If you use local storage, you can use simple mechanisms like rsync or more advanced ones like glusterfs to share storage.

Each file is split into chunks, encrypted and then stored on a so-called “shard”. The information about which chunk is stored on which shard is kept in the Postgres database. When a user later wants to download a file, the client loops through all the chunks, checks which fileserver announces the corresponding shard (and allows this particular user to access it), downloads each chunk, decrypts it locally and merges it with the other decrypted chunks.
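The chunking idea can be sketched with standard tools. This is illustrative only: the real splitting and encryption happen inside the Psono client with its own client-side crypto (not openssl), and the chunk sizes and file names here are made up for the demo:

```shell
# Illustrative sketch of "split into chunks, encrypt each chunk".
# NOT Psono's actual format or cipher; demo values only.
mkdir -p /tmp/psono-chunk-demo && cd /tmp/psono-chunk-demo
head -c 1048576 /dev/urandom > file.bin     # 1 MiB demo file
split -b 131072 file.bin chunk_             # 128 KiB demo chunks (real chunks are much larger)
for c in chunk_??; do                       # encrypt every chunk individually
  openssl enc -aes-256-cbc -pbkdf2 -pass pass:demo -in "$c" -out "$c.enc"
done
ls chunk_??.enc | wc -l                     # → 8 encrypted chunks
```

The encrypted chunks (not the whole file) are what ends up on a shard, which is why a single file can be spread across and fetched from several fileservers.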

Clusters are collections of fileservers; they restrict the access rights and policies of their member fileservers for shards. With clusters it is possible, for example, to allow a fileserver (A) in cluster (A) to offer a shard (X) only for reads, while a fileserver (B) in cluster (B) may offer the same shard (X) for writes as well. A typical use case are scenarios with “exposed” (maybe even external) fileservers that allow public third parties only to upload something while preventing all downloads. Another use case: you have an office in Tokyo and another one in New York. People in New York upload to and download from a fileserver in New York, yet can also download files stored on the fileserver in Tokyo. To speed things up you could even implement a one-way sync that periodically copies all files from Tokyo to New York, giving users in New York faster access to the files.

Cluster and Shard configuration

Once you know how to structure your shards and clusters, you can begin with the configuration on the server.

  1. Enable Fileserver API

    Fileservers “announce” themselves to a server. That server does not necessarily need to be the same server(s) that clients are accessing and can be a separate instance. You have to adjust the configuration of those servers as shown below:

    FILESERVER_HANDLER_ENABLED: True
    
  2. Create clusters

    As a next step create the clusters that you want to have on the server with the following management command:

    With docker:

    docker run --rm \
      -v /path/to/modified/settings.yaml:/root/.psono_server/settings.yaml \
      -ti psono/psono-server:latest python3 ./psono/manage.py fsclustercreate "Default Cluster"
    

    or alternatively without docker:

    python3 ./psono/manage.py fsclustercreate "Default Cluster"
    

    This command returns the cluster id. Note it down, as you will need it later.

  3. Create shards

    As a next step create the shard (which will later be mapped to one storage backend) with the following management command:

    With docker:

    docker run --rm \
      -v /path/to/modified/settings.yaml:/root/.psono_server/settings.yaml \
      -ti psono/psono-server:latest python3 ./psono/manage.py fsshardcreate "Some Title" "Some description"
    

    or alternatively without docker:

    python3 ./psono/manage.py fsshardcreate "Some Title" "Some description"
    

    The title will later be visible to the user if multiple shards are available, so the user can pick one. Note down the shard id, as you will need it later.

  4. Link the shards and clusters

    Now link the shard and the cluster together, allowing fileservers of this cluster to announce this shard:

    With docker:

    docker run --rm \
      -v /path/to/modified/settings.yaml:/root/.psono_server/settings.yaml \
      -ti psono/psono-server:latest python3 ./psono/manage.py fsshardlink YOUR_CLUSTER_ID YOUR_SHARD_ID rw
    

    or alternatively without docker:

    python3 ./psono/manage.py fsshardlink YOUR_CLUSTER_ID YOUR_SHARD_ID rw
    
  5. Generate the configuration for a fileserver

    The following command will generate and show you the configuration of a fileserver:

    With docker:

    docker run --rm \
      -v /path/to/modified/settings.yaml:/root/.psono_server/settings.yaml \
      -ti psono/psono-server:latest python3 ./psono/manage.py fsclustershowconfig YOUR_CLUSTER_ID
    

    or alternatively without docker:

    python3 ./psono/manage.py fsclustershowconfig YOUR_CLUSTER_ID
    

    The shown configuration will list a couple of secrets and a “dummy” configuration for the shard, configuring it to be stored locally in e.g. /opt/psono-shard/…

    The line looks similar to this one:

    SHARDS: [{shard_id: a8a1176c-bad8-4b84-b45e-1084b3a48e7d, read: True, write: True, delete: True, engine: {class: 'local', kwargs: {location: '/opt/psono-shard/a8a1176c-bad8-4b84-b45e-1084b3a48e7d'}}}]
    

    You can adjust it to your needs and point it to a different storage provider or directory. For AWS S3 it could look like this:

    SHARDS: [{shard_id: a8a1176c-bad8-4b84-b45e-1084b3a48e7d, read: True, write: True, delete: True, engine: {class: 'amazon_s3', kwargs: {access_key: 'YOUR_ACCESS_KEY', secret_key: 'YOUR_SECRET_KEY', bucket_name: 'YOUR_BUCKET_NAME', }}}]
    

    or like this for GCS:

    SHARDS: [{shard_id: a8a1176c-bad8-4b84-b45e-1084b3a48e7d, read: True, write: True, delete: True, engine: {class: 'google_cloud', kwargs: {project_id: 'YOUR_PROJECT_ID', bucket_name: 'YOUR_BUCKET_NAME', credentials: '/path/to/credentials.json'}}}]
    

    or like this for Azure:

    SHARDS: [{shard_id: a8a1176c-bad8-4b84-b45e-1084b3a48e7d, read: True, write: True, delete: True, engine: {class: 'azure', kwargs: {account_name: 'YOUR_ACCOUNT_NAME', account_key: 'YOUR_ACCOUNT_KEY', azure_container: 'YOUR_AZURE_CONTAINER'}}}]
    
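If you stay with the local storage engine, a quick sanity check (a sketch; the path below is the example id from above, substitute your real `location`) is to verify that the configured location exists and is writable by the fileserver process:

```shell
# Sanity check for a local-storage shard: the 'location' directory must
# exist and be writable by the user running the fileserver.
LOC=/tmp/psono-shard-demo/a8a1176c-bad8-4b84-b45e-1084b3a48e7d  # use your real 'location'
mkdir -p "$LOC"
if [ -d "$LOC" ] && [ -w "$LOC" ]; then
  echo "shard location ok"
else
  echo "shard location NOT writable" >&2
fi
```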

Installation with Docker

  1. Create a settings.yaml in e.g. /opt/docker/psonofileserver/ with the following content

    # generate the following eight parameters as described above
    SECRET_KEY: 'SOME SUPER SECRET KEY THAT SHOULD BE RANDOM AND 32 OR MORE DIGITS LONG'
    PRIVATE_KEY: '302650c3c82f7111c2e8ceb660d32173cdc8c3d7717f1d4f982aad5234648fcb'
    PUBLIC_KEY: '02da2ad857321d701d754a7e60d0a147cdbc400ff4465e1f57bc2d9fbfeddf0b'
    SERVER_URL: 'https://example.com/server'
    SERVER_PUBLIC_KEY: '02da2ad857321d701d754a7e60d0a147cdbc400ff4465e1f57bc2d9fbfeddf0b'
    CLUSTER_ID: '1713b68b-df64-41c5-822c-6eb8c877037c'
    CLUSTER_PRIVATE_KEY: '686cd9753ee1cdfa2ba9c45f3292168607d5dca9bd92a6c8562f7234ea3cb7fa'
    SHARDS: [{shard_id: a8a1176c-bad8-4b84-b45e-1084b3a48e7d, read: True, write: True, delete: True, engine: {class: 'local', kwargs: {location: '/opt/psono-shard/a8a1176c-bad8-4b84-b45e-1084b3a48e7d'}}}]
    
    # Disable SSL verification for the SERVER_URL (don't use this in production)
    # SERVER_URL_VERIFY_SSL: False
    
    # Switch DEBUG to false if you go into production
    DEBUG: False
    
    # Adjust this according to Django Documentation https://docs.djangoproject.com/en/1.10/ref/settings/
    ALLOWED_HOSTS: ['*']
    
    # Should be the full path to your fileserver
    HOST_URL: 'https://fs01.example.com/fileserver'
    
  2. Create local storage location

    If you are using local storage, create a local folder that you can then mount (in the next command) to the location that you specified in your SHARDS config:

    mkdir /opt/psono-shard
    
  3. Test the configuration

    docker run --rm \
        -v /path/to/modified/settings.yaml:/root/.psono_fileserver/settings.yaml \
        -v /opt/psono-shard:/opt/psono-shard \
        -ti psono/psono-fileserver:latest python3 ./psono/manage.py testconfig
    
  4. Run the dockerized psono fileserver image and expose the server port

    docker run --name psono-fileserver \
        --sysctl net.core.somaxconn=65535 \
        -v /path/to/modified/settings.yaml:/root/.psono_fileserver/settings.yaml \
        -v /opt/psono-shard:/opt/psono-shard \
        -d --restart=unless-stopped -p 10200:80 psono/psono-fileserver:latest
    

    This will start the Psono fileserver on port 10200. If you now open http://your-ip:10200/info/ you should see something like this:

    {"info":"{\"version\": \"....}
    

    If not, please make sure you have no firewall on the server blocking you.

    The fileserver will announce itself now to the server and the server will propagate the information to the connected clients.

  5. Setup Reverse Proxy

    To run the Psono fileserver in production, a reverse proxy is needed to handle the SSL offloading. Follow the guide in the “Installation Fileserver Reverse Proxy” section below.

Installation for Ubuntu

This guide installs the Psono fileserver and runs it with gunicorn and nginx. It has been tested on Ubuntu 18.04.

  1. Become root

    sudo su
    
  2. Install some generic stuff

    apt-get update
    apt-get install -y \
            git \
            haveged \
            libyaml-dev \
            libpython3-dev \
            libpq-dev \
            libffi-dev \
            libssl-dev \
            python3-dev \
            python3-pip \
            python3-setuptools \
            supervisor
    pip3 install wheel gunicorn
    
  3. Create psono user

    adduser psono
    
  4. Become the psono user

    su psono
    
  5. Clone git repository

    git clone https://gitlab.com/psono/psono-fileserver.git ~/psono-fileserver
    
  6. Install python requirements

    Ctrl + D                            # become root again
    cd /home/psono/psono-fileserver
    pip3 install -r requirements.txt
    su psono                            # become psono again
    
  7. Create settings folder

    mkdir ~/.psono_fileserver
    
  8. Create a settings.yaml in ~/.psono_fileserver/ with the following content

    # generate the following eight parameters as described in the "Cluster and Shard configuration" section
    SECRET_KEY: 'SOME SUPER SECRET KEY THAT SHOULD BE RANDOM AND 32 OR MORE DIGITS LONG'
    PRIVATE_KEY: '302650c3c82f7111c2e8ceb660d32173cdc8c3d7717f1d4f982aad5234648fcb'
    PUBLIC_KEY: '02da2ad857321d701d754a7e60d0a147cdbc400ff4465e1f57bc2d9fbfeddf0b'
    SERVER_URL: 'https://example.com/server'
    SERVER_PUBLIC_KEY: '02da2ad857321d701d754a7e60d0a147cdbc400ff4465e1f57bc2d9fbfeddf0b'
    CLUSTER_ID: '1713b68b-df64-41c5-822c-6eb8c877037c'
    CLUSTER_PRIVATE_KEY: '686cd9753ee1cdfa2ba9c45f3292168607d5dca9bd92a6c8562f7234ea3cb7fa'
    SHARDS: [{shard_id: a8a1176c-bad8-4b84-b45e-1084b3a48e7d, read: True, write: True, delete: True, engine: {class: 'local', kwargs: {location: '/opt/psono-shard/a8a1176c-bad8-4b84-b45e-1084b3a48e7d'}}}]
    
    # Create this next key manually. e.g. with: tr -dc 'A-F0-9' < /dev/random | head -c64
    # you will need this key later in the cronjobs
    CRON_ACCESS_KEY: 'SOME SUPER SECRET KEY THAT SHOULD BE RANDOM AND 32 OR MORE DIGITS LONG'
    
    # Disable SSL verification for the SERVER_URL (don't use this in production)
    # SERVER_URL_VERIFY_SSL: False
    
    # Switch DEBUG to false if you go into production
    DEBUG: False
    
    # Adjust this according to Django Documentation https://docs.djangoproject.com/en/1.10/ref/settings/
    ALLOWED_HOSTS: ['*']
    
    # Should be the full path to your fileserver
    HOST_URL: 'https://fs01.example.com/fileserver'
    
  9. Create local storage location

    If you are using local storage, create the local folder that you specified in your SHARDS config:

    Ctrl + D                            # become root again
    mkdir /opt/psono-shard
    chown psono: /opt/psono-shard
    su psono                            # become psono again
    
  10. Test the configuration

    python3 ./psono/manage.py testconfig
    
  11. Run the psono fileserver

    cd ~/psono-fileserver/psono
    gunicorn --bind 0.0.0.0:10200 psono.wsgi
    

    This will start the Psono fileserver on port 10200. If you now open http://your-ip:10200/info/ you should see something like this:

    {"info":"{\"version\": \"....}
    

    If not, please make sure you have no firewall on the server blocking you.

  12. Become root again

    Ctrl + D
    
  13. Create supervisor config

    Create a psono-fileserver.conf in /etc/supervisor/conf.d/ with the following content:

    [program:psono-fileserver]
    command = /usr/local/bin/gunicorn --bind 127.0.0.1:10200 psono.wsgi
    directory=/home/psono/psono-fileserver/psono
    user = psono
    autostart=true
    autorestart=true
    redirect_stderr=true
    
  14. Reload supervisorctl

    supervisorctl reload
    

    Now you can control the Psono fileserver with supervisorctl commands, e.g.

    • supervisorctl status psono-fileserver
    • supervisorctl start psono-fileserver
    • supervisorctl stop psono-fileserver
  15. Setup cron job

    Execute the following command:

    crontab -e
    

    and add the following lines (replacing CRON_ACCESS_KEY with the CRON_ACCESS_KEY from the settings.yaml):

    *       *       *       *       *       ( sleep 5; touch /tmp/psono_fileserver_ping && curl -f --header "Authorization: Token CRON_ACCESS_KEY" http://localhost:10200/cron/ping/ && touch /tmp/psono_fileserver_ping_success )
    *       *       *       *       *       ( sleep 15; touch /tmp/psono_fileserver_ping && curl -f --header "Authorization: Token CRON_ACCESS_KEY" http://localhost:10200/cron/ping/ && touch /tmp/psono_fileserver_ping_success )
    *       *       *       *       *       ( sleep 25; touch /tmp/psono_fileserver_ping && curl -f --header "Authorization: Token CRON_ACCESS_KEY" http://localhost:10200/cron/ping/ && touch /tmp/psono_fileserver_ping_success )
    *       *       *       *       *       ( sleep 35; touch /tmp/psono_fileserver_ping && curl -f --header "Authorization: Token CRON_ACCESS_KEY" http://localhost:10200/cron/ping/ && touch /tmp/psono_fileserver_ping_success )
    *       *       *       *       *       ( sleep 45; touch /tmp/psono_fileserver_ping && curl -f --header "Authorization: Token CRON_ACCESS_KEY" http://localhost:10200/cron/ping/ && touch /tmp/psono_fileserver_ping_success )
    *       *       *       *       *       ( sleep 55; touch /tmp/psono_fileserver_ping && curl -f --header "Authorization: Token CRON_ACCESS_KEY" http://localhost:10200/cron/ping/ && touch /tmp/psono_fileserver_ping_success )
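Since every successful ping touches /tmp/psono_fileserver_ping_success, you can build a minimal liveness check on top of it. A sketch (the 120-second threshold is an assumption; `stat -c %Y` is the GNU coreutils way to read the mtime):

```shell
# Minimal liveness check: alert if the last successful cron ping is stale.
f=/tmp/psono_fileserver_ping_success
touch "$f"    # demo only: makes this sketch self-contained; remove in production
age=$(( $(date +%s) - $(stat -c %Y "$f") ))
if [ -f "$f" ] && [ "$age" -lt 120 ]; then
  echo "fileserver cron ok"
else
  echo "fileserver cron STALE" >&2
fi
```

Wire the stale branch into your monitoring of choice instead of printing to stderr.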
    
  16. Setup Reverse Proxy

    To run the Psono fileserver in production, a reverse proxy is needed to handle the SSL offloading. Follow the guide in the “Installation Fileserver Reverse Proxy” section below.

Installation Fileserver Reverse Proxy

To run the Psono file server in production, a reverse proxy is needed to handle the SSL offloading. This is usually a different one from the reverse proxy for the webclient / server, yet if you only have a small installation and want to host everything on one server, you can adjust the reverse proxy config accordingly and reuse the existing reverse proxy.

  1. Install Nginx

    sudo apt-get install nginx
    
  2. Create nginx config

    Create fs01.example.com.conf in /etc/nginx/sites-available with the following content:

    server {
        listen 80;
        server_name fs01.example.com;
        return 301 https://$host$request_uri;
    }
    server {
        listen 443 ssl http2;
        server_name fs01.example.com;
    
        ssl_protocols TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_session_cache shared:SSL:10m;
        ssl_session_tickets off;
        ssl_stapling on;
        ssl_stapling_verify on;
        ssl_session_timeout 1d;
        resolver 8.8.8.8 8.8.4.4 valid=300s;
        resolver_timeout 5s;
        ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
    
        # Comment this in if you know what you are doing
        # add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
    
        add_header Referrer-Policy same-origin;
        add_header X-Frame-Options DENY;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";
    
        ssl_certificate /etc/ssl/fullchain.pem;
        ssl_certificate_key /etc/ssl/privkey.pem;
    
        root /var/www/html;
    
        location /fileserver {
            rewrite ^/fileserver/(.*) /$1 break;
            proxy_set_header        Host $host;
            proxy_set_header        X-Real-IP $remote_addr;
            proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header        X-Forwarded-Proto $scheme;
    
            add_header Last-Modified $date_gmt;
            add_header Pragma "no-cache";
            add_header Cache-Control "private, max-age=0, no-cache, no-store";
            if_modified_since off;
            expires off;
            etag off;
    
            proxy_pass          http://localhost:10200;
    
            # The big traffic will be encrypted chunks, so using gzip here causes only server load
            gzip off;
    
            # To allow the 128 MB chunks
            client_max_body_size 256m;
        }
    }
    
  3. Enable nginx config

    ln -s /etc/nginx/sites-available/fs01.example.com.conf /etc/nginx/sites-enabled/
    
  4. Test nginx config

    sudo nginx -t
    
  5. Restart nginx

    sudo service nginx restart
    

    If you open https://fs01.example.com/fileserver/info/ in your browser, you should see the following:

    {"info":"{\"version\": \"....}
    

Note: Installation behind Firewall

If you have put your installation behind a firewall, you have to whitelist some ports / adjust some settings so that all features work:

  • Incoming TCP connection (usually 443) from clients to fileservers
  • Outgoing TCP connection (usually 443) from fileservers to servers
  • (optional) Outgoing TCP connections to the configured storage providers (if you use e.g. GCS or AWS instead of local storage)
Tags: installation