Installation instructions for the Psono fileserver


At this point we assume that you already have a Psono server running and ready to accept connections. Besides file repositories, file servers are the second way to store and share files. File servers cover various use cases and are the preferred way if one of the following is required:

  • You need a pure on-premises solution
  • You don't want users to upload anything (even if it's encrypted) to a cloud provider
  • You want to be prepared for advanced features (e.g. AV scanning before encryption)
  • You want to provide a "default" storage to all users without forcing users to configure anything

Before you actually start setting up your file server(s), you have to think about the dimensions and size of your installation, the required throughput and bandwidth, and your failover / availability constraints. Each company has its own requirements. The file server component was designed with these requirements in mind and is extremely powerful and flexible, supporting:

  • HA
  • Failover
  • Active / active replication
  • Read / write master and read only followers
  • IP based request routing / site affinity

The fileserver itself supports multiple storage backends allowing you to share storage across installations. Supported storage backends are:

  • Local storage
  • Amazon S3
  • Apache Libcloud
  • Azure Storage
  • Dropbox
  • FTP
  • Google Cloud Storage
  • SFTP

If you use local storage, you can use simple mechanisms like rsync or more advanced ones like GlusterFS to share storage.

Each file is split into chunks, encrypted and then stored on a so-called "shard". The information about which chunk is stored on which shard is kept in the Postgres database. If a user later wants to download a file, the client loops through all the chunks, checks which fileserver announces the corresponding shard (and allows this particular user to access it), downloads each chunk, decrypts it locally and merges it with the other decrypted chunks.
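This flow can be sketched in a few lines of Python. Everything here is illustrative only: the XOR "encryption" stands in for the real client-side crypto, in-memory dicts stand in for the shards and the Postgres mapping, and all function names are made up for the sketch.

```python
def xor_crypt(chunk: bytes, key: bytes) -> bytes:
    # stand-in for real client-side encryption (XOR is NOT secure; illustration only)
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(chunk))

def split_into_chunks(data: bytes, chunk_size: int) -> list:
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def upload(data, key, chunk_size, pick_shard, storage, index):
    """Split, encrypt, store every chunk on a shard and record the mapping."""
    for n, chunk in enumerate(split_into_chunks(data, chunk_size)):
        shard = pick_shard(n)
        storage[(shard, n)] = xor_crypt(chunk, key)  # what a fileserver would persist
        index[n] = shard  # the "which chunk is on which shard" record (kept in Postgres)

def download(key, storage, index):
    """Look up every chunk's shard, fetch it, decrypt locally and merge."""
    return b"".join(xor_crypt(storage[(index[n], n)], key) for n in sorted(index))

# round trip across two shards
storage, index = {}, {}
key = b"secret"
data = b"hello psono fileserver" * 10
upload(data, key, chunk_size=16,
       pick_shard=lambda n: "shard-a" if n % 2 == 0 else "shard-b",
       storage=storage, index=index)
assert download(key, storage, index) == data
```

The point of the sketch is the separation of concerns: fileservers only ever see opaque encrypted chunks, while the server's database holds the chunk-to-shard mapping.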

Clusters are collections of fileservers; they restrict the access rights and policies that their member fileservers have on shards. With them it is possible to restrict fileservers so that a fileserver (A) in cluster (A) is only allowed to offer a shard (X) for reads, while a fileserver (B) in cluster (B) is allowed to offer the same shard (X) for writes in addition. A typical use case are scenarios with "exposed" (maybe even external) fileservers that allow public third parties only to upload something while preventing all downloads. Another use case: you have an office in Tokyo and another one in New York. People in New York upload to and download from a fileserver in New York, yet are also able to download files stored on the fileserver in Tokyo. To speed things up you could even implement a one-way sync that periodically syncs all files from Tokyo to New York, giving users in New York faster access to those files.
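The permission model from the first example can be sketched as a simple lookup table. This is illustrative only, not Psono's actual data model; the granted operations mirror the `rw` flags of the `fsshardlink` management command used later in this guide.

```python
# (cluster, shard) -> operations granted, as if linked via `fsshardlink <cluster> <shard> <flags>`
links = {
    ("cluster-a", "shard-x"): {"read"},           # e.g. an exposed, read-only cluster
    ("cluster-b", "shard-x"): {"read", "write"},  # internal cluster with full access
}

def allowed(cluster: str, shard: str, operation: str) -> bool:
    # a fileserver may only offer an operation on a shard if its cluster is linked for it
    return operation in links.get((cluster, shard), set())

assert allowed("cluster-a", "shard-x", "read")
assert not allowed("cluster-a", "shard-x", "write")  # cluster A is read-only for shard X
assert allowed("cluster-b", "shard-x", "write")
```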

Cluster and Shard configuration

Once you know how to structure your shards and clusters, you can begin with the configuration on the server.

  1. Enable Fileserver API

    Fileservers "announce" themselves to a server. That server does not necessarily need to be the same server(s) that clients are accessing and can be a separate instance. You have to enable the fileserver API in the configuration of those servers.
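    In current Psono server versions the fileserver API is, to our knowledge, enabled with the following flag in the server's settings.yaml; verify the exact name against the documentation of your server version:

    ```yaml
    # settings.yaml of the Psono *server* that fileservers announce themselves to
    FILESERVER_HANDLER_ENABLED: True
    ```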

  2. Create clusters

    As a next step create the clusters that you want to have on the server with the following management command:

    python3 ./psono/manage.py fsclustercreate "Default Cluster"

    This command returns the cluster id. Note it down, as you will require it later.

  3. Create shards

    As a next step, create the shard (which will later be mapped to one storage backend) with the following management command:

    python3 ./psono/manage.py fsshardcreate "Some Title" "Some description"

    The title will later be visible to the user if multiple shards are available, so the user can pick one. Note down the shard id, as you will need it later.

  4. Link the shards and clusters

    Now link the shard and the clusters together, allowing fileservers of this cluster to announce the shard:

    python3 ./psono/manage.py fsshardlink YOUR_CLUSTER_ID YOUR_SHARD_ID rw
  5. Generate the configuration for a fileserver

    The following command will generate / show you the configuration of a fileserver:

    python3 ./psono/manage.py fsclustershowconfig YOUR_CLUSTER_ID

    The shown configuration will list a couple of secrets and a "dummy" configuration for the shard, configuring it to store data locally in e.g. /opt/psono-shard/…

    The line looks similar to this one:

    SHARDS: [{shard_id: a8a1176c-bad8-4b84-b45e-1084b3a48e7d, read: True, write: True, delete: True, engine: {class: 'local', kwargs: {location: '/opt/psono-shard/a8a1176c-bad8-4b84-b45e-1084b3a48e7d'}}}]

    You can adjust it to your needs, point it to a different storage provider or directory, so you could adjust it to look like this for AWS:

    SHARDS: [{shard_id: a8a1176c-bad8-4b84-b45e-1084b3a48e7d, read: True, write: True, delete: True, engine: {class: 'amazon_s3', kwargs: {access_key: 'YOUR_ACCESS_KEY', secret_key: 'YOUR_SECRET_KEY', bucket_name: 'YOUR_BUCKET_NAME'}}}]

    or like this for GCS:

    SHARDS: [{shard_id: a8a1176c-bad8-4b84-b45e-1084b3a48e7d, read: True, write: True, delete: True, engine: {class: 'google_cloud', kwargs: {project_id: 'YOUR_PROJECT_ID', bucket_name: 'YOUR_BUCKET_NAME', credentials: '/path/to/credentials.json'}}}]

    or like this for Azure:

    SHARDS: [{shard_id: a8a1176c-bad8-4b84-b45e-1084b3a48e7d, read: True, write: True, delete: True, engine: {class: 'azure', kwargs: {account_name: 'YOUR_ACCOUNT_NAME', account_key: 'YOUR_ACCOUNT_KEY', azure_container: 'YOUR_AZURE_CONTAINER'}}}]

Installation with Docker

  1. Create a settings.yaml in e.g. /opt/docker/psonofileserver/ with the following content:

    # generate the following parameters as described above
    PRIVATE_KEY: '302650c3c82f7111c2e8ceb660d32173cdc8c3d7717f1d4f982aad5234648fcb'
    PUBLIC_KEY: '02da2ad857321d701d754a7e60d0a147cdbc400ff4465e1f57bc2d9fbfeddf0b'
    SERVER_URL: ''
    SERVER_PUBLIC_KEY: '02da2ad857321d701d754a7e60d0a147cdbc400ff4465e1f57bc2d9fbfeddf0b'
    CLUSTER_ID: '1713b68b-df64-41c5-822c-6eb8c877037c'
    CLUSTER_PRIVATE_KEY: '686cd9753ee1cdfa2ba9c45f3292168607d5dca9bd92a6c8562f7234ea3cb7fa'
    SHARDS: [{shard_id: a8a1176c-bad8-4b84-b45e-1084b3a48e7d, read: True, write: True, delete: True, engine: {class: 'local', kwargs: {location: '/opt/psono-shard/a8a1176c-bad8-4b84-b45e-1084b3a48e7d'}}}]
    # You can disable SSL verification for the SERVER_URL (don't do this in production)
    # Keep DEBUG set to False in production
    DEBUG: False
    # Adjust this according to Django Documentation
    ALLOWED_HOSTS: ['*']
    # Should be the full path to your fileserver
    HOST_URL: ''
  2. Create local storage location

    If you are using local storage, create a local folder that you can then mount, in the next command, to the folder that you specified in your SHARDS config:

    mkdir /opt/psono-shard
  3. Run the dockerized Psono fileserver image and expose the server port

    docker run --name psono-fileserver \
        --sysctl net.core.somaxconn=65535 \
        -v /path/to/modified/settings.yaml:/root/.psono_fileserver/settings.yaml \
        -v /opt/psono-shard:/opt/psono-shard \
        -d --restart=unless-stopped -p 10200:80 psono/psono-fileserver:latest

    This will start the Psono fileserver on port 10200. If you now open http://your-ip:10200/info/, you should see something like this:

    {"info":"{\"version\": \"....}

    If not, please make sure you have no firewall on the server blocking you.

    The fileserver will now announce itself to the server, and the server will propagate this information to the connected clients.
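    As the sample output above suggests, the info field of the /info/ response is itself a JSON-encoded string, so it has to be decoded twice. A quick sanity check in Python (the payload here is a made-up sample, not a real server response):

    ```python
    import json

    # example body shaped like the /info/ output shown above (version value is made up)
    body = '{"info": "{\\"version\\": \\"1.0.0\\"}"}'

    outer = json.loads(body)           # first pass: the response envelope
    info = json.loads(outer["info"])   # second pass: the JSON string inside "info"
    print(info["version"])             # -> 1.0.0
    ```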

Installation of the Fileserver Reverse Proxy

To run the Psono fileserver in production, a reverse proxy is needed to handle the SSL offloading. This is usually a different one than the reverse proxy for the webclient / server, yet if you only have a small installation and want to host everything on one server, you can adjust the reverse proxy config accordingly and reuse the existing reverse proxy.

  1. Install Nginx

    sudo apt-get install nginx
  2. Create nginx config

    Create a config file in /etc/nginx/sites-available with the following content and enable it (e.g. by symlinking it into /etc/nginx/sites-enabled):

    server {
        listen 80;
        return 301 https://$host$request_uri;
    }
    server {
        listen 443 ssl http2;
        ssl_protocols TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_session_cache shared:SSL:10m;
        ssl_session_tickets off;
        ssl_stapling on;
        ssl_stapling_verify on;
        ssl_session_timeout 1d;
        resolver 8.8.8.8 8.8.4.4 valid=300s; # adjust to your DNS resolver
        resolver_timeout 5s;
        # Comment this in if you know what you are doing
        # add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
        add_header Referrer-Policy same-origin;
        add_header X-Frame-Options DENY;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";
        ssl_certificate /etc/ssl/fullchain.pem;
        ssl_certificate_key /etc/ssl/privkey.pem;
        root /var/www/html;
        location /fileserver {
            rewrite ^/fileserver/(.*) /$1 break;
            proxy_set_header        Host $host;
            proxy_set_header        X-Real-IP $remote_addr;
            proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header        X-Forwarded-Proto $scheme;
            add_header Last-Modified $date_gmt;
            add_header Pragma "no-cache";
            add_header Cache-Control "private, max-age=0, no-cache, no-store";
            if_modified_since off;
            expires off;
            etag off;
            proxy_pass          http://localhost:10200;
            # The big traffic will be encrypted chunks, so using gzip here only causes server load
            gzip off;
            # To allow the 128 MB chunks
            client_max_body_size 256m;
        }
    }
  3. Test nginx config

    sudo nginx -t
  4. Restart nginx

    sudo service nginx restart

    If you now open https://your-server/fileserver/info/ in your browser (replace your-server with your domain), you should see the following:

    {"info":"{\"version\": \"....}

Note: Installation behind Firewall

If you have put your installation behind a firewall, you have to whitelist some ports / adjust some settings so that all features work:

  • Incoming TCP connection (usually 443) from clients to fileservers
  • Outgoing TCP connection (usually 443) from fileservers to servers
  • (optional) Outgoing TCP connection to the configured storage providers (if you use GCS or AWS or whatever instead of local storage)
Tags: installation