Advanced Snippets

You've got a basic Logship deployment! Let's try out some more advanced configuration and functionality.

Provision First User

An ephemeral Logship instance can be tedious, requiring first-time setup on each start. To automate that away, enable setupService provisioning in your logship-database config:

{
  "backend": {
    "services": {
      [...],
      "setupService": {
        "enable": true,
+       "provision": true,
+       "accounts": [
+         {
+           "accountId": "00000000-0000-0000-0000-000000000000",
+           "accountName": "Default Account"
+         }
+       ],
+       "users": [
+         {
+           "userId": "00000000-0000-0000-0000-00000000cafe",
+           "username": "admin",
+           "password": "admin",
+           "firstname": "Logship",
+           "lastname": "Admin",
+           "email": "admin@logship.io",
+           "defaultGlobalPermissions": ["Logship.Global.Admin"],
+           "defaultAccounts": [
+             {
+               "accountName": "Default Account",
+               "userPermissions": ["Logship.Account.Admin"]
+             }
+           ]
+         }
+       ]
      }
    }
  }
}

On startup, you'll immediately be able to log in with your configured credentials.

Persistent Storage

By default, Logship stores its data under /logship/ inside the container, so mounting that path to your host is enough to persist it across restarts. Storage paths are also configurable, letting you choose what is persisted and where; see the configuration reference for details.

    logship-database:
        [...]
+        volumes:
+           - ./logship:/logship:rw
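As one sketch of wiring this up, pre-create the host directory (so the container does not create it owned by root) and keep the bind mount in a compose override file. The override filename is a Docker Compose convention; the service name comes from the snippet above:

```shell
# Pre-create the mount point so its ownership is controlled by the host user.
mkdir -p ./logship

# A compose override keeps the bind mount out of the base file.
cat > docker-compose.override.yml <<'EOF'
services:
  logship-database:
    volumes:
      - ./logship:/logship:rw
EOF
```

`docker compose up -d` merges the override automatically, so the base file stays untouched.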

Post Custom Data

With your Logship instance running, post a metric to your backend server:

JavaScript:

const backend = "http://localhost:5000";
const sub = "00000000-0000-0000-0000-000000000000";
await fetch(`${backend}/inflow/${sub}`, {
  method: "POST",
  body: JSON.stringify([
    {
      schema: "hello.world",
      timestamp: new Date(),
      data: {
        text: "Hello, World!",
        userAgent: navigator.userAgent,
        value: 1,
      },
    },
  ]),
});

PowerShell:

$backend = "http://localhost:5000"
$sub = "00000000-0000-0000-0000-000000000000"
$body = @{
    schema = 'hello.world'
    timestamp = (Get-Date).ToUniversalTime().ToString('yyyy-MM-ddTHH:mm:ss.fffZ')
    data = @{
        text = "Hello, World!"
        userAgent = $env:COMPUTERNAME
        value = 1
    }
} | ConvertTo-Json
Invoke-RestMethod -Uri "$backend/inflow/$sub" -Method Post -Body "[$body]" -ContentType 'application/json'

Bash:

backend="http://localhost:5000"
sub="00000000-0000-0000-0000-000000000000"
timestamp=$(date -u +"%Y-%m-%dT%H:%M:%S.%3NZ")   # GNU date; millisecond precision
userAgent=$(hostname)
value=1
json_data='[{
    "schema": "hello.world",
    "timestamp": "'$timestamp'",
    "data": {
        "text": "Hello, World!",
        "userAgent": "'$userAgent'",
        "value": '$value'
    }
}]'
curl -X POST "$backend/inflow/$sub" -d "$json_data" -H "Content-Type: application/json"
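Since the inflow endpoint accepts a JSON array (note the [ ] wrapping in each example above), several records can be batched into a single request. A Bash sketch building on the variables from the example; the record count and "batch" text are illustrative:

```shell
backend="http://localhost:5000"
sub="00000000-0000-0000-0000-000000000000"
timestamp=$(date -u +"%Y-%m-%dT%H:%M:%S.%3NZ")   # GNU date; millisecond precision

# Build a JSON array of three records for the same schema.
batch="["
for value in 1 2 3; do
  if [ "$value" -gt 1 ]; then batch="$batch,"; fi
  batch="$batch{\"schema\":\"hello.world\",\"timestamp\":\"$timestamp\",\"data\":{\"text\":\"batch\",\"value\":$value}}"
done
batch="$batch]"

# One request carries the whole batch; the fallback keeps the script
# going if the backend is not up yet.
curl -sS -X POST "$backend/inflow/$sub" \
     -H "Content-Type: application/json" -d "$batch" || echo "backend unreachable"
```

Batching keeps request overhead down when you ship metrics on a timer rather than per event.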

Test it out on the Query Page, where your new metric can now be queried:

hello.world
| where timestamp > ago(1h)
| project timestamp, text, value, userAgent
| limit 100

Automate with logsh

Use the CLI to script repeatable tasks:

logsh configure backend http://localhost:5000
logsh login --username Admin --password default
logsh query "hello.world | summarize c=count() by bin(timestamp, 1m)"
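To run this on a schedule, the same commands can be wrapped in a script for cron or CI. A sketch; the filename and report path are illustrative:

```shell
# Write the routine to a script that cron (or CI) can invoke.
cat > logship-report.sh <<'EOF'
#!/bin/sh
set -eu
logsh configure backend http://localhost:5000
logsh login --username Admin --password default
logsh query "hello.world | summarize c=count() by bin(timestamp, 1m)" > report.txt
EOF
chmod +x logship-report.sh
```

A crontab entry such as `0 * * * * /path/to/logship-report.sh` would then refresh report.txt hourly.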

Scale beyond one node

  • Split backend and database roles: run logship-database with database.enable=true and backend.enable=false, then a second instance with backend.enable=true pointing to the database endpoints.
  • Add more workers by replicating the backend service with unique worker endpoints; keep one master for coordination.
  • Persist storage on SSD/NVMe and ensure minimumFreeSpaceBytes thresholds are honored to avoid throttling.
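As a sketch, the first bullet translates into two configuration fragments, one per node. Key placement here is an assumption based on the option names above (`database.enable`, `backend.enable`); the setting that points the backend at the database endpoints is deployment-specific, so consult the configuration reference. On the database node:

```json
{
  "database": { "enable": true },
  "backend": { "enable": false }
}
```

and on the backend node:

```json
{
  "database": { "enable": false },
  "backend": { "enable": true }
}
```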