Recently, Stern, the utility I use for tailing logs in Kubernetes, was updated and got a new maintainer. You can read my original post about this utility here.

After updating to that version, I noticed a difference in how line endings are treated compared to the old version. If you specify a custom template like I do, there is no longer a line ending after each message, which messes up the whole output:

$ stern staging --template '{{.PodName}} | {{.Message}}'

To get the output correct again, you need to add the line ending yourself, which can be done as follows:

$ stern staging --template '{{.PodName}} | {{.Message}}{{"\n"}}'
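
The same fix applies to more elaborate templates. As a sketch, assuming the .Namespace and .ContainerName fields that Stern's built-in templates also expose, you can prefix each message with the namespace and container name as well:

$ stern staging --template '{{.Namespace}}/{{.PodName}}/{{.ContainerName}} | {{.Message}}{{"\n"}}'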

The commit which changed this behaviour is this one.

Another tip when using Stern: make proper use of labels to get the output you want. By default, the query specified on the command line targets a single pod, as you can tell from the command usage:

Tail multiple pods and containers from Kubernetes

Usage:
  stern pod-query [flags]

Flags:
  -A, --all-namespaces             If present, tail across all namespaces. A specific namespace is ignored even if specified with --namespace.
      --color string               Color output. Can be 'always', 'never', or 'auto' (default "auto")
      --completion string          Outputs stern command-line completion code for the specified shell. Can be 'bash' or 'zsh'
  -c, --container string           Container name when multiple containers in pod (default ".*")
      --container-state string     If present, tail containers with status in running, waiting or terminated. Default to running. (default "running")
      --context string             Kubernetes context to use. Default to current context configured in kubeconfig.
  -e, --exclude strings            Regex of log lines to exclude
  -E, --exclude-container string   Exclude a Container name
  -h, --help                       help for stern
  -i, --include strings            Regex of log lines to include
      --init-containers            Include or exclude init containers (default true)
      --kubeconfig string          Path to kubeconfig file to use
  -n, --namespace string           Kubernetes namespace to use. Default to namespace configured in Kubernetes context.
  -o, --output string              Specify predefined template. Currently support: [default, raw, json] (default "default")
  -l, --selector string            Selector (label query) to filter on. If present, default to ".*" for the pod-query.
  -s, --since duration             Return logs newer than a relative duration like 5s, 2m, or 3h. Defaults to 48h.
      --tail int                   The number of lines from the end of the logs to show. Defaults to -1, showing all logs. (default -1)
      --template string            Template to use for log lines, leave empty to use --output flag
  -t, --timestamps                 Print timestamps
  -v, --version                    Print the version and exit
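
In other words, the pod-query matches on the pod name. As an illustration (the generated pod name below is made up), tailing a single pod directly looks like this:

$ stern example-server-7d4b9c6f8-abcde --template '{{.PodName}} | {{.Message}}{{"\n"}}'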

This is annoying if you have multiple replicas of a pod, as each replica gets its own generated name. The easiest solution is to make sure these pods share a common label when you deploy them. Doing so enables you to use -l to query on the label instead.

Imagine you use the following deployment description:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-server
spec:
  replicas: 3
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      app: example-server
  template:
    metadata:
      labels:
        app: example-server
    spec:
      containers:
      - name: general-example-environ-server
        image: pieterclaerhout/example-environ-server:1.7
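
Before pointing Stern at the label, you can verify that the selector matches the pods you expect using kubectl with the same label from the manifest above:

$ kubectl get pods -l app=example-server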

As we have set the proper labels, we can now tail all three replicas of this pod using:

$ stern -l "app=example-server" --template '{{.PodName}} | {{.Message}}{{"\n"}}'
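
The label selector combines with the other flags from the usage above. For example, to restrict the output to a specific namespace and to log lines from the last 15 minutes, you could run something like:

$ stern -n staging -l "app=example-server" --since 15m --template '{{.PodName}} | {{.Message}}{{"\n"}}'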