<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>SysAdmin on Brad's Blog</title><link>https://blog.bjdean.id.au/tags/sysadmin/</link><description>Recent content in SysAdmin on Brad's Blog</description><generator>Hugo -- 0.152.2</generator><language>en-au</language><copyright>Bradley Dean</copyright><lastBuildDate>Thu, 27 Nov 2025 11:34:44 +1100</lastBuildDate><atom:link href="https://blog.bjdean.id.au/tags/sysadmin/index.xml" rel="self" type="application/rss+xml"/><item><title>Understanding Linux Load Averages and Multi-Core Systems</title><link>https://blog.bjdean.id.au/2025/11/understanding-linux-load-averages-and-multi-core-systems/</link><pubDate>Thu, 27 Nov 2025 11:34:44 +1100</pubDate><guid>https://blog.bjdean.id.au/2025/11/understanding-linux-load-averages-and-multi-core-systems/</guid><description>How to interpret load averages on Linux systems and understand what happens when you add more CPU cores</description></item><item><title>Django Custom Management Commands</title><link>https://blog.bjdean.id.au/2025/11/django-custom-management-commands/</link><pubDate>Wed, 12 Nov 2025 21:09:50 +1100</pubDate><guid>https://blog.bjdean.id.au/2025/11/django-custom-management-commands/</guid><description>&lt;h2 id="tldr"&gt;TL;DR&lt;/h2&gt;
&lt;p&gt;Django supports custom management commands that extend the core &lt;code&gt;manage.py&lt;/code&gt; interface, making it easy to build backend processes, automation scripts, and scheduled jobs that integrate with the application (with access to the Django application environment, data model and functions via the same structures used to build the website).&lt;/p&gt;
&lt;h2 id="where-to-create-andor-find-the-code"&gt;Where to create and/or find the code&lt;/h2&gt;
&lt;p&gt;Django discovers management commands through a specific directory layout in your apps:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;your_app/
    management/
        __init__.py
        commands/
            __init__.py
            send_notifications.py
            update_reports.py
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Every command file defines a &lt;code&gt;Command&lt;/code&gt; class that extends &lt;code&gt;BaseCommand&lt;/code&gt;:&lt;/p&gt;</description></item><item><title>Django Authentication and Permissions</title><link>https://blog.bjdean.id.au/2025/11/django-authentication-and-permissions/</link><pubDate>Wed, 05 Nov 2025 15:24:37 +1100</pubDate><guid>https://blog.bjdean.id.au/2025/11/django-authentication-and-permissions/</guid><description>&lt;h2 id="tldr"&gt;TL;DR&lt;/h2&gt;
&lt;p&gt;Django provides a complete authentication and authorization sub-system out of the box. Use &lt;code&gt;@login_required&lt;/code&gt; to restrict views to authenticated users and &lt;code&gt;@permission_required&lt;/code&gt; to enforce granular access control based on custom permissions defined in your models.&lt;/p&gt;
&lt;h2 id="interesting"&gt;Interesting!&lt;/h2&gt;
&lt;p&gt;Django automatically creates four default permissions for every model (add, change, delete, view) during migrations, but you can define custom permissions in your model&amp;rsquo;s &lt;code&gt;Meta&lt;/code&gt; class to implement fine-grained authorization for any business logic you need.&lt;/p&gt;</description></item><item><title>Adding a Site to AWStats With Historical Logs</title><link>https://blog.bjdean.id.au/2023/11/adding-a-site-to-awstats-with-historical-logs/</link><pubDate>Mon, 13 Nov 2023 09:57:46 +1100</pubDate><guid>https://blog.bjdean.id.au/2023/11/adding-a-site-to-awstats-with-historical-logs/</guid><description>A quick walkthrough of adding a new site to AWStats and importing archived access logs without disrupting existing sites</description></item><item><title>Migrating git to svn (subversion)</title><link>https://blog.bjdean.id.au/2023/06/migrating-git-to-svn-subversion/</link><pubDate>Thu, 22 Jun 2023 15:36:50 +0000</pubDate><guid>https://blog.bjdean.id.au/2023/06/migrating-git-to-svn-subversion/</guid><description>&lt;p&gt;I&amp;rsquo;ve found that most documentation / forum discussion around the web for this topic tends to be about migrating &lt;a href="https://subversion.apache.org/"&gt;svn&lt;/a&gt; to &lt;a href="https://git-scm.com/"&gt;git&lt;/a&gt; - so here&amp;rsquo;s a quick shell script (reconfigure the variables at the start) to help migrate &lt;strong&gt;from&lt;/strong&gt; &lt;a href="https://git-scm.com/"&gt;git&lt;/a&gt; &lt;strong&gt;to&lt;/strong&gt; &lt;a href="https://subversion.apache.org/"&gt;subversion&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This script is also available here: &lt;a href="https://bjdean.id.au/public-files/migrate-git-to-svn.sh.txt"&gt;migrate-git-to-svn.sh.txt&lt;/a&gt;&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;#!/bin/bash
# Run this script from an empty working directory - it will:
# 1. git clone from the ORIGIN_GIT URL
# 2. run through a number of stages to construct a local svn repository
# 3. check out the svn repo for you to check
#
# NO error checking is done - you need to look at the output and
# look for any issues. This script DOES delete its working directories
# on each run (so make sure to start in an empty directory to be safe!)
# Configuration
PROJECT_NAME=&amp;#34;MyProjectName&amp;#34; # edit: your project name
ORIGIN_GIT=&amp;#34;git@github.com:UserName/MyProjectName.git&amp;#34; # edit: your git origin URL
# Keep track of starting directory to make working in sub-directories easier
BASEDIR=$PWD
# Clone (bare) the main git repo into a local bare repo
# Bare because later stages want to talk only to a bare repo
echo &amp;#34;### Cloning git origin into local bare repo: ${PROJECT_NAME}.git&amp;#34;
if [ -d &amp;#34;${PROJECT_NAME}.git&amp;#34; ] ; then rm -rf &amp;#34;${PROJECT_NAME}.git&amp;#34; ; fi
git clone --bare &amp;#34;${ORIGIN_GIT}&amp;#34;
# Protect the real origin by removing it as a remote in the bare clone
echo &amp;#34;### Protect real origin by removing it as a remote in our clone&amp;#34;
( cd &amp;#34;${PROJECT_NAME}.git&amp;#34;; \
git remote remove origin; \
)
# Create an empty svn repository to migrate into
echo &amp;#34;### Create and initialise the target svn repository for the migration: ${PROJECT_NAME}.svnrepo&amp;#34;
if [ -d &amp;#34;${PROJECT_NAME}.svnrepo&amp;#34; ] ; then rm -rf &amp;#34;${PROJECT_NAME}.svnrepo&amp;#34; ; fi
mkdir &amp;#34;${PROJECT_NAME}.svnrepo&amp;#34;
svnadmin create &amp;#34;${PROJECT_NAME}.svnrepo&amp;#34;
svn mkdir --parents &amp;#34;file://${BASEDIR}/${PROJECT_NAME}.svnrepo/${PROJECT_NAME}/&amp;#34;{trunk,branches,tags} -m &amp;#39;Initialise empty svn repo&amp;#39;
# git svn (NOTE svn mode - needs the git-svn package installed on debian)
# Clone the new local svn repository into a git repo
# The --stdlayout option tells &amp;#34;git svn&amp;#34; that we are using the &amp;#34;standard&amp;#34; {trunk,branches,tags} directories
echo &amp;#34;### git-svn clone the target svn repo as a git directory (used to import from git and then export to svn): ${PROJECT_NAME}-git2svn&amp;#34;
if [ -d &amp;#34;${PROJECT_NAME}-git2svn&amp;#34; ] ; then rm -rf &amp;#34;${PROJECT_NAME}-git2svn&amp;#34; ; fi
git svn clone &amp;#34;file://${BASEDIR}/${PROJECT_NAME}.svnrepo/${PROJECT_NAME}&amp;#34; --stdlayout &amp;#34;${PROJECT_NAME}-git2svn&amp;#34;
# Set up the bare git clone as the origin for the &amp;#34;${PROJECT_NAME}-git2svn&amp;#34; clone
echo &amp;#34;### Add our git clone as the remote origin for ${PROJECT_NAME}-git2svn&amp;#34;
( cd &amp;#34;${PROJECT_NAME}-git2svn&amp;#34;; \
git remote add origin &amp;#34;file://${BASEDIR}/${PROJECT_NAME}.git&amp;#34;; \
)
# Import changes into an import branch in the &amp;#34;${PROJECT_NAME}-git2svn&amp;#34; clone and then export to svn
# Note:
# 1. git fetch first to get branch details
# 2. Then branch to an import branch tracking the remote origin/main
# 3. Rebase that onto master (rebase --root rebases all reachable commits, including the root commit)
# This builds the information needed to sync to svn via dcommit.
# 4. Then use svn dcommit - include author information (to help track who made changes)
echo &amp;#34;### Import full commit history into ${PROJECT_NAME}-git2svn and then send to subversion repo&amp;#34;
( cd &amp;#34;${PROJECT_NAME}-git2svn&amp;#34;; \
git fetch origin; \
git checkout -b import origin/main; \
git rebase --onto master --root; \
git svn dcommit --add-author-from ; \
)
# Checkout a svn working dir to check the export
echo &amp;#34;### Checking out a working svn directory to check the results: svn-check&amp;#34;
if [ -d svn-check ] ; then rm -rf svn-check ; fi
svn co &amp;#34;file://${BASEDIR}/${PROJECT_NAME}.svnrepo/${PROJECT_NAME}&amp;#34; svn-check
echo &amp;#34;Check the contents/log in svn-check/&amp;#34;
&lt;/code&gt;&lt;/pre&gt;</description></item><item><title>qemu - simplest command-line for a performant VM</title><link>https://blog.bjdean.id.au/2023/06/qemu-simplest-command-line-for-a-performant-vm/</link><pubDate>Wed, 14 Jun 2023 15:19:59 +0000</pubDate><guid>https://blog.bjdean.id.au/2023/06/qemu-simplest-command-line-for-a-performant-vm/</guid><description>&lt;h2 id="tldr"&gt;TL;DR&lt;/h2&gt;
&lt;p&gt;You need to tell qemu to: use the local host CPU (rather than emulating one) and enable hypervisor mode, include a couple of CPU cores for performance, give it a hard disk and some memory and finally (after setting up a bridge network device on your host system) use a bridged network with a MAC address unique to your network. The final bit about networking requires root access - hence the sudo:&lt;/p&gt;</description></item><item><title>Increasing / decreasing number of xargs parallel processes (at run time!)</title><link>https://blog.bjdean.id.au/2022/03/increasing-decreasing-number-of-xargs-parallel-processes-at-run-time/</link><pubDate>Fri, 18 Mar 2022 12:38:59 +0000</pubDate><guid>https://blog.bjdean.id.au/2022/03/increasing-decreasing-number-of-xargs-parallel-processes-at-run-time/</guid><description>&lt;p&gt;&lt;a href="https://manpages.debian.org/findutils/xargs.1.en.html"&gt;xargs&lt;/a&gt; makes it very easy to quickly run a set of similar processes in parallel - but did you know when you&amp;rsquo;re half-way through a long list of tasks it&amp;rsquo;s possible to change the number of parallel processes that are being used?
It&amp;rsquo;s there in the &lt;a href="https://manpages.debian.org/findutils/xargs.1.en.html#P"&gt;man page under &amp;ldquo;-P max-procs, --max-procs=max-procs&amp;rdquo;&lt;/a&gt; but it&amp;rsquo;s an easy feature to miss if you don&amp;rsquo;t read all the way through:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;-P max-procs, --max-procs=max-procs
Run up to max-procs processes at a time; the default is 1. If max-procs is 0, xargs will run as many processes as possible at a time. Use the -n option or the -L option with -P; otherwise chances are that only one exec will be done. While xargs is running, you can send its process a SIGUSR1 signal to increase the number of commands to run simultaneously, or a SIGUSR2 to decrease the number. You cannot increase it above an implementation-defined limit (which is shown with --show-limits). You cannot decrease it below 1. xargs never terminates its commands; when asked to decrease, it merely waits for more than one existing command to terminate before starting another.
Please note that it is up to the called processes to properly manage parallel access to shared resources. For example, if more than one of them tries to print to stdout, the output will be produced in an indeterminate order (and very likely mixed up) unless the processes collaborate in some way to prevent this. Using some kind of locking scheme is one way to prevent such problems. In general, using a locking scheme will help ensure correct output but reduce performance. If you don&amp;#39;t want to tolerate the performance difference, simply arrange for each process to produce a separate output file (or otherwise use separate resources).
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;What does that look like? Spin up some slow processes and start with 3-way parallel execution:&lt;/p&gt;</description></item><item><title>stdbuf - Run COMMAND, with modified buffering operations for its standard streams</title><link>https://blog.bjdean.id.au/2020/11/stdbuf-run-command-with-modified-buffering-operations-for-its-standard-streams/</link><pubDate>Wed, 25 Nov 2020 12:18:37 +0000</pubDate><guid>https://blog.bjdean.id.au/2020/11/stdbuf-run-command-with-modified-buffering-operations-for-its-standard-streams/</guid><description>&lt;p&gt;While piping together commands that only output intermittently we run into the pipe buffers created by the &lt;a href="https://manpage.me/index.cgi?q=pipe&amp;amp;sektion=2&amp;amp;apropos=0&amp;amp;manpath=Debian+8.1.0"&gt;pipe() system call&lt;/a&gt; (also see &lt;a href="https://manpage.me/index.cgi?apropos=0&amp;amp;q=pipe&amp;amp;sektion=7&amp;amp;manpath=Debian+8.1.0&amp;amp;arch=default&amp;amp;format=html"&gt;overview of pipes and FIFOs&lt;/a&gt;). This can particularly come into play when stringing together multiple pipes in a row (as there are multiple buffers to pass through).
For example, in the command below, while &amp;ldquo;tail -f&amp;rdquo; flushes on activity and awk flushes on output, the grep in the middle ends up with a buffered pipe, so a quiet access.log will result in long delays before updates are shown:&lt;/p&gt;</description></item><item><title>Disk Usage</title><link>https://blog.bjdean.id.au/2020/10/disk-usage/</link><pubDate>Tue, 13 Oct 2020 15:00:05 +0000</pubDate><guid>https://blog.bjdean.id.au/2020/10/disk-usage/</guid><description>&lt;p&gt;To review disk usage recursively, a few different options exist (for when scanning manually with df and du is not enough).&lt;/p&gt;
&lt;p&gt;I have found &lt;a href="http://dev.yorhel.nl/ncdu"&gt;ncdu&lt;/a&gt; to be fast and very easy to use.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve also used &lt;a href="http://packages.debian.org/unstable/utils/durep"&gt;durep&lt;/a&gt; from time to time.&lt;/p&gt;
&lt;p&gt;For a desktop system (or a server with a X server handy) a few options exist. Some support remote scanning, though this can be slow and problematic as a network connection is required for the duration of the scan:&lt;/p&gt;</description></item><item><title>md (software RAID) and lvm (logical volume management)</title><link>https://blog.bjdean.id.au/2020/10/md-software-raid-and-lvm-logical-volume-management/</link><pubDate>Tue, 13 Oct 2020 14:56:52 +0000</pubDate><guid>https://blog.bjdean.id.au/2020/10/md-software-raid-and-lvm-logical-volume-management/</guid><description>&lt;h2 id="md"&gt;md&lt;/h2&gt;
&lt;p&gt;Building a RAID array using mdadm - two primary steps:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&amp;ldquo;mdadm --create&amp;rdquo; to build the array using available resources&lt;/li&gt;
&lt;li&gt;&amp;ldquo;mdadm --detail --scan&amp;rdquo; to build the config string for /etc/mdadm/mdadm.conf&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Simple examples:&lt;/p&gt;
&lt;h3 id="raid6-array"&gt;RAID6 array&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Set up partitions to be used (in this case the whole disk):&lt;/li&gt;
&lt;/ul&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;# for x in /dev/sd{b,c,d,e,f}1 ; do fdisk $x ; done
&lt;/code&gt;&lt;/pre&gt;&lt;ul&gt;
&lt;li&gt;Create the array (in this case, with one hot-spare):&lt;/li&gt;
&lt;/ul&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;# mdadm --create /dev/md0 --level=6 --raid-devices=4 --spare-devices=1 /dev/sd{b,c,d,e,f}1
&lt;/code&gt;&lt;/pre&gt;&lt;ul&gt;
&lt;li&gt;Configure the array for reboot (append to the end of /etc/mdadm/mdadm.conf):&lt;/li&gt;
&lt;/ul&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;# mdadm --detail --scan
ARRAY /dev/md/0 metadata=1.2 spares=1 name=debian6-vm:0 UUID=9b42abcd:309fabcd:6bfbabcd:298dabcd
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;One consideration when setting up the partitions is that any replacement disk will need to support a partition of the same size. Unconfirmed, but it sounds like a reasonable concern: &amp;ldquo;Enter a value smaller than the free space value minus 2% or the disk size to make sure that when you will later install a new disk in replacement of a failed one, you will have at least the same capacity even if the number of cylinders is different.&amp;rdquo; (&lt;a href="http://www.jerryweb.org/settings/raid/"&gt;http://www.jerryweb.org/settings/raid/&lt;/a&gt;)&lt;/p&gt;</description></item><item><title>Supporting old Debian distros</title><link>https://blog.bjdean.id.au/2020/10/supporting-old-debian-distros/</link><pubDate>Tue, 13 Oct 2020 14:53:26 +0000</pubDate><guid>https://blog.bjdean.id.au/2020/10/supporting-old-debian-distros/</guid><description>&lt;p&gt;For old servers that need to stay that way (for whatever reason) updates are no longer available, but you can still access the packages that were published for that distro by pointing apt at the archive - for example, for lenny:&lt;/p&gt;
&lt;p&gt;Update &lt;code&gt;/etc/apt/sources.list&lt;/code&gt; to use &lt;code&gt;archive.debian.org&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;deb http://archive.debian.org/debian/ lenny main contrib non-free
deb-src http://archive.debian.org/debian/ lenny main contrib non-free
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;And for ubuntu:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;deb http://old-releases.ubuntu.com/ubuntu/ natty main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ natty-updates main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ natty-security main restricted universe multiverse
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Also see:&lt;/p&gt;</description></item><item><title>A tale of two burst balances (AWS EC2 and EBS performance)</title><link>https://blog.bjdean.id.au/2020/09/a-tale-of-two-burst-balances-aws-ec2-and-ebs-performance/</link><pubDate>Fri, 04 Sep 2020 18:17:46 +0000</pubDate><guid>https://blog.bjdean.id.au/2020/09/a-tale-of-two-burst-balances-aws-ec2-and-ebs-performance/</guid><description>&lt;p&gt;When using and monitoring &lt;a href="https://aws.amazon.com/"&gt;AWS&lt;/a&gt; for &lt;a href="https://aws.amazon.com/ec2/"&gt;EC2&lt;/a&gt; instances and their attached &lt;a href="https://aws.amazon.com/ebs/"&gt;EBS&lt;/a&gt; volumes there are a couple of very important metrics to keep an eye on which can have enormous performance and availability implications.
In particular I&amp;rsquo;m writing about &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstable-performance-instances-standard-mode-concepts.html"&gt;General Purpose EC2 instances running in standard mode&lt;/a&gt; (eg. T2, T3 and T3a at the time of writing) and &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html"&gt;General Purpose EBS&lt;/a&gt; (gp2) because these are very common low-mid spec instances and the default disk storage type. If you&amp;rsquo;re already using one of the other EC2 or EBS types, chances are you&amp;rsquo;re already aware of some of the issues I&amp;rsquo;ll discuss below - those other products are designed to manage CPU and disk resources in a different, load-targeted way.
These metrics are important because they report on the ways in which these AWS resources (&lt;em&gt;otherwise&lt;/em&gt; designed to mimic real hardware) behave very differently from real hardware.
Note that while CPU credit reports are shown in the AWS web console for an EC2 instance under its Monitoring tab (and so people tend to see it), the EBS credit reports are not. To see these you need to find the EBS volume(s) attached to an EC2 instance (this is linked from the EC2 Description tab) and then look at the Monitoring tab for each EBS volume.&lt;/p&gt;</description></item><item><title>Adding tasks to a background screen</title><link>https://blog.bjdean.id.au/2020/09/adding-tasks-to-a-background-screen/</link><pubDate>Thu, 03 Sep 2020 20:13:41 +0000</pubDate><guid>https://blog.bjdean.id.au/2020/09/adding-tasks-to-a-background-screen/</guid><description>&lt;p&gt;A bunch of processes have failed - and you&amp;rsquo;d like to restart them in a &lt;a href="https://savannah.gnu.org/projects/screen"&gt;screen&lt;/a&gt; session in case you need to rerun them in interactive shells (for instance to answer prompts from the processes) - after lots of &lt;em&gt;Ctrl-A-C &amp;hellip; start command &amp;hellip; Ctrl-A-S &amp;hellip; name the window &amp;hellip; and repeat&lt;/em&gt;, there has to be an easier way!&lt;/p&gt;
&lt;h2 id="step-1-create-a-background-screen-session-to-hold-the-runs"&gt;Step 1: Create a background screen session to hold the runs&lt;/h2&gt;
&lt;p&gt;This will open a new screen session named &amp;ldquo;ScreenSessionName&amp;rdquo; into the background (so you don&amp;rsquo;t need to &lt;em&gt;Ctrl-A-d&lt;/em&gt;):&lt;/p&gt;</description></item><item><title>single quote characters in a single-quoted string in shells</title><link>https://blog.bjdean.id.au/2020/02/single-quote-characters-in-a-single-quotes-string-in-shell/</link><pubDate>Mon, 10 Feb 2020 10:03:40 +0000</pubDate><guid>https://blog.bjdean.id.au/2020/02/single-quote-characters-in-a-single-quotes-string-in-shell/</guid><description>&lt;p&gt;A very quick and simple comment on building single-quoted strings in shell scripts which include single quotes.
Note that it&amp;rsquo;s &lt;strong&gt;not possible&lt;/strong&gt; to include a single quote in a single-quoted string - for example the &lt;a href="https://www.gnu.org/software/bash/"&gt;bash&lt;/a&gt; man page:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Enclosing characters in single quotes preserves the literal value of each character within the quotes. A single quote may not occur between single quotes, even when preceded by a backslash.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;And &lt;a href="http://gondor.apana.org.au/~herbert/dash/"&gt;dash&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Enclosing characters in single quotes preserves the literal meaning of all the characters (except single quotes, making it impossible to put single-quotes in a single-quoted string).&lt;/p&gt;</description></item><item><title>LINES and COLUMNS environment magic</title><link>https://blog.bjdean.id.au/2019/12/lines-and-columns-environment-magic/</link><pubDate>Tue, 03 Dec 2019 22:17:07 +0000</pubDate><guid>https://blog.bjdean.id.au/2019/12/lines-and-columns-environment-magic/</guid><description>&lt;p&gt;Ever wondered why you can read the &lt;strong&gt;$LINES&lt;/strong&gt; and &lt;strong&gt;$COLUMNS&lt;/strong&gt; environment variables from your text shell and have them seemingly aware (or indeed, actually aware) of the size of the graphical terminal in which that shell is running?
Enter &lt;strong&gt;SIGWINCH&lt;/strong&gt; - a &lt;a href="https://en.wikipedia.org/wiki/Signal_(IPC)"&gt;signal&lt;/a&gt; sent to processes when a window size changes. This signal causes a process to retrieve its current window size. For example, on Linux this is done through an &lt;a href="http://man7.org/linux/man-pages/man4/tty_ioctl.4.html"&gt;ioctl&lt;/a&gt; call in termios.h:&lt;/p&gt;</description></item><item><title>XTerm*VT100*selectToClipboard: true</title><link>https://blog.bjdean.id.au/2019/04/xtermvt100selecttoclipboard-true/</link><pubDate>Fri, 12 Apr 2019 23:08:45 +0000</pubDate><guid>https://blog.bjdean.id.au/2019/04/xtermvt100selecttoclipboard-true/</guid><description>&lt;h2 id="the-problem"&gt;The Problem&lt;/h2&gt;
&lt;p&gt;This is about copying text between applications which use the &lt;strong&gt;X11 PRIMARY&lt;/strong&gt; selection (eg. the quick-copy of selected text and pasting by clicking the middle mouse button) and those which use the &lt;strong&gt;CLIPBOARD&lt;/strong&gt; (eg. usually GUI applications using Ctrl-C to copy and Ctrl-V to paste). The &lt;strong&gt;CLIPBOARD&lt;/strong&gt; is also the buffer used (for example) when web browser javascript automatically copies selected text (frequently a hassle: it is added to make difficult-to-select text easy to copy, but the text is not then available to middle-click paste into a terminal).
For more detailed information see the &lt;a href="https://www.x.org/releases/X11R7.6/doc/xorg-docs/specs/ICCCM/icccm.html#use_of_selection_atoms"&gt;X11 documentation&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Useful Commands</title><link>https://blog.bjdean.id.au/2019/02/useful-commands/</link><pubDate>Tue, 05 Feb 2019 23:30:40 +0000</pubDate><guid>https://blog.bjdean.id.au/2019/02/useful-commands/</guid><description>&lt;p&gt;A list of commands / references I&amp;rsquo;ve found useful. Also see &lt;a href="http://bjdean.id.au/wiki/System_Admin/nix#head-9e4315f50d02ddf56ac443516721ec7bd70c9838"&gt;my old wiki page&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="stdbuf---run-command-with-modified-buffering-operations-for-its-standard-streams"&gt;stdbuf - Run COMMAND, with modified buffering operations for its standard streams&lt;/h2&gt;
&lt;p&gt;See &lt;a href="https://blog.bjdean.id.au/2020/11/stdbuf-run-command-with-modified-buffering-operations-for-its-standard-streams/"&gt;stdbuf - Run COMMAND, with modified buffering operations for its standard streams&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="tracing-the-dns-glue-record-for-a-domain"&gt;Tracing the DNS glue record for a domain&lt;/h2&gt;
&lt;p&gt;To find the glue records (if any) for a domain use (for example):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;dig +trace +additional positive-internet.com NS
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This will give a full trace on how the NS records for the domain were found, and if they end up using a &lt;a href="https://en.wikipedia.org/w/index.php?title=Glue_record"&gt;glue record&lt;/a&gt; it will be visible (only if +additional is given in the command) - for example in the lookup above we start with the global servers, then find the servers for .com. and then the next response contains the information from the .com. servers as to where to find positive-internet.com. data and this includes glue records:&lt;/p&gt;</description></item><item><title>bandwidth measurement using iperf</title><link>https://blog.bjdean.id.au/2018/09/bandwidth-measurement-using-iperf/</link><pubDate>Wed, 05 Sep 2018 04:45:08 +0000</pubDate><guid>https://blog.bjdean.id.au/2018/09/bandwidth-measurement-using-iperf/</guid><description>&lt;p&gt;I wrote about using &lt;a href="http://blog.bjdean.id.au/2017/03/bandwidth-measurement-using-netcat/"&gt;netcat to measure bandwidth between servers&lt;/a&gt; which works perfectly well in a minimal sort of way (and in particular between servers where the relatively common netcat is installed). For a slightly more user-friendly approach consider &lt;a href="https://github.com/esnet/iperf"&gt;iperf.&lt;/a&gt;
Once installed on both servers (let&amp;rsquo;s call them serverA and serverB):&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;start &lt;strong&gt;iperf&lt;/strong&gt; to listen on one server (add a port particularly if there are firewall restrictions in place which need to be adjusted/worked-with):&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;serverA$ iperf -s -p 12345
&lt;/code&gt;&lt;/pre&gt;&lt;ol start="2"&gt;
&lt;li&gt;start &lt;strong&gt;iperf&lt;/strong&gt; to send data from the other server:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;serverB$ iperf -c serverA -p 12345
&lt;/code&gt;&lt;/pre&gt;&lt;ol start="3"&gt;
&lt;li&gt;&lt;strong&gt;iperf&lt;/strong&gt; displays results / status on both servers:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;serverA$ iperf -s -p 12345
------------------------------------------------------------
Server listening on TCP port 12345
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 10.0.0.1 port 12345 connected with 10.0.0.2 port 48728
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 989 MBytes 829 Mbits/sec
serverB$ iperf -c serverA -p 12345
------------------------------------------------------------
Client connecting to ServerA, TCP port 12345
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.0.0.2 port 48728 connected with 10.0.0.1 port 12345
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 989 MBytes 830 Mbits/sec
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Run this in both directions a few times to get a good feeling for the bandwidth between the servers. There are other options (eg. parallel dual-direction testing) to consider, so the man page is worth a read.&lt;/p&gt;</description></item><item><title>Joining log lines with sed</title><link>https://blog.bjdean.id.au/2017/09/joining-log-lines-with-sed/</link><pubDate>Tue, 12 Sep 2017 23:54:59 +0000</pubDate><guid>https://blog.bjdean.id.au/2017/09/joining-log-lines-with-sed/</guid><description>&lt;p&gt;It&amp;rsquo;s often the case when analysing system logs that you want to create a summary bringing together data from different lines of a log - for example capturing an initial and final state of a transaction, or (as in this example) capturing a date-time stamp and some data from a multi-line log entry.
In this example I have a log file containing periodic extracts of &amp;lsquo;&lt;em&gt;mysqladmin extended-status&lt;/em&gt;&amp;rsquo; with a date-time line to record when the status was taken - for example (removing most of the lines with &amp;ldquo;&amp;hellip;&amp;rdquo; for brevity):&lt;/p&gt;</description></item><item><title>Conference, ChromeBook, A VM and Me</title><link>https://blog.bjdean.id.au/2017/08/chromebook-conference-a-vm-and-me/</link><pubDate>Mon, 07 Aug 2017 00:43:37 +0000</pubDate><guid>https://blog.bjdean.id.au/2017/08/chromebook-conference-a-vm-and-me/</guid><description>&lt;p&gt;I&amp;rsquo;m at a conference, I have my ChromeBook (vanilla, no local linux installs or any such thing), I have internet and I&amp;rsquo;ve set up a VM out there in the cloud.
I&amp;rsquo;m regularly closing my laptop and wandering between talks, so it would be handy to be able to resume my ssh session without having to restart and without having to type &amp;ldquo;screen -r&amp;rdquo; every time.
My ChromeBook ssh client doesn&amp;rsquo;t let me set up a &amp;ldquo;ssh here and run this command&amp;rdquo; so instead, given that I&amp;rsquo;m only logging into my VM with a single session at a time, I can add this to the end of my ~/.bashrc file:&lt;/p&gt;</description></item><item><title>Which ssh publickey was used to access an account</title><link>https://blog.bjdean.id.au/2017/07/which-ssh-publickey-was-used-to-access-an-account/</link><pubDate>Mon, 10 Jul 2017 01:23:25 +0000</pubDate><guid>https://blog.bjdean.id.au/2017/07/which-ssh-publickey-was-used-to-access-an-account/</guid><description>&lt;p&gt;When you have more than one public key set up to be able to access a single account (ie more than one public key listed in the authorized_keys file), you may want to check which public key was used to make a login. Since &lt;a href="http://www.openssh.com/txt/release-6.3"&gt;openssh 6.3&lt;/a&gt; (released 2013) the public key fingerprint is logged - for example the below shows a set of made up &amp;ldquo;Accepted publickey&amp;rdquo; entries from an ssh auth.log:&lt;/p&gt;</description></item><item><title>bandwidth measurement using netcat</title><link>https://blog.bjdean.id.au/2017/03/bandwidth-measurement-using-netcat/</link><pubDate>Thu, 02 Mar 2017 01:19:55 +0000</pubDate><guid>https://blog.bjdean.id.au/2017/03/bandwidth-measurement-using-netcat/</guid><description>&lt;p&gt;For plain bytes/second bandwidth testing - ie without taking things like encryption overhead and compression improvements into account - the &lt;a href="http://nc110.sourceforge.net/"&gt;netcat&lt;/a&gt; command-line utility is pretty handy.
Once installed on both servers (let&amp;rsquo;s call them serverA and serverB):&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;start &lt;strong&gt;netcat&lt;/strong&gt; to listen on one server and pipe the output through &lt;strong&gt;wc -c&lt;/strong&gt;, both to count the bytes (for confirmation) and to avoid writing the bytes to a filesystem or the terminal (which would create a bottleneck and likely reduce the apparent bandwidth). By default &lt;strong&gt;nc&lt;/strong&gt; will quit when the first network connection it accepts is closed:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;serverA$ nc -l -p 12345 | wc -c
&lt;/code&gt;&lt;/pre&gt;&lt;ol start="2"&gt;
&lt;li&gt;start &lt;strong&gt;netcat&lt;/strong&gt; to send data from the other server, using &lt;strong&gt;dd&lt;/strong&gt; to generate data as quickly as possible (reading from &lt;strong&gt;/dev/zero&lt;/strong&gt; is fast). The &lt;strong&gt;-q 0&lt;/strong&gt; option causes &lt;strong&gt;netcat&lt;/strong&gt; to quit as soon as it sees an end of file (EOF):&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;serverB$ dd if=/dev/zero bs=$((2**20)) count=$((2**10)) | nc -q 0 serverA 12345
&lt;/code&gt;&lt;/pre&gt;&lt;ol start="3"&gt;
&lt;li&gt;On the sending server (&lt;strong&gt;serverB&lt;/strong&gt;) the output will show the number of bytes transmitted and the time it took to do that:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 9.68678 s, 111 MB/s
&lt;/code&gt;&lt;/pre&gt;&lt;ol start="4"&gt;
&lt;li&gt;And on the other server (&lt;strong&gt;serverA&lt;/strong&gt;) the number of bytes will be printed (confirming the transmission was complete):&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;1073741824
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Run this in both directions a few times to get a good feeling for the bandwidth between the servers.&lt;/p&gt;</description></item><item><title>/usr/bin/base64 - copying and pasting code / patches between terminals</title><link>https://blog.bjdean.id.au/2016/08/usrbinbase64-copying-and-pasting-code-patches-betweeen-terminals/</link><pubDate>Tue, 09 Aug 2016 06:29:49 +0000</pubDate><guid>https://blog.bjdean.id.au/2016/08/usrbinbase64-copying-and-pasting-code-patches-betweeen-terminals/</guid><description>&lt;h2 id="the-scenario"&gt;The Scenario&lt;/h2&gt;
&lt;p&gt;A couple of terminals open, connected to a mix of your own workstation, local development servers and remote servers running on a different network and frequently behind a variety of security barriers.
You want to copy a smallish chunk of code from a file across the network, or the output of a diff to apply as a patch - but characters such as whitespace and newlines, which should not change, are frequently modified in the copy. You end up with whitespace changes where you don&amp;rsquo;t want them (which may later cause source code control merges to fail, and patches will fail straight away).&lt;/p&gt;</description></item><item><title>Getting WordPress Up and Going</title><link>https://blog.bjdean.id.au/2015/07/getting-wordpress-up-and-going/</link><pubDate>Sat, 18 Jul 2015 13:05:38 +0000</pubDate><guid>https://blog.bjdean.id.au/2015/07/getting-wordpress-up-and-going/</guid><description>&lt;p&gt;Setting up a &lt;a href="http://bjdean.id.au/blog/" title="Brad's Old Blog"&gt;WordPress&lt;/a&gt; server there were a couple of minor wrinkles to sort out. I&amp;rsquo;ve &lt;a href="http://bjdean.id.au/blog/" title="Brad's Old Blog"&gt;run a blog before&lt;/a&gt; and that fell by the wayside when I started using &lt;a href="http://bjdean.id.au/wiki/" title="Brad's Wiki"&gt;a personal wiki&lt;/a&gt; instead. But this seems like a good opportunity to see how one of the very popular blogging platforms works and what&amp;rsquo;s involved in keeping that running under the hood.
I work primarily with &lt;a href="http://debian.org/" title="Debian"&gt;Debian&lt;/a&gt; systems, so that was a natural place to start. The &lt;a href="https://packages.debian.org/search?keywords=wordpress" title="wordpress deb"&gt;wordpress&lt;/a&gt; package makes it very easy to get the base dependencies going with a known supported version, so if you&amp;rsquo;re running a recent release of Debian that seems like a reasonable place to start as well. That said, this means the package is reconfigured along Debian guidelines, and I found I needed to spend a little time working out how this was done before it made sense.&lt;/p&gt;</description></item><item><title>Protecting Joomla : User-Registration Spam Relay</title><link>https://blog.bjdean.id.au/2015/07/protecting-joomla-user-registration-spam-relay/</link><pubDate>Sat, 18 Jul 2015 10:35:05 +0000</pubDate><guid>https://blog.bjdean.id.au/2015/07/protecting-joomla-user-registration-spam-relay/</guid><description>&lt;h1 id="the-problem-a-default-setting"&gt;The Problem: A Default Setting&lt;/h1&gt;
&lt;p&gt;By default user registration is &lt;strong&gt;enabled&lt;/strong&gt;.
It&amp;rsquo;s important to realise that even though links to the user registration page may not have been included in the design of a Joomla site, the components are still present and will be regularly targeted by automated spiders searching for vulnerable sites. Check access logs for requests to paths like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;/index.php/shop-login&lt;/li&gt;
&lt;li&gt;/index.php/shop-login?view=registration&amp;amp;layout=complete&lt;/li&gt;
&lt;li&gt;/index.php/component/users/?view=registration&lt;/li&gt;
&lt;li&gt;/index.php/component/user/?task=register&lt;/li&gt;
&lt;li&gt;/index.php?option=com_user&amp;amp;view=register&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;With user registration enabled scripts can use a Joomla site as an &lt;a href="https://en.wikipedia.org/wiki/Open_mail_relay" title="Open Mail Relay"&gt;open mail relay&lt;/a&gt; by registering users with target email addresses and inserting spam/attack payload into the user details. The Joomla site will send a confirmation email to the target email address, and any email tracing of the source of the email will lead directly to the weakened Joomla server.&lt;/p&gt;</description></item></channel></rss>