Dario Hamidi

Know your tools. Learn Bash.

Bash: completion, the complete builtin

In the previous article we looked at how Bash completion works at the simplest level.

We've also seen that the complete builtin comes with 26 options to influence its behavior.

Let's break them down piece by piece, until we have a complete understanding of complete.

Grouping options

While the number of options to complete seems overwhelming, every option actually falls into one of a handful of groups:

  • generators supply possible words to complete (-G for globs, -W for word lists, -A for Bash internals plus its many aliases, -F for generating them with a shell function, -C for generating them from a command)
  • filters modify the list of completion options; these include -P for adding a prefix, -S for adding a suffix, and -X for excluding matches
  • options change the behavior of complete, e.g. quoting and sorting. They are all set using -o
  • control operators are used for installing (-D for default, -E for empty line) and removing (-r) completions and printing all completion rules (-p).
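
To see the groups in action without wiring up a full completion, we can use compgen, which accepts the same generator and filter options as complete (the word list below is a made-up example):

```shell
# A generator: -W supplies the candidate words; the final argument
# is the prefix to match against (here: "st").
compgen -W "start stop status" st           # prints all three, one per line

# A filter: -X removes candidates matching a glob pattern.
compgen -W "start stop status" -X 'stop' st # prints start and status

# A filter: -P prepends a prefix to every surviving candidate.
compgen -W "start stop status" -X 'stop' -P '--' st # prints --start and --status
```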

Application: sprucing up completions with fzf

Given this particular set of building blocks, we can combine them to do something interesting: use fzf for selecting completions!

Here is the plan:

  • list all completion functions currently defined,
  • generate a new function for each that sets COMPREPLY to a single entry selected from the existing values of COMPREPLY using fzf,
  • install the new completion functions.

Metaprogramming: _fzfify

Let's start with step two: generating new bash functions from existing ones. Bash has eval and source, so this should be easily possible.

Given a function fn, we want to create a wrapper function that looks like this:

_fzf_fn() {
  fn "$@"
  local result=$(printf "%s\n" "${COMPREPLY[@]}" | sort | uniq | fzf)
  COMPREPLY=( "$result" )
}

We'll do this through a function called _fzfify: it receives the function to wrap as an argument and defines a function like the one shown above:

_fzfify() {
  local fn="$1" # the function we want to wrap
  # a template for defining our wrapped function
  local template='_fzf_FN() {
  FN "$@"
  local result=$(printf "%s\n" "${COMPREPLY[@]}" | sort | uniq | fzf)
  COMPREPLY=( "$result" )
}'
  # eval the template, with FN replaced by the wrapped function name
  eval "${template//FN/$fn}"
}

Let's test whether this works:

$ x() { COMPREPLY=(a b c); }
$ _fzfify x
$ _fzf_x # opens up fzf
$ declare -p COMPREPLY
declare -a COMPREPLY=([0]="a")

Installing all completions

With this new tool in hand, the task becomes easy: complete -p prints out a list of completion commands. For lines that contain -F, we just replace whatever function name comes after the -F with _fzf_ as the prefix and take note of the function.

For each function we've discovered that way, we'll run _fzfify to define our new completion rules.

_fzf_complete() {
  complete -p | awk '
    $2 == "-F" && $3 !~ /^_fzf_/ {
      fzfify[$3] = 1;
      print($1 " " $2 " _fzf_" $3 " " $4 " " $5);
    }
    $2 != "-F" || $3 ~ /^_fzf_/ {
      print $0
    }
    END {
      for (i in fzfify) { printf("_fzfify %s\n", i) }
    }
  '
}

AWK is the perfect tool for this task: for lines that have -F as the second field, we insert our function prefix and record the wrapped function name. Other lines are printed unchanged. After all lines have been processed, we output calls to _fzfify.

Next, we can evaluate this in the context of the current shell:

source <(_fzf_complete)

Now, hitting <TAB> will start fzf for completing commands.

Bash: completion, introduction

If you have ever used the TAB key in your terminal, you have interacted with your shell's programmable completion facilities.

You have probably found existing completions to be lacking sometimes and wondered how you can add your own. The Bash manual explains this in great detail, but not in a very accessible manner.

The Process

Bash goes through a series of steps to arrive at the list of completion candidates it ultimately displays:

  --> compspec
    compspec actions: -f, -d, -G, -W
  --> completion function or command
  --> completion filter
  --> add prefixes/suffixes
  --> present to user

The compspec provides initial matches for a given word and is set up with the complete builtin.

A completion function or command follows a simple protocol: it receives completion context information through the environment and returns the list of completions, one per line, on stdout.

A special case of this is when the completion function is a Bash function: in this case possible completions can be added to the COMPREPLY array.

complete and compgen

Bash comes with two builtins that control completion: complete and compgen.

In theory you only need complete: it is the entry point for the completion framework and tells Bash how to attempt completion for a command.

The compgen builtin is just a convenience for the common case of completion: given a set of available options and a string the user typed, which options start with the string the user typed?
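
This question maps directly onto a compgen invocation (the word list here is made up):

```shell
# compgen answers: which of these words start with "so"?
compgen -W "north south east west" so
# prints: south
```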

A minimal example

The complete builtin is a real beast, using almost the full English alphabet for all its single-letter options:

complete: complete [-abcdefgjksuv] [-pr] [-DEI] [-o option] [-A action] [-G globpat] [-W wordlist] [-F function] [-C command] [-X filterpat] [-P prefix] [-S suffix] [name ...]

Let's ignore most of this and focus on a minimal example: we have a function that accepts a human description of a date, like next Thursday, and we want to complete the options for that function.

Under the hood this is just GNU date:

d() {
  LC_TIME=C date --date="$*" +%F
}

And then we want to be able to invoke it like this:

d next Fri
d last Thu
d next month

In the simplest case we just want completion to match the input against a list of words, and complete already supports this using the -W option:

complete -W "$(printf "%q\n" {'next ','last '}{Monday,Mon,Tuesday,Tue,Wednesday,Wed,Thursday,Thu,Friday,Fri,Saturday,Sat,Sunday,Sun})" d

The braces generate all possible combinations of the two sets [next, last] and [Monday, Mon, ... Sun], so in essence our wordlist will look like this:

next Monday
next Mon
last Sunday
last Sun

Note that we used the %q format specifier to make sure the spaces in our completion entries are properly quoted.

Now typing d <TAB> will cycle through the list and d n<TAB> will only show the next ... entries.

Refactoring into a function

The complete command we've used before is very unwieldy, so let's move completion for d into a separate function. That also allows us to add more logic to it if we want to support more of date's syntax.

_comp_d() {
  local wordlist=$(printf "%q\n" {'next ','last '}{Mon{,day},Tue{,sday},Wed{,nesday},Thu{,rsday},Fri{,day},Sat{,urday},Sun{,day}})
  local IFS=$'\n' # separate wordlist entries by newline
  COMPREPLY=( $(compgen -W "$wordlist" "$2") )
}

Let's unpack this:

  1. we create our wordlist like before
  2. we set IFS to \n, so that compgen knows that newlines separate our wordlist entries (since a single entry contains spaces)
  3. tell Bash about completion candidates by setting COMPREPLY

compgen -W "$wordlist" "$2" takes our wordlist just like before and narrows it down to entries starting with $2, the word under the cursor.

Anatomy of a completion function

A completion function is invoked with three arguments:

  • $1 is the command for which completion is attempted,
  • $2 is the word under the cursor,
  • $3 is the word preceding it.

Here are some examples (_ indicates the cursor position):

command     $1  $2    $3
d next Fr_  d   Fr    next
d _         d         d
d next_     d   next  d

Additionally Bash sets a bunch of variables starting with COMP. Let's inspect them:

_comp_dump() {
  printf "\n"
  declare -p ${!COMP*}
}
complete -F _comp_dump dump
$ dump example one two<TAB>
declare -- COMP_CWORD="3"
declare -- COMP_KEY="9"
declare -- COMP_LINE="dump example one two"
declare -- COMP_POINT="20"
declare -- COMP_TYPE="37"
declare -- COMP_WORDBREAKS="
declare -a COMP_WORDS=([0]="dump" [1]="example" [2]="one" [3]="two")

In the next article we'll explore how to make use of this information and how to generate useful completions for common development tools.

Bash: a workflow for developing scripts

REPL-driven development

Like in any programming environment, developing larger chunks of code follows the common cycle of making a change, running your script and observing the result, and finally repeating the process. Shell scripts aren't different in this regard.

One important difference, however, is that the shell is fundamentally a highly interactive environment, so we can easily achieve a nice REPL experience by tweaking a few things in the code we are working on and adding a few definitions to our interactive environment.

Outline of our plan

In broad terms, given a set of files, we want to reload all of the ones that have changed since we last ran a command.

To achieve this, we:

  • will keep track of each file's sha1 checksum,
  • after running a bash command, we check whether the checksums for any file have changed,
  • for each file with a changed checksum, we invoke source

A REPL session

Here is an example session of using our REPL tooling, before diving into how it works:

 0 12:43:43 ~/demo
$ cat test.sh
example_fn() {
  printf "example\n"
}

 0 12:43:46 ~/demo
$ repl.sh
repl:[]  0 12:43:53 ~/demo
$ repl.watch ./test.sh
repl.watch: ./test.sh
repl.load: ./test.sh
repl:[./test.sh]  0 12:43:58 ~/demo
$ example_fn
repl:[./test.sh]  0 12:44:09 ~/demo
$ vim test.sh
repl.load: ./test.sh
repl:[./test.sh]  0 12:44:20 ~/demo
$ example_fn
the function has changed
repl:[./test.sh]  0 12:44:23 ~/demo
$ echo doing something else will not cause a reload
doing something else will not cause a reload
repl:[./test.sh]  0 12:44:36 ~/demo

REPL walkthrough

You can find the full script here.

Let's walk through it line by line:

Setting the stage

#!/usr/bin/env bash

[[ -v REPL_FILES ]] || declare -a REPL_FILES=()
[[ -v REPL_CHECKSUM ]] || declare -- REPL_CHECKSUM=

We expect this script to be loaded with source, so that it can interact with the current shell environment. That means it can be loaded multiple times.

For tracking our state we need two variables: REPL_FILES is a list of files we want to watch and REPL_CHECKSUM is the last known checksum of all the files.

Using [[ -v varname ]] we can check whether the variables are already defined (because repl.sh was loaded already), and only define them if necessary.

Hooking into Bash

repl.install() {
  local command
  declare -ga PROMPT_COMMAND
  for command in "${PROMPT_COMMAND[@]}"; do
    if [[ "$command" == "repl.load_if_changed" ]]; then
      return
    fi
  done
  PROMPT_COMMAND+=(repl.load_if_changed)
  repl.install_prompt
}

This is the entrypoint into the REPL functionality. Often overlooked, PROMPT_COMMAND can actually be an array instead of a plain string. This makes it easy for us to detect whether we've installed the REPL integration already or not.

First, we convert PROMPT_COMMAND into an array. The existing contents will just become the first element of the array. Then we see if repl.load_if_changed is already part of PROMPT_COMMAND, and if we don't find it in the list, we just add it to the end.

With that done, every time we hit Enter in Bash, Bash runs through all the commands in this list and executes them before presenting a prompt to the user again.
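
The array-based pattern can be sketched in isolation (my_hook and install_hook are stand-in names for this demonstration):

```shell
unset PROMPT_COMMAND # start clean for the demonstration
declare -ga PROMPT_COMMAND

my_hook() { :; } # stand-in for repl.load_if_changed

install_hook() {
  local command
  # only append our hook if it is not in the list yet
  for command in "${PROMPT_COMMAND[@]}"; do
    [[ "$command" == "my_hook" ]] && return
  done
  PROMPT_COMMAND+=(my_hook)
}

install_hook
install_hook # idempotent: still only one entry
declare -p PROMPT_COMMAND
```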

Additionally, we want to display some extra information in the prompt, to show that we are in REPL session. More on that later.

Detecting changes

repl.load_if_changed() {
  local filename new_checksum
  local -A old new

  new_checksum=$(sha1sum "${REPL_FILES[@]}" 2>/dev/null)
  repl.parse_checksum old <<<"$REPL_CHECKSUM"
  repl.parse_checksum new <<<"$new_checksum"

  for filename in "${!new[@]}"; do
    if [[ "${new[$filename]}" != "${old[$filename]:-}" ]]; then
      repl.load "$filename"
    fi
  done

  REPL_CHECKSUM="$new_checksum"
}

To detect whether files have changed, we take a checksum of the current set of watched files and compare it to the last known checksum stored in REPL_CHECKSUM.

Under the hood we're using sha1sum for generating the checksums, so the value in new_checksum and REPL_CHECKSUM looks like this:

32b2b0c0a5ec7fe502f9ab319fbee761b38b7a48  repl.sh

We parse this text into two associative arrays (old and new) and then iterate over all the keys in new.

Note how we can obtain all keys in an array using ${!new[@]}.

If we detect a difference between checksums, we load the file for which the difference was detected.

Why iterate over new and not over old? The list of files on our watchlist can change, so if we iterated over old, we would not trigger a reload when a new file is added to the list.

After processing all list elements, we update REPL_CHECKSUM to reflect the current state of the world.

Parsing checksums

As you might have noticed, repl.parse_checksum somewhat magically populated the old and new arrays.

repl.parse_checksum() {
  local sha1 filename
  local -n destination_dict="$1"
  while read sha1 filename; do
    [[ -z "$filename" ]] && continue
    destination_dict["$filename"]="$sha1"
  done
}

The important bit here is local -n. This turns destination_dict into a nameref, so any operation on destination_dict is actually applied to the variable named by destination_dict's value.

Since the checksum data is tabular, we can use bash's builtin read to just grab the first and second column and build a map from filename to checksum.
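
The nameref pattern generalizes nicely. Here is a minimal sketch (function and variable names are made up for this example) that fills an associative array chosen by the caller:

```shell
# parse_into reads "key value" lines from stdin and stores them in the
# associative array whose *name* is passed as the first argument.
parse_into() {
  local -n dict="$1" # dict now aliases the caller's variable
  local key value
  while read -r key value; do
    [[ -z "$key" ]] && continue
    dict["$key"]="$value"
  done
}

declare -A settings
parse_into settings <<<$'host example.org\nport 22'
declare -p settings
```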

Watching files

We still need a way to watch files for changes. This is what the repl.watch function does:

repl.watch() {
  local f r

  for f in "$@"; do
    for r in "${REPL_FILES[@]}"; do
      if [[ "$r" == "$f" ]]; then
        continue 2
      fi
    done

    REPL_FILES+=("$f")
    printf "repl.watch: %s\n" "$f" >&2
    repl.load "$f"
  done
}

We iterate over all the arguments, binding the current one to f. If we already find the file f in the list of watched files, we skip this iteration.

Bash solves breaking out of or continuing nested loop iterations elegantly by allowing you to specify the nesting level with break and continue. In this case continue 2 means continue to the next iteration of the outer loop.
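
A minimal illustration of continue 2:

```shell
# The inner `continue 2` jumps to the next iteration of the OUTER loop,
# so the letter "b" is never recorded.
pairs=()
for word in alpha beta; do
  for letter in a b; do
    [[ "$letter" == "b" ]] && continue 2
    pairs+=("$word:$letter")
  done
done
printf "%s\n" "${pairs[@]}"
# alpha:a
# beta:a
```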

If the file is not on our watchlist yet, we add it and print a message to stderr to alert the user to the fact this file is now watched.
Also, we load it because the user expressed their intent to work on this file.

Providing context

repl.install_prompt() {
  if [[ "$PS1" =~ repl: ]]; then
    return
  fi

  PS1="\[repl:\$(repl.prompt_info)\] $PS1"
}

repl.prompt_info() {
  local IFS=,
  printf '[%s]' "${REPL_FILES[*]}"
}

If a REPL shell looks like any other shell, it is easy to forget about files being automatically loaded.
We can improve the user experience by showing the user the files they are currently working with in the REPL.

This is basically what repl.prompt_info does: it formats the list of files in REPL_FILES by joining them all with a comma and wrapping the result in brackets.

The function repl.install_prompt then adds the output of repl.prompt_info to the prompt, but only if we haven't done so already (assuming the user doesn't have repl: somewhere in their prompt already).

Entering a REPL

At the end of the repl.sh script we can see the following:

case ${0##*/} in
  repl.sh) repl.enter;;
  -bash) repl.install;;
esac

The ${0##*/} expands to the filename part of the path stored in $0. If repl.sh is invoked as a command, $0 will hold the full path to the file. If however repl.sh is loaded into an existing shell session using source, then $0 will be the string -bash.

We can use this to implement different behaviors: if loaded in an interactive shell, we just install the REPL and that's it.
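
The ${0##*/} idiom is ordinary parameter expansion: ## removes the longest prefix matching the pattern. Demonstrated with a plain variable instead of $0 (the path is made up):

```shell
path=/home/user/bin/repl.sh
printf "%s\n" "${path##*/}" # repl.sh (longest prefix matching */ removed)
printf "%s\n" "${path#*/}"  # home/user/bin/repl.sh (shortest prefix, single #)
```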

If invoked as a command, we'll start a new interactive shell, with the REPL loaded and set up:

repl.enter() {
  bash --rcfile <(printf "%s\n" \
    "$(< $HOME/.bashrc)" \
    'source repl.sh' \
    'repl.install' \
  ) -i
}

We are using Bash's process substitution to create a new, temporary init file for Bash that consists of the user's .bashrc followed by two commands to activate the REPL.

Bash: re-using code

Chances are that you have a lot of bash scripts in your project(s) and that some of those scripts contain bits and pieces that you would like to re-use.

The prospect of making your shell scripts reusable might seem daunting, but Bash actually comes with a couple of mechanisms that make this easier than you might have thought.

Using the source

The easiest way to re-use a piece of Bash code is to load it into the current process. This is exactly what source does.
Actually, you can think of source as eval, but applied to files instead of strings.

The source builtin has a couple of features that make it interesting for the purpose of re-use:

  • it searches through all the directories on PATH to find the file to source,
  • you can pass arguments to the file being sourced.

So for the purpose of treating source-able files as "modules" like you know them from other programming languages, we even get an "advanced" feature: parametrized modules.

The most prominent example of parametrized modules is probably Ruby's ActiveRecord Data Migrations:

class AddPartNumberToProducts < ActiveRecord::Migration[7.0]
  def change
    add_column :products, :part_number, :string
  end
end

The [7.0] bit after Migration is actually a method call on the ActiveRecord::Migration object and returns a new class, from which AddPartNumberToProducts is inheriting.

In Bash we can be similarly flexible by passing arguments to source.
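
Here is a minimal sketch of passing arguments to source (the file and variable names are made up):

```shell
# Create a tiny "module" that inspects its positional parameters.
module=$(mktemp)
cat >"$module" <<'EOF'
# inside a sourced file, $1, $2, ... are the arguments passed to source
GREETING="${1:-hello}"
EOF

source "$module" howdy
printf "%s\n" "$GREETING" # howdy
```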

PATH lookup

The critical thing to understand about how source finds files to source is that it only looks for a single name, no slashes allowed. Basically source performs the same kind of lookup as your shell does when finding commands.

Let's look at an example:

$ tree -L 2
├── bin
│   └── hello-world
└── modules
    └── messages

Our main script is hello-world and we would like to use a module called messages in there to print out messages in a standardized format.

Here is hello-world:

#!/usr/bin/env bash

source messages

main() {
  say "hello, world"
}

main "$@"

We're loading the module messages using source and expect it to provide a function called say.

Running this script as it is, we'll get an error:

$ bin/hello-world
bin/hello-world: line 3: messages: No such file or directory
bin/hello-world: line 6: say: command not found

Loading messages failed and as a consequence say isn't defined.

Fix 1: specifying an absolute path

There are multiple ways to address this issue. One is to avoid path lookup by using an absolute path, leveraging the fact that $0 contains the path to the current script file:

#!/usr/bin/env bash

source $(realpath -m $0/../../modules/messages)

# ... rest unchanged

The call to realpath is necessary in order to provide a fully resolved path to source, otherwise source errors.

While this works and our script indeed prints hello, world now, it:

  • is cumbersome to write and remember,
  • sourcing more than one file like this means a lot of repetition

Fix 2: changing PATH

If we change PATH to include the modules directory, our script becomes much simpler:

$ head -3 bin/hello-world
#!/usr/bin/env bash

source messages
$ PATH=$PWD/modules:$PATH bin/hello-world
hello, world

Simpler code comes at the cost of managing your environment. In practice this doesn't appear to be a problem, because:

  • developers already use tools to manage their environment variables,
  • the environment in CI is explicitly managed as well,
  • and so is the environment on deployment targets.

Additionally, the path-setting logic can be deduplicated in a simple wrapper script that sets up PATH correctly before invoking the next script.
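
Such a wrapper could look like the following sketch (the file names and layout are hypothetical); it builds a throwaway copy of the project in a temp directory so the effect is observable end to end:

```shell
dir=$(mktemp -d)
mkdir -p "$dir/bin" "$dir/modules"

# The module provides `say`.
cat >"$dir/modules/messages" <<'EOF'
say() { printf "%s\n" "$*"; }
EOF

# The main script sources the module by bare name.
cat >"$dir/bin/hello-world" <<'EOF'
#!/usr/bin/env bash
source messages
say "hello, world"
EOF
chmod +x "$dir/bin/hello-world"

# The wrapper: fix up PATH once, then run the requested script.
cat >"$dir/bin/run" <<'EOF'
#!/usr/bin/env bash
root=$(realpath "${0%/*}/..")
exec env PATH="$root/modules:$PATH" "$root/bin/$1" "${@:2}"
EOF
chmod +x "$dir/bin/run"

"$dir/bin/run" hello-world # prints: hello, world
```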

Parametrizing modules

Now on to the exciting bit: adding parameters to our module. Let's say we want to allow users of the messages module to specify a style that is applied to all messages by default, for example making text appear in bold.

We want the usage of our module to look like this:

source messages style=bold

All the arguments after messages are available in the messages script as positional parameters, so $1 would be style=bold.

Let's add simple argument parsing to our module and modify say to store the style in a variable:

case "$1" in
  style=bold)
    MESSAGES_STYLE=$'\033[1m';;
  '') ;; # no parameters: use the default style
  *)
    printf "messages: unknown parameter: %s\n" "$1" >&2
    exit 1;;
esac

say() {
  if [[ -n "$MESSAGES_STYLE" ]]; then
    printf "${MESSAGES_STYLE}%s\033[0m\n" "$*"
  else
    printf "%s\n" "$*"
  fi
}

If the first parameter passed to our module is style=bold we record the ANSI escape sequence for telling the terminal that text should be displayed in bold. In say we check whether a style is set, and if it is, we output the necessary escape sequences.

In case any other argument is passed, we just print an error message and exit the program.

Bash: quoting code

Quoting, in reverse

While quoting inputs to Bash requires some care, little is talked about quoting output in a way that is safe for Bash to evaluate.

This opens the gate to metaprogramming: if programs like Bash can generate correctly quoted Bash code, we can safely evaluate the output of those programs.

This functionality silently snuck into popular tools, making Bash a valid output format, which opens up exciting possibilities.

The testing function q

Let's define a function q (for quote) that will evaluate its first argument and compare it to the second. We can use this to test the behavior of various quoted strings. Here is q:

q() {
  [[ $(eval $1) == "$2" ]] || {
    printf "eval(%s) != %s\n" "${1@Q}" "$2" >&2
    return 1
  }
}
Here's q in action:

exit:0 $ q 'printf 1' 1
exit:0 $ q 'printf 2' 1
eval('printf 2') != 1
exit:1 $

When evaluating the first argument doesn't match the second, we get an error message and q returns 1.


Bash comes with various ways of quoting code, each of them yielding slightly different results.

declare -p

This one we've covered in depth already in a previous article; it's listed here just for completeness.

Using declare -p we actually get back another declare command that we can feed back into bash to redeclare a variable.

Let's test that it works:

exit:0 $ x='hello, world'
exit:0 $ declare -p x
declare -- x="hello, world"
exit:0 $ q "$(declare -p x); echo \$x" 'hello, world'

Since declare -p outputs a command, we need to explicitly print the value of x. Note that the substitution happens before q is invoked, so the arguments that q sees are actually these:

q 'declare -- x="hello, world"; echo $x' 'hello, world'

printf's %q

The builtin printf has additional format specifiers compared to the C library's printf. One of them is specifically interesting:

%q quote the argument in a way that can be reused as shell input

Let's see it in action:

exit:0 $ printf '%q\n' '$x'
\$x
exit:0 $ printf '%q\n' '$(rm -rf /)'
\$\(rm\ -rf\ /\)
exit:0 $ q "echo $(printf '%q' '$(date)')" '$(date)'
exit:0 $

parameter expansion: the Q attribute

Bash supports various modifiers for strings when expanding variables. These modifiers are indicated with an @, e.g., ${var@U} will upcase the value of var.

One such modifier is Q for quoting strings in a way that is safe for evaluating them as shell input.

Here is how that looks:

exit:0 $ cmd='$(date)'
exit:0 $ echo $cmd
$(date)
exit:0 $ echo ${cmd@Q}
'$(date)'
exit:0 $ q "echo $cmd" '$(date)'
eval('echo $(date)') != $(date)
exit:1 $ q "echo ${cmd@Q}" '$(date)'
exit:0 $

We can see that passing an unquoted version of $cmd will actually expand to echo followed by the current date, which is obviously not equal to the string $(date).

Using ${cmd@Q} we do get back the string $(date) however.


jq and @sh

JSON has become the lingua franca of data exchange in the last decade, and jq is to JSON what awk is to unstructured text.

It supports format specifiers to convert JSON literals for use as part of other common formats. One of these format specifiers is @sh, quoting for output in a shell.

We can use this to convert a JSON object into a series of variable assignments and then evaluate them in the shell.
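
As a warm-up, here is the variable-assignment variant (the JSON object is made up):

```shell
json='{"name":"demo","count":"3"}'

# @sh single-quotes each value, so the result is safe to eval.
assignments=$(jq -r 'to_entries[] | .key + "=" + (.value | @sh)' <<<"$json")
printf "%s\n" "$assignments"
# name='demo'
# count='3'

eval "$assignments"
printf "%s\n" "$name" # demo
```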

Let's use this to define a function for invoking each script defined in a JavaScript project's package.json. We'll be using this one as an example:

{
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start",
    "lint": "next lint"
  },
  "#": "rest of the file omitted"
}

Using jq we can convert these entries into a series of function definitions:

$ jq -r '.scripts | to_entries[] | .key + "() { eval " + (.value | @sh) + "; }"' <package.json
dev() { eval 'next dev'; }
build() { eval 'next build'; }
start() { eval 'next start'; }
lint() { eval 'next lint'; }

This is now valid Bash code, which we can evaluate:

$ eval "$(jq -r '.scripts | to_entries[] | .key + "() { eval " + (.value | @sh) + "; }"' <package.json)"
$ declare -pf lint
lint () {
  eval 'next lint'
}

Let's define next to just print its command line, so that we can plug this into our q function:

exit:0 $ next() { printf "next %s\n" "$*"; }
exit:0 $ lint
next lint
exit:0 $ q lint 'next lint'
exit:0 $

Bash: declare in depth: Part 5: variable scoping


  • declare outside of a function defines a global variable,
  • declare inside of a function defines a local variable,
  • declare -g inside of a function defines a global variable.

dynamic vs lexical scoping

Before we can look more deeply at how variables are scoped in bash, a quick refresher on dynamic vs lexical scoping is in order.

The value and visibility of a dynamically scoped variable depend on the call stack, whereas a lexically scoped variable is only visible in its lexical (read: source code context) environment.

Consider this example in Python, which uses lexical scoping:

x = 1

def foo():
  x = 2

def bar():
  x = 3

foo()
bar()

print("x = {}".format(x))

What will the last line print? The answer is "1", because the x in foo is different from the x in bar, and both are different from the top-level x.

If Python were dynamically scoped, we'd print 3 at the end instead.

A bit of history

Bash is one of the few programming languages left in daily use that features dynamic scoping and dynamic scoping only.

Perl started out with dynamic scoping but added lexical scoping as early as 1994.

The other big user of dynamic scoping was Emacs. In version 24.1 lexical binding was introduced and it quickly became popular.

Perhaps most famously, JavaScript still features a vestigial form of dynamic scoping in the form of the this keyword: it is always in scope, yet what it is bound to depends on the current call stack.

Scoping in bash

Bash only supports dynamic scoping. This is best illustrated with another example:

#!/usr/bin/env bash

let counter=5

increment() {
  let counter++
  printf "increment: counter = %s\n" "$counter"
}

reset() {
  counter=0
}

increment
reset

printf "counter = %s\n" "$counter"

Both increment and reset operate on the same counter variable, so this program prints:

increment: counter = 6
counter = 0

We can limit the scope of the assignment in reset by making counter a local variable in increment.
This means that reset will still have access to a variable called counter, but this is essentially a new variable made available in the environment for the duration of the call to increment.

Here's the new version of increment:

increment() {
  local -i counter=$counter
  let counter++

  printf "increment: counter = %s\n" "$counter"
  reset
  printf "increment: counter = %s\n" "$counter"
}

This yields the following output:

increment: counter = 6
increment: counter = 0
counter = 5

We can see that reset only reset the copy of counter introduced by local -i and the global version of counter is untouched.

Essentially, every time you call local, you push a new variable onto a stack. Every time the current function call ends, that stack is popped. In the example above, our variable stack would look like this:

|   6   | <-- entry used while `increment` is active
|   5   | <-- global value

Application: tunable parameters in interactive systems

Judging by the fact that the this keyword in JavaScript is a constant source of confusion and that every major user of dynamic scoping eventually introduced lexical scoping, one might ask: what good is dynamic scoping at all?

I believe Emacs, a large-scale interactive system with built-in documentation for everything, is a good example of the usefulness of dynamic scoping. Emacs core functions do what you expect, and if you need to change their behavior, there is a variable you can temporarily override to get the desired behavior. In effect, every optional parameter to your function instead becomes a variable in top-level scope that you can override at your convenience.

A prominent example of this same principle is the IFS variable in Bash: it controls how words are split/joined and you can safely override it in a function or temporary environment if you need behavior that's not the default.
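
IFS can be overridden with local just like any other dynamically scoped variable (join_args is a made-up example):

```shell
join_args() {
  local IFS=:        # only visible for the duration of this call
  printf "%s\n" "$*" # "$*" joins arguments with the first character of IFS
}

join_args a b c # prints a:b:c
```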

We can apply the same idea to our scripts. For example, here is a function that joins an array, using , by default:

# Set join_with to choose which separator to use for individual arguments
join_with=', '

# join produces a single string from all its arguments, by linking them with the value of `join_with`
join() {
  local result=

  while [[ "$#" -gt 0 ]]; do
    result+="$1"
    shift
    [[ "$#" -gt 0 ]] && result+="$join_with"
  done

  printf "%s\n" "$result"
}

In essence this allows us to use a form of keyword arguments in Bash:

$ join a b c
a, b, c
$ join_with=/ join a b c
a/b/c

Bash: declare in depth: Part 4: the oddballs

We've covered almost all of declare's options, but there are two that stand out for being of questionable usefulness: -l and -u.

These flags change the case of a variable to lowercase (-l) and uppercase (-u) respectively when they are assigned.

In total, I counted three ways for changing casing in Bash:

  • declare with -u or -l
  • parameter expansion with ^^ and ,,: ${to_upper^^} and ${to_lower,,}
  • using the @ operator during expansion: ${to_upper@U} and ${to_lower@L}
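
All three side by side (variable names are made up; the @U and @L operators need a recent Bash):

```shell
# 1. declare -u: the value is upcased whenever it is assigned
declare -u shout="quiet"
printf "%s\n" "$shout" # QUIET

# 2. parameter expansion with ^^ and ,,
to_upper="loud" to_lower="QUIET"
printf "%s\n" "${to_upper^^}" # LOUD
printf "%s\n" "${to_lower,,}" # quiet

# 3. the @ operators U and L
printf "%s\n" "${to_upper@U}" # LOUD
printf "%s\n" "${to_lower@L}" # quiet
```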

Interaction with namerefs

So how does declare -u work together with namerefs?

Does it upcase the name of the variable referred to by the nameref?

Does it upcase the assigned value?

Let's find out!

declare var=1
declare -nu ref=var
ref=3
printf "var=%d, VAR=%d\n" "$var" "$VAR"
# prints var=1, VAR=3

Applied to a nameref, the name of the variable referred to is changed to uppercase.

Application: metaprogramming and enforcing naming conventions

The only application of this that I can think of is enforcing naming conventions of variables when metaprogramming, e.g. global variables should always be in uppercase.

Here's defconst, a function which defines a global variable, whose value cannot be changed and whose name is always going to be in uppercase.

defconst() {
  local -nu ref="$1"
  ref="$2"
  readonly "${!ref}"
  printf "%s\n" "${!ref}" # the name the reference points to
}

defconst pi 3.14 # prints PI
printf "PI=%s\n" "$PI" # prints 3.14
PI=bar # error: PI: readonly variable

Bash: declare in depth, Part 3: arrays

Bash supports two types of arrays: numerically indexed arrays and associative arrays (i.e. indexed by a string).

To create an array, pass -a or -A (-A for associative arrays) to declare (or any of its variable-defining cousins like local and readonly).

Some people think that needing arrays is the point where you should switch to another language. In some contexts this can certainly be true (e.g. when nobody on the team knows how arrays work in Bash), but switching comes at a cost: shells are still the most concise tool for starting programs and connecting them together. Switching to a language like Python requires setting up libraries on all target systems, and the resulting code for orchestrating processes still looks clumsy compared to a shell script.

Arrays: a reference

Under the hood arrays in Bash are actually implemented as linked lists. This is different from what most dynamic languages call "arrays", so be aware of that if you ever need to build large arrays.

Both arrays and associative arrays share a lot of functionality:

  • create them either with declare or local
  • list all keys in the array: ${!array[@]}
  • list all elements in the array: ${array[@]}
  • remove an element with unset: unset array[index]
  • get a specific element: ${array[elem]}
  • get the number of elements: ${#array[@]}
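
A quick tour of these operations on an associative array (keys and values are made up):

```shell
declare -A ages=([alice]=30 [bob]=25)

printf "%s\n" "${!ages[@]}"  # the keys (order is unspecified)
printf "%s\n" "${ages[@]}"   # the values
printf "%s\n" "${ages[bob]}" # one element: 25
printf "%s\n" "${#ages[@]}"  # number of elements: 2

unset 'ages[alice]'
printf "%s\n" "${#ages[@]}"  # 1
```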

There are some important differences though:

The special syntax for creating arrays (x=()) always creates a numerically indexed array:

$ x=()
$ declare -p x
declare -a x=()

The index of numerical arrays is always evaluated using Bash's rules for arithmetic:

$ i=0
$ x[i+1]=2
$ declare -p x 
declare -a x=([1]="2")

Since the index of a numerically indexed array can be calculated, Bash also offers syntax for appending to the array:

$ x+=(three)
$ declare -p x
declare -a x=([1]="2" [2]="three")

Arrays as linked lists

Arrays being implemented as linked lists can lead to some surprising behavior.
When removing elements from an array indices are not renumbered.

$ declare -a words=(a list of words)
$ declare -p words
declare -a words=([0]="a" [1]="list" [2]="of" [3]="words")
$ unset words[2]
$ declare -p words
declare -a words=([0]="a" [1]="list" [3]="words")
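
If you need contiguous indices again, rebuilding the array from its remaining values renumbers them from zero:

```shell
declare -a words=(a list of words)
unset 'words[2]'      # leaves a gap: indices 0, 1, 3

words=("${words[@]}") # rebuild from the values, closing the gap
declare -p words      # declare -a words=([0]="a" [1]="list" [2]="words")
```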

Arrays and export

Arrays cannot be exported to subshells, but Bash is not telling you this, choosing to silently fail instead:

$ declare -a numbers=(one two three)
$ export numbers # doesn't fail
$ bash -c 'declare -p numbers'
bash: line 1: declare: numbers: not found

While this is not ideal, exporting arrays is actually possible by serializing them first (with declare -p!).

$ export state=$(declare -p numbers)
$ bash -c 'eval "$state"; declare -p numbers'
declare -a numbers=([0]="one" [1]="two" [2]="three")

Bash: declare in depth, Part 2: quoting and eval

Quoting bash scripts correctly is considered difficult by many, and rightfully so if you don't know about Bash's builtin facilities for safely quoting code.

Why would you need to quote bash code? For meta-programming!

In total, bash has three ways for you to quote code:

  • printf using %q will quote any string suitable as input to Bash,
  • the Q operator in parameter expansion will quote a variable: ${var@Q}
  • declare can dump both functions and variables of all types as bash code!
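
The first two mechanisms look like this in practice (the @Q operator requires Bash 4.4 or newer):

```shell
tricky='a string with "quotes", $dollars and spaces'

printf "%q\n" "$tricky"     # escaped so it survives one round of shell parsing
printf "%s\n" "${tricky@Q}" # an equivalent quoted form

# both forms round-trip safely through eval
eval "copy=${tricky@Q}"
[[ "$copy" == "$tricky" ]] && printf "round-trip OK\n"
```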

Quoting with declare -p

Let's see what we can do with declare -p:

$ now() { date; }
$ declare -pf now
now () 
{ 
    date
}
$ words=(hello world)
$ declare -p words
declare -a words=([0]="hello" [1]="world")

So, declare -p dumps a variable and declare -pf dumps a function. The output can be fed into bash again:

$ bash -c 'now' # not defined, will error
bash: line 1: now: command not found
$ declare -pf now | bash -c 'eval "$(< /dev/stdin)"; now' # define function from stdin
Sat Oct 16 10:56:22 AM EEST 2021

Application: integrating with a user's shell

This is all fine and dandy, but what use is it?

Maybe you have noticed that many tools now modify your shell's configuration file, adding a line like:
eval "$(rbenv init -)". The first time I encountered this was actually when using ssh-agent.

The problem is this: your program needs to modify the environment in the shell, but it doesn't have access to the shell directly.

The solution: dump instructions for the top-level shell to interpret. Often these are dynamically generated.

Your new problem: how to do this safely, i.e. without causing unintended side-effects due to quoting errors, invalid code, etc.

This is where declare -p comes in handy: in your subprogram you can set up the environment as you need it and then dump what you need to be done in the parent shell on stdout.
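
As a sketch of the pattern (the mytool name and its variables are made up for illustration): the subprogram sets up its state and prints it with declare -p, and the user's shell evaluates the output:

```shell
# hypothetical subcommand: prints code for the parent shell to eval
mytool_init() {
  # set up the environment as this tool needs it...
  declare -x MYTOOL_HOME="$HOME/.mytool"
  declare -a MYTOOL_PLUGINS=(core extra)

  # ...then dump it as correctly quoted Bash code on stdout
  declare -p MYTOOL_HOME MYTOOL_PLUGINS
}

# the user's ~/.bashrc would contain: eval "$(mytool init)"
eval "$(mytool_init)"
printf "%s\n" "$MYTOOL_HOME"
```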

Quickly finding files

When working with many code repositories, I am often looking for a specific file (I know its name), but I don't know where exactly it is located in the filesystem (sometimes not even in which repository).

Modern developer tooling comes with a fuzzy finder for files in your project, but searching across project boundaries is either cumbersome (open all projects in your IDE/editor) or slow (using find).

GNU findutils comes with a little tool called locate. locate and its ill-named companion updatedb build a compressed database of filenames and provide lightning-fast search across filenames.

The catch: by default updatedb is set up to index your whole file system, which means you need to run it as root and it is going to be very slow, since the number of files is huge.

Luckily locate can be configured to use a different database, likewise updatedb can index only specific directories.

Let's create a script that allows us to quickly cd into the directory of a file found with locate. Since we want to cd somewhere, we need to define a shell function that does it for us, as a subshell cannot change the current directory of a parent shell.

Put the following into a file called find-file somewhere on your path and make it executable:

#!/usr/bin/env bash

# Build our search index
reindex() {
  # make sure the database directory exists
  mkdir -p "$HOME/.cache/locate"
  # index $HOME, ignoring .git and node_modules
  updatedb --output="$HOME/.cache/locate/db" \
           --localpaths="$HOME" \
           --findoptions="-name .git -prune -o -name node_modules -prune"
}

# Search for a file; if there's no index, build it
find-file() {
  [[ -e "$HOME/.cache/locate/db" ]] || reindex
  locate --database="$HOME/.cache/locate/db" "$@"
}

# Dump initialization code to stdout
init() {
  # this is the function we want to export to the user's shell
  cf() {
    local file=$(find-file "$@" | fzf)
    local dir=${file%/*}
    [[ -n "$dir" ]] && cd "$dir"
  }
  declare -pf cf
}

main() {
  case "$1" in
    -r|--reindex) reindex; shift;;
    -i|--init) init; exit 0;;
  esac

  find-file "$@"
}

main "$@"

Now we can test the initialization code:

$ find-file --init
cf () 
{ 
    local file=$(find-file "$@" | fzf);
    local dir=${file%/*};
    [[ -n "$dir" ]] && cd "$dir"
}

And after evaluating that, we can use cf to change directories:

$ eval "$(find-file --init)"
$ cf # launches file finder and puts you in the right directory

Bash: using regular expressions

Working with strings

Bash comes with a few built-in features, available during parameter expansion, for working with strings:

Generic find and replace: ${var//source/dest} replaces all occurrences of source in $var with dest

Removing prefixes and suffixes: ${var##prefix} removes the longest matching prefix from $var, where prefix can be any Bash pattern
(use % or %% to remove a suffix instead; a single # or % removes the shortest match rather than the longest). This is useful for working with paths:

p=/usr/local/lib/libexample.so
printf "filename: %s\n" "${p##*/}" # prints "filename: libexample.so"
printf "directory: %s\n" "${p%/*}" # prints "directory: /usr/local/lib"

But sometimes this is not enough.

Using regular expressions

Luckily Bash has you covered!

Bash supports POSIX extended regular expressions (probably already familiar to you from grep -E). Here's how the parts fit together:

  • the builtin [[ can be used to perform a regular expression match
  • the BASH_REMATCH array holds the match info (the full match plus any capture groups)

Let's use this to trim whitespace from the beginning and end of a string:

trim() {
  local text="$1"

  # perform the match; the capture group must end in a non-space
  # character, so trailing whitespace cannot become part of it
  [[ "$text" =~ ^[[:space:]]*(.*[^[:space:]])[[:space:]]*$ ]]

  # print the result of the first capture group
  printf "%s\n" "${BASH_REMATCH[1]}"
}

trim "$(printf "  \n hello\n\n")"
# hello

The important bit here is that [[ only treats the right-hand side of =~ as a regular expression if it is not quoted.

If you do need to quote a lot of special characters, the recommended way is to store the pattern in a variable and substitute it unquoted:

# pattern for parsing log lines like this: [INFO] [2021-10-14T14:50:00Z] Important message
log_pattern='\[([^]]+)\] \[([^]]+)\] (.*)'
[[ "[INFO] [2021-10-14T14:50:00Z] Important message" =~ $log_pattern ]]
declare -p BASH_REMATCH
# [0]="[INFO] [2021-10-14T14:50:00Z] Important message"
# [1]="INFO"
# [2]="2021-10-14T14:50:00Z"
# [3]="Important message"
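
Combining this with a while read loop gives a small log parser (the sample lines are made up):

```shell
log_pattern='\[([^]]+)\] \[([^]]+)\] (.*)'
declare -a levels=()

while IFS= read -r line; do
  [[ "$line" =~ $log_pattern ]] || continue # skip lines that don't match
  levels+=("${BASH_REMATCH[1]}")
  printf "level=%s time=%s message=%s\n" \
    "${BASH_REMATCH[1]}" "${BASH_REMATCH[2]}" "${BASH_REMATCH[3]}"
done <<'EOF'
[INFO] [2021-10-14T14:50:00Z] Important message
[WARN] [2021-10-14T14:51:00Z] Disk almost full
not a log line
EOF
```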

Bash: declare in depth, Part 1

If you run help declare in Bash, you will see that this builtin supports an overwhelming 14 different options for defining variables.

Actually, declare is one of a family of builtins, and you have used some of its cousins, like export and local, already.

Some of these options allow for powerful metaprogramming (like namerefs), others help you make your scripts more robust. Let's look into some of the more commonly useful options.


readonly variables

You can declare a variable as read-only (or constant) by using the -r option:

declare -r x=1
printf "x=%s\n" "$x" # => x=1
x=2                  # -bash: x: readonly variable

Since this is very common, it comes with a handy alias: readonly

readonly x=1

works just like the above.

exporting variables

I'm sure you have some variation of this line in your shell's configuration file:

export PATH="$HOME/.yarn/bin:$PATH"

Using export sets a flag on the given variable (e.g. PATH), which causes the shell to put this variable into the environment of any processes started by it.

A variable is also exported (for the invocation of a single command only) if you place the assignment right before the command:

bash -c 'printf "x=%s\n" "$x"'     # prints x=
declare -x x=1 # export x and set it to 1
bash -c 'printf "x=%s\n" "$x"'     # prints x=1
y=2 bash -c 'printf "y=%s\n" "$y"' # prints y=2
# y is undefined here

exporting functions

Running help declare you can learn about the -f option:

      -f        restrict action or display to function names and definitions

This allows us to export functions to subshells!

now() {
  date +%s
}
bash -c 'now' # bash: line 1: now: command not found
declare -xf now
bash -c 'now' # 1634108076

How does this work?

Bash puts the function definition into an environment variable that is exported to subprograms:

bash -c 'printenv | grep now'
# BASH_FUNC_now%%=() {  date +%s

Putting it all together

Combining readonly with export we can easily create little embedded DSLs that are interpreted directly by Bash.

The plan roughly looks like this:

  • create a script that will interpret our DSL
  • in that script, define all available DSL commands, make them readonly and export them
  • source the actual DSL program

Why mark exported functions as readonly? Well, if a user of the DSL unknowingly redefined a function that is vital for the DSL to work, it would wreak havoc on the whole script.
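
A quick sketch of the protection this buys us (target here is a stand-in for any DSL command): once a function is marked readonly, attempts to redefine it fail.

```shell
# a DSL command we want to protect and share with subshells
target() {
  printf "defining target: %s\n" "$1"
}

readonly -f target # the definition can no longer be replaced
export -f target   # and subshells can see it

bash -c 'target demo' # prints: defining target: demo

# trying to redefine it now fails:
# target() { :; } # bash: target: readonly function
```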


Let's build a tiny DSL emulating make, which could be useful for managing dependencies in your dotfiles.

This is how we would like to specify our dependencies and how to satisfy them:

# MiniMakefile
target rbenv:all needs rbenv rbenv-init

test rbenv 'which rbenv'
provide rbenv '
  curl -fsSL https://github.com/rbenv/rbenv-installer/raw/HEAD/bin/rbenv-installer | bash
'

test    rbenv-init 'grep -q "rbenv init" ~/.bashrc ~/.bash_profile'
provide rbenv-init 'printf "eval \"$(~/.rbenv/bin/rbenv init -)\"\n"'

Once we have our minimake implementation, we'll be able to run:

minimake rbenv:all

and it will automatically install rbenv and configure your shell if necessary.

We'll look into the implementation of minimake another time!

Bash: Indirectly referencing variables

Sometimes it comes in handy to modify a variable in a function by name.

Usually the first tool people reach for in this case is eval:

# sets a variable to a new value
setvar() {
  eval "$(printf "%s=%q" "$1" "$2")"
}
setvar x 2
printf "x=%s\n" "$x" # prints x=2

This works but making sure that everything is quoted correctly is cumbersome.

Bash supports a safer way for this: namerefs

To use a nameref you must declare a variable as a nameref using the -n argument.
After that, any changes made to the nameref are actually applied to the variable it is referring to.

Here's setvar written using namerefs:

setvar() {
  local -n varname="$1"
  varname="$2"
}

setvar x 3
printf "x=%s\n" "$x" # prints x=3

# it also supports arrays:
declare -a x=()
setvar 'x[0]' 3
setvar 'x[1]' hello
setvar 'x[2]' world
declare -p x
# prints: declare -a x=([0]="3" [1]="hello" [2]="world")

Adding directories to PATH-like variables

Besides PATH, there are other colon-separated variables that are used by various programs for lookup purposes, such as man's MANPATH or Ruby's RUBYLIB.

Often these are modified in your ~/.bashrc like this:

export PATH="$HOME/.yarn/bin:$PATH"

That works until you source your ~/.bashrc again, for example because you edited it: you'll end up with the same entry twice on your PATH.

Let's use namerefs to set PATH-like variables idempotently: calling add_to_path with the same path twice will not change the resulting path.

add_to_path() {
  local -n pathvar="$1"
  local dir="$2"
  # set IFS to : to split by colons
  local IFS=:
  local -a pathparts=($pathvar)

  for part in "${pathparts[@]}"; do
    if [[ "$part" == "$dir" ]]; then
      return 0 # already on the path, nothing to do
    fi
  done

  pathvar="$dir:$pathvar"
}

Trying it out in the shell, we can see that it works as expected:

$ P=a:b
$ add_to_path P first
$ [[ "$P" == "first:a:b" ]] && printf "OK\n"
OK
$ add_to_path P first
$ [[ "$P" == "first:a:b" ]] && printf "OK\n"
OK

hello, world

The Bash shell is used by many people every day as their main interface to their computer.

Yet, how many people actually know what Bash is capable of?

It is easy to learn the first 20% of Bash to do useful things and then stop learning.

Here I'll share some of the lesser-known features of Bash and how to integrate them into your workflow.

Stay tuned!