Converting Your Shell Scripts To An API Using Sampo

sampo.sh puts your shell scripts into an API

After writing shell scripts almost daily for the past 15+ years, I can tell you that much of the enterprise world relies on them.  That is, in part, why I started sampo a few years ago--as an easy way to put your shell scripts behind an API.

I can make an API call like this:

% curl http://localhost:1042/example
This is an example of an external script.
It's running a script with three echo commands in it.
This could be any arbitrary code you want.
%

And here is the script it is running:

#!/usr/bin/env bash
set -e
set -u
set -o pipefail

echo -e "This is an example of an external script."
echo -e "It's running a script with three echo commands in it."
echo -e "This could be any arbitrary code you want."

The API has no problem running shell scripts because it is just shell code calling shell code. 🙂 Yes, you can do this with other languages, but you lose the tight integration you get with shell on shell.  And there are no dependencies, to boot.
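For reference, the endpoint-to-script mapping lives in sampo.conf.  I would expect the /example endpoint above to be wired up with a line like this (same syntax as the backup example later in this post; the script filename here is my assumption):

match_uri '^/example$' run_external_script "${SAMPO_BASE}"/scripts/example.sh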

A Real-Life Example

Converting A Backup Shell Script Into An API Endpoint

Perhaps this is best illustrated with a real example.  Let us pretend you have a shell script called backup_blog.sh.  The script works wonders and is already set up and running on a systemd timer.  But if you want to see what it is doing, you need to check the system itself with systemctl status backup_blog.  The script also writes output to a log file that you could read on-system or ship off somewhere more convenient.
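For context, a minimal sketch of what that systemd setup might look like (the unit names, paths, and schedule are assumptions, not copied from the real system):

# /etc/systemd/system/backup_blog.service (hypothetical)
[Unit]
Description=Back up the blog

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup_blog.sh

# /etc/systemd/system/backup_blog.timer (hypothetical)
[Unit]
Description=Run backup_blog.sh nightly

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target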

But you have been asked to gather information on legacy systems like the one the blog is running so it can be analyzed for an update.  This is an effort to see what is in the environment, what is keeping it running, and how to migrate things elsewhere.

In its simplest form, you could run sampo on the node where the shell script lives.  Just give it an endpoint accessible by the device querying it and you are done.

% ls -l
-rwxr-xr-x  1 root  wheel  12476 Mar 11 08:01 backup_blog.sh
% mkdir -p sampo/scripts # assume sampo/sampo.sh exists
% ln backup_blog.sh sampo/scripts/backup_blog.sh
% cat > sampo/sampo.conf <<'EOF'
match_uri '^/backup$' run_external_script "${SAMPO_BASE}"/scripts/backup_blog.sh
EOF
% socat TCP-LISTEN:1042,reuseaddr,pf=ip4,fork system:sampo/sampo.sh &
%
put an existing script in sampo/scripts, add an endpoint to sampo.conf, run sampo.sh via socat

The code above:

  • made a hardlink of the existing script and put it in sampo/scripts/
  • added an endpoint in sampo/sampo.conf via a heredoc, so ${SAMPO_BASE} lands in the file literally (you would normally just edit the file directly)
  • started sampo/sampo.sh with socat in the background, listening on all interfaces so other machines can reach it (the example assumed sampo.sh was already there)
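
If you want to sanity-check the endpoint definition, the config file should now contain exactly the line fed through the heredoc:

% cat sampo/sampo.conf
match_uri '^/backup$' run_external_script "${SAMPO_BASE}"/scripts/backup_blog.sh
%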

Now from a client that can access that node, you can simply call

user@laptop $ curl http://bloghost.internal.net:1042/backup

/var/www/blog ~
+ sudo systemctl is-active myblog
- Exporting content
+ Exporting content
A SystemError occurred.
~
mysqldump: Couldn't execute 'FLUSH TABLES': Access denied; you need (at least one of) the RELOAD or FLUSH_TABLES privilege(s) for this operation (1083)

user@laptop $
the API call worked, but the shell script itself failed (and did not return machine-readable, structured data)

Making The Script Return JSON

Text like that would probably make any API client barf it right back at you.  After some edits to the backup_blog.sh script, it can return something like this instead.

user@laptop $ curl http://bloghost.internal.net:1042/backup
{
	"backups": {
		"database": true,
		"RsyncContentDir": true,
		"RsyncSqlDumps": true,
		"RsyncGhostExport": "failed"
	}
}
user@laptop $
backup_blog.sh is edited to return JSON instead of STDOUT/STDERR

That is a best-case scenario and required editing the script in sort of a weird way, but it does work.

echo "{"
echo '  "backups": {'
...
...
if rsync -rltd user@bloghost:/var/www/ /backups >/dev/null 2>&1; then
  echo '    "RsyncGhostExport": "successful"'
else
  echo '    "RsyncGhostExport": "failed"'
fi
...
...
echo "  }"
echo "}"
Ultimately, this is quite hacky--echoing bits of JSON and stitching them together.

The script was edited so it echoes the opening and closing braces of a JSON object.  Each step of the script is wrapped in an if/else clause that prints a bit more JSON depending on the result of its command.

For relatively simple scripts this works OK, but it takes time, and you can still hit edge cases where something prints to STDOUT/STDERR anyway and totally breaks the simulated JSON output.
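For instance, here is a contrived sketch of that failure mode: one command writing to STDOUT without a redirect is enough to interleave noise with the hand-rolled JSON.

echo "{"
echo '  "backups": {'
# rsync's --stats summary goes to STDOUT, so it lands in the middle
# of the "JSON" document and breaks any parser reading it
if rsync -rltd --stats user@bloghost:/var/www/ /backups; then
  echo '    "RsyncContentDir": true'
fi
echo "  }"
echo "}"

The fix is redirecting both streams (>/dev/null 2>&1) on every command, and it only takes one miss to break the output.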

What Is Next?

Next up for sampo is to tackle exactly that: a wrapper that takes any script and returns valid JSON from it.  The trick will be to add stdout and stderr keys and populate each one as one giant string.  A client consuming the endpoint could then parse the response without problems, and any massaging of the data could happen on the client's end.
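
Here is a minimal sketch of what that wrapper could look like (nothing like this ships with sampo yet; json_wrap.sh and its escaping helper are hypothetical names, and the escaping only covers the usual offenders):

#!/usr/bin/env bash
# json_wrap.sh (hypothetical): run any command and report its output as JSON
set -u

json_escape() {
  # escape the characters that most often break naive JSON strings
  local s=$1
  s=${s//\\/\\\\}      # backslash first, before adding more of them
  s=${s//\"/\\\"}      # double quote
  s=${s//$'\n'/\\n}    # newline
  s=${s//$'\t'/\\t}    # tab
  printf '%s' "$s"
}

errfile=$(mktemp)
out=$("$@" 2>"$errfile")   # capture STDOUT; divert STDERR to a temp file
rc=$?
err=$(<"$errfile")
rm -f "$errfile"

printf '{"exit_code": %d, "stdout": "%s", "stderr": "%s"}\n' \
  "$rc" "$(json_escape "$out")" "$(json_escape "$err")"

Called as json_wrap.sh ./backup_blog.sh, even the mysqldump failure from earlier would come back as a parseable JSON string instead of raw text.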

Caveats

Sampo is not production-ready.  To quote someone special:

"Anyone who uses this in production might meet a gruesome end with a rusty fork."

There are a slew of caveats with sampo 1.x.x:

  • there is no encryption or certificates of any kind
  • the API does not meet the HTTP specification
  • only GET requests are supported

What Is It Actually Good For Then?

For now, I see it as a useful piece of software for personal projects.  Since sampo can run on Docker, Podman, Kubernetes, or in a local shell, you can deploy it anywhere.  Deployment is also quick, since you only need these three things (a container sketch follows the list):

  • sampo.sh
  • sampo.conf
  • scripts/
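
As a sketch of the container case (the image choice, port mapping, and paths here are mine, not an official sampo image):

% docker run --rm -p 1042:1042 -v "$PWD":/sampo alpine:3.19 \
    sh -c 'apk add --no-cache bash socat >/dev/null &&
           socat TCP-LISTEN:1042,reuseaddr,fork system:/sampo/sampo.sh'

This assumes sampo.sh is executable and that sampo.conf and scripts/ sit next to it in the current directory.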

Maybe you have a NAS with a bunch of scripts you like to call occasionally.  Drop sampo on there, and then call your scripts over the API from anywhere you like.

If nothing else, it is a fun project to work on and maybe it will go somewhere someday.