After writing shell scripts almost daily for the past 15+ years, I can tell you that much of the enterprise world relies on them. This is, in part, why I started sampo a few years ago: as a way to easily put your shell scripts behind an API.
I can make an API call like this:

```
% curl http://localhost:1042/example
This is an example of an external script.
It's running a script with three echo commands in it.
This could be any arbitrary code you want.
%
```
And here is the script it is running:

```shell
#!/usr/bin/env bash
set -e
set -u
set -o pipefail

echo -e "This is an example of an external script."
echo -e "It's running a script with three echo commands in it."
echo -e "This could be any arbitrary code you want."
```
The API has no problem running shell scripts because it is just shell code calling shell code. 🙂 Yes, you can do this with other languages, but none of them integrate as tightly as shell calling shell. And there are no dependencies, to boot.
A Real Life Example
Converting A Backup Shell Script Into An API Endpoint
Perhaps this is best illustrated with a real example. Let us pretend you have a shell script called `backup_blog.sh`. The shell script works wonders and is already set up and running on a systemd timer. But if you want to see what it is doing, you need to check the system itself with `systemctl status backup_blog`. The script also produces output to a log file that we could look at on-system or ship off to some other, more convenient place.
But you have been asked to gather information on legacy systems like the one the blog is running so it can be analyzed for an update. This is an effort to see what is in the environment, what is keeping it running, and how to migrate things elsewhere.
In its simplest form, you could run `sampo` on the node where the shell script lives. Just give it an endpoint accessible by the device querying it and you are done.
The code above:

- made a hardlink of the existing script and put it in
- added an endpoint in `sampo/sampo.conf` (you would normally edit this file directly)
- ran `socat` (the example assumed `sampo.sh` was already there)
Now, from a client that can access that node, you can simply call the new endpoint with `curl`.
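Assuming the endpoint was named `/backup_blog` (the actual name depends on what was added to `sampo.conf`), the call would look much like the earlier example:

```
% curl http://localhost:1042/backup_blog
```

The response is simply whatever the script prints, which leads to the next problem.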
Making The Script Return JSON
That kind of text would probably make an API client barf it right back at you. After some edits to the `backup_blog.sh` script, it can return something like this instead.
That is a best-case scenario and required editing the script in sort of a weird way, but it does work.
This script was edited so it echoes the opening/closing brackets of a JSON dictionary. Each step of the script is wrapped in an if/else clause and prints some more JSON depending on the result of the command.
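As a rough sketch of that editing style (the step commands, file paths, and key names below are invented for illustration; this is not the actual `backup_blog.sh`):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a script hand-edited to emit JSON.
# Note: no `set -e` here, so a failed step can still be reported.

backup_as_json() {
  echo "{"                                        # opening bracket of the JSON dictionary
  if tar -czf /tmp/blog.tar.gz -C /var/www blog 2>/dev/null; then
    echo '  "archive": "ok",'                     # step succeeded
  else
    echo '  "archive": "failed",'                 # step failed
  fi
  if cp /tmp/blog.tar.gz /backups/ 2>/dev/null; then
    echo '  "copy": "ok"'
  else
    echo '  "copy": "failed"'
  fi
  echo "}"                                        # closing bracket
}

backup_as_json
```

Every branch has to print well-formed JSON, trailing commas included, which is exactly the "sort of weird" editing mentioned below.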
For relatively simple scripts, this works OK, but it takes time, and you can still end up with edge cases where something prints to STDOUT/STDERR anyway and totally breaks the simulated JSON output.
What Is Next?
Next up for sampo is to tackle that: make a wrapper that takes any script and returns valid JSON from it. The trick here will be to just make `stdout:` and `stderr:` keys and populate those accordingly as one giant string each. This would allow the client consuming it to parse the response without problem, and then any massaging of the data could be done on the client's end.
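A minimal sketch of what such a wrapper could look like, assuming it captures the two streams in temp files and does only basic string escaping (the function name and exact keys are my own, not sampo's):

```shell
#!/usr/bin/env bash
# Sketch of a generic JSON wrapper: run any command, report its
# exit code, stdout, and stderr as one JSON object.
# Escaping below handles backslashes, quotes, and newlines only.

json_escape() {
  sed -e 's/\\/\\\\/g' -e 's/"/\\"/g' | awk '{printf "%s\\n", $0}'
}

json_wrap() {
  local out_file err_file rc
  out_file="$(mktemp)"
  err_file="$(mktemp)"
  rc=0
  "$@" >"$out_file" 2>"$err_file" || rc=$?
  printf '{"exit_code": %d, "stdout": "%s", "stderr": "%s"}\n' \
    "$rc" "$(json_escape <"$out_file")" "$(json_escape <"$err_file")"
  rm -f "$out_file" "$err_file"
}

json_wrap echo "hello from a script"
# → {"exit_code": 0, "stdout": "hello from a script\n", "stderr": ""}
```

Because the wrapper never touches the script itself, nothing the script prints can break the JSON, at the cost of the output being one big string per stream.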
Sampo is not production-ready. To quote someone special:
"Anyone who uses this in production might meet a gruesome end with a rusty fork."
There are a slew of caveats with `sampo`:

- there is no encryption or certificates of any kind
- the API does not meet the HTTP specification
- only `GET` requests are supported
What Is It Actually Good For Then?
For now, I see it as a useful piece of software for personal projects. Since `sampo` can run on Docker, Podman, Kubernetes, or in a local shell, you can deploy it anywhere. You can also deploy it quickly, as you just need these three things:
Maybe you have a NAS with a bunch of scripts you like to call occasionally. Drop `sampo` on there, and then make API calls to your scripts from any place you like.
If nothing else, it is a fun project to work on and maybe it will go somewhere someday.