User Guide — Shell Sentinel
Overview
Shell Sentinel is a terminal-native companion that maintains a persistent SSH/SFTP session to a remote server and lets you manage infrastructure through natural language. The assistant interprets your intent, suggests safe commands, and documents every action.
Requirements
- Colour-capable terminal (recommended: `xterm-256color`).
- Python 3.10 or newer.
- Access to a server exposing SSH/SFTP.
Installation
- Create a virtual environment:
python3 -m venv .venv
source .venv/bin/activate
- Install dependencies:
make install
Language configuration
The app reads `SMART_AI_SYS_ADMIN_LOCALE`. If the variable is not defined, it falls back to the system locale (default: English).
export SMART_AI_SYS_ADMIN_LOCALE=en # English
export SMART_AI_SYS_ADMIN_LOCALE=es # Spanish
export SMART_AI_SYS_ADMIN_LOCALE=de # German
Launching the app
make run
A retro welcome screen appears and closes automatically after 5 seconds or when you press any key.
Interface layout
- Conversation panel: renders Markdown responses from the assistant.
- Input area: multi-line editor; submit with `Ctrl+S` by default.
- Footer: shows the SSH status, active provider/model and thinking state.
Core commands
Commands and aliases are available in English, Spanish and German.
- `/connect <host> <user> <password|key_path> [port]` — open the session (default port 22).
- `/disconnect` — close the active connection.
- `/help` — show the command summary.
- `/status` — inspect agent, provider, model and streaming status.
- `/exit` — display the confirmation dialog before quitting.
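A typical first session combines these commands. The host, user and key path below are placeholders for illustration only; substitute your own values:

```
/connect 203.0.113.10 ubuntu ~/.ssh/id_ed25519
/status
/disconnect
```

Omitting the final argument uses the default port 22.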
Working with the assistant
Type natural-language instructions. When the message is not a slash command, the Strands agent processes it. Examples:
- List the top CPU processes
- Upload /tmp/script.sh to /home/ubuntu/bin/script.sh
The agent reuses the active SSH/SFTP session to execute remote commands or transfer files and records every action in the logs.
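As a rough sketch of what that looks like in practice, a request such as "List the top CPU processes" usually resolves to an ordinary process listing on the remote host. The exact command is chosen by the agent and may differ; the line below is only one plausible suggestion:

```
ps aux --sort=-%cpu | head -n 10
```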
Configuration reference
- `conf/app_config.json` stores styling, prompts and shortcuts using placeholders like `{{ui.output_panel.title}}` that resolve per locale.
- Translations live in `conf/locales/<lang>/strings.json`. Add new text with `_("key")` in the codebase and provide entries for every supported language.
- `conf/agent.conf` defines the LLM provider, tool options and timeouts. Copy `conf/agent.conf.example` before tweaking it (see the snippet after this list). LM Studio requires `lms server start`; Cerebras expects `CEREBRAS_API_KEY` (or `api_key_env`) and leverages the official SDK with SSE streaming. Use `remote_command.max_output_chars` to limit how much stdout/stderr gets forwarded to the agent.
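A minimal provider setup, assuming the file names above; the editor step and the key placeholder are illustrative only:

```
cp conf/agent.conf.example conf/agent.conf   # start from the shipped template
$EDITOR conf/agent.conf                      # pick the provider, tool options and timeouts
export CEREBRAS_API_KEY="<your key>"         # Cerebras: or the variable named by api_key_env
# lms server start                           # LM Studio: start the local server instead
```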
Troubleshooting
- Colours look wrong: verify `TERM` and switch to `xterm-256color` if necessary (see the snippet after this list).
- Assistant not responding: review `conf/agent.conf`, provider credentials and `logs/app.log`.
- "No active SSH connection": run `/connect` before asking the agent to execute remote actions.
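The first two checks can be run from a local shell; `logs/app.log` is the path referenced above:

```
echo "$TERM"                  # confirm the terminal type in use
export TERM=xterm-256color    # retry with a 256-colour profile for this session
tail -f logs/app.log          # watch the log while reproducing the issue
```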
Additional resources
- Contributor handbook: key policies for documentation, localisation and release cadence.
- Product overview manual: architecture, stack highlights and operational responsibilities.