
Automate Everything: LLMs and Bash Scripts Mean Saved Time
Published in development, technology, GenAI, automation
The landscape of software development is constantly evolving, and with the advent of sophisticated Large Language Models (LLMs), we’re seeing new possibilities emerge for automating tasks. However, relying on AI agents to execute complex workflows in real-time often proves to be a poor choice. These workflows are slow, costly, and unpredictable, with potential failures at any point. They are inherently non-deterministic, making them difficult to test and validate.
In contrast, there’s a far more reliable and efficient method for maximizing automation: using LLMs to create deterministic scripts. These scripts are predictable, repeatable, and can be thoroughly reviewed and tested before execution. You can simply automate tasks, ensuring reliability and control while avoiding the pitfalls of real-time AI agent workflows.
This approach leverages the LLM’s ability to understand code, create documentation, and generate code to produce predictable, repeatable scripts in languages like Bash, Python, or Node based on your actual project needs. It’s like having the ability to create your own robots on-the-fly when you need them. The idea of automation is not new, but the flexibility, ease, and speed to automate is. The key to doing it right is focusing the LLM on the creation phase, allowing you, the human, to review, test, and integrate a robust automation tool that performs the exact same steps every time it’s run. This puts you in the driver’s seat, empowered by AI-powered aides that make implementing sophisticated automation faster and safer than before.
There are, however, times when leveraging the “creative” aspect of LLMs can be put to great use. Create the automations as described above, but enable a human-in-the-loop experience: present a human (you, probably) with the final call and a way to modify, approve, or reject the LLM’s answer before the automation carries on.
Hacker Mentality Goes a Long Way
I love the idea of getting things done quickly, and getting rid of the boring or tedious aspects of work. I also love the idea that the best documentation is a (bash) script that does the work for me. Now I’m a fan of a good project setup doc, but if it involves copy/pasting commands in a terminal, or interfacing with weird GUIs that dramatically increase the chance of human error (I’m thinking of you, GCP console), then I’m less of a fan. And now, with the advent of higher quality LLMs, creating good READMEs and then turning them into automated scripts has literally never been easier.
Getting started is also very easy. Leverage your expertise and knowledge of the project and start with obvious pain points. Write a prompt for the LLM of your choice (I’ve had really good results from Google’s latest Gemini 2.5 Pro model), paste in as much (or as little) context as you want (e.g. source code, docs, git commit messages, etc.) and Bob’s your uncle. It’s really that easy. Be opinionated, maintain a high bar for quality (you don’t want to automate with mediocre, edge-case-riddled scripts), and iterate with the LLM while testing/running locally. It’s dirt cheap, fast, and rewarding to go from a tedious manual process to a robust automated script in a matter of minutes.
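For example, a quick way to pull together that context for pasting into a prompt might look like the sketch below. The paths, depth, and commit count are just illustrative; tune them to your own project:

```bash
#!/usr/bin/env bash
# Gather lightweight project context to paste into an LLM prompt.
# The paths, depth, and commit count are illustrative; tune for your project.
set -euo pipefail

{
  echo "## Recent commits"
  git log --oneline -n 20

  echo
  echo "## Project layout"
  find . -maxdepth 2 -type f -not -path './.git/*' | sort

  echo
  echo "## README"
  [ -f README.md ] && cat README.md || true
} > llm-context.txt

echo "Context written to llm-context.txt; paste it into your prompt."
```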
Personally, I lean toward Bash scripts when it’s just running on my or other devs’ machines. Bash has easy access to the plethora of applications across Linux/BSD/macOS, which are typically highly refined, opinionated, and well documented. Bash scripts are pretty portable, but if you’re looking to integrate automations into a CI/CD pipeline or fully automated environment, feel free to pick the language/framework that works for you. TypeScript project? TypeScript automations. Python? Well, use Python then. It’s really that simple. And for extra points, get the LLM to spit out documentation about each script, which immortalizes why it exists and how to use it, and becomes a great building block of context for extending or creating new automations in the future.
Get started, like now
Get started where it makes sense for you, your team, and your project. But if you’re out of ideas, I’ve found a lot of success with the various scripts I use when writing this blog. For starters, a personal favourite is automating git commit messages. If you’re interested, you can see the source in the `_scripts/auto-commit.sh` file in this blog’s public repo on GitHub. The magic is just how simple it is. The script:
- grabs the changes and does some simple text processing (e.g. escaping, minimizing spacing, etc.)
- passes that and some other details (like the past 10 commit messages, the `ls` output, etc.) to the LLM with a pretty straightforward prompt
- gets a response back and then either automatically commits the message/changes, or waits for the human (me) to edit, approve, or reject the message, depending on the flag used when calling the script (a rough sketch of this flow follows below)
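For illustration, here is a minimal sketch of that flow. The `call_llm` command is a placeholder for however you reach your model of choice, and the prompt and approval handling are simplified; the real `_scripts/auto-commit.sh` differs in the details:

```bash
#!/usr/bin/env bash
# Minimal sketch of an auto-commit flow. `call_llm` is a placeholder for
# whatever CLI/API you use to reach your model; it is not a real command.
set -euo pipefail

AUTO=false
[[ "${1:-}" == "--auto" ]] && AUTO=true

# 1. Grab the staged changes and some repo context.
diff="$(git diff --staged)"
recent="$(git log --oneline -n 10)"
files="$(ls)"

# 2. Ask the LLM for a single-line commit message.
prompt="Write a concise, one-line commit message for this diff.
Recent commits for style:
${recent}
Files in the repo:
${files}
Diff:
${diff}"
message="$(call_llm "${prompt}")"

# 3. Either commit automatically, or let the human edit/approve/reject.
if $AUTO; then
  git commit -m "${message}"
else
  echo "Suggested message: ${message}"
  read -rp "Use it? [y/e/n] " answer
  case "${answer}" in
    y) git commit -m "${message}" ;;
    e) git commit -e -m "${message}" ;;  # open the editor pre-filled with the suggestion
    *) echo "Aborted." ;;
  esac
fi
```

The important part is that the steps themselves are fixed and repeatable; only the message text comes from the model.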
I’ve always been pretty lazy, and my commit message “standard” is very lax, so the LLM’s responses are actually almost always better than what I’d write myself. I rarely need to edit a message because of low quality, but on occasion the LLM might inject backticks or unsupported markdown/formatting that needs to be removed. Having a way to manually intervene where it makes sense is clutch, and turns a 90% success rate into 100% at the cost of a couple of human seconds every time it’s called. However, if I’m feeling extra cheeky, I pass the `--auto` flag and save those couple of seconds.
The benefits from even that simple script are multiple. I commit more frequently without being punished (my commit messages are done in a second, without using any mental resources lol), I commit better messages (useful for git blame, searching, and communicating with others), and I get a feeling of joy knowing I solved a problem that couldn’t really be solved before LLMs, at least not this easily, quickly, and cheaply. And anyone else on the project can use it too!
Maximizing Automation: Identifying Opportunities and Building Pipelines
LLMs don’t just help write individual scripts; they can help you think bigger about automation. To maximize the benefits, consider these points:
- Identify More Automation Candidates: Beyond the obvious repetitive tasks, LLMs can help uncover hidden automation opportunities. Describe your daily or weekly manual processes to the LLM, provide your project’s source code/documentation, and prompt it to suggest steps that could be scripted. Provide examples of data manipulation, file system operations, or external API interactions you perform manually. The LLM can often see patterns and suggest ways to break down larger, semi-manual tasks into smaller, scriptable components.
- Break Down Complex Workflows: Automating everything is probably not the right expectation. Start small and build out robust scripts in the same spirit exemplified by Unix programs (do small things well). LLMs excel at taking a high-level goal and breaking it down into a sequence of logical steps. You can then work with the LLM to generate individual scripts for each step, or when it makes sense, an orchestrating script (e.g., a main Python script calling Bash utilities); see the sketch after this list. This allows you to build complex, end-to-end automation pipelines piece by piece.
- Build Your Automation Toolkit: Think of the scripts generated by the LLM as building blocks. Over time, you can create a collection of reliable, deterministic scripts for common tasks (image processing, commit management, deployment, data backups, reporting). LLMs can help you document this toolkit and even suggest ways to combine existing scripts for new automation needs.
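To make the orchestrating-script idea concrete, here is a small Bash variant of it. The sub-script names (`fetch_data.sh`, `transform.sh`, `publish.sh`) are purely hypothetical placeholders for steps in your own pipeline:

```bash
#!/usr/bin/env bash
# Hypothetical orchestrator: each step is its own small, testable script.
# The sub-script names below are placeholders, not real files in any repo.
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

"${SCRIPT_DIR}/fetch_data.sh"   # step 1: pull raw inputs
"${SCRIPT_DIR}/transform.sh"    # step 2: deterministic processing
"${SCRIPT_DIR}/publish.sh"      # step 3: push the results somewhere

echo "Pipeline finished successfully."
```

Each sub-script stays small and testable on its own, in the Unix spirit mentioned above.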
By using LLMs not just for coding single tasks, but for identifying opportunities and designing interconnected systems of scripts, you can significantly increase the scope and depth of your automation, likely beyond what’s obvious to any one person. However, that’s not to say you have to do so; you could also go in with an idea in mind and an opinionated way of executing it. Ultimately, use common sense and your own expertise to guide what you automate.
Doing It Right: Ensuring Reliability, Security, and Maintainability
Generating a script is only the first step. To truly maximize automation effectively, you need to ensure the resulting scripts are reliable, secure, and maintainable. This is where the human element remains critical, augmented by the LLM’s capabilities:
- Prioritize Deterministic Tasks: As discussed, the “right” way often means focusing on tasks where the outcome should be consistent every time. Use LLMs to generate scripts for processes with clear inputs and expected outputs (like file transformations, data migrations, structured content publishing). Reserve non-deterministic AI execution for tasks where variability is acceptable (like generating creative content or initial drafts), that is, typically low-risk, fail-safe processes.
- Rigorous Review and Testing: Never run AI-generated code blindly, especially for critical tasks. Treat the LLM’s output as a strong suggestion or a first draft. Carefully review the script’s logic, ensure it handles edge cases (which LLMs can often help you think through), and test it thoroughly in a safe environment before deploying it to production. LLMs can also assist in writing unit or integration tests for the generated scripts; see the test sketch after this list.
- Address Security Concerns: AI models can sometimes generate code that is less secure or contains vulnerabilities. If your automation involves system access, API keys, or sensitive data, scrutinize the generated script with security in mind. Ask the LLM to review its own code for potential security issues or suggest more secure coding practices. The responsibility for security ultimately rests with you, and so does your reputation.
- Ensure Maintainability: LLM-generated code can sometimes be less readable or not adhere to your team’s coding standards. Refine the generated scripts – add comments (the LLM can help here too), refactor confusing sections, and ensure consistency with your existing codebase. A script you can’t easily understand or modify down the line isn’t truly maximizing efficiency in the long run.
- The Human as Architect and Editor: Doing it right means recognizing that your role shifts. You become less of a manual executor and more of a process architect, prompt engineer, code reviewer, and system orchestrator. Your expertise is needed to define the problem, guide the AI, validate the solution, and maintain the resulting automation infrastructure.
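As a taste of what testing a generated script can look like, here is a minimal smoke test using bats-core. The script name (`my_script.sh`) and the behaviour being checked are assumptions for illustration only:

```bash
#!/usr/bin/env bats
# Minimal bats-core smoke tests for a hypothetical generated script.
# The script name and expected behaviour are illustrative assumptions.

setup() {
  TMP_DIR="$(mktemp -d)"
}

teardown() {
  rm -rf "${TMP_DIR}"
}

@test "exits cleanly when given an empty directory" {
  run ./my_script.sh "${TMP_DIR}"
  [ "$status" -eq 0 ]
}

@test "fails loudly when given a missing directory" {
  run ./my_script.sh "${TMP_DIR}/does-not-exist"
  [ "$status" -ne 0 ]
}
```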
More Real-World Examples
I’ve created a few other automations for managing this blog, like:
- Image Optimization (`_scripts/manage_images.sh`): This is a perfect deterministic task. “Doing it right” here means generating the script, testing it on various image types and edge cases (corrupted files?), and ensuring the output quality (`JPEG_MAX`, `OPTIPNG_LEVEL`, `PNGQUANT_QUALITY`) meets my standards. A hedged sketch of this kind of pass follows after this list.
- Site Management (`_scripts/manage_site.sh`): Generating category pages and sitemaps is a deterministic task. “Doing it right” involves ensuring the script correctly parses the Jekyll front matter (handling potential errors), generates correct URLs and file paths (`permalink`, `baseurl`), and properly cleans up only obsolete files, not current ones.
- Auto-commit (`_scripts/auto_commit.sh`): As mentioned earlier, this is a non-deterministic example. “Doing it right” here could mean having the script suggest the commit message and require human confirmation before committing, or setting a high confidence threshold if automating the commit completely. The human (you) decides the acceptable level of AI autonomy based on the task’s criticality.
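The real `manage_images.sh` isn’t reproduced here, but a minimal sketch of this kind of deterministic image pass, assuming `jpegoptim`, `optipng`, and `pngquant` are installed and that the quality variables mean what their names suggest, could look like this:

```bash
#!/usr/bin/env bash
# Sketch of a deterministic image-optimization pass over a directory.
# Assumes jpegoptim, optipng, and pngquant are installed; the variable
# values and directory layout are illustrative, not the blog's real script.
set -euo pipefail

IMG_DIR="${1:-assets/images}"
JPEG_MAX=85               # maximum JPEG quality to keep
OPTIPNG_LEVEL=2           # optipng optimization level (-o2)
PNGQUANT_QUALITY="65-90"  # acceptable pngquant quality range

# Lossy-but-bounded JPEG compression.
find "${IMG_DIR}" -type f \( -iname '*.jpg' -o -iname '*.jpeg' \) \
  -exec jpegoptim --max="${JPEG_MAX}" --strip-all {} \;

# Palette reduction, then lossless recompression for PNGs.
find "${IMG_DIR}" -type f -iname '*.png' | while read -r png; do
  pngquant --quality="${PNGQUANT_QUALITY}" --skip-if-larger --force \
    --ext .png "${png}" || true   # skip files pngquant can't improve
  optipng -o"${OPTIPNG_LEVEL}" -quiet "${png}"
done
```

Run it on a copy of your images first; the whole point is that the same inputs produce the same outputs every time, so you can verify the results once and trust the script after that.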
Take Away
LLMs offer unprecedented potential to maximize automation in software development. By focusing their power on the creation of reliable, deterministic scripts, you can build robust workflows that save significant time and reduce manual errors. Doing it right means strategically identifying appropriate tasks, rigorously reviewing and testing the AI-generated code, considering security and maintainability, and embracing your elevated role as the architect and guardian of your automated systems. Start automating smarter, and watch your productivity grow.