Stavros Korokithakis Details Professional LLM Workflow for Software Development

Developer Stavros Korokithakis outlines a concrete system for writing software using LLMs, emphasizing terminal-based tools and iterative refinement. His approach treats the model as an integrated coding partner, not a conversational chatbot, focusing on practical command-line implementation.

Veteran developer Stavros Korokithakis has published a detailed breakdown of how to integrate Large Language Models into a professional software development workflow. The guide moves beyond simple prompting, presenting a structured, tool-assisted methodology that treats the LLM as a core component of the IDE.

The core shift advocated by Korokithakis is treating the LLM as a direct component of the development environment, accessible via command-line tools for speed and precision. He details a setup using Ollama for local model inference paired with a custom script, arguing this bypasses the latency and interface limitations of web-based chat clients. This local, terminal-centric approach is framed not as a novelty but as a fundamental productivity upgrade for routine coding tasks.

His published workflow is methodical. It begins with using the LLM to generate an initial code skeleton or solve a well-defined sub-problem. The developer then immediately transitions to a traditional edit-compile-debug loop, using the LLM iteratively to explain errors, suggest fixes, or refactor code. The key, he stresses, is rapid iteration—making small, verifiable requests and integrating the output directly into the working codebase without treating the model's first response as final.

What Happened: A Toolchain, Not Just a Prompt

Korokithakis has publicly documented his specific toolchain and mental model. He runs models like CodeLlama or DeepSeek-Coder locally using Ollama, interfacing with them via a lightweight Python wrapper script. This script handles context management and sends prompts directly from the terminal, allowing output to be piped directly into files or other Unix tools.
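His actual wrapper script is not reproduced in this summary, but the described pattern can be sketched with a minimal stand-in. The sketch below assumes Ollama's documented local HTTP endpoint (`/api/generate` on its default port 11434); the script name, model choice, and context handling are illustrative, not Korokithakis's exact code.

```python
#!/usr/bin/env python3
"""Minimal terminal LLM wrapper -- a sketch, not the author's actual script."""
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint


def build_payload(model: str, prompt: str, context: str = "") -> dict:
    """Assemble the request body, prepending any piped-in file context."""
    full_prompt = f"{context}\n\n{prompt}" if context else prompt
    return {"model": model, "prompt": full_prompt, "stream": False}


def query(model: str, prompt: str, context: str = "") -> str:
    """POST to the local Ollama server and return the generated text."""
    data = json.dumps(build_payload(model, prompt, context)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Typical terminal usage (requires a running Ollama server), matching the
# pipe-friendly style described above:
#   cat utils.py | ./llm.py "Write pytest tests for these functions" > test_utils.py
```

Because the script reads from stdin and writes to stdout, it composes with ordinary Unix tools, which is the point of the terminal-centric design.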

The practical use cases he enumerates are granular and developer-centric: generating boilerplate code for new file structures, writing unit tests for existing functions, explaining complex error messages from a compiler or linter, and refactoring code for clarity or performance. He explicitly avoids using the LLM for high-level architectural design or novel algorithm creation, positioning it instead as an accelerant for the mechanical aspects of coding.
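The use cases above map naturally onto a small set of reusable prompt templates. The wording below is illustrative; the article does not publish the exact prompts Korokithakis uses.

```python
# Illustrative prompt templates for the micro-tasks enumerated above.
TEMPLATES = {
    "tests": "Write pytest unit tests for the following function:\n{code}",
    "explain": "Explain this compiler/linter error and suggest a fix:\n{error}",
    "refactor": "Refactor this code for clarity without changing behavior:\n{code}",
    "boilerplate": "Generate boilerplate for a new {kind} module named {name}.",
}


def render(task: str, **fields: str) -> str:
    """Fill a template; raises KeyError for unknown task names."""
    return TEMPLATES[task].format(**fields)
```

Keeping prompts as named templates makes each request small and repeatable, which supports the rapid, verifiable iteration the workflow depends on.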

Why This Matters: The Shift from Chat to Integration

This matters because it represents a maturation of LLM-assisted development. Moving the interaction from a separate browser tab into the terminal and editor signifies a transition from a disruptive, conversational tool to an integrated, workflow-native agent. The reduction in friction—no copying and pasting, no switching contexts—directly impacts adoption velocity and daily utility.

For engineering teams, the implication is that LLM proficiency may soon be less about crafting the perfect prompt and more about configuring and utilizing a local, low-latency model service as a standard development dependency. It pushes the value proposition beyond code generation into continuous, in-the-flow assistance for debugging, documentation, and testing. The efficiency gains are found in the aggregation of micro-tasks, not in outsourcing entire feature development.

The People and Competitive Context

Korokithakis is a seasoned software engineer and blogger whose practical guides have garnered significant attention on platforms like Hacker News. His approach sits within a broader movement of developers optimizing local LLM tooling, which contrasts with the cloud-based, enterprise-focused AI coding assistants like GitHub Copilot Enterprise or the newly launched Claude Code.

The competitive landscape is bifurcating. On one side are the integrated, commercial SaaS platforms (GitHub Copilot, Amazon Q Developer, Tabnine). On the other is a growing ecosystem of open-source models (from Meta, Mistral, DeepSeek) and local orchestration tools (Ollama, LM Studio, Continue.dev). Korokithakis's workflow champions the latter, prioritizing privacy, cost control, and customization over seamless cloud integration. This reflects a significant developer-led trend towards owning the AI toolchain.

What Happens Next: Specialization and Automation

The next logical step from this foundational workflow is increased specialization and automation. We can expect more developers to publish and share their finely tuned wrapper scripts and configuration profiles for specific languages or frameworks. The community will likely develop a set of best practices for context management—how much of the existing codebase to send to the model for optimal results.
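One possible context-management heuristic can be sketched as follows: pack the most recently modified files into the prompt until a rough token budget is exhausted. The 4-characters-per-token estimate and the default budget are assumptions for illustration, not established best practice.

```python
# Sketch of a context-selection heuristic: newest files first, capped by a
# rough token budget. Both the budget and the chars-per-token ratio are
# assumptions, not published figures.
import os


def pick_context(paths: list[str], budget_tokens: int = 2000) -> list[str]:
    """Return file paths, newest first, whose combined estimated size fits."""
    chosen, used = [], 0
    for path in sorted(paths, key=os.path.getmtime, reverse=True):
        cost = os.path.getsize(path) // 4  # ~4 chars per token, very rough
        if used + cost <= budget_tokens:
            chosen.append(path)
            used += cost
    return chosen
```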

Furthermore, this local, scriptable approach is the necessary precursor to more advanced autonomous agent workflows. A reliable, fast local LLM endpoint is the engine that allows for the creation of agents that can perform multi-step development tasks, like running generated code, interpreting test results, and iterating without human intervention. Korokithakis’s method provides the stable, low-level pipeline upon which those higher-order systems can be built. Watch for MCP (Model Context Protocol) servers and tools that plug directly into this local paradigm, expanding the LLM's access to the developer's environment.
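The agent loop described above—generate code, run it, feed failures back, iterate—can be sketched as a small driver. This is a hypothetical illustration: the `llm` callable is a stub standing in for a local model endpoint such as Ollama, and the file name and iteration budget are arbitrary.

```python
# Hypothetical sketch of a multi-step agent loop: generate, test, iterate.
import subprocess
from typing import Callable


def agent_loop(llm: Callable[[str], str], task: str, test_cmd: list[str],
               max_iters: int = 5) -> str:
    """Iterate until the test command passes or the budget is exhausted."""
    prompt = task
    code = ""
    for _ in range(max_iters):
        code = llm(prompt)
        with open("candidate.py", "w") as f:
            f.write(code)
        result = subprocess.run(test_cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return code  # tests pass: stop iterating
        # Feed the failure output back into the next prompt.
        prompt = f"{task}\n\nPrevious attempt failed:\n{result.stderr}"
    return code
```

The human-free inner loop only works because the model endpoint is local and fast; a round trip through a web chat interface would make this kind of automation impractical.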

Source and attribution

Hacker News: "How I write software with LLMs"
