AI-Powered Development Workflow
From Prompt to Slash Command
Recently I transformed my prompt for developing a new feature or bugfix into a slash command. The prompt used to describe the process informally; now the whole process is captured as a slash command. I think this is the right direction, but the process is still evolving, and I will keep trying to improve every phase of it.
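For context, a Claude Code slash command is just a Markdown prompt file under .claude/commands/ that can reference $ARGUMENTS from the invocation. A minimal sketch of what mine looks like (the file name and phase wording here are illustrative, not the exact plugin contents):

```markdown
<!-- .claude/commands/super-dev.md (hypothetical sketch) -->
Develop the following feature or bugfix: $ARGUMENTS

Follow these phases in order:
1. Requirement refinement: iterate with me until the real problem is clear.
2. Proposal generation: offer several options with risks and trade-offs.
3. Comprehensive research: gather relevant docs and prior art.
4. Design & specification: produce a holistic, modular spec.
5. Execution & review: implement, test, then code-review.
6. Cleanup: remove scratch files, then commit and push.
```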
Requirement Refinement
Before the AI era, developing new software involved multiple steps from kickoff to delivery. At the onset of any software project, the first task is to understand the requirements. Users or clients may ask for a feature, but under the hood they just want a problem solved. We must dig beneath the surface to find the real problem, and the same applies to vibe coding: when we ask an AI to implement something, we may not actually know exactly what we want. So the first step is requirement refinement, looking through the surface to the essence. I know of some frameworks that help identify the real requirement, such as SAP design thinking, and there are surely others that achieve the same goal. This phase involves multiple iterations between the user and the AI to pinpoint where the real issue lies.
Proposal Generation
After the requirement is clear, the next step is to ask the AI to propose several options. In real projects there is usually more than one path to a goal, and working with AI we should follow the same principle. The proposals may focus on different aspects, so their specifications will also vary. In this phase, also ask the AI to assess each proposal and analyze its risks and trade-offs: risk type, impact, scope, costs, benefits, and resolution. Clearly laying out the key pros and cons makes it easy to choose the option that fits best.
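To keep the assessments comparable across proposals, it helps to fix the fields up front. A minimal sketch in Python (the field names are my own convention, not a fixed schema):

```python
from dataclasses import dataclass, field

@dataclass
class ProposalAssessment:
    """One record per proposal, so options can be compared side by side."""
    name: str
    summary: str
    risk_type: str        # e.g. "technical", "schedule", "integration"
    impact: str           # what breaks or degrades if the risk materializes
    scope: str            # which modules or teams are affected
    costs: str            # implementation and maintenance effort
    benefits: str         # what we gain over the alternatives
    resolution: str       # how the risk would be resolved or contained
    pros: list[str] = field(default_factory=list)
    cons: list[str] = field(default_factory=list)
```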
Comprehensive Research
After the proposal is confirmed, we need to begin comprehensive research to gather relevant materials and documentation. This involves investigating various technical aspects including but not limited to system architecture, programming languages, UI/UX design patterns, frameworks, libraries, caching strategies, event bus implementations, and data persistence solutions. Additionally, we should research best practices, performance considerations, security implications, scalability requirements, and integration points with existing systems. The research phase should also include evaluating competing solutions, studying case studies of similar implementations, and identifying potential technical constraints or dependencies that might affect the chosen approach.
Design & Specification
The next step is the design phase, which requires creating a holistic specification to guide the AI agents in implementing the feature. During this phase, ask the AI to consider constraints including the system architecture, programming language, UI/UX, data persistence, and other relevant factors. Choose the best option and follow established best practices. Ensure the design is repeatable, executable, and optimized for performance by evaluating both time complexity (e.g., O(1) vs. O(N)) and space complexity to achieve efficient resource utilization. The design must also enforce modularity: all parts should be loosely coupled, and each module should expose a neat interface for working with the others. Given the chosen proposal, this is also the point to ask which programming language fits it best.
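As a concrete illustration of what a neat, loosely coupled interface means in the spec, each module can be coded against a small protocol instead of a concrete class. A sketch (the names are mine):

```python
from typing import Protocol

class Cache(Protocol):
    """The only surface other modules are allowed to depend on."""
    def get(self, key: str) -> bytes | None: ...
    def set(self, key: str, value: bytes, ttl_seconds: int) -> None: ...

class InMemoryCache:
    """One loosely coupled implementation; swappable without touching callers."""
    def __init__(self) -> None:
        self._store: dict[str, bytes] = {}

    def get(self, key: str) -> bytes | None:
        return self._store.get(key)

    def set(self, key: str, value: bytes, ttl_seconds: int) -> None:
        self._store[key] = value  # TTL is ignored in this toy version
```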
Execution & Review
Execution phase: based on the specification, analyze and choose the most suitable agent to start the work, run the tests, and then hold a code review.
Cleanup & Commit
Last, we do some cleanup and commit/push the code.
super-skill-claude-artifacts plugin
With all this analysis done, I created a marketplace and a plugin to streamline the whole process. With the plugin installed, my dev/fix process is much simpler.
Currently, in the repo super-skill-claude-artifacts, I have created two plugins:
- context-keeper
- super-dev
As we all know, when developing with Claude Code, the context is auto-compacted once it fills up, and some context is lost in the process; this is where context-keeper comes in. It registers a preCompact hook that summarizes the conversation and stores the memory under .claude/summaries in the current project, and a sessionStart hook that loads the last summary back in to prevent any loss. If you have configured the knowledge-mem MCP tool, you can also save the memory to knowledge (this feature is implemented using mcp-use but not tested yet).
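For illustration, a preCompact hook is just a script that Claude Code runs with a JSON payload on stdin. A simplified sketch of the save step (the real plugin script differs, and the payload field names are my assumptions about Claude Code's hook input):

```python
#!/usr/bin/env python3
"""Sketch of a preCompact hook: persist a memory record before compaction."""
import json
import sys
from datetime import datetime, timezone
from pathlib import Path

def main() -> None:
    payload = json.load(sys.stdin)                   # hook input arrives as JSON on stdin
    out_dir = Path(".claude/summaries")
    out_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    record = {
        "saved_at": stamp,
        "session_id": payload.get("session_id"),          # assumed field name
        "transcript_path": payload.get("transcript_path"),  # assumed field name
        # In the real flow the summary text is produced by the model;
        # here we only record where the transcript lives.
    }
    (out_dir / f"{stamp}.json").write_text(json.dumps(record, indent=2))

if __name__ == "__main__":
    main()
```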
super-dev implements the real development process. A coordinator sits at its core: it arranges the sub-agents, tracks the progress, and monitors the process to make sure everything is on track. To reduce context consumption, I use mcp-use to create scripts, following the approach from "Code execution with MCP: Building more efficient agents": the scripts call MCP tools to do the real search and research, then return only the result to Claude.
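A sketch of such a script with mcp-use is below. The server and tool names are placeholders, and I am assuming mcp-use's MCPClient.from_dict / create_session / connector.call_tool API here, so check the library docs before copying:

```python
import asyncio
from mcp_use import MCPClient

# Placeholder server and tool names; adapt to the MCP servers you actually run.
CONFIG = {
    "mcpServers": {
        "docs": {"command": "npx", "args": ["-y", "@some/docs-mcp-server"]},
    }
}

async def search_docs(query: str) -> str:
    """Call an MCP search tool directly and return only the final text,
    so Claude never sees the intermediate tool chatter."""
    client = MCPClient.from_dict(CONFIG)
    try:
        session = await client.create_session("docs")
        result = await session.connector.call_tool("search", {"query": query})
        return str(result)
    finally:
        await client.close_all_sessions()

if __name__ == "__main__":
    print(asyncio.run(search_docs("event bus back-pressure patterns")))
```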
P.S.: At the beginning I found a project, mcporter, which has similar functionality to mcp-use, but I finally chose mcp-use because it has dedicated connectors, such as HttpConnector, WebsocketConnector, and StdioConnector, for connecting to existing MCP servers.
I also install and enforce ast-grep in the workflow. It matches Abstract Syntax Tree (AST) patterns against the code base, enabling precise syntax-aware searches, refactoring, and quality checks during the development and review phases.
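For example, a review script can shell out to ast-grep and consume structured matches; the pattern, language, and JSON field names below are illustrative:

```python
import json
import subprocess

def find_prints(path: str = ".") -> list[dict]:
    """Use ast-grep's AST pattern matching to find stray print() calls in Python code."""
    proc = subprocess.run(
        ["ast-grep", "run", "--pattern", "print($$$ARGS)",
         "--lang", "python", "--json", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(proc.stdout)

for match in find_prints("src"):
    # Each match carries the file and source range of the hit.
    print(match["file"], match["range"]["start"]["line"])
```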
I also found a similar project, osgrep, which can do code search and integrates with Claude Code; I will try it later. Cursor has WarpGrep, which also does code search, and two projects, Acemcp and a Node.js implementation of Acemcp, implement it as an MCP server.
Further Thoughts on Context Management
Updated on 12/2/2025.
This week I stopped coding and spent some time reading articles about context engineering and memory management. The more I read, the more I felt that context management is not easy to do well.
One thing I did get right: in my workflow a coordinator directs the whole process, and memory is saved before compaction. But things are more complicated than that, and several improvements are needed:
- When I performed the summarization, I simply asked the LLM to "summarize" with no guidance on what to keep or discard; introduce explicit retention rules.
- Store the memory as structured JSON instead of plain Markdown so downstream tools can parse it reliably (see the schema sketch after this list).
- Rename the `.claude/summaries` directory to `.claude/memories` to reflect its true purpose.
- Rename plugin commands: `load-context` → `load-memory`, `list-context` → `list-memory`; add `save-memory`.
- Rename `precomact.py` to `save-memory.py`; invoke it from the `preCompact` hook.
- Consolidate memories in a single, cross-platform store; evaluate `graphiti` and other open-source backends while `knowledge-mem` remains unsupported on Linux.
- Have the coordinator write a dedicated `progress.json` file that tracks every phase, agent assignment, and outcome.
- Insert an explicit code-review phase after phases 8 & 9; spawn a dedicated `code-reviewer` agent and reference community repos for best practices.
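To make the structured-JSON and progress.json items concrete, here is a sketch of what the memory record and the progress entries could look like (these schemas are my proposal, not what the plugin writes today):

```python
from typing import TypedDict

class Memory(TypedDict):
    """One entry in .claude/memories; explicit retention rules decide what goes in."""
    saved_at: str          # ISO-8601 timestamp
    decisions: list[str]   # keep: choices made and why
    open_tasks: list[str]  # keep: unfinished work to resume after compaction
    discarded: list[str]   # optional audit trail of what was dropped and why

class PhaseRecord(TypedDict):
    """One entry in progress.json, written by the coordinator per phase."""
    phase: str             # e.g. "requirement-refinement"
    agent: str             # which sub-agent was assigned
    outcome: str           # "done", "blocked", or a short failure note
```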