Work Style in the AI Era
Hallucination
You subscribed to a Claude/OpenAI plan, vibe-coded an app or website, and it ran without errors. You felt like you gained superpowers and could do anything you wanted. But wait: is that really true?
There is a mismatch between your skills and the capabilities of AI. AI has been evolving rapidly since the start of 2025. It can understand more code details, create sophisticated documents, code projects, and images; it can even produce artifacts beyond what you imagined. So what is the value of humans?
For months, I used coding CLIs/IDEs to create several applications and websites, but honestly, reflecting on the whole process, I didn't gain much on the technical side. If asked to build an app, or even a few pages, on my own, I might not be able to do it. Yet it isn't true that there has been no growth at all.
Over the past few projects, I summarized and developed a workflow to help with development—from requirement refinement, design, and coding to code review, commit, and push. If a feature is well-defined, AI can implement the whole thing without interruption.
So what can we do? What’s left for us?
Many people are thinking about this. I've pondered it for a long time but still don't have a clear picture. One thing is clear: we may no longer need to mechanically memorize all kinds of material. Humans have come through many rounds of evolution and remain the top hunters in the world; when machines can do more tasks than we can, operating those machines is what will keep us on top. AI can already remember all kinds of information for us, and the same principle applies here: we need to operate AI better to boost our productivity.
But what is the optimal way to reach that point? Humans are unique: thinking, logic, and creativity distinguish us from other species, so we should spend more time cultivating these areas.
Since realizing this, I have been reading books on philosophy, psychology, logic, and math, hoping to find the right pathway to think more deeply about the world. Though I haven't found a clear answer, as many people have said, this path is unlikely to be wrong.
Software for AI
In the decades since computers were created, most code has been written by humans. Because it was written by humans, it was also maintained by humans, and readability was a key metric of code quality. Times are changing: more and more applications and services are now created, built, and maintained by AI agents. Is the code readable for AI agents?
In the current AI coding trend, we see a common pattern: the best frameworks are readable for humans and, more importantly, suitable for AI learning. Take Tailwind CSS as an example. In the past, human developers tended to define colors and other attributes in ways meaningful only to other developers. Tailwind takes a more intuitive approach: distinct words denote attributes. Since AI is trained on huge human corpora, it learns the patterns of how people express concepts, which makes AI prefer those same patterns, for example using "red" to represent a color instead of "#f00." The same thing happens with the shadcn/ui library: it uses common words to describe UI components instead of highly technical names. And this is only one aspect. In modern IT, coding is just one stage in the pipeline; we also have design, testing, operations, and marketing. So what is the language for AI agents in this era?
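The contrast can be sketched with a minimal markup example. The Tailwind class names (`bg-red-500`, `px-4`, `py-2`, `rounded`) follow real Tailwind conventions; the surrounding button is purely illustrative:

```html
<!-- Traditional approach: a hex value and raw CSS properties,
     meaningful mainly to developers and color pickers -->
<button style="background-color: #f00; padding: 8px 16px; border-radius: 4px;">
  Delete
</button>

<!-- Tailwind approach: plain words ("red", "rounded") that an LLM
     has seen countless times in its human-written training corpus -->
<button class="bg-red-500 px-4 py-2 rounded">
  Delete
</button>
```

Both buttons can render similarly, but the second one is expressed in vocabulary that matches how people talk about design, which is exactly the pattern an AI trained on human text tends to prefer.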
In the current stage before AGI, all LLMs are trained from human corpora, so making frameworks more readable for humans will also improve readability for AI. But what’s next?
Since AI doesn't have a persona, we can't be sure what it likes or dislikes, so is there any way to find out? Making content readable for AI, not only code but other content too, is crucial for success today. Alternatively, if we don't want AI to use our intellectual assets, we might try to make them difficult for AI. But how do we pinpoint the cognitive differences between AI and humans? Scientists are still trying to make AI think like humans, so it is very hard to create content that seems friendly to humans yet difficult for AI. Not to mention that AI training includes a data-cleaning stage.
Thinking deeper: if AI can train itself, or an AI can train new iterations of itself, it will likely scrape data from all open access sources. At that point, obscuring content becomes meaningless. Then, could AI create a language for itself? And if yes, how can humans interact with it?
Let’s get back to the current stage. If today’s IT infrastructure and processes should be improved to meet the rise of AI software, then hardware, OS, databases, network stack, UI—everything—should undergo a major iteration toward AI-friendliness. How should APIs evolve? How should database structures be improved? This presents many unresolved issues to consider.
Paradox
While we are thinking about AI-friendly infrastructure, there is a countertrend: in 2025, new app releases on the Apple App Store and Google Play surged roughly 70% compared to 2024. We will eventually enter a new AI paradigm in which AI agents may be the primary interface, but until then most people interact with AI through all kinds of apps, so this surge is no surprise. I'm not sure how long this period will last, but I'm also drawn to this trend and willing to start building something.
Although the optimal interface in the AI era remains uncertain, a wide range of applications offers a straightforward means of engagement. Nevertheless, the path ahead is obscure; how can we navigate the unseen mists?