AI models are increasingly being used for decisions that previously required experience-based knowledge. This raises the question of how far delegation to AI can go before our own judgment falls by the wayside. My impression from the last few months: the model does not know its user, the context, the architecture decision from two years ago, or the dynamics in the team. Anyone who still asks "What should I do?" and simply accepts the AI's answer has outsourced reflection.
In my view, there is a gap between what we know about our working environment, projects and customers and what a model can know, and no context window can completely close it. Even with codebase snapshots, tickets and memory files, the AI is never in the position of the user. It does not know why the codebase is structured the way it is, it has not lived through the outage from the last release, and it cannot sense whether management is currently looking for more risk or more stability. In my observation, even better prompting does not close this gap.
Productive use, for me, looks different. Instead of "What should I do?", my question is typically: "I want to achieve X, help me think through the options." The model contributes pros, cons and variants, but the decision remains with me.
Before I accept a suggestion, I run it through a round of critical questions.
Critical scrutiny itself can be delegated to the AI. In the team, we use two skills for this. First Principles Thinking forces the model to break down assumptions, identify the underlying truths and rebuild the argument from there. Grill Me lets the AI mercilessly pick a project apart and probe every unclear assumption. Both work for architecture concepts as well as for proposal drafts.
First Principles Thinking and Grill Me are examples of skills: ready-made sets of instructions that we give AI models to trigger precisely this kind of questioning. Skills can be shared; what one person develops once can be made available to the entire team. Some are invoked explicitly, others activate automatically in the background as soon as the topic fits. This turns a single correction into a system that applies to a department or the entire company.
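In its simplest form, a skill is just a reusable instruction template that is prepended to the actual question before it reaches the model. A minimal sketch in Python; the skill names and texts here are illustrative stand-ins, not the team's actual First Principles or Grill Me definitions:

```python
# Minimal sketch: a "skill" as a reusable instruction template.
# The texts below are illustrative stand-ins, not real team skills.

SKILLS = {
    "first_principles": (
        "Break the following proposal down to its underlying assumptions. "
        "Mark each assumption as verified fact or belief, then rebuild the "
        "argument from the verified facts only."
    ),
    "grill_me": (
        "Act as a merciless reviewer of the following proposal. List every "
        "unclear assumption, missing number and weak argument as a direct "
        "question to the author."
    ),
}

def apply_skill(skill_name: str, user_input: str) -> str:
    """Compose the full prompt: skill instructions, separator, user input."""
    return f"{SKILLS[skill_name]}\n\n---\n\n{user_input}"

prompt = apply_skill("grill_me", "We migrate the monolith to microservices in Q3.")
```

Because the skill lives in shared data rather than in someone's head, adding it to a team repository is enough to make the same scrutiny available to everyone.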
In my experience, it is always better when the AI refutes an argument while a correction is still possible than when a reviewer or prospective customer does so at the end.
From my observation, AI tools are shifting where leadership and engineering skills are needed. If you want to use AI agents effectively, you need the same skills as when leading people: a vague task leads to a technically functioning result that nevertheless does not solve what was intended, and AI agents alone do not prevent this.
The analogy can be taken further. Modern AI tools such as Claude or ChatGPT offer memory functions: persistent notes that the agent maintains beyond individual conversations. This is how it learns standards, guardrails and the mistakes it should not make a second time. A good manager shapes a team over time, and in my experience the same patience pays off with an agent: if you tell the AI "remember this for the future" after every correction, you gradually build an agent that works less generically. The tool becomes something like a trained employee.
In my view, this does not apply to every decision. If you are working through a linter hint, renaming a variable or moving an appointment, you do not need a sparring session. The situation is different for decisions that are hard to reverse or on which a lot depends: architecture choices, vendor selection, or a reorganization. In my experience, every minute of reflection pays off here, because the cost of a wrong decision is orders of magnitude higher than the cost of the upfront review.
| Decision type | Examples | Recommended depth |
|---|---|---|
| Reversible, small blast radius | Rename a variable, bugfix, move an appointment | Gut feeling |
| Reversible, noticeable correction effort | Library choice, small refactoring, tool trial | Five-minute sparring with the AI |
| Hard to reverse or high impact | Architecture, vendor lock-in, migration, reorganization | Deliberate sparring session with the First Principles or Grill Me skill |
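The table can be read as a simple decision rule over two axes: reversibility and correction effort. A sketch mirroring it in Python; the three-level encoding of effort is my own choice:

```python
# The table above as a decision rule. Labels mirror the table rows;
# encoding correction effort as three levels is an illustrative choice.

def recommended_depth(reversible: bool, correction_effort: str) -> str:
    """correction_effort is one of 'small', 'noticeable', 'high'."""
    if not reversible or correction_effort == "high":
        return "deliberate sparring session (First Principles / Grill Me)"
    if correction_effort == "noticeable":
        return "five-minute sparring with the AI"
    return "gut feeling"
```

Renaming a variable (`recommended_depth(True, "small")`) lands at gut feeling; a vendor migration (`recommended_depth(False, "high")`) triggers the full sparring session.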
For me, AI in a corporate setting is not a question of "yes or no" but above all of "how". Those who use the model as an oracle surrender their judgment; those who use it as a sparring partner keep their bearings and deliver comprehensible results, in the code as well as in the strategy. From my point of view, the lever lies not in the technology but in the attitude: the model helps me think things through, and I make the decision myself. That is more effortful than adopting a ready-made answer, but for me it is currently the only way in which AI increases my value.
I am curious to see how this assessment changes in the coming months. Memory functions will become more sophisticated, agents more capable, and skills more standardized. Some of what I describe as special today may be commonplace tomorrow. What remains is the question of who ultimately makes the decisions and bears the responsibility.
We support companies with the introduction of AI in engineering and management contexts, from use case evaluation to employee training. Get in touch with us.