
AI is no oracle: Why I believe the sparring partner is the better model of collaboration


AI models are increasingly being used for decisions that previously required experience and judgment. This raises the question of how far delegation to AI can go before our own judgment falls by the wayside. My impression from the last few months: the model does not know its user, nor the context, nor the architecture decision from two years ago, nor the dynamics in the team. Anyone who still asks "What should I do?" and simply accepts the AI's answer has outsourced their own reflection.

What the AI cannot know

In my view, there is a gap between what we know about our working environment, our projects and our customers and what a model can know, and no context window can completely close it. Even with codebase snapshots, tickets and memory files, the AI is never in the user's position. It does not know why the codebase is structured the way it is, it does not remember the outage after the last release, and it does not sense whether management is currently looking for more risk or more stability. In my observation, better prompting does not close this gap either.

From oracle to sparring partner

Productive use looks different for me. Instead of "What should I do?", my question is typically: "I want to achieve X, help me think through the options." The model provides pros, cons and variants, but the decision remains with me.

A comparison that helps me: AI behaves like the navigation system in a car. It knows the map better than the driver and has traffic information and alternative routes at the ready. But I still do the driving myself. If you blindly follow the sat nav, you will eventually drive into a barrier. If you use it as a signpost and keep your eyes open at the same time, you will reliably reach your destination faster.

Three questions help me before I accept a suggestion:

  • What is the problem to be solved, and what does a good result look like? If this is not clear, you will evaluate every AI suggestion against a vague goal and find it hard to judge whether the solution fits the task at all.
  • Does this fit our context? Best practice from a large corporation is often counterproductive in an SME, just as a greenfield approach is in an established system landscape.
  • Is now the right time? A hasty refactoring two weeks before the start of production brings more risk than benefit.

AI as a critic

Critical scrutiny itself can be delegated to the AI. We use two skills for this in the team. First Principles Thinking forces the model to break assumptions down, identify the underlying truths and build the argument back up from there. Grill Me lets the AI mercilessly take the project apart and probe every unclear assumption. Both work for architecture concepts as well as for proposal drafts.

First Principles Thinking and Grill Me are examples of skills: ready-made sets of instructions that we give AI models to trigger precisely this kind of questioning. Skills can be shared; what one person develops once can be made available to the entire team. Some are invoked deliberately, others activate automatically in the background as soon as the topic fits. This turns a single correction into a system that applies to a department or the whole company.
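To make the idea tangible, here is a minimal sketch of how such a skill could be wired up, assuming nothing vendor-specific. Every name in it is illustrative: `ask_model` stands in for whichever chat client the team actually uses, and the prompt text only hints at what a Grill Me skill might contain.

```python
# Minimal sketch of a shareable "skill": a reusable instruction set plus a
# thin wrapper. All names are hypothetical; `ask_model` stands in for
# whichever chat-completion client your team uses.

GRILL_ME = """You are a relentless reviewer. Take the proposal apart:
1. List every assumption it rests on, stated or unstated.
2. Challenge each assumption from first principles.
3. Name the risks and open questions the author has not addressed."""

def grill(proposal: str, ask_model) -> str:
    """Run the shared Grill Me skill against a proposal or design text."""
    return ask_model(system=GRILL_ME, user=proposal)
```

The point is less the code than where the instructions live: in one shared place, so everyone on the team gets the same quality of pushback instead of reinventing the prompt each time.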

In my practice, it is always better when the AI refutes an argument while a correction is still possible than when a reviewer or a potential customer does so at the end.


Guidance becomes a core competence

From my observation, AI tools are shifting where leadership and engineering skills are needed. If you want to use AI agents effectively, you need the same skills as when leading people. A vaguely formulated task leads to a result that works technically but does not solve what was actually intended. AI agents alone do not prevent this.

The analogy can be continued: modern AI tools such as Claude or ChatGPT offer memory functions, i.e. persistent notes that the agent maintains beyond individual conversations. In this way it learns standards, guardrails and the mistakes it should not make a second time. A good manager shapes a team over time. In my experience, the same patience pays off with the agent: if you tell the AI "remember this for the future" after every correction, you gradually build an agent that works less generically. The tool becomes a kind of trained employee.
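Stripped of any vendor specifics, the underlying pattern is simple enough to sketch: corrections are appended to a persistent note, and the accumulated notes are prepended to every new request. The file name and function names below are illustrative, not a particular product's memory API.

```python
# Minimal sketch of the "remember this for the future" pattern.
# Memory here is just a local text file whose accumulated notes are
# prepended to each new request; nothing vendor-specific is assumed.
from pathlib import Path

MEMORY_FILE = Path("agent_memory.txt")  # illustrative location

def remember(note: str) -> None:
    """Persist a correction or guardrail beyond the current conversation."""
    with MEMORY_FILE.open("a", encoding="utf-8") as f:
        f.write(f"- {note}\n")

def build_prompt(task: str) -> str:
    """Prepend accumulated notes so the agent stops repeating old mistakes."""
    notes = MEMORY_FILE.read_text(encoding="utf-8") if MEMORY_FILE.exists() else ""
    return (
        "Standing instructions from past corrections:\n"
        f"{notes}\n"
        "Current task:\n"
        f"{task}"
    )

# Example: record a correction once, and it shapes every future task.
# remember("Never propose a schema migration without a rollback plan.")
```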

When is reflection worthwhile?

In my view, not for every decision. If you act on a linter hint, rename a variable or move an appointment, you don't need a sparring session. The situation is different for decisions that are difficult to reverse or on which a lot depends: architecture choices, vendor selection or a reorganization. In my experience, every minute of reflection pays off here, because the cost of a wrong decision is orders of magnitude higher than the cost of thinking it through beforehand.

| Decision type | Examples | Recommended depth |
| --- | --- | --- |
| Reversible, small radius of effect | Rename a variable, bugfix, move a date | Gut feeling |
| Reversible, noticeable correction effort | Library selection, small refactoring, tool trial | Five minutes of sparring with the AI |
| Difficult to reverse or high impact | Architecture, vendor lock-in, migration, reorganization | Deliberate sparring session, First Principles or Grill Me skill |

The test: Can you explain the decision in your own words?

A simple test that I use for myself: hindsight reveals whether the model was a tool or the one calling the shots. As soon as someone asks, whether a reviewer in a pull request or a stakeholder in a meeting, "Why did you make this decision?", the answer is telling. Anyone who can name the context, the goals and the trade-offs they weighed in their own words has used AI as a tool. Anyone who can only repeat what was written in the chat window has been guided by the oracle. The difference becomes apparent when the decision has consequences: a bug in production, a bad investment, an architecture that is no longer viable two years down the line.

Conclusion

For me, AI in a corporate setting is not a question of "yes or no" but above all a question of "how". Those who use the model as an oracle surrender their judgment. Those who use it as a sparring partner keep their bearings and deliver results they can explain, both in the code and in the strategy. From my point of view, the lever is not in the technology but in the attitude. The model helps me think things through; I make the decision myself. That is more strenuous than adopting a ready-made answer, but for me it is currently the only way in which AI makes my work more valuable.

I'm curious to see how this assessment will change in the coming months. Memory functions will become more sophisticated, agents will become more capable and skills will become more standardized. Some of the things I describe as special today may become commonplace tomorrow. What remains is the question of who will ultimately make the decisions and bear the responsibility.


We support companies with the introduction of AI in engineering and management contexts, from use case evaluation to employee training. Get in touch with us.
