Agentic Engineering: Why the DACH Mid-Market Is Stalled at Step One

[Image: architecture diagram, agentic engineering in the DACH mid-market]

May 15, 2026

TL;DR. Agentic engineering rebuilds the software development lifecycle around AI agents grounded in your company's own knowledge. It is not AI features bolted onto an existing product. In the DACH mid-market almost nobody has started, and the blocker is leadership and culture, not GDPR. The build follows three steps after buy-in: extract codebase knowledge, map security risk, build a real test suite. Only then deploy an agent.

In May 2026, a product leader I work closely with walked into a German software firm. Decades-old product. A room full of senior developers. Near-zero AI experience across the entire product department.

Not a startup that missed the memo. A company with deep software knowledge that simply never tried.

We assume everyone is agentic now. We live in the bubble. The DACH mid-market is the actual reality, and what holds it back is not the EU AI Act, not GDPR, not data residency. It is leadership, communication, and culture.

This piece is the long version of a conversation I had on the Studio podcast with Valentin Binnendijk, who co-founded TrekkSoft and STARTUPS.CH and has spent two decades turning analogue businesses into SaaS companies. He now builds hyper-lean product teams.

What is agentic engineering, and how is it different from vibe coding?

Agentic engineering is a defined, multi-step software development lifecycle run by AI agents against an existing product, with integrations, real data, security guardrails, and tests. Vibe coding is fast prototyping for non-programmers on a green field. Same underlying models. Completely different discipline.

The confusion between the two is doing real damage. Teams try vibe coding on a twenty-year-old codebase, watch it break, and conclude AI is not ready. The tool was never meant for that job.

A useful frame: build your front end on Lovable, connect it to Shopify's core through the API. That is the right division of labour. Vibe coding for the surface, agentic engineering for the system underneath.

Why the DACH mid-market is stalled at step one

The excuses are predictable. "We're special." "Our codebase is too complex." "I tried it once and it didn't work."

The honest version is different. The team is afraid of losing control, afraid the agent will do better than they did, afraid of having to learn a new tool from zero. These are not technical problems. They are leadership problems.

We're building the AI Monitor 2026 benchmark study with ETH Zürich and the University of St. Gallen precisely because "we're doing AI" is not a maturity level. Across DACH mid-market, the pattern is the same: AI readiness tracks leadership and culture, not regulation. [SOURCE NEEDED: AI Monitor stat or proxy]

Why did the bottleneck move from engineering to leadership?

For two decades, the engineering team was the constraint on every software company. Not anymore. Once the transformation is complete, your only real limit is your token budget. Before it's complete, the constraint is leadership and product management.

Developers stay risk-averse for good reasons. They've been blamed for delays and underestimated complexity for years. Ask them to build a feature the normal way and they can scope it: one sprint, two weeks. Ask them to build it with agents and they cannot give you that number, because they do not yet know how the agent behaves. So they default to the tools they trust.

The only thing that breaks the default is a CEO or R&D leader who says, explicitly, you are allowed to try and allowed to fail, as long as you learn something. Without that top-down permission, nothing moves.

How do you put agentic engineering on a CFO's P&L?

Two cases, and a CFO will recognise both. The cost case: same output with fewer people, or double the output with the same headcount, after a transformation period. The survival case: public software companies built on one narrow use case have been punished with valuation cuts in the 40 to 50 percent range, because they have no answer for what value looks like in an agentic future. [SOURCE NEEDED: SaaS multiples compression data]

The survival case is the stronger one. It reframes the spend from "innovation budget" to "staying in business."

When we scaled Aioma to 30 people in six months, the math was simple: every additional engineer was a linear cost. With an agentic stack, that line bends. That is the part the CFO actually cares about.
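A toy cost model makes that bend visible. Every number below is an assumption for illustration, not a benchmark from Aioma or anywhere else:

```python
import math

# Assumed figures, purely illustrative
ENGINEER_COST = 150_000    # all-in cost per engineer per year
PLATFORM_COST = 200_000    # fixed yearly cost of the agentic stack
COST_PER_UNIT = 2_000      # marginal token spend per unit of output

def headcount_cost(units_of_output, units_per_engineer=4):
    # Headcount scales linearly: more output means more engineers
    return math.ceil(units_of_output / units_per_engineer) * ENGINEER_COST

def agentic_cost(units_of_output):
    # Agentic stack: flat platform cost plus marginal token spend
    return PLATFORM_COST + units_of_output * COST_PER_UNIT

for units in (10, 50, 200):
    print(units, headcount_cost(units), agentic_cost(units))
```

At small volumes the headcount model is cheaper; as output grows, the linear line keeps climbing while the agentic line stays nearly flat. That crossover is the slide the CFO wants to see.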

The three-step build after leadership buy-in

Once leadership has committed, the actual work starts. Three steps before you deploy a single agent.

Extract the knowledge. Your product has grown over decades. AI is already good at reading all of it: the code, every dependency, the documentation, the Slack history, the Jira and support tickets. Some knowledge only lives in senior developers' heads, so have them talk through the system, transcribe it, and feed it in. The output is a structured context hub, and it has to grow continuously. Without this, the agent is a brilliant junior with morning amnesia.
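To make the idea concrete, here is a deliberately tiny sketch of what a context hub does at its core: ingest text from any source and make it searchable. Everything in it, the `ContextHub` name, the keyword index, the sample tickets, is illustrative; a production hub would use embeddings and a vector store, not word matching:

```python
from dataclasses import dataclass, field

@dataclass
class ContextHub:
    """Toy knowledge store: documents indexed by keyword."""
    docs: dict = field(default_factory=dict)   # id -> full text
    index: dict = field(default_factory=dict)  # word -> ids containing it

    def ingest(self, doc_id: str, text: str) -> None:
        # Source-agnostic: code comments, Jira tickets, Slack, transcripts
        self.docs[doc_id] = text
        for word in set(text.lower().split()):
            self.index.setdefault(word, set()).add(doc_id)

    def search(self, query: str) -> list:
        # Rank documents by how many query words they contain
        hits = {}
        for word in query.lower().split():
            for doc_id in self.index.get(word, ()):
                hits[doc_id] = hits.get(doc_id, 0) + 1
        return sorted(hits, key=hits.get, reverse=True)

hub = ContextHub()
hub.ingest("ticket-123", "billing service times out when invoice pdf exceeds 10 mb")
hub.ingest("adr-007", "we chose postgres advisory locks for the invoice queue")
print(hub.search("invoice pdf timeout"))  # → ['ticket-123', 'adr-007']
```

The point is the shape, not the implementation: one ingest path for every source, one search path for every agent, and the hub keeps growing as new tickets and transcripts land.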

Map the security risk. Bad actors have AI too. New models are getting good at finding zero-day vulnerabilities that were never discovered before. [SOURCE NEEDED: zero-day AI discovery paper] Before you let agents change anything, you want a clear security overview so you know what to fix first.
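The triage step can be sketched in a few lines, assuming you already have scanner output from tools like `npm audit` or `pip-audit`. The findings format and the ranking rule below are invented for illustration, not any scanner's real schema:

```python
# Mocked scanner findings; a real run would parse the scanner's JSON output
findings = [
    {"pkg": "left-pad-ng", "severity": "low", "reachable": False},
    {"pkg": "openssl-shim", "severity": "critical", "reachable": True},
    {"pkg": "yaml-loader", "severity": "high", "reachable": True},
]

SEVERITY_RANK = {"critical": 3, "high": 2, "medium": 1, "low": 0}

def fix_order(findings):
    # Reachable vulnerabilities first, then by severity:
    # that is the list you work through before any agent touches the code
    return sorted(
        findings,
        key=lambda f: (f["reachable"], SEVERITY_RANK[f["severity"]]),
        reverse=True,
    )

for f in fix_order(findings):
    print(f["pkg"], f["severity"])
```

The output of this step is exactly that ordered list: a security baseline the agents inherit, instead of a vague sense that "the codebase is probably fine."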

Build the test suite. Fifty unit tests at five percent coverage will not cut it. You need a comprehensive end-to-end suite, ideally thousands of scenarios based on real usage data, possibly a digital twin of the system. Agents change a lot, fast. Tests are what tell you it still works.
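In miniature, the pattern is: capture real input/output pairs from production, then replay them as assertions after every agent change. The `checkout_total` function and the VAT rate below are invented stand-ins for whatever business logic your agents will be rewriting:

```python
def checkout_total(items, vat_rate=0.081):
    """Stand-in for real business logic an agent might change."""
    net = sum(qty * price for qty, price in items)
    return round(net * (1 + vat_rate), 2)

# Each scenario is an (input, expected output) pair captured from usage data.
# At scale this list is generated from logs, not written by hand.
scenarios = [
    ([(1, 49.00)], 52.97),
    ([(3, 9.90), (1, 120.00)], 161.83),
    ([], 0.0),
]

for items, expected in scenarios:
    assert checkout_total(items) == expected, (items, expected)
print(f"{len(scenarios)} scenarios passed")
```

Three scenarios prove nothing; thousands, replayed on every agent commit, are what let you say "it still works" with a straight face.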

Only after those three steps do you deploy your first agent, and you start sequentially: one module, one API, one front end. Not everywhere at once.

This is the architecture Simon Scheurer and I are building into Teklens: a knowledge hub that ingests GitHub, Jira, Confluence, Slack, docs, and any API into one searchable graph, with an agent runtime grounded in it via RAG, and use cases (workflow assistants, SaaS modernisation, due-diligence packets) built on top. The point of the diagram is that you do not start at the use case. You start at the hub.
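The runtime half of that architecture, retrieve from the hub and prepend the hits to the agent's prompt, fits in a few lines. The documents, the scoring, and the prompt shape below are placeholders to show the flow, not the Teklens implementation; a real system would score by embedding similarity:

```python
# Toy knowledge hub: id -> text (a real one ingests GitHub, Jira, Slack, ...)
KNOWLEDGE = {
    "adr-012": "the billing module exposes /v2/invoices and /v1 is frozen",
    "runbook-3": "deploys to the billing module require the e2e suite to pass",
}

def retrieve(question, k=2):
    # Naive relevance: count words shared between question and document
    def score(text):
        return len(set(question.lower().split()) & set(text.lower().split()))
    return sorted(KNOWLEDGE, key=lambda d: score(KNOWLEDGE[d]), reverse=True)[:k]

def build_prompt(question):
    # Ground the agent: answers must come from retrieved context, not memory
    context = "\n".join(f"[{d}] {KNOWLEDGE[d]}" for d in retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQ: {question}"

print(build_prompt("which invoices endpoint should the agent call"))
```

This is why the hub comes first: without it, `build_prompt` has nothing to retrieve, and the agent is back to guessing from training data.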

Why agentic engineering is a GTM problem, not just an R&D one

This is the part most operators miss. Agentic engineering is not only a development story. It is a revenue story.

If you release one version a year, you get one attempt a year to find product-market fit in the agentic layer of your product. A competitor releasing monthly gets twelve. They will out-iterate you until you are irrelevant. The whole point of the transformation is to compress that loop so you can test positioning and value fast enough to matter.

The value proposition itself has to change too. Customers no longer want software where they input data and get a report back. They want outcomes and completed workflows. That is a repositioning problem, and repositioning is a GTM job, not an engineering one.

The internal compression matters as well. When support feeds escalations straight to an agent, when product management prioritises 50 items instead of triaging down to 5, the whole revenue engine speeds up. Lean agentic product teams change your unit economics, and unit economics change how you sell. See Revenue Systems & GTM Engineering for the longer argument.

Where does agentic engineering break?

Three failure modes, in order of frequency.

The talent gap. The skill set is new for everyone, so there is no deep pool to hire from. Either you bring in someone with real experience or you accept a long internal learning curve. There is no shortcut.

The single-prompt fantasy. "Claude Code, fix it" on a twenty-million-line codebase will fail, and the team will blame the technology instead of the framing. The expectation has to be a multi-step process with iteration, not a magic command.

The "can I just buy it" trap. The most common failure I see: companies that want to purchase agentic engineering like a licence and forget about it. It does not work that way. It is trained, not bought. Daily improvement, like a team member.

The mind shift, from the CEO down

Stop bolting AI onto your product as a list of use cases. Build the operating system underneath: context, security, testing. Then let agents loose.

The shift runs from the CEO, who needs to learn vibe coding, to the R&D leader, who needs to be convinced it works, down to the developer adopting new tools and accepting daily change. If any layer refuses, the transformation fails.

The train has left the station. The bus is still running.

Want the weekly version of this? Subscribe to The Science of GTM newsletter for the breakdown of how product, GTM, and AI operate as one system. Or join the AI GTM Lab cohort waitlist if you want the playbook live.

FAQ

What is agentic engineering?

Agentic engineering is a defined, multi-step software development lifecycle run by AI agents against an existing product, with integrations, real data, security guardrails, and tests. It differs from vibe coding, which is fast prototyping for non-programmers on a green field.

Is GDPR the main blocker for AI adoption in the DACH mid-market?

No. Field experience and the AI Monitor benchmark study both point to leadership, communication, and culture as the primary blockers. Regulation is a solvable constraint. The harder problem is getting leadership commitment and overcoming developer risk-aversion.

Where do you start with agentic engineering in an existing software company?

Step one is leadership buy-in with budget for tooling and coaching and explicit permission to fail. After that, the engineering work follows three steps: extract codebase knowledge into a context hub, map the security risk, and build a comprehensive test suite. Only then deploy your first agent, one module at a time.

Why does agentic engineering matter for go-to-market, not just engineering?

Because it changes release cadence, value proposition, and unit economics. Faster iteration lets you test positioning before competitors do. Customer expectations are shifting from software-you-operate to outcomes-delivered. Leaner agentic teams change how you sell.

Can you buy an agentic engineering solution off the shelf?

No. Agentic engineering has to be trained and customised to your product, codebase, and process, then improved continuously. Treating it as a one-time purchase is the most common reason transformations fail.
