10 Key Insights from Thoughtworks' 34th Technology Radar

The Thoughtworks Technology Radar has returned with its 34th volume, offering a biannual snapshot of the software development landscape. Packed with 118 blips covering tools, techniques, platforms, and languages, this edition is dominated by AI while also forcing a return to foundational practices. Security concerns around agents and the rise of harness engineering set the tone for what lies ahead. Here are the ten things you need to know.

1. A Record 118 Blips in the 34th Edition

The 34th volume of the Technology Radar contains 118 distinct blips—each a mini-review of a technology element that Thoughtworks practitioners have used or observed. This number reflects the accelerating pace of change in the industry, particularly around AI. The radar is curated from real project experience, making it a practical guide rather than a theoretical list. Teams can use it to identify which tools or techniques to adopt, which to trial or assess, and which to hold. The sheer volume means there’s something for everyone, from AI safety engineers to frontend developers.


2. AI Dominates, But with a Twist

Unsurprisingly, a significant portion of the radar focuses on AI-related topics. However, the twist is that AI is not only pushing the envelope forward but also forcing a re-examination of older practices. The radar highlights how LLM-assisted development is reshaping everything from code generation to testing. Yet alongside new AI frameworks, there’s a clear signal that mature techniques—like mutation testing and DORA metrics—remain critical. The message: don’t abandon solid engineering in the rush to adopt AI.

3. Revisiting Software Craftsmanship Foundations

One of the most striking themes is the return to foundational software craftsmanship. Concepts like clean code, deliberate design, testability, and accessibility are being revisited through an AI lens. The radar notes that these are not nostalgic references but necessary counterweights to the speed at which AI tools generate complexity. For example, pair programming and zero trust architecture are cited as techniques that gain new relevance when AI introduces unpredictable code. This is a call to double down on quality as complexity grows.

4. The Command Line Makes a Comeback

After years of being abstracted away by GUIs and IDEs, the command line is resurging as a primary interface. Agentic AI tools are driving developers back to the terminal because they operate best in text-based environments. The radar observes that this shift is not a step backward but a strategic adaptation. The command line offers granular control and scripting flexibility that graphical interfaces lack, making it ideal for orchestrating AI agents. This trend has implications for tool design and developer training.

5. Jim Gumbley Joins the Radar Writing Team

Security expert Jim Gumbley has been added to the editorial team for this edition. His deep knowledge of threat modeling, including his work on the Threat Modeling Guide, brings a much-needed security perspective. Given the serious security concerns around LLMs, his presence is timely. Gumbley’s contributions ensure that security blips are grounded in real-world defense strategies, not just theoretical warnings. The radar now has a stronger security voice, which is essential in an era of AI-driven attacks.

6. The ‘Permission Hungry’ Agent Problem

The radar coins the term “permission hungry” to describe a central dilemma of AI agents. Tools like OpenClaw and Claude Cowork (which supervise real tasks) and Gas Town (which coordinates agent swarms) require broad access to private data, external communication, and live systems. The payoff can be huge, but the access appetite collides with unsolved security problems. The radar warns that safeguards haven’t caught up with ambition—much like a novice skier pointing themselves at a black diamond run. This is a critical area for engineering and policy.
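One way to temper a permission-hungry agent is to gate every tool call through an explicit allow-list, so the agent starts with nothing and access is granted deliberately. The sketch below is illustrative only—the tool names and policy shape are assumptions, not radar recommendations:

```python
# Minimal permission gate for agent tool calls (illustrative sketch;
# the tool names and policy shape are assumptions, not from the radar).
from dataclasses import dataclass, field

@dataclass
class PermissionPolicy:
    allowed_tools: set = field(default_factory=set)  # start with nothing

    def check(self, tool: str) -> bool:
        return tool in self.allowed_tools

class GatedAgent:
    def __init__(self, policy: PermissionPolicy):
        self.policy = policy

    def call_tool(self, tool: str) -> str:
        if not self.policy.check(tool):
            raise PermissionError(f"agent denied access to tool: {tool}")
        # ... dispatch to the real tool here ...
        return f"{tool} executed"

policy = PermissionPolicy(allowed_tools={"read_calendar"})
agent = GatedAgent(policy)
agent.call_tool("read_calendar")  # permitted
# agent.call_tool("send_email")   # would raise PermissionError
```

The point of the pattern is that broad access becomes an explicit, reviewable policy decision rather than a default the agent inherits.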

7. Prompt Injection Remains an Unsolved Risk

One of the most dangerous security gaps highlighted is prompt injection. Models still cannot reliably distinguish trusted instructions from untrusted input. This makes every agent that accepts external prompts a potential vulnerability. The radar emphasizes that while technical fixes like input sanitization help, the problem is fundamentally unsolved. Developers must design systems with the assumption that prompt injection will occur, using isolation and least-privilege principles. This blip is a stark reminder that AI safety is still in its infancy.
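The "assume injection will occur" posture can be made concrete by gating side effects structurally: whenever untrusted content is in the context, only actions with no side effects are allowed, no matter what the model's output proposes. A minimal sketch, with assumed action names:

```python
# Sketch: gate side effects behind a fixed allow-list whenever the context
# contains external (untrusted) content. Action names are assumptions.

SAFE_ACTIONS = {"summarize", "translate"}  # read-only, no side effects

def run_agent_step(proposed_action: str, context_is_untrusted: bool) -> str:
    """Execute an agent-proposed action, refusing risky ones on untrusted input."""
    if context_is_untrusted and proposed_action not in SAFE_ACTIONS:
        return f"blocked: '{proposed_action}' not allowed on untrusted input"
    return f"executed: {proposed_action}"

print(run_agent_step("summarize", context_is_untrusted=True))   # executed
print(run_agent_step("send_email", context_is_untrusted=True))  # blocked
```

Notice that the check never inspects the prompt text itself—injected instructions can say anything, but they cannot widen the allow-list.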

8. Harness Engineering Emerges as a Key Discipline

Harness engineering—designing the controls, guardrails, and measurement systems for AI agents—takes center stage in this radar. Inspired by Birgitta Böckeler’s article on the subject, many blips focus on the guides and sensors needed to keep AI systems within safe bounds. The concept borrows from safety engineering in aviation and autonomous vehicles, applying similar rigor to AI. This goes beyond simple logging; it involves creating feedback loops that detect drift, misuse, and failure modes before they cause harm. Harness engineering is becoming a must-have skill.
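A feedback loop of this kind can be as simple as a circuit breaker: a sensor feeds observations into the harness, and the harness halts the agent when failures accumulate. The sketch below assumes a trivial failure-count sensor and threshold purely for illustration:

```python
# Illustrative harness sketch: a circuit breaker that halts an agent when a
# simple sensor (here, a consecutive-failure counter) detects repeated
# anomalies. The threshold and sensor are assumptions for illustration.

class HarnessBreaker:
    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures = 0
        self.open = False  # an open breaker means the agent is halted

    def record(self, ok: bool) -> None:
        """Feed one observation from a sensor into the harness."""
        self.failures = 0 if ok else self.failures + 1
        if self.failures >= self.max_failures:
            self.open = True

    def allow(self) -> bool:
        return not self.open

breaker = HarnessBreaker(max_failures=2)
breaker.record(ok=False)
breaker.record(ok=False)
print(breaker.allow())  # False: agent halted after repeated failures
```

Real harnesses would swap the counter for richer sensors—drift detectors, policy checks, human review queues—but the loop structure stays the same.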

9. Essential Tools for Building a Fitting Harness

The radar includes several specific recommendations for harness engineering, such as tools for behavioral monitoring, access control, and observability. These blips serve as a toolkit for teams implementing AI safely. For instance, tools that enforce rate limits, sandbox file access, or audit agent decisions are now more critical than ever. The radar meeting itself was a major source of ideas, and Thoughtworks expects this list to grow as more organizations operationalize AI with proper safeguards.
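Two of the primitives named above—rate limiting and auditing agent decisions—compose naturally into one wrapper. This is a minimal sketch under assumed parameters, not a depiction of any specific tool on the radar:

```python
# Sketch of two harness primitives: a sliding-window rate limit and an audit
# trail of agent decisions. Class and action names are illustrative.
import time
from collections import deque

class AuditedRateLimiter:
    def __init__(self, max_calls: int, window_s: float):
        self.max_calls, self.window_s = max_calls, window_s
        self.calls = deque()      # timestamps of recent allowed calls
        self.audit_log = []       # (timestamp, action, decision) tuples

    def try_call(self, action: str) -> bool:
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] > self.window_s:
            self.calls.popleft()
        allowed = len(self.calls) < self.max_calls
        if allowed:
            self.calls.append(now)
        self.audit_log.append((now, action, "allowed" if allowed else "throttled"))
        return allowed

limiter = AuditedRateLimiter(max_calls=2, window_s=60.0)
print(limiter.try_call("fetch_docs"))  # True
print(limiter.try_call("fetch_docs"))  # True
print(limiter.try_call("fetch_docs"))  # False: throttled, but still audited
```

The audit log records denied calls too, which is exactly the kind of signal a harness needs to detect an agent probing its limits.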

10. Looking Ahead: The Next Radar

The 34th edition sets a clear direction for the next six months. As AI agents become more integrated into workflows, the topics of harness engineering and security will only expand. Thoughtworks anticipates that the next radar, due in about six months, will contain even more blips on sensors, guardrails, and safety frameworks. Organizations should start now by revisiting their own foundations—pair programming, zero trust, clean code—while investing in the harness tools that will protect their AI systems. The future belongs to those who balance innovation with discipline.

In summary, the 34th Technology Radar is a wake-up call: AI is driving both opportunity and risk. The industry must revisit its roots, strengthen security, and build proper harnesses for agents. Whether you’re a developer, architect, or CTO, these ten insights provide a roadmap for navigating the next wave of software development.
