Evo Podcast: Agentic AI & The New Developer Workflow

In November 2025 I had the chance to join a few brilliant minds from Finland to discuss the new era of development that Agentic AI has brought. The podcast is available on Spotify and Apple Podcasts.

Below are the notes I prepared before the discussion, some of which I did not find the time to cover. Each topic starts with the questions I aimed to answer in my own words. I can guarantee that none of these are AI-made.

Where should the boundaries be?

Tasks AI can do vs tasks that require human oversight

I believe the boundary should exist, no question about it. It might sound obvious, but I can already imagine cases where people, teams, or even companies feel so immersed in the new world that they start outsourcing every task, including thinking, planning, and understanding the problem.

The boundary may not have a clear place, and it may differ for every team, but a sensible one is to make sure that we, the humans, understand the problem and see the big picture. It is important to instruct the agent properly, and to instruct it you need to know what you are trying to solve. What strategies can be chosen? What are the trade-offs for your team or company? What can be done within the given deadline? And so many other questions that exist outside the realm of computers.

I would call a team successful in adopting agents if it could explain and plan the solution to the problem without any agents in the first place.

Who is responsible for AI-generated work?

Ownership of code, accountability when AI contributes

I see an AI agent like the autopilot in airplanes. I am not a pilot myself, but I believe the main purpose of an autopilot is to “help” pilots, and nothing more. It is not designed to replace the pilots or the crew, and obviously not designed to take you on a trip. It is there to be “operated”, not to “operate” on its own. Even though it may sound like it is operating in some specific cases, it is still following the commands it has been given.

When it comes to choosing between strategies, handling unexpected events or incidents, or even communicating with passengers, calming them in stressful situations and helping with their needs, the autopilot is just not made for these kinds of responsibilities.

And it has certainly not been designed to fly you wherever you merely describe to it. Even though it may be technically capable of flying you somewhere, it is still up to you to choose the destination, the time, the hotel arrangements, and so on.

To me an AI agent is more or less like an autopilot. It can do magic, but only if we know how to operate it. We are the ones who define how a problem is to be solved, and the agent is there to keep us from getting stuck in unimportant details, or in boilerplate it can produce ten times faster than we can.

And just as the pilot is responsible for the flight and the captain is responsible for the cargo, the engineer is responsible for the AI they are using. Whatever is generated, it is you, the human, who has to decide whether to iterate, accept, reject, or even change the strategy. And there are still many areas of software engineering and development where AI is simply not as well trained as in others, which makes it even harder to control the output.

How do we manage data privacy with AI tools?

Risks of pasting sensitive code or customer information into LLMs

This is actually my favourite topic. When there is hype around anything, it becomes more important than ever to keep our critical thinking running, and this question is one of those cases.

I think we should be more cautious with these new tools, because we hardly know how they work “internally”. We know they work, and we know how to build them, but we are still discovering how they transform information into output. That is why, once in a while, we get surprised by a previously undiscovered prompt that breaks a famous language model. A single line in the system prompt can change many things.

And this becomes even more important when working with AI agents, since one of their key features is that they can interact with their environment. They can use tools. They can gather information from basically anywhere, if they are allowed to. This is where we need “rules” and “policies” that define what may be given to a tool that could potentially be jailbroken and start sharing information with parties in ways we cannot see until it is too late. This issue has been known since the early days of LLMs, and it persists today.

I have seen good steps from AI companies to guard some of the obvious parts, such as refusing to read .env files. But even that can be bypassed, and I saw it bypassed just the other night.

So it is important that we, as the operators of these agents, define rules, define boundaries, and monitor them constantly to ensure they are not violated, even though this may get harder and harder as these tools spread to everyone.
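
To make the idea concrete, here is a minimal sketch of what such a rule could look like: a deny-list that an agent's file-read tool must pass through before touching disk. The function names and the shape of the policy are my own illustration, not any particular vendor's API.

```python
# A hypothetical deny-list guard for an agent's file-read tool.
from fnmatch import fnmatch
from pathlib import Path

# Glob patterns for files an agent should never be allowed to read.
DENIED_PATTERNS = [".env", ".env.*", "*.pem", "id_rsa*", "secrets/*"]

def is_denied(path: str) -> bool:
    """Return True if the path matches any sensitive pattern."""
    name = Path(path).name
    rel = str(Path(path))
    return any(fnmatch(name, p) or fnmatch(rel, p) for p in DENIED_PATTERNS)

def guarded_read(path: str) -> str:
    """Read a file on the agent's behalf, refusing denied paths."""
    if is_denied(path):
        # In a real setup this denial would also go to an audit log
        # that humans actually monitor.
        raise PermissionError(f"agent blocked from reading: {path}")
    return Path(path).read_text()

if __name__ == "__main__":
    print(is_denied("project/.env"))       # True
    print(is_denied("project/README.md"))  # False
```

The point of the sketch is not the pattern matching itself, but where it sits: outside the model, in code the model cannot rewrite, paired with logging that a human reviews.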

AI-driven productivity: helpful or risky?

2–3 PRs/day becoming 15 PRs/day, how to maintain quality and safety

Should AI act like a collaborator, reviewer, or just an assistant?

It can be any of these, depending on how you see the situation. It can easily get out of control if we forget our responsibilities. If you just throw some agents at the repository and think the instructions are the only thing you have to care about, you are very welcome to ask people to come help you refactor the code in the very near future, and you should not be surprised if they tell you it is beyond fixing.

But it can be helpful; it can make us significantly faster and more productive. We can write better code, design better architectures, and review and understand code better, and that already makes our teams more efficient.

I don’t see agents as a new entity, but as a new, very powerful and capable tool. You still need the pilot, the captain, the system architect, the keen quality testers. But I have a strong feeling that we will all get better at what we do. This may mean that some people who fail to adapt to the new world will stay behind, but the ones who accept it and learn it can definitely shape a better world.

What does the developer workflow look like in 1–2 years?

Predictions, concerns, opportunities

Very hard to say. Some people believe AI will make us extinct by 2027 or so. Some may be more hopeful. I think in the near future we will not see a linear path, but one with ups and downs. At some point we may think we are free of any need for other people, and we will dive in deep. And I believe sooner or later we will learn that reality is not like that, and we will climb back up again.

This probably means a huge number of layoffs in the beginning, and a lot of hesitation and doubt among newcomers to the industry. Maybe panic among students and even universities. But I hope we pass through this phase fast, before it is too late, not just for the software industry, but for humanity.