AI coding tools are now part of many software teams, but trust in those tools still lags behind their increased use. Questions remain about reliability and how to measure real gains.
AI News spoke with Idan Zalzberg, Chief Technology Officer at Agoda, about how the travel platform is approaching the changes reshaping software development. His comments show how a technology company is testing where AI helps engineers move faster and where human review still matters.
Engineers remain accountable for AI-generated code
One of the clearest decisions Agoda has made is that AI does not change who is responsible for the code that enters production. “First, it’s critical for us to maintain that engineers are accountable and responsible for their work, whether they used AI or not,” Zalzberg said. “AI is just a tool, and engineers in different teams should have the oversight and measures needed to ensure the code it generates is correct and maintainable.”

The principle shapes how Agoda treats AI-generated output in its development process. Code produced with the help of AI is still tied to the engineer who prompted the tool.
“As such, code generated by AI is still attributed to the specific engineer who used AI to produce it,” he said.
The approach helps avoid the risk that automation might weaken accountability in engineering teams. At Agoda, the responsibility for the code remains unchanged even if the method of writing it changes.
AI tools may speed up coding tasks, but Agoda has not created a separate process to validate AI-generated code. The company runs all code through the same checks used for any other software. “In terms of practices to ensure AI-generated code is high quality, we rely on the same mechanisms we use for human-written code,” Zalzberg said.
Safeguards such as linters, static analysis, automated testing, and gradual rollouts catch errors before software reaches users. “Some of these processes are themselves assisted by AI, which helps us apply them at larger scale regardless of how the code was written,” he said.
Reliability remains a challenge for AI coding tools
Despite progress in AI models, reliability remains a concern for developers. Zalzberg said the unpredictability of generative systems changes how engineers must evaluate results.
“Unlike ‘standard’ software, we can’t always tell exactly what the code will do by simply reading it,” he said.
“We can’t assume running the same code multiple times will produce identical results. The way to address this is by becoming experts in evaluation, which is an important skill for AI engineers,” he said.
Engineers as supervisors and decision-makers
Zalzberg argues that the core role of engineers has not changed. “It’s true that engineers often do not write all the code themselves and instead instruct an AI assistant to generate parts of it,” he said. “However, that was never the core role of an engineer. The main responsibility has always been making technical decisions at different levels to address business problems.”
Responsibilities include choosing the architecture and defining data structures. AI may assist with writing pieces of code, but the broader design requires human judgement.
“When AI produces code, engineers still need to challenge it and ensure the output aligns with the technical direction of the project. Because they interact with the underlying code differently, they need to be excellent at asking questions and using curiosity to ensure the output is correct,” Zalzberg said.
Measuring AI coding tools
Many developers report saving time when using AI coding tools, but translating those gains into broader engineering outcomes can be difficult. Agoda saw early signs of improvement when it first introduced AI tools. “When we first rolled out AI tools, we were able to run controlled experiments that showed roughly a 27% productivity uplift. Today, AI use is so widespread that we no longer have a clean benchmark for comparison,” he said.
“We track our overall velocity improvements over time, like pull request throughput.”
Benefits did not appear immediately. Engineers needed time to integrate AI tools into their workflows and existing codebases.
“In many areas it took a considerable amount of time before improvements became visible,” he said.
Governance of AI
Beyond engineering productivity, Zalzberg believes mature AI use requires clear governance around how models are used inside a company.
At Agoda, a step toward that goal has been the creation of a gateway that connects teams to different AI models. “We developed a single AI gateway for all models so we can monitor use centrally,” he said.
Another priority is deciding which decisions should remain under human control. “AI-driven decisions should either be reviewed by humans in some form or limited to areas where mistakes have no material consequences,” Zalzberg said.
For example, AI might help rank hotel recommendations but should not directly set the price customers pay for a room. “Being intentional about what AI controls – and what it does not – is critical.”
AI coding is a work in progress
At Agoda, learning involves experimentation with oversight. The company watches how AI tools perform and how engineers adapt. Zalzberg says trust in AI grows slowly as teams learn where the technology works well and where it requires caution.
“Trust is something that needs to be earned.”