CERC and Google ADK: the logic behind the choice
TL;DR — CERC chose Google ADK as the core framework of its AI agent platform because it needed three things at once: explicit orchestration, governance compatible with a regulated environment, and native integration with the company’s strategy on Google Cloud. More than adopting a framework, the decision sought to reduce the gap between development, deployment, operations, and observability. The result is a more predictable foundation for building agents in production: the architecture is standardized without sacrificing future interoperability.
Introduction
The decision was not about a framework. It was about architecture.
When talking about AI agents, it is common to see direct comparisons between Google ADK, LangChain, LangGraph, LangFlow, and LangSmith as if all these technologies competed for the same space.
In practice, that view is oversimplified.
These tools operate at different layers of the stack. Some help compose integrations; others structure execution flows; others support prototyping; still others provide observability, evaluation, and tracing. Comparing them as if they were equivalent leads to fragile technical decisions, and in enterprise environments that comes at a high cost.
At CERC, that kind of simplification is not enough.
We operate critical financial infrastructure in a regulated environment where traceability, predictability, and governance are not differentiators. They are baseline requirements. In this context, the choice of a technology for AI agents cannot be driven solely by experimentation speed or developer preference. It must respond to real compliance, auditability, scale, and operations demands.
It was in this context that we defined Google ADK as the core framework of our AI agent platform.
This article presents the logic behind that choice, the role of the strategic partnership with Google Cloud Platform (GCP), and the architectural vision that supports the decision: in production, the most important question is not which framework looks most interesting in isolation, but which combination of framework and platform reduces the most friction across the entire system lifecycle.
“In enterprise environments, the problem is rarely just building the agent. The problem is operating the agent with control.”
The landscape: different tools, different responsibilities
Before explaining CERC’s decision, it is worth organizing the landscape objectively.
A production AI agent platform does not depend on a single technology. It depends on a set of capabilities: component composition, flow control, tool execution, state management, observability, evaluation, and production runtime.
That is why these tools should be understood by architectural role, not just by popularity.
Google ADK: explicit orchestration for production
Google’s Agent Development Kit (ADK) is a code-first framework designed for building multi-agent systems with a focus on production.
Its main differentiator lies in how it handles orchestration: it is not implicit. It is modeled explicitly in code. This means that coordination between agents, execution order, parallelism points, and context passing can all be read, versioned, and tested as executable architecture.
Instead of hiding the flow in lengthy prompts or hard-to-trace behaviors, ADK favors more predictable structures.
Among its capabilities:
- Multi-agent topologies
- Sequential, parallel, and iterative execution
- Structured outputs
- Session-scoped state management
- Integration with external tools
- Memory and artifact persistence
- Continuous evaluation
- Direct integration with Vertex AI Agent Engine
A simplified example of orchestration in ADK:
```python
from google.adk.agents import SequentialAgent, ParallelAgent, LlmAgent

router_agent = LlmAgent(
    name="RouterAgent",
    instruction="Classify the request and prepare the initial context.",
    output_key="route_result",
)

analysis_agent = LlmAgent(
    name="AnalysisAgent",
    instruction="Perform the analysis of the request.",
    output_key="analysis_result",
)

retrieval_agent = LlmAgent(
    name="RetrievalAgent",
    instruction="Retrieve relevant information.",
    output_key="retrieval_result",
)

computation_agent = LlmAgent(
    name="ComputationAgent",
    instruction="Perform the necessary calculations.",
    output_key="computation_result",
)

execution_agent = LlmAgent(
    name="ExecutionAgent",
    instruction="Execute the planned action.",
    output_key="execution_result",
)

synthesis_agent = LlmAgent(
    name="SynthesisAgent",
    instruction="""
    Combine results from:
    - Routing: {route_result}
    - Analysis: {analysis_result}
    - Retrieval: {retrieval_result}
    - Computation: {computation_result}
    - Execution: {execution_result}
    """,
)

root_agent = SequentialAgent(
    name="MultiAgentWorkflow",
    sub_agents=[
        router_agent,
        ParallelAgent(
            name="ParallelProcessing",
            sub_agents=[
                analysis_agent,
                retrieval_agent,
                computation_agent,
                execution_agent,
            ],
        ),
        synthesis_agent,
    ],
)
```
This type of structure makes the flow visible. Orchestration ceases to be an inference and becomes an architectural artifact.
One important note: determinism is in the coordination flow, not in the LLM’s internal reasoning. In other words, the execution order can be predictable, even if the content generated by an agent remains probabilistic. For production, this separation is extremely useful.
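The same separation can be illustrated outside ADK with a few lines of plain Python (a conceptual sketch, not ADK code): the coordination order is fixed and observable, even when each step's output is not.

```python
import random

def classify(text: str) -> str:
    # Stand-in for an LLM step: the *content* it produces is probabilistic
    return random.choice(["billing", "support", "fraud"])

def enrich(label: str) -> dict:
    return {"label": label, "priority": "high" if label == "fraud" else "normal"}

def respond(ctx: dict) -> str:
    return f"Routing to {ctx['label']} queue ({ctx['priority']} priority)"

def pipeline(text: str) -> tuple[list[str], str]:
    # ...but the *coordination* is deterministic: classify -> enrich -> respond,
    # every run, regardless of what each step returns.
    trace: list[str] = []
    label = classify(text); trace.append("classify")
    ctx = enrich(label); trace.append("enrich")
    answer = respond(ctx); trace.append("respond")
    return trace, answer

trace, answer = pipeline("My card was charged twice")
print(trace)  # always ['classify', 'enrich', 'respond']
```

The trace is identical on every run; only `answer` varies. That is exactly the property an auditor can rely on.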
LangChain: the component ecosystem
LangChain is one of the most widespread ecosystems in LLM-based applications, especially for its vast collection of integrations and reusable abstractions.
Its role is very strong at the composition layer:
- Model abstractions
- Tool calling
- Retrieval
- Memory
- Prompt templates
- Connectors with databases, APIs, and enterprise systems
Simple example:
```python
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Fetch current weather for a city."""
    return f"72°F and sunny in {city}"

llm = ChatOpenAI(model="gpt-4o").bind_tools([get_weather])
result = llm.invoke("What's the weather in Tokyo?")
```
LangChain’s value lies in accelerating exploration, integration, and assembly of capabilities.
LangGraph: flow control with graphs and state
LangGraph operates at the orchestration layer within the LangChain ecosystem.
While LangChain delivers components, LangGraph organizes execution as a stateful graph, enabling loops, branching, persistence, and retries.
```python
from typing import TypedDict
from langgraph.graph import StateGraph, END

# Shared state carried between nodes (fields illustrative)
class AgentState(TypedDict):
    question: str
    findings: str
    decision: str

# research_agent, analysis_agent, decision_node, and route_decision
# are node functions assumed to be defined elsewhere.
workflow = StateGraph(AgentState)
workflow.add_node("research", research_agent)
workflow.add_node("analyze", analysis_agent)
workflow.add_node("decide", decision_node)

workflow.set_entry_point("research")
workflow.add_edge("research", "analyze")
workflow.add_conditional_edges("analyze", route_decision, {
    "needs_more_research": "research",
    "ready": "decide",
})
workflow.add_edge("decide", END)

app = workflow.compile()
```
Its differentiator is especially apparent when the flow needs to re-evaluate steps, repeat cycles, and decide paths based on state.
LangFlow: speed for visual prototyping
LangFlow is a visual layer aimed at building pipelines in drag-and-drop format.
It is useful for learning, ideation, demonstrations, and quick flow validation before translating to code. Its focus is on accelerating experimentation.
LangSmith: observability and evaluation
LangSmith solves another problem: observability, tracing, testing, and evaluation of LLM applications.
When an agent returns a wrong answer, calls the wrong tool, or retrieves the wrong section in a RAG flow, tracing the reason requires instrumentation. LangSmith helps precisely with that, offering structured tracing, evaluation datasets, and regression monitoring.
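What that instrumentation does can be sketched in plain Python (a conceptual illustration, not LangSmith's actual API): each traced call records its name, inputs, output, and duration into a structured log that can be inspected after the fact.

```python
import functools
import time

TRACE: list[dict] = []

def traced(fn):
    """Record name, inputs, output, and duration of each call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        TRACE.append({
            "step": fn.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
            "output": result,
            "ms": round((time.perf_counter() - start) * 1000, 2),
        })
        return result
    return wrapper

@traced
def retrieve(query: str) -> list[str]:
    # Stand-in for a RAG retrieval step
    return ["section 4.2"]

@traced
def answer(query: str, context: list[str]) -> str:
    # Stand-in for the generation step
    return f"Based on {context[0]}: ..."

answer("settlement rules", retrieve("settlement rules"))
for entry in TRACE:
    print(entry["step"], "->", entry["output"])
```

When the agent retrieves the wrong section, the trace shows it: the bad context appears in `retrieve`'s recorded output, before generation ever runs.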
Why CERC chose Google ADK
The choice of ADK was not an isolated feature comparison. It was a response to concrete company requirements.
1. Explicit orchestration for a regulated environment
In a regulated financial infrastructure, it is not enough for an agent to “work.” It is necessary to understand how it arrived at a given behavior.
When an auditor, a risk team, or a compliance team asks why a decision was made, the answer cannot depend on manual context reconstruction or interpretation of an implicit flow.
ADK offers an important advantage in this scenario: orchestration is explicit.
This allows the flow to be:
- Visible in code
- Versioned in Git
- Tested in CI/CD
- Reviewed as architecture
- Audited with greater clarity
In practice, a SequentialAgent can define the processing order, a ParallelAgent can open multiple simultaneous analysis fronts, and a final agent can consolidate results. That design is not hidden. It is formalized.
For CERC, this clarity matters because it reduces operational opacity.
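The practical consequence is that the topology itself becomes testable. A minimal illustration in plain Python (not ADK code; the workflow is modeled as simple nested data to show the idea): CI can assert structural invariants of the flow before anything is deployed.

```python
# A workflow declared as data: a sequence whose middle step fans out in parallel.
workflow = ("sequential", [
    "router",
    ("parallel", ["analysis", "retrieval", "computation", "execution"]),
    "synthesis",
])

def flatten(node) -> list[str]:
    """Collect step names in declaration order."""
    if isinstance(node, str):
        return [node]
    _, children = node
    return [name for child in children for name in flatten(child)]

# Structural invariants a CI pipeline could enforce:
assert flatten(workflow)[0] == "router"       # routing always comes first
assert flatten(workflow)[-1] == "synthesis"   # synthesis always comes last
assert workflow[1][1] == ("parallel", ["analysis", "retrieval", "computation", "execution"])
```

Because the topology is an ordinary code artifact, a change to it shows up in a diff, fails a test, and goes through review like any other architectural change.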
2. Parallelism to reduce latency in real flows
In several backoffice scenarios, agents need to query multiple sources: internal databases, rule engines, APIs, document sources, or decision-support repositories.
When this happens sequentially, latency grows quickly.
In the use cases we are evolving, this behavior has already appeared clearly. In sequential flows, total time can easily exceed 10 seconds. With ADK’s ParallelAgent, these executions become concurrent, bringing response time down to around 3 seconds.
We are not yet using this pattern in the company’s core transactional layer. But the results in backoffice already show why this is relevant. At scale, parallelism is not just an optimization. It defines whether the experience will be usable or prone to timeouts.
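The latency effect is easy to reproduce with plain asyncio (a toy model, not ADK code; the 0.1-second delay simulates an external call): four sequential calls cost the sum of their delays, while the same four in parallel cost roughly the slowest one.

```python
import asyncio
import time

async def query_source(name: str, delay: float = 0.1) -> str:
    # Stand-in for an external call (database, rules engine, API, documents)
    await asyncio.sleep(delay)
    return f"{name}: ok"

async def run_sequential(sources: list[str]) -> list[str]:
    # Total time is the *sum* of the individual delays
    return [await query_source(s) for s in sources]

async def run_parallel(sources: list[str]) -> list[str]:
    # Total time approaches the *slowest* individual delay
    return list(await asyncio.gather(*(query_source(s) for s in sources)))

sources = ["database", "rules_engine", "documents", "decision_api"]

start = time.perf_counter()
asyncio.run(run_sequential(sources))
seq_time = time.perf_counter() - start

start = time.perf_counter()
asyncio.run(run_parallel(sources))
par_time = time.perf_counter() - start

print(f"sequential: {seq_time:.2f}s / parallel: {par_time:.2f}s")
```

With real sources the delays are larger and less uniform, but the shape of the result is the same: fan-out turns additive latency into max latency.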
3. State isolation to prevent cross-request contamination
In agentic systems, state leakage between requests is a serious risk.
When context, memory, or artifacts from one execution contaminate another, the system may produce incorrect responses or even trigger tools based on wrong premises. In critical environments, this is unacceptable.
ADK favors per-execution isolation through its instantiation model and session management. This helps reduce the risk of cross-request contamination and improves the system’s operational predictability.
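The isolation property itself is simple to state in code (a conceptual sketch, not ADK's Session API): every execution receives its own state container, and nothing flows through shared mutable state.

```python
import uuid

class Session:
    """Per-execution state container: nothing is shared across sessions."""
    def __init__(self):
        self.id = str(uuid.uuid4())
        self.state: dict = {}

def run_agent(session: Session, user_input: str) -> str:
    # All reads and writes go through the session that was passed in,
    # never through module-level or otherwise shared state.
    session.state["last_input"] = user_input
    return f"[{session.id[:8]}] processed: {user_input}"

a, b = Session(), Session()
run_agent(a, "request A")
run_agent(b, "request B")

# Each execution only ever sees its own state.
print(a.state)  # {'last_input': 'request A'}
print(b.state)  # {'last_input': 'request B'}
```

The failure mode this prevents is the inverse: a single shared dictionary, where request B silently reads context left behind by request A.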
4. Alignment with CERC’s strategy on Google Cloud
The choice of ADK was also strategic.
CERC already operates a significant portion of its infrastructure on Google Cloud Platform. Adopting ADK as the core of the agent layer brings this new capability closer to the ecosystem where the company already operates data, security, identity, observability, and runtime.
This convergence has a direct impact on operations.
With Vertex AI Agent Engine, agent deployment and execution take place on a managed platform integrated with Google Cloud’s mechanisms. This reduces the need to build a proprietary runtime, scaling, session, and observability layer for agents from scratch.
In other words: the decision reduces platform complexity.
5. Standardization without closing doors
An important aspect of the decision is that choosing ADK does not mean assuming that a single framework solves everything, or that CERC’s architecture is closed to the rest of the ecosystem.
Quite the contrary.
Our decision was to standardize on ADK for production, while maintaining the view that different tools can coexist at other layers of the stack or in future interoperability scenarios.
This gives the company an important balance between governance and flexibility.
The role of Vertex AI Agent Engine
An important architectural distinction must be made here.
Vertex AI Agent Engine is the managed runtime layer of the platform. ADK is the orchestration framework we chose as the production standard.
These two decisions are complementary, but not identical.
At CERC, the separation is clear:
- Platform: Vertex AI
- Production standard framework: Google ADK
This distinction is important because it avoids a common confusion in AI projects: assuming that the choice of runtime must automatically define the entire development architecture. It does not have to be that way.
What we decided was to use ADK as the orchestration core and Vertex AI as the layer that complements operations, including runtime, evaluation, observability, and integration with the Google Cloud ecosystem.
| Layer | Technology | Role at CERC |
|---|---|---|
| Orchestration & Execution | Google ADK | Multi-agent topology, parallelism, flow control, and tool execution |
| Retrieval (RAG) | ADK + Tools | Integration with Vertex AI Search and external APIs |
| Memory & State | ADK Session State | Persistence across agents and sessions |
| Observability | Vertex AI + Standard Logging | Tracing, metrics, and debugging |
| Evaluation | Vertex AI Evaluation | Automated testing and quality |
| Deploy & Runtime | Vertex AI Agent Engine | Managed infrastructure and scale |
This composition reflects an objective view: no single tool excels at every need of an enterprise agentic system. What does excel is an architecture in which each layer has a clear role.
The strategic partnership with Google Cloud
The choice of ADK is directly connected to CERC’s alignment with Google Cloud. But it is worth being clear about this in the right way: this is not about automatic lock-in. It is about architectural coherence.
Unified infrastructure
When databases like BigQuery and Cloud SQL, services like Cloud Run, storage in Cloud Storage, and the agent layer all operate within the same ecosystem, operations tend to be more consistent.
This convergence brings practical gains:
- Single identity model with IAM
- Aligned security controls
- More consistent telemetry
- Operations with enterprise SLAs
- Lower governance and compliance friction
In a regulated environment, reducing operational fragmentation has real architectural value.
Vertex AI as a lifecycle platform
The value of Google Cloud is not just in running agents.
Vertex AI also expands the capacity to evolve the platform over time, with resources such as:
- Model Garden for model selection
- Vertex AI Search for grounding and RAG
- Evaluation Pipelines for continuous validation
- Example Store for usage-driven evolution
- Agentspace for agent discovery and organization
This makes a difference because the discussion shifts from “how do I run an agent?” to “how do I operate and evolve an agent platform with less friction?”
Interoperability with A2A
Another strategic point is interoperability.
The A2A (Agent-to-Agent) protocol reinforces a more open ecosystem vision, allowing agents from different origins to communicate in a standardized way.
This does not change the fact that, today, CERC’s decision is to standardize on ADK for production. But it shows that this standardization need not mean architectural isolation in the future.
What this choice delivers for CERC
In the end, the decision for ADK delivers something more important than a technology preference.
It reduces the gap between:
- Architecture
- Development
- Deployment
- Operations
- Governance
This friction reduction is one of the main objectives of any enterprise platform.
In practice, this means:
- More explicit flows
- More predictable behavior
- Greater clarity for auditing and compliance
- Lower operational complexity
- A more coherent foundation for scaling agents in production
That is the central point of the decision.
Conclusion
CERC did not choose Google ADK because it believes the future of AI agents will be dominated by a single framework.
It chose it because, in the company’s current context, it offers a particularly strong combination of:
- Orchestration control
- Architectural clarity
- Parallelism support
- State isolation
- Integration with the Google Cloud strategy
- Less friction between engineering and operations
In enterprise environments, competitive advantage rarely comes from the flashiest tool in the lab. It comes from the ability to turn technology into predictable, governable, and sustainable operations.
That is what guided our decision.
Strategic insight: In enterprise environments, the best choice is not the one that promises the most features in isolation. It is the one that reduces the most friction between development, deployment, operations, and governance.
“The future of AI agents is not just about smarter models. It is about more mature engineering.”
References
- Google ADK Documentation
- Google ADK GitHub (Python)
- Vertex AI Agent Engine Overview
- LangChain Documentation
- LangGraph Documentation
- LangFlow Documentation
- LangSmith Documentation
- Vertex AI Agent Builder
- Agent2Agent Protocol
In a regulated financial environment, building AI agents requires more than fast prototyping. It requires architecture, control, and real capacity for production-scale operations.