Building with A2A
Previously we talked about Google’s Agent2Agent (A2A) protocol. In that post, we covered the concepts behind A2A and why it matters for agent interoperability.
Now, let’s try using A2A to build a simple agent, just as we did when we built our own MCP server.
In this tutorial, we’ll create a tides/water levels agent that communicates with the NOAA Tides and Currents API. Our agent will provide two key skills:
- Listing available water level stations
- Providing water level data for a specific station
We’ll implement both an A2A agent server and a client to demonstrate the protocol in action.
Prerequisites #
ℹ️ NOTE: I’m using macOS for this setup.
Python #
We’re going to use Python. Any version 3.11 or above should work. Install it like so:
brew install python
Package manager #
Let’s use uv for our package manager. Please follow the official instructions to install it.
Local LLM #
We’ll use Ollama to run a local LLM. Our agent will use this to interpret user requests and format data responses.
brew install ollama
After installation, pull the Llama 3.1 8B model:
ollama pull llama3.1:8b
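Before moving on, it’s worth checking that the model responds. Here’s a minimal sanity check using the ollama Python package (we’ll add it as a project dependency in the next section):
Quick Ollama sanity check
import ollama

# Ask the local model for a one-word reply to confirm it's reachable
response = ollama.chat(
    model="llama3.1:8b",
    messages=[{"role": "user", "content": "Reply with the single word: ready"}],
)
print(response.message.content)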
Setting up the project #
Let’s start by creating our project structure:
uv init tides-a2a
cd tides-a2a
First, let’s set up our project dependencies in pyproject.toml:
Project dependencies in pyproject.toml
[project]
name = "tides-a2a"
version = "0.1.0"
description = "A2A agent for NOAA tides and water level data"
readme = "README.md"
requires-python = ">=3.11"
dependencies = [
    "ollama>=0.4.8",
    "pydantic>=2.11.5",
    "python-a2a>=0.5.6",
    "python-dotenv>=1.1.0",
    "requests>=2.32.3",
    "thefuzz>=0.22.1",
]
Create a .env file with our configuration:
AGENT_HOST=localhost
AGENT_PORT=8000
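To confirm these values are picked up, here’s a quick check using python-dotenv (declared in our dependencies above):
Verifying the .env configuration
import os
from dotenv import load_dotenv

# Load .env from the current directory and echo the values back
load_dotenv()
print(os.getenv("AGENT_HOST"), os.getenv("AGENT_PORT"))  # expect: localhost 8000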
Talking to the NOAA tides API #
Now, let’s create a small wrapper to the NOAA Tides and Currents API. This API provides real-time water level data for coastal stations across the United States.
We’ll start by defining the data models for our API responses in tides_api/tides_api_types.py. This ensures we have a typed interface to the API, making our code more robust.
Here’s a simplified version of our data models:
Data models for the NOAA API responses
from datetime import datetime
from typing import Literal

from pydantic import BaseModel, field_serializer, model_validator

# Model for a tide station
class Station(BaseModel):
    id: str
    name: str
    state: str

# Response containing a list of stations
class GetStationsResponse(BaseModel):
    count: int
    stations: list[Station]

# Parameters for water level data requests
class GetWaterLevelDataForStationParams(BaseModel):
    station: str
    begin_date: datetime | None
    end_date: datetime | None
    date: Literal["latest", "today", "recent"] | None
    product: Literal["water_level"]
    datum: Literal["MHHW", "MHW", "DTL", "MTL", "MSL", "MLW", "MLLW", "NAVD88", "STND"]
    time_zone: Literal["gmt", "lst", "lst_ldt"]
    format: Literal["json", "csv", "xml"]
    units: Literal["english", "metric"]

    # Validation and serialization methods...

# Individual water level data point
class WaterLevelData(BaseModel):
    t: str  # Time - date and time of the observation
    v: str  # Value - measured water level height
    s: str  # Sigma - standard deviation
    f: str  # Data flags
    q: Literal["p", "v"]  # Quality level - preliminary or verified

# Station metadata in the response
class WaterLevelResponseMetadata(BaseModel):
    id: str
    name: str
    lat: str
    lon: str

# Complete response for water level data
class GetWaterLevelDataForStationResponse(BaseModel):
    metadata: WaterLevelResponseMetadata
    data: list[WaterLevelData]
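The elided validation and serialization methods aren’t shown in full. As a hypothetical sketch (the model name and checks below are my assumptions, not the post’s exact code), a serializer could render the datetime fields in the yyyyMMdd HH:mm format the NOAA API accepts, and a validator could require either a relative date keyword or an explicit range:
Hypothetical sketch of the elided validation and serialization methods
from datetime import datetime
from typing import Literal

from pydantic import BaseModel, field_serializer, model_validator

class DateParamsSketch(BaseModel):
    """Illustrative stand-in for GetWaterLevelDataForStationParams."""
    begin_date: datetime | None = None
    end_date: datetime | None = None
    date: Literal["latest", "today", "recent"] | None = None

    @field_serializer("begin_date", "end_date")
    def serialize_dates(self, value: datetime | None) -> str | None:
        # NOAA accepts dates formatted as yyyyMMdd HH:mm
        return value.strftime("%Y%m%d %H:%M") if value is not None else None

    @model_validator(mode="after")
    def check_date_params(self) -> "DateParamsSketch":
        # Require either a relative date keyword or a full explicit range
        if self.date is None and (self.begin_date is None or self.end_date is None):
            raise ValueError("Provide 'date' or both 'begin_date' and 'end_date'")
        return self

print(DateParamsSketch(date="latest").model_dump(mode="json"))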
Now, let’s implement the API helper in tides_api/tides_api_helper.py to interact with the NOAA API:
API helper implementation for the NOAA API
from datetime import datetime
from typing import Literal

import requests
from pydantic import ValidationError

from tides_api.tides_api_types import (
    GetStationsResponse,
    GetWaterLevelDataForStationParams,
    GetWaterLevelDataForStationResponse,
)

NOAA_DATA_BASE_URL = "https://api.tidesandcurrents.noaa.gov/api/prod/datagetter"
NOAA_METADATA_BASE_URL = "https://api.tidesandcurrents.noaa.gov/mdapi/prod/webapi"

class TidesApiHelper:
    def get_water_level_stations(self, use_metric_units: bool) -> GetStationsResponse | None:
        """Get list of NOAA stations suitable for water level data"""
        response = requests.get(
            f"{NOAA_METADATA_BASE_URL}/stations.json",
            params={
                "type": "waterlevels",
                "units": "metric" if use_metric_units else "english",
            },
        )
        if response.status_code != 200:
            print(f"API Error: {response.status_code} -> {response.text}")
            return None
        try:
            parsed_response = GetStationsResponse.model_validate_json(response.text)
            # Sort stations by name for easier browsing
            parsed_response.stations = sorted(
                parsed_response.stations, key=lambda x: x.name
            )
        except ValidationError as e:
            print(f"Error validating stations response: {e}")
            return None
        return parsed_response

    def get_water_level_data_for_station(
        self,
        station_id: str,
        date_option: Literal["latest", "today", "recent"] | tuple[datetime, datetime],
        use_local_timezone: bool,
        use_metric_units: bool,
    ) -> list[dict] | None:
        """Get water level data for a station"""
        # Configure date parameters based on the option provided
        if isinstance(date_option, tuple):
            date_from, date_to = date_option
            date = None
        else:
            date_from = None
            date_to = None
            date = date_option
        # Build the API parameters
        params = GetWaterLevelDataForStationParams(
            station=station_id,
            begin_date=date_from,
            end_date=date_to,
            date=date,
            product="water_level",
            datum="MTL",  # Mean Tide Level datum
            time_zone="lst_ldt" if use_local_timezone else "gmt",
            format="json",
            units="metric" if use_metric_units else "english",
        )
        # Call the API
        response = requests.get(
            NOAA_DATA_BASE_URL, params=params.model_dump(mode="json")
        )
        if response.status_code != 200:
            print(f"API Error: {response.status_code} -> {response.text}")
            return None
        # Parse and validate the response
        try:
            parsed_response = GetWaterLevelDataForStationResponse.model_validate_json(
                response.text
            )
        except ValidationError as e:
            print(f"Error validating water level data response: {e}")
            return None
        # Transform the data for easier consumption, newest reading first
        data = [
            {
                "timestamp": data_item.t,
                "is_local_timezone": use_local_timezone,
                "value": float(data_item.v),
                "std_dev": data_item.s,
                "units": "meters" if use_metric_units else "feet",
            }
            for data_item in sorted(
                parsed_response.data, key=lambda x: x.t, reverse=True
            )
        ]
        return data
With these files in place, we now have a clean interface to the NOAA Tides API.
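Before wiring this into an agent, you can smoke-test the helper directly. Here 9414290 is the NOAA station ID for San Francisco (verify it against the stations list, since the IDs below are only illustrative):
Manual smoke test for the API helper
from tides_api.tides_api_helper import TidesApiHelper

helper = TidesApiHelper()

# List stations and show the first one alphabetically
stations = helper.get_water_level_stations(use_metric_units=False)
if stations:
    print(f"Found {stations.count} stations; first: {stations.stations[0].name}")

# Fetch the latest reading for San Francisco (station 9414290)
data = helper.get_water_level_data_for_station(
    station_id="9414290",
    date_option="latest",
    use_local_timezone=True,
    use_metric_units=False,
)
print(data)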
Defining the A2A agent #
Now that we have our API interface ready, let’s create the A2A agent implementation. The agent will expose our tide data functionality through the A2A protocol.
We’ll need to define:
- An Agent Card that describes the agent’s capabilities
- Skills that the agent can perform
- A task router that directs incoming requests to the appropriate skill
Let’s build our agent in tides_agent.py.
Task routing #
Since we want our agent to understand natural language requests, we’ll use a local LLM to determine which action the user wants to perform. Let’s define a model for this response:
Action selection response model for LLM routing
from typing import Literal

from pydantic import BaseModel, Field

class ActionSelectionResponse(BaseModel):
    action: Literal["get_stations_list", "get_tide_data", "unknown"]
    station_id: str | None = Field(
        description="this must be a seven-digit numeric string", default=None
    )
    station_name: str | None = None
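A quick round-trip shows the shape we expect the LLM to hand back (the values here are illustrative):
Round-tripping the routing model
# Parse a JSON payload like the one the LLM will produce
raw = '{"action": "get_tide_data", "station_id": "9414290", "station_name": "San Francisco"}'
parsed = ActionSelectionResponse.model_validate_json(raw)
assert parsed.action == "get_tide_data"
assert parsed.station_id == "9414290"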
We’ll also define the prompts for our LLM:
LLM prompts for action selection and data formatting
SYSTEM_PROMPT = """
You are an agent that can perform two actions:
1. Get a list of NOAA tide stations that are capable of measuring water level
2. Given a station, get tide (water level) for that specific NOAA station
Please attempt to choose the most appropriate action based on the user's message.
If the user's action is to get stations list, then please return the following JSON:
{
"action": "get_stations_list"
}
If the user's action is to get tide data for a specific station, then please return the following JSON:
{
"action": "get_tide_data",
"station_id": "<station_id>",
"station_name: "<station_name>
}
If neither of the above actions are appropriate, please return the following JSON:
{
"action": "unknown"
}
"""
USER_INPUT_PROMPT = """
The user has provided the following message:
{user_message}
Please choose the most appropriate action based on the user's message.
"""
WATER_LEVEL_DATA_STRINGIFY_PROMPT = """
Given this water level data, please return a human-friendly description of it.
Just return the description, do not include any additional text.
Please do not even guess the station ID unless it is explicitly provided by the user as a seven-digit numeric string.
Here is the data and the schema:
Data: {water_level_data}
Schema: {water_level_data_schema}
"""
Agent implementation #
Now let’s define our A2A agent class by inheriting from A2AServer:
A2A agent implementation for listing water level stations
# Imports for the rest of tides_agent.py (in addition to those shown above)
import os

import ollama
from dotenv import load_dotenv
from python_a2a import (
    A2AServer,
    AgentCard,
    AgentSkill,
    Task,
    TaskState,
    TaskStatus,
    run_server,
    skill,
)
from thefuzz import process

from tides_api.tides_api_helper import TidesApiHelper
from tides_api.tides_api_types import WaterLevelData

# The model we pulled earlier with Ollama
OLLAMA_MODEL = "llama3.1:8b"

class TidesAgent(A2AServer):
    @skill(
        name="List water level stations",
        description="List NOAA tide stations that are capable of measuring water level",
        tags=["tides", "water level", "stations"],
        examples="What water level stations are available?",
    )
    def list_water_level_stations(self):
        """List all available NOAA water level stations"""
        tides_api_helper = TidesApiHelper()
        try:
            stations_list = tides_api_helper.get_water_level_stations(
                use_metric_units=False
            )
        except Exception as e:
            return self._create_error_response(
                f"Error fetching stations list: {str(e)}"
            )
        if not stations_list:
            return self._create_error_response("Failed to get stations list")
        parts = [f"Found {stations_list.count} stations."] + [
            {
                "type": "text",
                "text": "\n".join(
                    [
                        f"{station.name} ({station.id})"
                        for station in stations_list.stations
                    ]
                ),
            }
        ]
        return self._create_success_response(parts)
The @skill decorator registers a function as an available skill for the agent and includes metadata about the skill.
Now let’s add our second skill for getting water level data:
A2A agent implementation for retrieving water level data
    @skill(
        name="Get water level at a specific station",
        description="Get water level at a specific NOAA tide station",
        tags=["tides", "water level"],
        examples="What's the water level for San Francisco?",
    )
    def get_water_level_data_for_station(
        self, station_name: str | None, station_id: str | None
    ):
        """Get water level data for a specific NOAA tide station"""
        if not station_id and not station_name:
            return self._create_error_response("No station ID or name was provided")
        tides_api_helper = TidesApiHelper()
        # Get the list of stations first
        try:
            stations_list = tides_api_helper.get_water_level_stations(
                use_metric_units=False
            )
        except Exception as e:
            return self._create_error_response(
                f"Error fetching stations list: {str(e)}"
            )
        if not stations_list:
            return self._create_error_response("Failed to get stations list")
        # Resolve station ID if only name was provided
        resolved_station_id = None
        if station_id:
            resolved_station_id = station_id
        elif station_name:
            # Fuzzy match the station name
            fuzzy_match_result = process.extractOne(
                query=station_name,
                choices=[station.name for station in stations_list.stations],
            )
            fuzzy_matched_station_name = None
            if isinstance(fuzzy_match_result, tuple):
                fuzzy_matched_station_name = fuzzy_match_result[0]
            if isinstance(fuzzy_matched_station_name, str):
                for station in stations_list.stations:
                    if station.name.lower() == fuzzy_matched_station_name.lower():
                        resolved_station_id = station.id
                        break
        if not resolved_station_id:
            return self._create_error_response("No station ID was supplied or derived")
        # Get water level data
        try:
            water_level_data = tides_api_helper.get_water_level_data_for_station(
                station_id=resolved_station_id,
                date_option="latest",
                use_local_timezone=True,
                use_metric_units=False,
            )
        except Exception as e:
            return self._create_error_response(
                f"Error getting water level data: {str(e)}"
            )
        if not water_level_data:
            return self._create_error_response("Failed to get water level data")
        # Use the LLM to create a human-friendly description of the water level data
        try:
            water_level_data_stringify_response = ollama.chat(
                model=OLLAMA_MODEL,
                messages=[
                    {
                        "role": "user",
                        "content": WATER_LEVEL_DATA_STRINGIFY_PROMPT.format(
                            water_level_data=water_level_data,
                            water_level_data_schema=WaterLevelData.model_json_schema(),
                        ),
                    },
                ],
            )
        except Exception as e:
            return self._create_error_response(
                f"Error generating description: {str(e)}"
            )
        parts = [
            {
                "type": "text",
                "text": water_level_data_stringify_response.message.content,
            }
        ]
        return self._create_success_response(parts)
Finally, let’s add the main task handler that routes incoming requests, along with some utility methods:
A2A agent task routing and utility methods
    def _create_error_response(self, message: str) -> Task:
        """Helper to create a consistent error response"""
        task = Task()
        task.status = TaskStatus(state=TaskState.FAILED)
        task.artifacts = [{"parts": [{"type": "error", "message": message}]}]
        return task

    def _create_success_response(self, parts: list[dict | str]) -> Task:
        """Helper to create a consistent success response"""
        task = Task()
        task.status = TaskStatus(state=TaskState.COMPLETED)
        task.artifacts = [{"parts": parts}]
        return task

    def handle_task(self, task: Task) -> Task:
        """Main task handler that routes to the appropriate skill"""
        if task.message is None or not (msg_content := task.message.get("content")):
            return self._create_error_response("No message provided")
        msg_text: str = msg_content.get("text", "")
        # Use the LLM to determine which action the user wants to perform
        try:
            action_selection_response = ollama.chat(
                model=OLLAMA_MODEL,
                messages=[
                    {
                        "role": "system",
                        "content": SYSTEM_PROMPT,
                    },
                    {
                        "role": "user",
                        "content": USER_INPUT_PROMPT.format(user_message=msg_text),
                    },
                ],
                format=ActionSelectionResponse.model_json_schema(),
            )
        except Exception as e:
            return self._create_error_response(str(e))
        if action_selection_response.message.content is None:
            return self._create_error_response("No action selection response content")
        # Parse the LLM response into our structured model
        try:
            parsed_action_selection_response = (
                ActionSelectionResponse.model_validate_json(
                    action_selection_response.message.content
                )
            )
        except Exception as e:
            return self._create_error_response(str(e))
        # Route to the appropriate skill based on the detected action
        if parsed_action_selection_response.action == "unknown":
            return self._create_error_response("Unknown action requested")
        if parsed_action_selection_response.action == "get_stations_list":
            return self.list_water_level_stations()
        if parsed_action_selection_response.action == "get_tide_data":
            return self.get_water_level_data_for_station(
                station_name=parsed_action_selection_response.station_name,
                station_id=parsed_action_selection_response.station_id,
            )
        # Should not reach here if action validation is working correctly
        task.status = TaskStatus(state=TaskState.UNKNOWN)
        return task
Agent card and server launch #
Now let’s add the main function to create our agent card and start the server.
The agent card provides all the metadata about the agent. It’s used for agent discovery and lets clients know which tasks the agent can handle.
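In A2A, the card is typically served at a well-known path, so clients can discover an agent with a plain HTTP request. Assuming the server below is running on localhost:8000, fetching it looks like this:
Fetching the agent card for discovery
import requests

# A2A exposes the agent card at a well-known path on the agent's base URL
card = requests.get("http://localhost:8000/.well-known/agent.json").json()
print(card["name"], "-", card["description"])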
Agent card and server setup implementation
def main():
    load_dotenv()
    print("Starting Tides agent...")
    agent_host = os.getenv("AGENT_HOST")
    agent_port = os.getenv("AGENT_PORT")
    if not agent_host or not agent_port:
        raise ValueError("AGENT_HOST and AGENT_PORT must be set in the environment")
    # Define the agent card - this is the A2A 'identity' of our agent
    agent_card = AgentCard(
        name="Tide agent",
        description="Agent provides information about tide levels at coastal locations.",
        url=f"http://{agent_host}:{agent_port}/",
        version="1.0.0",
        authentication=None,
        capabilities={
            "water_levels": True,
        },
        skills=[
            AgentSkill(
                name="List water level stations",
                description="List NOAA tide stations that are capable of measuring water level",
                tags=["tides", "water level", "stations"],
                examples=["What water level stations are available?"],
            ),
            AgentSkill(
                name="Get water level at a specific station",
                description="Get water level at a specific NOAA tide station",
                tags=["tides", "water level"],
                examples=["What's the water level for San Francisco?"],
            ),
        ],
    )
    # Initialize our agent with the agent card
    tides_agent = TidesAgent(agent_card=agent_card)
    # Start the A2A server
    run_server(agent=tides_agent, host=agent_host, port=int(agent_port))

if __name__ == "__main__":
    main()
Implementing the client #
Now that we have our agent server implementation, let’s create a client to interact with it. The client will be simple: it sends messages to the agent and displays the responses.
Create tides_client.py with the following content:
A2A client implementation for interacting with the tides agent
import os

from dotenv import load_dotenv
from python_a2a import A2AClient, ErrorContent, Message, MessageRole, TextContent

def main():
    load_dotenv()
    print("Tides client")
    print("------------")
    agent_host = os.getenv("AGENT_HOST")
    agent_port = os.getenv("AGENT_PORT")
    if not agent_host or not agent_port:
        raise ValueError("AGENT_HOST and AGENT_PORT must be set in the environment")
    client = A2AClient(endpoint_url=f"http://{agent_host}:{agent_port}")
    agent_card = client.get_agent_card()
    print(f"Agent card: {agent_card}")
    print(
        "\nAsk about tide/water level at a specific station or ask for a stations list:"
    )
    print(
        "(example: 'What's the water level for San Francisco?' or 'What water level stations are available?')"
    )
    query = input("> ").strip()
    if not query:
        return
    message = Message(content=TextContent(text=query), role=MessageRole.USER)
    response = client.send_message(message)
    print("\nTides agent response")
    print("--------------------")
    if isinstance(response.content, ErrorContent):
        print(f"Encountered an error: {response.content.message}")
    elif isinstance(response.content, TextContent):
        print(response.content.text)
    else:
        print("No usable response received.")

if __name__ == "__main__":
    main()
Running it all #
Now that we have all our code in place, let’s run our agent and client. We’ll need two terminal windows.
In the first terminal, start the agent server:
uv run tides_agent.py
You should see:
Starting Tides agent...
In the second terminal, run the client:
uv run tides_client.py
You should see the agent card and a prompt for your query. Try asking for the list of stations:
> What water level stations are available?
Or ask for the water level at a specific station:
> What's the water level for San Francisco?
Tides agent response
--------------------
On 2025-05-25 at 2:06 PM, the water level was recorded to be approximately 0.203 feet above mean sea level with a standard deviation of about 0.128 feet.
Next steps #
You can find the code for this example here.
To further explore A2A, you might want to try:
- Adding more skills to the tide agent, such as forecasting or historical data (see the sketch below)
- Creating additional agents that can work with your tide agent
- Implementing authentication for secure agent communication
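For the first of these, the NOAA API exposes a predictions product alongside water_level, so a new skill could mirror the existing one. Here’s a hypothetical stub (inside TidesAgent; the name and wiring are mine, not working code):
Hypothetical stub for a tide predictions skill
    @skill(
        name="Get tide predictions",
        description="Get predicted tide levels for a specific NOAA tide station",
        tags=["tides", "predictions"],
        examples="What are tomorrow's tides at Santa Monica?",
    )
    def get_tide_predictions(self, station_id: str | None):
        """Sketch only: mirror get_water_level_data_for_station, but request
        the NOAA 'predictions' product instead of 'water_level'."""
        raise NotImplementedError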