The release of Spring AI has fundamentally changed how Java developers interact with Large Language Models (LLMs). While generating text and summarizing documents is impressive, the real power of modern AI lies in Agency.
An “Agent” is an AI system that doesn’t just talk; it acts. It can query your database, check the stock market, book a flight, or restart a server.
In this tutorial, we will dive deep into Spring AI Function Calling. You will learn how to overcome the “knowledge cutoff” and “isolation” limitations of LLMs by giving them tools to interact with the outside world.
The Problem: LLMs Are a “Brain in a Jar” #
Imagine hiring a brilliant consultant who has memorized every book in the library up to the year 2023. They are incredibly smart. However, if you ask them: “What is the weather in New York right now?” or “What is the current price of AAPL stock?”, they fail.
LLMs suffer from two main limitations:
- Static Knowledge: Their training data has a cut-off date.
- Isolation: They cannot access your private database or the internet by default.
In the past, we solved this via RAG (Retrieval-Augmented Generation), where we shove documents into the prompt context. But RAG is for data, not actions.
Function Calling is the solution for actions.
What is Spring AI Function Calling? #
Spring AI Function Calling allows you to register Java functions (standard java.util.function.Function) as “tools” that the LLM can invoke.
It works on a loop mechanism often referred to as the Tool Execution Loop:
- User Prompt: You ask: “What’s the weather in London?”
- LLM Analysis: The LLM (e.g., GPT-4) analyzes the intent. It realizes it doesn’t know the weather, but it sees a tool named CurrentWeatherService.
- JSON Request: Instead of replying with text, the LLM generates a JSON object containing the parameters needed for that function (e.g., {"city": "London", "unit": "C"}).
- Spring AI Intercept: The Spring AI framework detects this request. It executes your Java method using the parameters provided by the LLM.
- Execution & Result: Your Java code calls a weather API and returns “15°C, Cloudy”.
- Final Answer: Spring AI feeds this result back to the LLM. The LLM now has the context to generate the final natural language response: “The current weather in London is 15°C and cloudy.”
This entire process happens seamlessly behind the scenes.
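To make the loop concrete, here is a stripped-down, framework-free sketch of the dispatch step (steps 3 and 4 above) in plain Java. The `toolRegistry` name and the map-based arguments are simplifications for illustration, not Spring AI’s actual internals:

```java
import java.util.Map;
import java.util.function.Function;

public class ToolLoopSketch {

    // Tools are looked up by name, much like bean names are used later in this tutorial.
    static final Map<String, Function<Map<String, String>, String>> toolRegistry = Map.of(
            "currentWeather", args -> {
                // In the real framework, your registered Function runs here.
                String city = args.get("city");
                return "{\"city\": \"" + city + "\", \"temperature\": \"15C\", \"description\": \"Rainy\"}";
            });

    // The framework receives the model's tool request and executes the matching function.
    static String dispatch(String toolName, Map<String, String> llmArguments) {
        return toolRegistry.get(toolName).apply(llmArguments);
    }

    public static void main(String[] args) {
        // Pretend the LLM replied with a tool call instead of text:
        String toolResult = dispatch("currentWeather", Map.of("city", "London"));
        // This JSON result is appended to the conversation and sent back to the LLM.
        System.out.println(toolResult);
    }
}
```

In the real framework, the returned JSON is fed back to the model in a follow-up call, which is what produces the final natural language answer.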
Prerequisites #
To follow this tutorial, ensure you have:
- Java 17 or higher.
- Spring Boot 3.2.x or higher.
- OpenAI API Key (We will use OpenAI for this demo, though Spring AI supports other providers like Anthropic and Mistral).
Step 1: Project Configuration #
First, let’s set up the dependencies. Using Maven, add the Spring AI Bill of Materials (BOM) and the OpenAI starter.
pom.xml:
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-bom</artifactId>
<version>1.0.0-SNAPSHOT</version> <!-- Check for latest version -->
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-openai-spring-boot-starter</artifactId>
</dependency>
</dependencies>
Add your API key to application.properties:
spring.ai.openai.api-key=${OPENAI_API_KEY}
spring.ai.openai.chat.options.model=gpt-4-turbo
Tip: Always use environment variables for API keys in production.
Step 2: Defining the Function (The Tool) #
We need a scenario. Let’s build a classic Weather Agent. Even though this is a “Hello World” example, we will structure it using production-grade practices.
2.1 The Request and Response Objects #
LLMs need strict schemas to understand inputs and outputs. Java Records (introduced in Java 16) are perfect for this because they are concise and immutable.
package com.springdevpro.agent.tools;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.annotation.JsonPropertyDescription;
// The Input for the LLM to populate
public record WeatherRequest(
@JsonProperty(required = true)
@JsonPropertyDescription("The city name, e.g., San Francisco, London")
String city,
@JsonPropertyDescription("The temperature unit, fill as 'C' for Celsius or 'F' for Fahrenheit")
String unit
) {}
// The Output the LLM will read
public record WeatherResponse(String city, String temperature, String description) {}
Crucial Note: The @JsonPropertyDescription annotations are not just comments; they are exported to the LLM context. The quality of these descriptions determines how accurately the AI uses your tool.
2.2 The Business Logic #
Now, implement the Function interface. In a real-world app, this would call an external REST API (like OpenWeatherMap). For this tutorial, we will mock the data to keep it self-contained.
package com.springdevpro.agent.tools;
import java.util.function.Function;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class MockWeatherService implements Function<WeatherRequest, WeatherResponse> {
private static final Logger log = LoggerFactory.getLogger(MockWeatherService.class);
@Override
public WeatherResponse apply(WeatherRequest request) {
log.info("External Tool Called: MockWeatherService for city: {}", request.city());
// Mock logic - in production, use RestClient here
String temp = "20";
String condition = "Sunny";
if (request.city().toLowerCase().contains("london")) {
temp = "15";
condition = "Rainy";
}
String unit = request.unit() != null ? request.unit() : "C";
return new WeatherResponse(request.city(), temp + unit, condition);
}
}
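A nice side effect of this design is that the tool is a plain `java.util.function.Function`, so you can unit-test it without any LLM in the loop. A minimal, dependency-free sketch of that idea (the records mirror the ones defined above, and SLF4J is swapped for `System.out` so the snippet compiles on its own):

```java
import java.util.function.Function;

// Plain-Java records mirroring the annotated ones defined earlier in this tutorial.
record WeatherRequest(String city, String unit) {}
record WeatherResponse(String city, String temperature, String description) {}

public class WeatherToolCheck {

    // Same logic as MockWeatherService, with logging simplified so no extra
    // dependencies are needed.
    static final Function<WeatherRequest, WeatherResponse> weather = request -> {
        System.out.println("External Tool Called for city: " + request.city());
        String temp = "20";
        String condition = "Sunny";
        if (request.city().toLowerCase().contains("london")) {
            temp = "15";
            condition = "Rainy";
        }
        String unit = request.unit() != null ? request.unit() : "C";
        return new WeatherResponse(request.city(), temp + unit, condition);
    };

    public static void main(String[] args) {
        WeatherResponse response = weather.apply(new WeatherRequest("London", null));
        System.out.println(response); // WeatherResponse[city=London, temperature=15C, description=Rainy]
    }
}
```

Exercising the function directly like this is the fastest way to debug tool behavior before wiring it into the chat model.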
Step 3: Registering the Function as a Bean #
This is where the magic happens. We need to tell Spring AI that this function exists and expose it to the Chat Model.
In your configuration class (or main application class), register the bean. Depending on your Spring AI version, you can use the FunctionCallback wrapper or simply return the Function annotated with @Description. Here we use the standard @Bean approach, which is the most robust.
package com.springdevpro.agent.config;
import com.springdevpro.agent.tools.MockWeatherService;
import com.springdevpro.agent.tools.WeatherRequest;
import com.springdevpro.agent.tools.WeatherResponse;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Description;
import java.util.function.Function;
@Configuration
public class ToolsConfiguration {
@Bean
@Description("Get the current weather conditions for a specific city")
public Function<WeatherRequest, WeatherResponse> currentWeather() {
return new MockWeatherService();
}
}
The @Description annotation is critical. This string is sent to OpenAI. If you leave it blank or vague, the model won’t know when to use this function.
Step 4: Building the AI Agent Controller #
Now, let’s create a REST endpoint that accepts a user query and utilizes our new tool. We will use the ChatClient, which provides a fluent API for interaction.
package com.springdevpro.agent.api;
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
@RestController
public class AgentController {
private final ChatClient chatClient;
public AgentController(ChatClient.Builder builder) {
// We configure the default functions available to this client
this.chatClient = builder
.defaultFunctions("currentWeather")
.build();
}
@GetMapping("/ask")
public String askAgent(@RequestParam String question) {
return chatClient.prompt()
.user(question)
.call()
.content();
}
}
Analysis of the Code #
- ChatClient.Builder: Spring AI auto-configures this builder.
- .defaultFunctions("currentWeather"): This binds the bean name currentWeather (which we defined in Step 3) to this client instance.
- Automatic Routing: When call() is executed, Spring AI sends the prompt to the LLM. If the LLM decides to call the function, Spring AI handles the execution, result retrieval, and the follow-up call to the LLM automatically.
Step 5: Testing Your AI Agent #
Run your Spring Boot application. Open your browser or Postman and trigger the endpoint.
Scenario A: General Knowledge (No Function Call) #
Request: GET /ask?question=Tell me a joke
Result: “Why did the scarecrow win an award? Because he was outstanding in his field!”
Explanation: The LLM handled this purely with its internal training data.
Scenario B: Triggering the Function #
Request: GET /ask?question=Is it raining in London right now?
Console Logs:
INFO ... MockWeatherService : External Tool Called: MockWeatherService for city: London
Result: “Yes, it is currently raining in London with a temperature of 15C.”
What just happened?
- The LLM saw the intention “weather in London”.
- It paused generation and requested the currentWeather tool with {"city": "London"}.
- Your Java code ran.
- The result {"city": "London", "temperature": "15C", "description": "Rainy"} was sent back to the LLM.
- The LLM interpreted the JSON and wrote the natural language sentence.
Advanced Concepts: Mastering Spring AI Function Calling #
To move from “Tutorial” level to “DevPro” level, you need to understand the nuances of function calling.
1. Handling Hallucinated Arguments #
Sometimes, an LLM might try to call your function with missing arguments or valid-looking but incorrect data (e.g., passing a zip code when you asked for a city name).
Best Practice: JSR-303 Bean Validation annotations (like @NotNull or @Size) on your request record are not enforced by the JSON parser in this context yet, so make sure your Function logic includes robust validation of its own. If the input is invalid, do not throw an exception. Instead, return a JSON response explaining the error (e.g., {"error": "City name not found"}). The LLM can read this error and often correct itself or ask the user for clarification.
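Here is one hedged sketch of that pattern. The `error` field on the result record is an illustration for this example, not part of the WeatherResponse defined earlier:

```java
import java.util.function.Function;

// Hypothetical records for this sketch only.
record CityQuery(String city, String unit) {}
record ToolResult(String city, String temperature, String description, String error) {
    static ToolResult failure(String message) {
        return new ToolResult(null, null, null, message);
    }
}

public class ValidatingWeatherTool implements Function<CityQuery, ToolResult> {

    @Override
    public ToolResult apply(CityQuery query) {
        // Validate instead of throwing: the LLM can read the error and retry.
        if (query.city() == null || query.city().isBlank()) {
            return ToolResult.failure("City name is required. Please ask the user which city they mean.");
        }
        if (query.city().matches("\\d+")) {
            return ToolResult.failure("Received a numeric value; expected a city name, not a zip code.");
        }
        // Mock happy path, as in the rest of this tutorial.
        return new ToolResult(query.city(), "15C", "Rainy", null);
    }
}
```

When the model receives an `{"error": ...}` payload, it typically asks the user for clarification instead of hallucinating a value.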
2. Dynamic Function Registration #
In complex systems, you might not want every function enabled for every user. You can enable functions dynamically per request:
// Inside your controller method
return chatClient.prompt()
.user(question)
.functions("currentWeather", "stockPriceService") // Enable specific tools for this request
.call()
.content();
3. Context Window Management #
Every tool definition you add consumes tokens in the “Context Window.” If you register 50 functions with massive descriptions, you might hit token limits or degrade the LLM’s reasoning ability.
Strategy: Group your functions into logical sets (e.g., “Financial Tools”, “Support Tools”) and inject only the relevant ones based on the user’s intent.
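A minimal sketch of that routing idea, using naive keyword matching (a production system might classify intent with a cheap model first; the group names, bean names, and keywords here are illustrative assumptions):

```java
import java.util.List;
import java.util.Map;

public class ToolGroupRouter {

    // Illustrative grouping: bean names per domain, as registered in Step 3.
    static final Map<String, List<String>> toolGroups = Map.of(
            "finance", List.of("stockPriceService", "exchangeRateService"),
            "weather", List.of("currentWeather"));

    // Naive keyword routing; swap in a proper intent classifier for real systems.
    static List<String> selectTools(String question) {
        String q = question.toLowerCase();
        if (q.contains("stock") || q.contains("price")) return toolGroups.get("finance");
        if (q.contains("weather") || q.contains("rain")) return toolGroups.get("weather");
        return List.of(); // No tools: saves context tokens on pure-chat questions.
    }
}
```

The selected names can then be passed per request via the `.functions(...)` call shown in the previous section.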
4. Security Implications #
Never expose sensitive actions (like deleteUser or transferMoney) without an explicit human-in-the-loop confirmation step or strict scoping. An LLM can be “jailbroken” or tricked into calling functions maliciously (Prompt Injection).
For sensitive operations, your function should return a confirmation token, requiring the user to type “CONFIRM” to proceed in a subsequent turn.
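One hedged sketch of that confirmation-token pattern in plain Java (the two-phase API and in-memory token store are illustrative assumptions, not a Spring AI feature):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class TransferTool {

    // Pending operations keyed by a one-time token. In production this would be
    // a persistent, expiring store scoped to the authenticated user.
    static final Map<String, String> pending = new ConcurrentHashMap<>();

    // Phase 1: the LLM calls this; nothing is executed yet.
    static String requestTransfer(String account, long amountCents) {
        String token = UUID.randomUUID().toString();
        pending.put(token, account + ":" + amountCents);
        return "{\"status\": \"PENDING_CONFIRMATION\", \"token\": \"" + token
                + "\", \"message\": \"Ask the user to reply CONFIRM to proceed.\"}";
    }

    // Phase 2: only runs after the user explicitly confirmed in a later turn.
    static String confirmTransfer(String token) {
        String operation = pending.remove(token);
        if (operation == null) {
            return "{\"error\": \"Unknown or expired confirmation token.\"}";
        }
        // ... execute the real transfer here ...
        return "{\"status\": \"EXECUTED\", \"operation\": \"" + operation + "\"}";
    }
}
```

Because the token is removed on first use, a tricked or replayed second call cannot re-execute the operation.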
Conclusion #
Spring AI Function Calling effectively turns your Spring Boot application into a semantic gateway. It allows you to leverage the reasoning capabilities of LLMs while keeping your business logic and data secure and deterministic in Java.
By standardizing the interface between the probabilistic world of AI and the deterministic world of Java functions, Spring AI positions itself as the JdbcTemplate of the AI era—handling the boilerplate so you can focus on building intelligent agents.
In upcoming articles on Spring DevPro, we will explore how to chain multiple agents together and use Vector Databases for advanced RAG implementations.
Key Takeaways #
- Agency: Function calling transforms LLMs from text generators into active agents.
- Records: Use Java Records for clean, immutable data exchange with the AI.
- Descriptions: The @Description and @JsonPropertyDescription annotations are your UI for the AI model; write them carefully.
- Statelessness: The underlying mechanism is stateless REST calls, but Spring AI abstracts the conversational complexity.
Enjoyed this tutorial? Subscribe to our RSS feed and check out our upcoming deep dive into Spring AI + RAG Architecture.