MCP servers extend the functionality of a Large Language Model (LLM). Many inference engines let you register MCP servers out of the box, but often you will need to write an MCP client yourself. In this blog, you will learn how to do so using Spring AI. Enjoy!

1. Introduction

In a previous post, you learnt how to create an MCP server using Spring Boot and Spring AI. The MCP server provides four tools:

  • get_artists: return a list of my favorite artists.
  • search_artist: search for an artist in the list of my favorite artists.
  • get_songs: return a list of my favorite songs.
  • search_song: search for a song in the list of my favorite songs.

In order to test the MCP server, the DevoxxGenie IntelliJ plugin was used. However, this was only a temporary solution: what you actually need is an MCP client, and that is exactly what you are going to build in this blog.

Sources used in this blog are available at GitHub in the client directory.

2. Prerequisites

Prerequisites for reading this blog are:

  • Basic Java knowledge;
  • Basic Spring Boot knowledge;
  • Basic LMStudio knowledge.

3. Build MCP Server

As you will make use of the MCP server built in the previous post, it is necessary to build it first. Clone the repository and navigate in a terminal to the server directory. Execute the following command.

mvn clean verify

The jar file will be available in the server/target directory.

4. Create MCP Client

The official documentation provides a good starting point if you need more detailed information.

Navigate to the Spring Initializr and add dependencies:

  • Spring Web: because you will invoke the client via a REST call.
  • Model Context Protocol Client: in order to create the MCP client.
  • OpenAI: for integrating with LMStudio.

The following dependencies are added to the pom.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.ai</groupId>
	<artifactId>spring-ai-starter-mcp-client</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.ai</groupId>
	<artifactId>spring-ai-starter-model-openai</artifactId>
</dependency>

The integration with LMStudio does not work correctly when only the spring-ai-starter-model-openai dependency is added: when you test this with Spring AI, you will notice that requests are not executed at all. The solution is to add an extra dependency to the pom. More detailed information can be found here. The fix is simple, but finding it cost quite some headaches and time.

<dependency>
	<groupId>io.projectreactor.netty</groupId>
	<artifactId>reactor-netty-http</artifactId>
</dependency>

Create a basic controller into which the following are injected:

  • a ChatClient which will be used to interact with LMStudio;
  • a ToolCallbackProvider which will provide the tools. The tools are configured in application.properties, which will be explained in a few moments.

The tools are added to the ChatClient using the defaultToolCallbacks method.

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.tool.ToolCallbackProvider;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class McpClientController {

    private final ChatClient chatClient;

    public McpClientController(ChatClient.Builder chatClientBuilder, ToolCallbackProvider tools) {
        // Register the MCP tools so the LLM can invoke them automatically.
        this.chatClient = chatClientBuilder.defaultToolCallbacks(tools).build();
    }

    @GetMapping("/chat")
    String chat(@RequestParam String message) {
        return this.chatClient.prompt()
                .user(message)
                .call()
                .content();
    }

}

The magic happens in the application.properties file.

First, give your client a name.

spring.ai.mcp.client.name=mcp-client

Define the MCP server in exactly the same way as was done using the DevoxxGenie plugin in the previous post. The properties assume that the client is started from within the client directory of the repository and that Java is installed on your machine. You can define several servers; in the example below, only one server is defined. You are free to give it a name of your choice.

  • command: the command to execute. In this case it is java because the server is packaged as a jar file; often you will use npx, which allows you to execute an npm module without installing it (see the sketch after the properties below).
  • args[x]: the arguments to be used.

spring.ai.mcp.client.stdio.connections.server1.command=java
spring.ai.mcp.client.stdio.connections.server1.args[0]=-jar
spring.ai.mcp.client.stdio.connections.server1.args[1]=../server/target/mcp-server-0.0.1-SNAPSHOT.jar
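
For illustration, an npx-based connection could look like the snippet below. Note that this is a hypothetical example: it assumes that Node.js is installed and uses the official @modelcontextprotocol/server-filesystem npm package; replace the package name and arguments with those of the MCP server you want to run.

spring.ai.mcp.client.stdio.connections.filesystem.command=npx
# -y answers the npx install prompt automatically
spring.ai.mcp.client.stdio.connections.filesystem.args[0]=-y
spring.ai.mcp.client.stdio.connections.filesystem.args[1]=@modelcontextprotocol/server-filesystem
spring.ai.mcp.client.stdio.connections.filesystem.args[2]=/path/to/allowed/dir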

The last properties are needed for the integration with LMStudio.

  • api-key: when running an LLM locally, the key does not matter, but you must provide one, so a dummy key is used here.
  • base-url: the URL to access LMStudio.
  • model: the model you want to use.

spring.ai.openai.api-key=fake-key
spring.ai.openai.base-url=http://localhost:1234
spring.ai.openai.chat.options.model=qwen3-8b
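
As a quick sanity check (assuming LMStudio's OpenAI-compatible server is running on its default port 1234), you can list the models LMStudio serves before starting the client:

curl http://localhost:1234/v1/models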

Navigate to the client directory and run the MCP client.

mvn spring-boot:run

Execute the prompt "give me a list of gunter's favorite artists" using the chat endpoint. In the thinking process, you can see that the LLM invokes the MCP server and presents the correct results.

$ curl "http://localhost:8080/chat?message=give%20me%20a%20list%20of%20gunter's%20favorite%20artists"
<think>
Okay, the user asked for a list of Gunter's favorite artists. I used the mcp_client_server1_get_artists function, which returned two entries. The first one is Bruce Springsteen and the second is JJ Johnson. Now I need to present this information clearly. Let me check if there's any formatting needed, like bullet points or a simple list. Since the user might just want the names, I'll list them out in a straightforward way. Make sure it's easy to read and no markdown. Alright, that should do it.
</think>

Here are Gunter's favorite artists:

- Bruce Springsteen  
- JJ Johnson

5. Control Tool Execution

The MCP server is invoked automatically. But what if you would like to control the tool execution? For example, you might want to add a human-in-the-loop before invoking the tool. That is possible using controlled tool execution.

An example is shown in the McpClientWithHitlController where a message is printed to the console when tool invocation takes place.

Important things to notice:

  • In chatOptions, internalToolExecutionEnabled is set to false. This prevents automatic invocation of tools.
  • In the controller, you can check whether the chatResponse includes tool calls. If you want to execute the tool calls, you invoke toolCallingManager.executeToolCalls.

The code shown below does not actually implement a human-in-the-loop, but it shows where you can add one if you want; a sketch of how that could look follows after the code.

import org.springframework.ai.chat.model.ChatModel;
import org.springframework.ai.chat.model.ChatResponse;
import org.springframework.ai.chat.prompt.ChatOptions;
import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.ai.model.tool.ToolCallingChatOptions;
import org.springframework.ai.model.tool.ToolCallingManager;
import org.springframework.ai.model.tool.ToolExecutionResult;
import org.springframework.ai.tool.ToolCallbackProvider;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class McpClientWithHitlController {

    private final ChatModel chatModel;
    private final ToolCallingManager toolCallingManager;
    private final ChatOptions chatOptions;

    public McpClientWithHitlController(ChatModel chatModel, ToolCallbackProvider tools) {
        this.chatModel = chatModel;
        this.toolCallingManager = ToolCallingManager.builder().build();

        // Disable internal tool execution so tool calls are returned to the
        // controller instead of being invoked automatically.
        this.chatOptions = ToolCallingChatOptions.builder()
                .toolCallbacks(tools.getToolCallbacks())
                .internalToolExecutionEnabled(false)
                .build();
    }

    @GetMapping("/chathitl")
    String chat(@RequestParam String message) {

        Prompt prompt = new Prompt(message, chatOptions);

        ChatResponse chatResponse = chatModel.call(prompt);

        // Keep executing tool calls manually until the model returns a final answer.
        while (chatResponse.hasToolCalls()) {
            System.out.println("Chat response has tool calls");
            ToolExecutionResult toolExecutionResult = toolCallingManager.executeToolCalls(prompt, chatResponse);

            // Feed the tool results back to the model.
            prompt = new Prompt(toolExecutionResult.conversationHistory(), chatOptions);

            chatResponse = chatModel.call(prompt);
        }

        return chatResponse.getResult().getOutput().getText();
    }

}
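
If you do want an actual human-in-the-loop, the while loop is the natural place to add it. The snippet below is a minimal, hypothetical sketch: the askUserForApproval helper does not exist in the repository, and in practice the confirmation could be a console prompt, a UI dialog or an approval queue.

while (chatResponse.hasToolCalls()) {
    // Hypothetical helper: ask a human to approve the pending tool calls
    // before they are executed.
    if (!askUserForApproval(chatResponse)) {
        return "Tool execution was rejected, no answer is available.";
    }
    ToolExecutionResult toolExecutionResult = toolCallingManager.executeToolCalls(prompt, chatResponse);
    prompt = new Prompt(toolExecutionResult.conversationHistory(), chatOptions);
    chatResponse = chatModel.call(prompt);
}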

Execute the following command in a terminal.

curl "http://localhost:8080/chathitl?message=give%20me%20a%20list%20of%20gunter's%20favorite%20artists"

In the Spring Boot application, you see the console message.

Chat response has tool calls

The LLM response also mentions that the tool is called.

<think>
Okay, the user asked for a list of Gunter's favorite artists. I called the mcp_client_server1_get_artists function, which doesn't require any parameters. The response came back with a JSON array containing two artists: Bruce Springsteen and JJ Johnson. I need to present this information clearly. Let me check if the data is properly formatted. The text field has a JSON structure inside it, so I should parse that to make it readable. I'll list each artist on a new line. Make sure there are no markdown formats, just plain text. Alright, that's straightforward.
</think>

Here is the list of Gunter's favorite artists:

- Bruce Springsteen
- JJ Johnson

Next, ask the LLM to tell a joke.

curl "http://localhost:8080/chathitl?message=tell%20me%20a%20joke"

This time, the console message is not printed, and the LLM response also shows that it did not ask to invoke a tool.

<think>
Okay, the user asked for a joke. Let me see. The tools provided are for getting songs and artists from Gunter's favorites. But the user isn't asking about music; they want a joke. None of the functions listed can generate or retrieve jokes. So I can't use any of the tools here. I should just respond with a joke directly. Let me think of a simple one. Maybe a play on words... Like why don't skeletons fight each other? Because they don't have the guts! That's a classic. Alright, I'll tell that.
</think>

Here's a light-hearted joke for you:

Why don't skeletons fight each other?  
Because they don't have the *guts*! 😄

Let me know if you'd like another!

6. Conclusion

In this blog, you learnt how to create an MCP client using Spring AI. Again, Spring has really made an effort to let you define an MCP client as easily as possible. A little bit of configuration and some code, and it works out of the box. If you need more complex scenarios, you can also implement them by overriding the defaults.

