spring-ai-docs/src/main/antora/modules/ROOT/pages/api/chatclient.adoc (69 additions, 24 deletions)
@@ -3,7 +3,7 @@
 
 The `ChatClient` offers a fluent API for stateless interaction with an AI Model. It supports both a synchronous and reactive programming model.
 
-The fluent API has methods for building up the constituent parts of a `Prompt` that is passed to the AI model as input.
+The fluent API has methods for building up the constituent parts of a xref:api/prompt.adoc#_prompts[Prompt] that is passed to the AI model as input.
 The `Prompt` contains the instructional text to guide the AI model's output and behavior. From the API point of view, prompts consist of a collection of messages.
 
 The AI model processes two main types of messages: user messages, which are direct inputs from the user, and system messages, which are generated by the system to guide the conversation.
@@ -12,12 +12,18 @@ These messages often contain template placeholders that are substituted at runtime
 
 There are also Prompt options that can be specified, such as the name of the AI Model to generate content and the temperature setting that controls the randomness or creativity of the generated output.
 
-== Using an autoconfigured ChatClient.Builder
+== Creating a ChatClient
+
+The `ChatClient` is created using a `ChatClient.Builder` object.
+You can obtain an autoconfigured `ChatClient.Builder` instance via the Spring Boot autoconfiguration for any xref:api/chatmodel.adoc[ChatModel], or create one programmatically.
+
+=== Using an autoconfigured ChatClient.Builder
 
 In the simplest use case, Spring AI provides Spring Boot autoconfiguration, creating a prototype `ChatClient.Builder` bean for you to inject into your class.
 Here is a simple example of retrieving a String response to a simple user request.
 
-```java
+[source,java]
+----
 @RestController
 class MyController {
 
@@ -35,61 +41,100 @@ class MyController {
 .content();
 }
 }
-```
+----
 
-In this simple example, the user input sets the contents of the user message. The call method sends a request to the AI model, and the context method returns the AI model's response as a String.
+In this simple example, the user input sets the contents of the user message.
+The `call()` method sends a request to the AI model, and the `content()` method returns the AI model's response as a String.
 
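The body of `MyController` is collapsed in the rendered diff above. A minimal sketch of what the complete example plausibly looks like (the constructor injection and the `/ai` mapping are assumptions, not lines taken from the diff):

[source,java]
----
@RestController
class MyController {

    private final ChatClient chatClient;

    // The autoconfigured prototype ChatClient.Builder bean is injected and built once
    MyController(ChatClient.Builder chatClientBuilder) {
        this.chatClient = chatClientBuilder.build();
    }

    @GetMapping("/ai")
    String generation(String userInput) {
        // user(...) sets the user message, call() sends the request,
        // content() returns the model's reply as a String
        return this.chatClient.prompt()
                .user(userInput)
                .call()
                .content();
    }
}
----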
+=== Create a ChatClient programmatically
 
-== Returing a `ChatResponse`
+You can disable the `ChatClient.Builder` autoconfiguration by setting the property `spring.ai.chat.client.enabled=false`.
+This is useful if multiple chat models are used together.
+Then create a `ChatClient.Builder` instance for every `ChatModel` programmatically:
 
-The response from the AI model is a rich structure defined by the type ChatResponse.
-ChatResponse includes metadata about how the response was generated and can also contain multiple responses, known as generations, each with its own metadata.
-The metadata includes the number of tokens (each token is approximately 3/4 of a word) used to create the response. This information is important because hosted AI models charge based on the number of tokens used per request.
+[source,java]
+----
+ChatModel myChatModel = ... // usually autowired
 
-An example to return the `ChatResponse` object that contains the metadata is shown below by invoking `chatResponse()` after the `call()` method.
[...]
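The added lines hidden at this point presumably finish the programmatic example. A sketch of programmatic creation, assuming the static `ChatClient.create` and `ChatClient.builder` factory methods:

[source,java]
----
ChatModel myChatModel = ... // usually autowired

// Simplest form: one ChatClient bound to this ChatModel
ChatClient chatClient = ChatClient.create(myChatModel);

// Or use the builder to set defaults before building
ChatClient customized = ChatClient.builder(myChatModel)
        .defaultSystem("You are a helpful assistant")
        .build();
----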
+The response from the AI model is a rich structure defined by the type xref:api/chatmodel.adoc#_chatresponse[ChatResponse].
+It includes metadata about how the response was generated and can also contain multiple responses, known as xref:api/chatmodel.adoc#_generation[Generation]s, each with its own metadata.
+The metadata includes the number of tokens (each token is approximately 3/4 of a word) used to create the response.
+This information is important because hosted AI models charge based on the number of tokens used per request.
+
+An example to return the `ChatResponse` object that contains the metadata is shown below by invoking `chatResponse()` after the `call()` method.
+
+[source,java]
+----
+ChatResponse chatResponse = chatClient.prompt()
 .user("Tell me a joke")
 .call()
 .chatResponse();
-```
+----
 
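Once the `ChatResponse` is available, the token counts mentioned above can be read from its metadata. A sketch, with accessor names assumed from the `ChatResponseMetadata` and `Usage` types:

[source,java]
----
ChatResponse chatResponse = chatClient.prompt()
        .user("Tell me a joke")
        .call()
        .chatResponse();

// Token usage reported by the hosted model for this request
Usage usage = chatResponse.getMetadata().getUsage();
System.out.println("Prompt tokens:     " + usage.getPromptTokens());
System.out.println("Generation tokens: " + usage.getGenerationTokens());
System.out.println("Total tokens:      " + usage.getTotalTokens());

// The first Generation carries the assistant message text
String answer = chatResponse.getResult().getOutput().getContent();
----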
-== Returning an Entity
+=== Returning an Entity
 
-You often want to return an entity class that is mapped from the returned String. The `entity` method provides this functionality.
+You often want to return an entity class that is mapped from the returned `String`.
+The `entity` method provides this functionality.
 
 For example, given the Java record:
 
-```java
+[source,java]
+----
 record ActorFilms(String actor, List<String> movies) {
 }
-```
+----
 
 You can easily map the AI model's output to this record using the `entity` method, as shown below:
 
-```java
+[source,java]
+----
 ActorFilms actorFilms = chatClient.prompt()
 .user("Generate the filmography for a random actor.")
 .call()
 .entity(ActorFilms.class);
-```
+----
 
-There is also an overloaded `entity` method with the signature `entity(ParameterizedTypeReference<T> type)` that lets you specify types such as generic Lists.
+There is also an overloaded `entity` method with the signature `entity(ParameterizedTypeReference<T> type)` that lets you specify types such as generic Lists:
 
-== Streaming Responses
+[source,java]
+----
+List<ActorFilms> actorFilms = chatClient.prompt()
+.user("Generate the filmography of 5 movies for Tom Hanks and Bill Murray.")
spring-ai-docs/src/main/antora/modules/ROOT/pages/api/chatmodel.adoc (6 additions, 4 deletions)
@@ -17,7 +17,7 @@ This section provides a guide to the Spring AI Chat Model API interface and associated
 
 === ChatModel
 
-Here is the link:https://github.com/spring-projects/spring-ai/blob/main/spring-ai-core/src/main/java/org/springframework/ai/chat/ChatModel.java[ChatModel] interface definition:
+Here is the link:https://github.com/spring-projects/spring-ai/blob/main/spring-ai-core/src/main/java/org/springframework/ai/chat/model/ChatModel.java[ChatModel] interface definition:
 
 [source,java]
 ----
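The interface body itself is unchanged and therefore not part of the hunk. As a reference sketch (method names taken from the public API; the exact source may differ):

[source,java]
----
public interface ChatModel extends Model<Prompt, ChatResponse> {

    // Convenience overload: wrap the text in a Prompt and return the reply text
    default String call(String message) {
        Prompt prompt = new Prompt(new UserMessage(message));
        Generation generation = call(prompt).getResult();
        return (generation != null) ? generation.getOutput().getContent() : "";
    }

    @Override
    ChatResponse call(Prompt prompt);
}
----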
@@ -37,7 +37,7 @@ In real-world applications, it is more common to use the `call` method that takes
 
 === StreamingChatModel
 
-Here is the link:https://github.com/spring-projects/spring-ai/blob/main/spring-ai-core/src/main/java/org/springframework/ai/chat/StreamingChatModel.java[StreamingChatModel] interface definition:
+Here is the link:https://github.com/spring-projects/spring-ai/blob/main/spring-ai-core/src/main/java/org/springframework/ai/chat/model/StreamingChatModel.java[StreamingChatModel] interface definition:
 
 [source,java]
 ----
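Likewise, the `StreamingChatModel` body is not shown in the hunk; in outline it is the reactive counterpart built on Project Reactor (a sketch, not the verbatim source):

[source,java]
----
public interface StreamingChatModel extends StreamingModel<Prompt, ChatResponse> {

    // Streams the response as a Flux of partial ChatResponse chunks
    @Override
    Flux<ChatResponse> stream(Prompt prompt);
}
----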
@@ -136,6 +136,7 @@ This is a powerful feature that allows developers to use model specific options
 The structure of the `ChatResponse` class is as follows:
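The listing itself falls between this hunk and the next one. In outline, with field and accessor names assumed from the public API, the class looks like:

[source,java]
----
public class ChatResponse implements ModelResponse<Generation> {

    private final ChatResponseMetadata chatResponseMetadata;
    private final List<Generation> generations;

    // The first (and usually only) Generation
    public Generation getResult() { ... }

    // All Generations produced for the prompt
    public List<Generation> getResults() { ... }

    // Metadata such as token usage and rate-limit information
    public ChatResponseMetadata getMetadata() { ... }
}
----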
@@ -157,13 +158,14 @@ public class ChatResponse implements ModelResponse<Generation> {
 }
 ----
 
-The https://github.com/spring-projects/spring-ai/blob/main/spring-ai-core/src/main/java/org/springframework/ai/chat/ChatResponse.java[ChatResponse] class holds the AI Model's output, with each `Generation` instance containing one of potentially multiple outputs resulting from a single prompt.
+The https://github.com/spring-projects/spring-ai/blob/main/spring-ai-core/src/main/java/org/springframework/ai/chat/model/ChatResponse.java[ChatResponse] class holds the AI Model's output, with each `Generation` instance containing one of potentially multiple outputs resulting from a single prompt.
 
 The `ChatResponse` class also carries a `ChatResponseMetadata` object with metadata about the AI Model's response.
 
+[[Generation]]
 === Generation
 
-Finally, the https://github.com/spring-projects/spring-ai/blob/main/spring-ai-core/src/main/java/org/springframework/ai/chat/Generation.java[Generation] class extends from the `ModelResult` to represent the output assistant message response and related metadata about this result:
+Finally, the https://github.com/spring-projects/spring-ai/blob/main/spring-ai-core/src/main/java/org/springframework/ai/chat/model/Generation.java[Generation] class extends from the `ModelResult` to represent the output assistant message response and related metadata about this result:
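For completeness, an outline of `Generation` (names assumed from the public API, not lines from this diff):

[source,java]
----
public class Generation implements ModelResult<AssistantMessage> {

    private final AssistantMessage assistantMessage;
    private ChatGenerationMetadata chatGenerationMetadata;

    // The assistant message produced by the model
    public AssistantMessage getOutput() { ... }

    // Result-level metadata, for example the finish reason
    public ChatGenerationMetadata getMetadata() { ... }
}
----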
spring-ai-docs/src/main/antora/modules/ROOT/pages/concepts.adoc (3 additions, 5 deletions)
@@ -136,17 +136,15 @@ Anthropic's Claude AI model features a 100K token limit, and Meta's recent research
 To summarize the collected works of Shakespeare with GPT4, you need to devise software engineering strategies to chop up the data and present the data within the model's context window limits.
 The Spring AI project helps you with this task.
 
-== Output Parsing
+== Structured Output
 
 The output of AI models traditionally arrives as a `java.lang.String`, even if you ask for the reply to be in JSON.
 It may be the correct JSON, but it is not a JSON data structure. It is just a string.
 Also, asking "`for JSON`" as part of the prompt is not 100% accurate.
 
-This intricacy has led to the emergence of a specialized field involving the creation of prompts to yield the intended output, followed by parsing the resulting simple string into a usable data structure for application integration.
+This intricacy has led to the emergence of a specialized field involving the creation of prompts to yield the intended output, followed by converting the resulting simple string into a usable data structure for application integration.
 
-Output parsing employs meticulously crafted prompts, often necessitating multiple interactions with the model to achieve the desired formatting.
-
-This challenge has prompted OpenAI to introduce 'OpenAI Functions' as a means to specify the desired output format from the model precisely.
+The xref:api/structured-output-converter.adoc#_structuredoutputconverter[Structured output conversion] employs meticulously crafted prompts, often necessitating multiple interactions with the model to achieve the desired formatting.
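To make the renamed concept concrete, here is a sketch of the conversion flow using a bean converter; the `BeanOutputConverter` name and its `getFormat()`/`convert()` methods are assumptions based on the structured-output converter API referenced above:

[source,java]
----
record ActorFilms(String actor, List<String> movies) {}

// The converter contributes format instructions to the prompt
// and converts the model's plain-String reply afterwards
BeanOutputConverter<ActorFilms> converter = new BeanOutputConverter<>(ActorFilms.class);

String promptText = """
        Generate the filmography for a random actor.
        %s
        """.formatted(converter.getFormat());

String reply = chatModel.call(promptText);          // still just a String
ActorFilms actorFilms = converter.convert(reply);   // now a typed object
----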