Get completion from chat messages

Get content of a chat completion

Get the number of tokens used by a chat completion

Usage

get_completion_from_messages(
  messages,
  model = "gpt-4o-mini",
  temperature = 0,
  max_tokens = NULL,
  endpoint = "https://api.openai.com/v1/chat/completions",
  seed = NULL,
  use_py = FALSE
)

get_content(completion)

get_tokens(completion, what = c("total", "prompt", "completion", "all"))

Arguments

messages

(list) in the following format: list(list("role" = "user", "content" = "Hey! How old are you?")) (see: https://platform.openai.com/docs/api-reference/chat/create#chat/create-model). A minimal construction sketch is shown after this argument list.

model

(chr, default = "gpt-4o-mini") a length one character vector indicating the model to use (see: https://platform.openai.com/docs/models/continuous-model-upgrades)

temperature

(dbl, default = 0) a value between 0 (most deterministic answer) and 2 (most random one). (see: https://platform.openai.com/docs/api-reference/chat/create#chat/create-temperature)

max_tokens

(dbl, default = NULL) the maximum number of tokens to generate in the chat completion; if provided, it must be a value greater than 0. (see: https://platform.openai.com/docs/api-reference/chat/create#chat/create-max_tokens)

endpoint

(chr, default = "https://api.openai.com/v1/chat/completions", i.e. the OpenAI API) the endpoint to use for the request.

seed

(chr, default = NULL) a string used to seed the random number generation, for (best-effort) reproducible completions.

use_py

(lgl, default = FALSE) whether to perform the request through Python instead of R.

completion

(list) the output of a get_completion_from_messages call.

what

(chr) one of "total" (default), "prompt", "completion", or "all"
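
As referenced above, a minimal sketch of how the messages argument is constructed (the contents here are purely illustrative):

if (FALSE) {
  # Each message is a list with a "role" ("system", "user", or
  # "assistant") and a "content" string; messages is a list of them.
  messages <- list(
    list(role = "system", content = "You are a helpful assistant."),
    list(role = "user", content = "Hey! How old are you?")
  )
  res <- get_completion_from_messages(messages)
}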

Value

For get_completion_from_messages(): a (list) of two elements: content, the character vector of the response, and tokens, a list with the number of tokens used for the request (prompt_tokens), the answer (completion_tokens), and overall (total_tokens, the sum of the other two).

For get_content(): a (chr) with the output message returned by the assistant.

For get_tokens(): an (int) with the number of tokens used by the completion, for the prompt or the completion part, or overall (total).

Details

For argument description, please refer to the official documentation.

Lower values for temperature result in more consistent outputs, while higher values generate more diverse and creative results. Select a temperature value based on the desired trade-off between coherence and creativity for your specific application. Setting temperature to 0 will make the outputs mostly deterministic, but a small amount of variability will remain.
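
For instance, the following sketch (the prompt and temperature values are purely illustrative) sends the same request at both ends of the consistency/creativity trade-off:

if (FALSE) {
  prompt <- list(
    list(role = "user", content = "Suggest a name for a cat.")
  )

  # temperature = 0: near-deterministic; repeated calls mostly agree
  res_strict <- get_completion_from_messages(prompt, temperature = 0)

  # temperature = 1.5: more diverse; repeated calls likely differ
  res_creative <- get_completion_from_messages(prompt, temperature = 1.5)

  get_content(res_strict)
  get_content(res_creative)
}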

Functions

  • get_content(): extracts the content of the assistant's answer from a chat completion.

  • get_tokens(): extracts the number of tokens used by a chat completion.

Examples

if (FALSE) {
  prompt <- list(
    list(
      role = "system",
      content = "you are an assistant who responds succinctly"
    ),
    list(
      role = "user",
      content = "Return the text: 'Hello world'."
    )
  )
  res <- get_completion_from_messages(prompt)
  answer <- get_content(res) # "Hello world."
  token_used <- get_tokens(res) # 30
}
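
The what argument of get_tokens() selects which count is returned; building on the example above (the counts in the comments are illustrative):

if (FALSE) {
  get_tokens(res) # total tokens, e.g. 30
  get_tokens(res, "prompt") # tokens used by the prompt
  get_tokens(res, "completion") # tokens used by the answer
  get_tokens(res, "all") # all three counts at once
}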

if (FALSE) {
  msg_sys <- compose_sys_prompt(
    role = "You are the assistant of a university professor.",
    context = "
      You are both preparing a workshop on the use of ChatGPT
      for biostatisticians and epidemiologists."
  )

  msg_usr <- compose_usr_prompt(
    task = "
      Your task is to find what to say to explain to the students
      what a ChatGPT chat is, considering that someone in the room
      may never have heard of it (and is attending the workshop out
      of curiosity about the title, or because of friends).",
    output = "
      Report a potential dialogue between the professor and the
      students that fulfils and exemplifies the described purpose.",
    style = "Use a friendly, conversational, yet precise tone."
  )

  prompt <- compose_prompt_api(msg_sys, msg_usr)
  res <- get_completion_from_messages(prompt, "gpt-4-turbo")
  answer <- get_content(res)
  token_used <- get_tokens(res)
}
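
A sketch of requesting (best-effort) reproducible completions through the seed argument; whether repeated calls match exactly depends on the endpoint, and the seed value here is arbitrary:

if (FALSE) {
  prompt <- list(
    list(role = "user", content = "Tell me a one-line joke.")
  )

  # Same seed, same prompt: the answers should (mostly) coincide.
  res_1 <- get_completion_from_messages(prompt, seed = "42")
  res_2 <- get_completion_from_messages(prompt, seed = "42")

  identical(get_content(res_1), get_content(res_2))
}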