OpenAI API model gpt-4o, served in the US (region-scoped projection).
Provider: OpenAI API
Inference regions: US
Endpoint: https://api.openai.com/v1/chat/completions
Install: pip install openai
```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Hello, how are you?"},
    ],
    max_completion_tokens=1024,
    temperature=0.7,
)

print(response.choices[0].message.content)
```

| Parameter | Type | Description |
|---|---|---|
| max_completion_tokens | integer | Maximum number of tokens to generate, including reasoning tokens. (≥1) |
| temperature | float | Controls randomness. Lower values make output more deterministic. (0–2) Default: 1. |
| top_p | float | Nucleus sampling threshold. (0–1) Default: 1. |
| frequency_penalty | float | Penalises tokens based on their frequency in the text so far. (-2–2) Default: 0. |
| presence_penalty | float | Penalises tokens based on whether they appear in the text so far. (-2–2) Default: 0. |
| stop | string / array | Up to 4 sequences where the API will stop generating further tokens. |
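The ranges in the table can be checked client-side before a request is sent, which surfaces a bad value without spending an API call. A minimal sketch, assuming the documented ranges above; the `validate_params` helper is hypothetical and not part of the openai library:

```python
# Hypothetical client-side validation of the sampling parameters
# documented in the table above (not part of the openai library).
def validate_params(params: dict) -> list[str]:
    """Return a list of error messages; an empty list means the payload looks valid."""
    errors = []
    # (min, max) ranges taken from the parameter table.
    ranges = {
        "temperature": (0, 2),
        "top_p": (0, 1),
        "frequency_penalty": (-2, 2),
        "presence_penalty": (-2, 2),
    }
    for name, (lo, hi) in ranges.items():
        if name in params and not (lo <= params[name] <= hi):
            errors.append(f"{name} must be in [{lo}, {hi}]")
    if params.get("max_completion_tokens", 1) < 1:
        errors.append("max_completion_tokens must be >= 1")
    return errors

payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello, how are you?"}],
    "max_completion_tokens": 1024,
    "temperature": 0.7,
}
print(validate_params(payload))                 # []
print(validate_params({"temperature": 3.5}))    # ['temperature must be in [0, 2]']
```

The same dict can then be passed straight to `client.chat.completions.create(**payload)` once it validates cleanly.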