OpenAI is a company focused on advanced AI research and development. The OpenAI API can be applied to virtually any task that involves understanding or generating natural language, code, or images.
Using the OpenAI API with DronaHQ Studio opens up a broad range of possibilities to enhance your micro app with AI-supported models.
Configuring OpenAI connector
To add a third-party API connector, under Studio > Connectors, click (+) Connector. Select OpenAI connector.
Select Connect to OpenAI.
Provide the details like the account name and API key of the OpenAI account.
You need an OpenAI account to obtain API keys; you can fetch them from your OpenAI account settings.
Now click Save.
After this, your account will be configured.
Here you can also Edit the configured account or delete an existing configured account.
- Now configure the connector fields: add a Connector name, then add the API key for the connector account.
- Once all details are added, click Finish. Your connector configuration is now done.
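Under the hood, the connector authenticates every OpenAI API request with the saved key in a Bearer Authorization header, which is OpenAI's standard authentication scheme. A minimal sketch in Python (the key value and `build_headers` helper are placeholders for illustration, not DronaHQ internals):

```python
import json

# Hypothetical placeholder; in DronaHQ this is the API key you saved in the connector config.
OPENAI_API_KEY = "sk-your-key-here"

def build_headers(api_key: str) -> dict:
    """Headers OpenAI expects on every API request."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

headers = build_headers(OPENAI_API_KEY)
print(json.dumps(headers, indent=2))
```

If the key is missing or revoked, OpenAI rejects the request with a 401 error, which surfaces in the connector's response.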
Using OpenAI connector
Now that you have configured your OpenAI connector, you can see several endpoints/methods under OpenAI, which you can use in your apps as connectors or as server-side actions.
You can view the OpenAI endpoints on your connector list page.
Let's discuss some of the OpenAI endpoints.
Creating an Image file
The Studio-supported OpenAI connector includes a CreateImage endpoint, which generates images based on the user's description of the desired image.
Select the CreateImage endpoint.
After selecting your account for the OpenAI connector, provide details about the type of image you want OpenAI to generate.
- Prompt: A text description of the desired image; required as user input.
- Size: The dimensions of the generated images. Must be one of 256x256, 512x512, or 1024x1024.
- User: A unique identifier representing your end-user, which can help OpenAI monitor and detect abuse.
- Response Format: The format in which the generated image is returned, either url or b64_json.
- N: The number of images to generate.
For example:
Now let's see the Refresh response.
In the above image, you can see that we get two URLs in the response from the OpenAI connector's CreateImage endpoint.
We can also check the URL:
You can learn more about this endpoint from the official API reference documentation.
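The parameters above map directly onto the JSON body the image-generation endpoint expects. The sketch below assembles that body; the `build_create_image_request` helper and the sample prompt are illustrative assumptions, not part of the connector:

```python
import json

def build_create_image_request(prompt, n=1, size="512x512",
                               response_format="url", user=None):
    """Assemble the JSON body for an image-generation request using
    the same fields the connector exposes."""
    # The API only accepts these three square sizes.
    if size not in {"256x256", "512x512", "1024x1024"}:
        raise ValueError("size must be 256x256, 512x512, or 1024x1024")
    body = {"prompt": prompt, "n": n, "size": size,
            "response_format": response_format}
    if user is not None:
        body["user"] = user  # optional end-user identifier
    return body

# Requesting two URLs, as in the example response above.
payload = build_create_image_request("a watercolor fox", n=2)
print(json.dumps(payload, indent=2))
```

With `response_format` left as `url`, the response contains one URL per requested image, which is why N=2 yields the two URLs shown above.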
Creating Chat Completion
This subcategory of the OpenAI connector enables you to create a chat completion sentence for a given conversation.
Let's see how to use the CreateChatCompletion endpoint and fetch the results.
Select the endpoint.
Next, select your account and continue.
We need to provide details for the endpoint's request body. There are several attributes: some are required for a successful, error-free request, and others are optional and help the model produce a better, more effective response.
- Model: ID of the model to use. Only certain models work with the Chat API.
- Messages: (Array) The messages to generate chat completions for, in the chat format. Example: [{"role": "user", "content": "Hello!"}]
- Max Tokens: The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length.
- Temperature: The sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. It is generally recommended to alter this or top_p, but not both.
- N: The number of chat completion choices to generate for each input message.
- Stream: If set, partial message deltas are sent, as in ChatGPT. Tokens are sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. See the OpenAI Cookbook for example code.
- Top P: An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. It is generally recommended to alter this or temperature, but not both.
- Presence Penalty: A number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
- Frequency Penalty: A number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
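The attributes above combine into a single JSON request body. A sketch of how such a body could be assembled, with optional fields omitted when unset so the API's defaults apply (the `build_chat_request` helper and the default model name are assumptions for illustration):

```python
import json

def build_chat_request(messages, model="gpt-3.5-turbo", max_tokens=None,
                       temperature=None, top_p=None, n=1, stream=False,
                       presence_penalty=None, frequency_penalty=None):
    """Assemble the JSON body for a chat-completion request from the
    fields described above."""
    body = {"model": model, "messages": messages, "n": n, "stream": stream}
    optional = {"max_tokens": max_tokens, "temperature": temperature,
                "top_p": top_p, "presence_penalty": presence_penalty,
                "frequency_penalty": frequency_penalty}
    # Drop unset optional fields so the API falls back to its defaults.
    body.update({k: v for k, v in optional.items() if v is not None})
    return body

# A low temperature keeps the reply focused and deterministic.
payload = build_chat_request([{"role": "user", "content": "Hello!"}],
                             temperature=0.2)
print(json.dumps(payload, indent=2))
```

Note that temperature is set but top_p is not, following the guidance above to alter one or the other rather than both.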
In the image below, the required information has been filled in.
Let's refresh the response to get the data.
In the above image, we can see that the content attribute contains the message returned to us.