Merge pull request #1 from all-in-aigc/feature/support-openai
support openai api
idoubi authored Sep 25, 2023
2 parents 166428d + 287a603 commit 8f1d248
Showing 4 changed files with 65 additions and 4 deletions.
2 changes: 2 additions & 0 deletions .gitignore
@@ -33,3 +33,5 @@ yarn-error.log*
 # typescript
 *.tsbuildinfo
 next-env.d.ts
+
+pnpm-lock.yaml
19 changes: 16 additions & 3 deletions README.md
@@ -81,14 +81,27 @@ HF_INFERENCE_ENDPOINT_URL="path to your inference endpoint url"
 
 To run this kind of LLM locally, you can use [TGI](https://github.com/huggingface/text-generation-inference) (Please read [this post](https://github.com/huggingface/text-generation-inference/issues/726) for more information about the licensing).
 
-### Option 3: Fork and modify the code to use a different LLM system
+### Option 3: Use an OpenAI API Key
 
-Another option could be to disable the LLM completely and replace it with another LLM protocol and/or provider (eg. OpenAI, Replicate), or a human-generated story instead (by returning mock or static data).
+This recently added option lets you use the OpenAI API with your own OpenAI API key.
+
+To activate it, create a `.env.local` configuration file:
+
+```bash
+LLM_ENGINE="OPENAI"
+# the default OpenAI API base URL is https://api.openai.com/v1
+OPENAI_API_BASE_URL="Your OpenAI API Base URL"
+OPENAI_API_KEY="Your OpenAI API Key"
+OPENAI_API_MODEL="gpt-3.5-turbo"
+```
+
+### Option 4: Fork and modify the code to use a different LLM system
+
+Another option could be to disable the LLM completely and replace it with another LLM protocol and/or provider (e.g. Claude, Replicate), or a human-generated story instead (by returning mock or static data).
 
 ### Notes
 
-It is possible that I modify the AI Comic Factory to make it easier in the future (eg. add support for OpenAI or Replicate)
+It is possible that I will modify the AI Comic Factory to make this easier in the future (e.g. add support for Claude or Replicate).
 
 ## The Rendering API
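To sanity-check these settings before running the app, a minimal standalone script can exercise the key, base URL, and model from `.env.local`. This is only a sketch, not part of this PR: the file name and the test prompt are made up, and it assumes the `openai` v4 SDK that this PR adds to `package.json`.

```ts
// check-openai.ts — hypothetical helper, not part of this PR.
// Reads the same variables as .env.local and performs one tiny request.
import OpenAI from "openai"

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: process.env.OPENAI_API_BASE_URL || "https://api.openai.com/v1",
})

async function main() {
  const res = await openai.chat.completions.create({
    model: process.env.OPENAI_API_MODEL || "gpt-3.5-turbo",
    messages: [{ role: "user", content: "Reply with the single word: pong" }],
  })
  console.log(res.choices[0]?.message?.content)
}

main().catch((err) => {
  console.error("OpenAI configuration check failed:", err)
  process.exit(1)
})
```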
1 change: 1 addition & 0 deletions package.json
@@ -43,6 +43,7 @@
     "html2canvas": "^1.4.1",
     "lucide-react": "^0.260.0",
     "next": "13.4.10",
+    "openai": "^4.10.0",
     "pick": "^0.0.1",
     "postcss": "8.4.26",
     "react": "18.2.0",
47 changes: 46 additions & 1 deletion src/app/queries/predict.ts
@@ -1,15 +1,19 @@
 "use server"
 
-import { LLMEngine } from "@/types"
 import { HfInference, HfInferenceEndpoint } from "@huggingface/inference"
+
+import type { ChatCompletionMessage } from "openai/resources/chat"
+import { LLMEngine } from "@/types"
+import OpenAI from "openai"
 
 const hf = new HfInference(process.env.HF_API_TOKEN)
 
 
 // note: we always try "inference endpoint" first
 const llmEngine = `${process.env.LLM_ENGINE || ""}` as LLMEngine
 const inferenceEndpoint = `${process.env.HF_INFERENCE_ENDPOINT_URL || ""}`
 const inferenceModel = `${process.env.HF_INFERENCE_API_MODEL || ""}`
+const openaiApiKey = `${process.env.OPENAI_API_KEY || ""}`
 
 let hfie: HfInferenceEndpoint

@@ -34,6 +38,16 @@ switch (llmEngine) {
       throw new Error(error)
     }
     break;
+
+  case "OPENAI":
+    if (openaiApiKey) {
+      console.log("Using an OpenAI API Key")
+    } else {
+      const error = "No OpenAI API key defined"
+      console.error(error)
+      throw new Error(error)
+    }
+    break;
 
   default:
     const error = "No Inference Endpoint URL or Inference API Model defined"

@@ -45,6 +59,10 @@ export async function predict(inputs: string) {
 
   console.log(`predict: `, inputs)
 
+  if (llmEngine === "OPENAI") {
+    return predictWithOpenAI(inputs)
+  }
+
   const api = llmEngine === "INFERENCE_ENDPOINT" ? hfie : hf
 
   let instructions = ""

@@ -92,4 +110,31 @@ export async function predict(inputs: string)
     .replaceAll("<|assistant|>", "")
     .replaceAll('""', '"')
   )
 }
+
+async function predictWithOpenAI(inputs: string) {
+  const openaiApiBaseUrl = `${process.env.OPENAI_API_BASE_URL || "https://api.openai.com/v1"}`
+  const openaiApiModel = `${process.env.OPENAI_API_MODEL || "gpt-3.5-turbo"}`
+
+  const openai = new OpenAI({
+    apiKey: openaiApiKey,
+    baseURL: openaiApiBaseUrl,
+  })
+
+  const messages: ChatCompletionMessage[] = [
+    { role: "system", content: inputs },
+  ]
+
+  try {
+    const res = await openai.chat.completions.create({
+      messages: messages,
+      stream: false,
+      model: openaiApiModel,
+      temperature: 0.8
+    })
+
+    return res.choices[0].message.content
+  } catch (err) {
+    console.error(`error during generation: ${err}`)
+  }
+}
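One detail worth noting in `predictWithOpenAI`: when the request throws, the `catch` branch only logs, so the function resolves to `undefined`, while the Hugging Face path always yields a string. A defensive variant (a sketch, not part of this commit; the `Safe` suffix is made up) could pin the return type down:

```ts
import OpenAI from "openai"

// Sketch only, not part of this commit: a variant of predictWithOpenAI
// that always resolves to a string, so callers never receive `undefined`.
async function predictWithOpenAISafe(inputs: string): Promise<string> {
  const openai = new OpenAI({
    apiKey: process.env.OPENAI_API_KEY,
    baseURL: process.env.OPENAI_API_BASE_URL || "https://api.openai.com/v1",
  })

  try {
    const res = await openai.chat.completions.create({
      model: process.env.OPENAI_API_MODEL || "gpt-3.5-turbo",
      messages: [{ role: "system", content: inputs }],
      stream: false,
      temperature: 0.8,
    })
    // content is typed string | null in the v4 SDK, so coalesce to ""
    return res.choices[0]?.message?.content ?? ""
  } catch (err) {
    console.error(`error during generation: ${err}`)
    return ""
  }
}
```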

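Since the dispatch happens inside `predict`, existing call sites need no changes when `LLM_ENGINE="OPENAI"` is set. A hypothetical caller (the function name and prompt below are made up, not from this repository) looks the same for either provider:

```ts
import { predict } from "@/app/queries/predict"

// Hypothetical call site: predict() branches on LLM_ENGINE internally,
// so switching between Hugging Face and OpenAI requires no change here.
export async function generateStory() {
  const story = await predict("You are a comic book author. Write a short 4-panel story.")
  console.log(story)
  return story
}
```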