
Releases: vercel/modelfusion

v0.115.0

05 Jan 13:06

Removed

  • Anthropic support. Anthropic has a strong stance against open-source models and against non-US AI. I will not support them by providing a ModelFusion integration.

v0.114.1

05 Jan 12:14

Fixed

  • Together AI text generation and text streaming using OpenAI-compatible chat models.

v0.114.0

05 Jan 11:37

Added

  • Custom call header support for APIs. You can pass a customCallHeaders function into API configurations to add custom headers. The function is called with functionType, functionId, run, and callId parameters. Example for Helicone:

    import {
      HeliconeOpenAIApiConfiguration,
      generateText,
      openai,
    } from "modelfusion";
    
    const text = await generateText(
      openai
        .ChatTextGenerator({
          api: new HeliconeOpenAIApiConfiguration({
            customCallHeaders: ({ functionId, callId }) => ({
              "Helicone-Property-FunctionId": functionId,
              "Helicone-Property-CallId": callId,
            }),
          }),
          model: "gpt-3.5-turbo",
          temperature: 0.7,
          maxGenerationTokens: 500,
        })
        .withTextPrompt(),
    
      "Write a short story about a robot learning to love",
    
      { functionId: "example-function" }
    );
  • Rudimentary caching support for generateText. You can use a MemoryCache to store the response of a generateText call. Example:

    import { MemoryCache, generateText, ollama } from "modelfusion";
    
    const model = ollama
      .ChatTextGenerator({ model: "llama2:chat", maxGenerationTokens: 100 })
      .withTextPrompt();
    
    const cache = new MemoryCache();
    
    const text1 = await generateText(
      model,
      "Write a short story about a robot learning to love:",
      { cache }
    );
    
    console.log(text1);
    
    // 2nd call will use cached response:
    const text2 = await generateText(
      model,
      "Write a short story about a robot learning to love:", // same text
      { cache }
    );
    
    console.log(text2);
  • validateTypes and safeValidateTypes helpers that perform type checking of an object against a Schema (e.g., a zodSchema).
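    A minimal sketch of the validate / safe-validate pattern these helpers follow. The `Schema` type and signatures below are illustrative stand-ins, not ModelFusion's actual API:

    ```typescript
    // Hypothetical sketch of the validate / safe-validate pattern.
    // `Schema` here is an illustrative stand-in, not ModelFusion's Schema type.
    type Schema<T> = {
      validate: (
        value: unknown
      ) => { success: true; data: T } | { success: false; error: Error };
    };
    
    // Throwing variant: returns the typed value or throws the validation error.
    function validateTypes<T>({
      value,
      schema,
    }: {
      value: unknown;
      schema: Schema<T>;
    }): T {
      const result = schema.validate(value);
      if (!result.success) throw result.error;
      return result.data;
    }
    
    // Non-throwing variant: returns the discriminated result for the caller to inspect.
    function safeValidateTypes<T>({
      value,
      schema,
    }: {
      value: unknown;
      schema: Schema<T>;
    }) {
      return schema.validate(value);
    }
    
    // Example schema that accepts only strings:
    const stringSchema: Schema<string> = {
      validate: (value) =>
        typeof value === "string"
          ? { success: true, data: value }
          : { success: false, error: new Error("expected a string") },
    };
    ```

    The throwing variant suits call sites that cannot proceed with invalid data; the safe variant suits call sites that want to branch on the result.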

v0.113.0

03 Jan 08:55

Structure generation improvements.

Added

  • .asStructureGenerationModel(...) function to OpenAIChatModel and OllamaChatModel to create structure generation models from chat models.
  • jsonStructurePrompt helper function to create structure generation models.

Example

import {
  generateStructure,
  jsonStructurePrompt,
  ollama,
  zodSchema,
} from "modelfusion";
import { z } from "zod";

const structure = await generateStructure(
  ollama
    .ChatTextGenerator({
      model: "openhermes2.5-mistral",
      maxGenerationTokens: 1024,
      temperature: 0,
    })
    .asStructureGenerationModel(jsonStructurePrompt.text()),

  zodSchema(
    z.object({
      characters: z.array(
        z.object({
          name: z.string(),
          class: z
            .string()
            .describe("Character class, e.g. warrior, mage, or thief."),
          description: z.string(),
        })
      ),
    })
  ),

  "Generate 3 character descriptions for a fantasy role playing game. "
);

v0.112.0

02 Jan 09:20

Changed

  • breaking change: renamed useToolsOrGenerateText to useTools
  • breaking change: renamed generateToolCallsOrText to generateToolCalls

Removed

  • Restriction on tool names. OpenAI tool calls do not have such a restriction.

v0.111.0

01 Jan 15:48

Reworked API configuration support.

Added

  • All providers now have an Api function that you can call to create custom API configurations. The base URL setup is more flexible: you can selectively override individual parts of the base URL.
  • api namespace with retry and throttle configurations
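To illustrate what "override parts of the base URL selectively" means, here is a generic sketch; the part names and defaults are hypothetical, not ModelFusion's actual configuration shape:

    ```typescript
    // Illustrative sketch of selective base-URL overrides. The part names and
    // defaults are hypothetical, not ModelFusion's actual configuration type.
    type UrlParts = { protocol: string; host: string; port: string; path: string };
    
    const defaultParts: UrlParts = {
      protocol: "https",
      host: "api.openai.com",
      port: "443",
      path: "/v1",
    };
    
    // Merge the overrides into the defaults and assemble the final base URL:
    function assembleBaseUrl(overrides: Partial<UrlParts> = {}): string {
      const { protocol, host, port, path } = { ...defaultParts, ...overrides };
      return `${protocol}://${host}:${port}${path}`;
    }
    
    // Overriding only the host keeps the default protocol, port, and path:
    assembleBaseUrl({ host: "my-proxy.example.com" });
    // → "https://my-proxy.example.com:443/v1"
    ```

This is useful when routing calls through a proxy or self-hosted gateway while keeping the provider's default path.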

Changed

  • Updated Cohere models.
  • Updated LMNT API calls to the LMNT v1 API.
  • breaking change: Renamed throttleUnlimitedConcurrency to throttleOff.
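The retry configurations in the api namespace implement policies such as exponential backoff. A generic sketch of that policy follows; the helper name and options are hypothetical, not the library's actual helpers:

    ```typescript
    // Generic sketch of a retry policy with exponential backoff, the kind of
    // behavior the api namespace's retry configurations provide. The helper
    // name and options here are hypothetical, not the library's actual API.
    async function withExponentialBackoff<T>(
      fn: () => Promise<T>,
      { maxTries = 3, initialDelayMs = 100, backoffFactor = 2 } = {}
    ): Promise<T> {
      let delay = initialDelayMs;
      for (let attempt = 1; ; attempt++) {
        try {
          return await fn();
        } catch (error) {
          if (attempt >= maxTries) throw error; // give up after the last attempt
          await new Promise((resolve) => setTimeout(resolve, delay));
          delay *= backoffFactor; // wait longer before each subsequent attempt
        }
      }
    }
    ```

Exponential backoff spaces out retries so that transient failures (rate limits, brief outages) can clear without hammering the API.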

v0.110.0

30 Dec 20:14

Changed

  • breaking change: renamed modelfusion/extension to modelfusion/internal. This requires updating modelfusion-experimental (if used) to v0.3.0.

Removed

  • Deprecated OpenAI completion models that will be deactivated on January 4, 2024.

v0.109.0

30 Dec 12:51

Added

  • OpenAI-compatible completion model. It works with, for example, Fireworks AI.

  • Together AI API configuration (for OpenAI-compatible chat models):

    import {
      TogetherAIApiConfiguration,
      openaicompatible,
      streamText,
    } from "modelfusion";
    
    const textStream = await streamText(
      openaicompatible
        .ChatTextGenerator({
          api: new TogetherAIApiConfiguration(),
          model: "mistralai/Mixtral-8x7B-Instruct-v0.1",
        })
        .withTextPrompt(),
    
      "Write a story about a robot learning to love"
    );
  • Updated Llama.cpp model settings. GBNF grammars can be passed into the grammar setting:

    import { MistralInstructPrompt, generateText, llamacpp } from "modelfusion";
    
    const text = await generateText(
      llamacpp
        .TextGenerator({
          maxGenerationTokens: 512,
          temperature: 0,
          // simple list grammar:
          grammar: `root ::= ("- " item)+
    item ::= [^\\n]+ "\\n"`,
        })
        .withTextPromptTemplate(MistralInstructPrompt.text()),
    
      "List 5 ingredients for a lasagna:\n\n"
    );

v0.107.0

29 Dec 18:49

Added

  • Mistral instruct prompt template
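The Mistral instruct text format wraps the instruction in [INST] tags after the beginning-of-sequence token. A minimal sketch follows; this is the general format per Mistral's documentation, and ModelFusion's template may differ in whitespace and chat-turn handling:

    ```typescript
    // Minimal sketch of the Mistral instruct text format: the instruction is
    // wrapped in [INST] tags after the beginning-of-sequence token <s>.
    function mistralInstructPrompt(instruction: string): string {
      return `<s>[INST] ${instruction} [/INST]`;
    }
    
    mistralInstructPrompt("List 3 colors.");
    // → "<s>[INST] List 3 colors. [/INST]"
    ```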

Changed

  • breaking change: Renamed LlamaCppTextGenerationModel to LlamaCppCompletionModel.

Fixed

  • Updated LlamaCppCompletionModel to the latest llama.cpp version.
  • Fixed formatting of the system prompt for chats in the Llama 2 prompt template.

v0.106.0

28 Dec 15:07


Experimental features that are unlikely to become stable before v1.0 have been moved to a separate modelfusion-experimental package.

Removed

  • Cost calculation
  • guard function
  • Browser and server features (incl. flow)
  • summarizeRecursively function