This repository has been archived by the owner on Sep 12, 2024. It is now read-only.

[Error: Failed to get embeddings] { code: 'GenericFailure' } #110

Open
ThomasDev28 opened this issue Jul 31, 2023 · 0 comments

This code:

import { LLamaEmbeddings } from "llama-node/dist/extensions/langchain.js";
import path from "path";
import { LLM } from "llama-node";
import { LLamaCpp } from "llama-node/dist/llm/llama-cpp.js";

const model = path.resolve(
  process.cwd(),
  "models/llama-2-7b-chat.ggmlv3.q4_0.bin"
);

const llama = new LLM(LLamaCpp);

const config = {
  modelPath: model,
  enableLogging: true,
  numPredict: 128,
  temperature: 0.2,
  topP: 1,
  topK: 40,
  repeatPenalty: 1,
  repeatLastN: 64,
  nCtx: 1024,
  seed: 0,
  f16Kv: false,
  logitsAll: false,
  vocabOnly: false,
  useMlock: false,
  useMmap: true,
  nGpuLayers: 0,
};

const run = async () => {
  await llama.load(config);
  const embeddings = new LLamaEmbeddings({ maxConcurrency: 1 }, llama);
  // processFiles is a helper defined elsewhere in my project; it loads the
  // files under ./documents into LangChain documents with a pageContent field.
  const documents = await processFiles("./documents");
  const documentsArr = documents.map((doc) => doc.pageContent);
  const embeddingsArr = await embeddings.embedDocuments(documentsArr);
};

run();

Output:

node:internal/process/esm_loader:91
    internalBinding('errors').triggerUncaughtException(
                              ^
[Error: Failed to get embeddings] { code: 'GenericFailure' }
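A possible cause worth checking (not a confirmed fix): llama-node's llama-cpp `LoadConfig` exposes an `embedding` flag, shown as `embedding: false` in the project's own example config, and llama.cpp can only return embeddings when the context was created with embedding mode enabled. The config above never sets it, so a minimal sketch of the change would be:

```javascript
// Same config as above, with embedding mode explicitly enabled.
// Assumption: the "Failed to get embeddings" error comes from the
// context being loaded without embedding support.
const config = {
  modelPath: model,
  enableLogging: true,
  numPredict: 128,
  temperature: 0.2,
  topP: 1,
  topK: 40,
  repeatPenalty: 1,
  repeatLastN: 64,
  nCtx: 1024,
  seed: 0,
  f16Kv: false,
  logitsAll: false,
  vocabOnly: false,
  useMlock: false,
  useMmap: true,
  embedding: true, // enable embedding mode before calling embedDocuments
  nGpuLayers: 0,
};
```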