Meta-Llama-3.1-8B-Instruct

meta-llama/Meta-Llama-3.1-8B-Instruct

Meta Llama 3.1 is a family of multilingual large language models developed by Meta, featuring pretrained and instruction-tuned variants in 8B, 70B, and 405B parameter sizes. This 8B instruction-tuned model is optimized for multilingual dialogue use cases and outperforms many available open-source and closed-source chat models on common industry benchmarks. The model was trained on over 15 trillion tokens of publicly available data, using techniques such as supervised fine-tuning and reinforcement learning from human feedback to improve helpfulness and safety. Llama 3.1 supports text and code generation, with a knowledge cutoff of December 2023.

API Usage

curl --request POST \
  --url https://api.ap.siliconflow.com/v1/chat/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
  "model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
  "messages": [
    {
      "role": "user",
      "content": "What opportunities and challenges will the Chinese large model industry face in 2025?"
    }
  ],
  "stream": false,
  "max_tokens": 512,
  "stop": null,
  "temperature": 0.7,
  "top_p": 0.7,
  "top_k": 50,
  "frequency_penalty": 0.5,
  "n": 1,
  "response_format": {
    "type": "text"
  },
  "tools": [
    {
      "type": "function",
      "function": {
        "description": "<string>",
        "name": "<string>",
        "parameters": {},
        "strict": false
      }
    }
  ]
}'
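
The same request can be made from Python. The sketch below assumes the endpoint shown in the curl example above is OpenAI-compatible JSON over HTTPS and uses the `requests` library; the `SILICONFLOW_API_KEY` environment variable name is illustrative.

```python
import os
import requests

# Endpoint and model ID taken from the curl example above.
API_URL = "https://api.ap.siliconflow.com/v1/chat/completions"
API_KEY = os.environ["SILICONFLOW_API_KEY"]  # illustrative variable name

payload = {
    "model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
    "messages": [
        {
            "role": "user",
            "content": "What opportunities and challenges will the Chinese large model industry face in 2025?",
        }
    ],
    "stream": False,
    "max_tokens": 512,
    "temperature": 0.7,
    "top_p": 0.7,
}

response = requests.post(
    API_URL,
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json=payload,
    timeout=60,
)
response.raise_for_status()

# The assistant's reply is in the first choice of the response body.
print(response.json()["choices"][0]["message"]["content"])
```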

Details

Model Provider: meta-llama
Type: text
Sub Type: chat
Size: 8B
Publish Time: Apr 23, 2025
Input Price: $0.06 / M Tokens
Output Price: $0.06 / M Tokens
Context Length: 32,768
Tags: 8B, 32K
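
As a rough illustration of the pricing above, the cost of a single request can be estimated from its prompt and completion token counts. This is a sketch only; the token counts below are hypothetical, and actual billing is determined by the usage reported in the API response.

```python
# Illustrative cost estimate at $0.06 per million tokens for both input
# and output, matching the Input Price and Output Price listed above.
PRICE_PER_M_TOKENS = 0.06  # USD

prompt_tokens = 30        # hypothetical prompt length
completion_tokens = 512   # matches max_tokens in the example request

cost = (prompt_tokens + completion_tokens) / 1_000_000 * PRICE_PER_M_TOKENS
print(f"Estimated cost: ${cost:.8f}")  # ~ $0.0000325 for this request
```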

Ready to accelerate your AI development?

© 2025 SiliconFlow Technology PTE. LTD.