In this blog post, we will walk you through how to integrate the Groq API with Langchain, using CrewAI to orchestrate the agents, with a practical example. This step-by-step guide will help you understand the components involved, such as agents, tasks, and models, and how to use them to build a simple but effective code generation system.

Introduction to Groq, Langchain, and CrewAI

  • Groq: Groq is a platform that provides a fast API for running large language models. It allows developers to leverage advanced AI models for tasks such as code generation, text analysis, and more.
  • Langchain: Langchain is a framework for building language model applications. It simplifies integrating language models into workflows such as question answering, agents, and content generation; the langchain_groq package provides the Groq-backed chat model used here.
  • CrewAI: CrewAI is a framework for orchestrating role-based agents that collaborate on tasks. It supplies the Agent, Task, and Crew abstractions used in this example.

Together, Groq, Langchain, and CrewAI let you build complex language-driven workflows in an easy and manageable way.

Prerequisites

Before starting, ensure you have:

  • Python installed.
  • An API key from Groq, which you can set as an environment variable.
  • Installed the required libraries: crewai and langchain_groq (the latter installs the groq client library as a dependency).

You can install them using pip:


pip install crewai langchain_groq
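
You can quickly verify that the packages import correctly:


python -c "import crewai, langchain_groq; print('ok')"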

Step 1: Setting Up Groq API Key

The first step is to configure the Groq API key. You should already have this key from your Groq account.

Here, we retrieve the API key from the environment using os.environ.get("GROQ_API_KEY"). If you haven’t set your API key, you can do so by exporting it as an environment variable:


export GROQ_API_KEY='your-groq-api-key-here'
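
Inside your script, you can read the key back and fail fast if it is missing — a minimal sketch:


import os

# Read the API key from the environment; langchain_groq also picks up
# GROQ_API_KEY automatically when it is set
groq_api_key = os.environ.get("GROQ_API_KEY")
if groq_api_key is None:
    raise RuntimeError("GROQ_API_KEY is not set")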

Step 2: Instantiating the ChatGroq Model

Next, we initialize the ChatGroq model, which will be used by our agents to generate code and content.


from langchain_groq import ChatGroq

# Instantiate the ChatGroq model
groq_llm = ChatGroq(
    model="mixtral-8x7b-32768",
    temperature=0.7,
    max_tokens=150,  # Limiting max tokens for brevity in response
    timeout=10,  # Optional: setting a timeout in case of slow responses
    max_retries=2,
)

In this setup:

  • We specify the model mixtral-8x7b-32768.
  • The temperature controls the randomness of the responses. A higher temperature makes the model more creative, while a lower temperature makes it more focused.
  • max_tokens limits the length of the response.
  • timeout aborts a request that takes longer than the given number of seconds.
  • max_retries sets how many times a failed request is retried before an error is raised.
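
Before wiring the model into agents, you can sanity-check it with a direct call (this assumes your GROQ_API_KEY is set and the model name is still available):


# Quick sanity check: send one prompt straight to the model
response = groq_llm.invoke("Say hello in one short sentence.")
print(response.content)  # the reply text is on the .content attribute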

Step 3: Defining Agents with Roles

In this example, we use two agents: a Researcher and a Writer. Each agent is assigned a role, a goal, and a language model to operate on.


from crewai import Agent

# Define the Researcher agent
researcher = Agent(
    role='Researcher',
    goal='Generate code based on the provided explanation.',
    backstory="""
    You are a researcher. Using the information provided in the task, your goal is to generate code that meets the specified requirements.
    """,
    verbose=True,
    allow_delegation=False,
    llm=groq_llm  # Using Groq model for the researcher
)

# Define the Writer agent
writer = Agent(
    role='Tech Content Strategist',
    goal='Craft compelling content based on the generated code.',
    backstory="""
    You are a writer known for your ability to explain technical concepts in an engaging and informative way. 
    Your task is to create content that explains the generated code to the audience.
    """,
    verbose=True,
    allow_delegation=True,
    llm=groq_llm  # Using Groq model for the writer
)

Here:

  • Researcher: This agent generates code based on the task requirements.
  • Writer: This agent takes the generated code and creates educational content around it.

Step 4: Creating a Task for the Agents

Now, we create a task that the agents will work on. In this case, the task is to generate Python code and an explanation about Python programming.


from crewai import Task

# Define the type of code and explanation
code_type = "Python"
explanation = "Give one paragraph about Python programming."

# Create a task for generating the code
task = Task(
    description=f"""Generate {code_type} code based on the following explanation:\n\n{explanation}""",
    expected_output=f"A block of working {code_type} code that matches the explanation",
    agent=researcher
)

The task has:

  • A description explaining what the task is about.
  • An expected output, which describes what a successful result should look like and guides the agent.
  • An agent assigned to handle this task.
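
Note that only the Researcher has an explicit task here; the Writer contributes through delegation. If you prefer explicit hand-offs, you could also give the Writer its own task — a sketch, with illustrative wording for the description and expected output:


# Optional second task: have the Writer explain the generated code
write_task = Task(
    description=f"Write a short, engaging explanation of the {code_type} code produced by the Researcher.",
    expected_output="A reader-friendly paragraph explaining the code",
    agent=writer
)

If you create this task, remember to include it in the tasks list when assembling the crew in the next step.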

Step 5: Running the Crew

We now create a crew of agents who will collaborate to accomplish the task. In this case, the researcher generates the code, and the writer explains it.


from crewai import Crew

# Instantiate your crew with the researcher and writer agents
crew = Crew(
    agents=[researcher, writer],
    tasks=[task],
)
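
By default, CrewAI executes the tasks sequentially in the order given. You can spell this out with the process argument — an equivalent, more explicit sketch:


from crewai import Crew, Process

# Same crew, with the sequential process made explicit
crew = Crew(
    agents=[researcher, writer],
    tasks=[task],
    process=Process.sequential,  # tasks run one after another, in order
)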

Step 6: Executing the Task

Finally, we define a function that runs the crew and executes the task. The kickoff method starts the crew’s work, and we handle potential errors to make sure the task runs smoothly.


# Function to run the crew task
def crew_result():
    print("Starting crew task...")  # Debugging message
    try:
        # Get your crew to work!
        result = crew.kickoff()
        print("Generated Result: ", result)  # Output the result
        return result
    except Exception as e:
        print(f"Error in crew_result: {e}")
        return None

# Call the function to run the tasks
result = crew_result()

This function:

  • Kicks off the crew’s work.
  • Prints and returns the result after the agents complete their tasks.
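
Depending on your crewai version, kickoff() may return a plain string or a structured CrewOutput object. In recent versions the final text lives on the result's raw attribute — a hedged example:


output = crew.kickoff()
# Newer crewai versions return a CrewOutput; older ones return a string
print(output.raw if hasattr(output, "raw") else output)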

Conclusion

In this post, you’ve learned how to use Groq’s API with Langchain and CrewAI to create a collaborative system where agents perform tasks such as generating code and writing content. By defining agents with distinct roles and goals, and backing them with Groq’s fast language models, you can build flexible workflows that combine the strengths of all three tools.

Optional: Save the Result

If you want to save the result of your task, you can write it to a file as shown below:


if result:
    with open("generated_code.txt", "w") as file:
        # kickoff() may return a CrewOutput object rather than a plain
        # string, so convert explicitly before writing
        file.write(str(result))
    print("Result written successfully.")

This completes the guide. You now have the tools to use Groq and Langchain for your next AI-driven project!

Here are some other model names that you can use with the Groq API in place of mixtral-8x7b-32768 (Groq’s hosted catalog changes over time, so check the official model list for current availability):

  1. llama3-8b-8192: Meta’s Llama 3 model with 8 billion parameters and an 8,192-token context window.
  2. llama3-70b-8192: Meta’s Llama 3 model with 70 billion parameters and an 8,192-token context window.
  3. gemma-7b-it: Google’s instruction-tuned Gemma model with 7 billion parameters.
  4. llama2-70b-4096: Meta’s Llama 2 model with 70 billion parameters and a 4,096-token context window.