Prototype AI Agents from the Ether

or GitHub šŸ˜‰

I’m on a pursuit to learn everything in the AI agents niche. Here’s a step-by-step guide on how to get started creating AI agents with GitHub, including all the golden nuggets.

How to Get Started Creating AI Agents with GitHub

  1. Sign in to GitHub and Explore GitHub Models
    Go to github.com/marketplace/models and log in with your GitHub account. Browse the available generative AI models.

    • (Golden Nugget: GitHub Models are free and hosted on Azure; you just need a GitHub account to use them.)

  2. Try Models in the GitHub Playground
    Select a model (e.g., GPT-4.1) and test it using the built-in playground.

    • (Golden Nugget: Use the system prompt to set behavior, like "Always reply in Spanish.")

    • (Golden Nugget: You can tweak parameters like temperature to trade creativity for accuracy. A temperature of 1 is high; use lower values like 0.2–0.3 for accuracy-sensitive tasks such as RAG.)

  3. Click ā€œUse this Modelā€ to Get Code Snippets
    Choose Python as your language and pick your preferred SDK (Azure AI Inference or OpenAI).

    • (Golden Nugget: Python SDKs include Azure AI Inference SDK and OpenAI SDK. Azure SDK works well because GitHub models are hosted on Azure.)

  4. Create a Personal Access Token (PAT) on GitHub
    Go to GitHub > Settings > Developer Settings > Personal Access Tokens and generate one.

    • (Golden Nugget: Store the PAT securely in .env or as an environment variable.)

    • (Golden Nugget: In GitHub Codespaces, the token is automatically available as GITHUB_TOKEN.)

  5. Clone the GitHub Agent Repo and Launch a Codespace
    Use the demo repo (shared in the session) to launch a Codespace or clone locally.

    • (Golden Nugget: Codespaces come preconfigured with Python and dependencies installed—easy startup.)

  6. Set Up Your Model in Code
    Import the necessary libraries and authenticate using the GitHub token. Point the client to the endpoint: https://models.inference.ai.azure.com.

    • (Golden Nugget: GitHub models use OpenAI-compatible APIs—just change the base URL and key.)

  7. Start With Function Calling Basics
    Define Python functions with metadata (name, parameters, return values) to expose them as callable tools.

    • (Golden Nugget: Function calling is the foundation of most AI agent frameworks.)
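
A minimal sketch of exposing a Python function as a callable tool in the OpenAI-style function-calling format (the function and schema below are illustrative, not part of the session's demo repo):

```python
# The function itself: ordinary Python.
from datetime import datetime, timezone

def get_current_time() -> str:
    """Return the current UTC time as an ISO-8601 string."""
    return datetime.now(timezone.utc).isoformat()

# The metadata the model sees: a name, a description, and JSON-Schema
# parameters. The model never runs the function itself; it only returns
# a request that your code should call it with certain arguments.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_time",
            "description": "Get the current UTC time in ISO-8601 format.",
            "parameters": {"type": "object", "properties": {}, "required": []},
        },
    }
]
```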

  8. Choose and Experiment With Agent Frameworks
    Try different frameworks. Four are featured:

    • OpenAI Agents

    • Autogen

    • Pydantic AI

    • Semantic Kernel

    • (Golden Nugget: Each framework can be used with GitHub models or Azure OpenAI—choose based on flexibility, orchestration needs, and personal comfort.)

  9. Create a Simple Agent (e.g., Spanish Tutor)
    Use your framework to build a basic agent with a name, system prompt, and model connection.

    • (Golden Nugget: ā€œAn agent is just a named LLM with a goal.ā€)

  10. Add Tools to Your Agent (Function Calling)
    Define tools with decorators (e.g., @function_tool) or schema (Autogen) and pass them to your agent.

    • (Golden Nugget: Tools can be API calls, local functions, or even just Python logic like returning current time.)
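
In plain Python, a `@function_tool`-style decorator is essentially a registry: it captures the function plus the metadata a framework would turn into a schema. A toy sketch (the registry and decorator names here are made up for illustration):

```python
# Toy version of what framework tool decorators do: record the function,
# its docstring, and its parameter names so they can be handed to the model.
import inspect

REGISTRY = {}

def function_tool(fn):
    """Register fn as a tool, keeping the metadata a framework would need."""
    REGISTRY[fn.__name__] = {
        "fn": fn,
        "description": (fn.__doc__ or "").strip(),
        "parameters": list(inspect.signature(fn).parameters),
    }
    return fn

@function_tool
def get_current_time() -> str:
    """Return the current time."""
    return "12:00"  # stub value so the sketch runs offline

print(REGISTRY["get_current_time"]["description"])  # → Return the current time.
```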

  11. Implement Single-Agent Workflows
    Use tools in a loop until a final response is formed.

    • (Golden Nugget: Many frameworks internally call the LLM multiple times until a text response is reached—debug with logging if curious.)
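
That internal loop can be sketched in plain Python, with a `fake_model` standing in for the real chat-completions call (everything here is illustrative):

```python
# Hand-rolled version of the loop most frameworks run internally:
# call the model, execute any requested tool, feed the result back,
# and stop once the model returns plain text.

def get_current_time() -> str:
    return "2025-01-01T00:00:00Z"  # stub so the sketch runs offline

TOOLS = {"get_current_time": get_current_time}

def fake_model(messages):
    # Pretend the model asks for the time once, then answers in text.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": "get_current_time"}
    return {"text": f"The time is {messages[-1]['content']}."}

def run_agent(user_prompt: str) -> str:
    messages = [{"role": "user", "content": user_prompt}]
    while True:
        reply = fake_model(messages)
        if "text" in reply:                   # final text response → stop
            return reply["text"]
        result = TOOLS[reply["tool_call"]]()  # execute the requested tool
        messages.append({"role": "tool", "content": result})

print(run_agent("What time is it?"))
```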

  12. Scale to Multi-Agent Architectures
    Add supervisor agents that delegate to specialized agents (e.g., English/Spanish agents).

    • (Golden Nugget: Specialization reduces hallucination and confusion.)
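
A toy sketch of the supervisor pattern, with routing reduced to a dictionary lookup (a real supervisor would ask the LLM which specialist fits the request):

```python
# Two specialist stubs; in practice each would be a full agent with its
# own system prompt and tools.
def english_agent(q: str) -> str:
    return f"[EN] {q}"

def spanish_agent(q: str) -> str:
    return f"[ES] {q}"

SPECIALISTS = {"en": english_agent, "es": spanish_agent}

def supervisor(question: str, lang: str) -> str:
    """Delegate the question to the matching specialist agent."""
    return SPECIALISTS[lang](question)

print(supervisor("Hola", "es"))  # → [ES] Hola
```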

  13. Experiment with Advanced Orchestration (Autogen, Pydantic AI)
    Use group chats or graph architectures (e.g., round-robin or Magentic-One) for more complex agent flows.

    • (Golden Nugget: Magentic-One includes a task ledger, progress tracker, and termination conditions.)
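
A round-robin flow with a termination condition can be sketched without any framework (the two agents here are stubs):

```python
# Round-robin orchestration sketch: agents speak in turn until one signals
# completion ("DONE") or the turn cap is hit.
def planner(history):
    return "plan: translate the phrase"

def worker(history):
    return "DONE: hola mundo"

def round_robin(agents, task, max_turns=6):
    history = [task]
    for turn in range(max_turns):
        agent = agents[turn % len(agents)]  # round-robin speaker selection
        message = agent(history)
        history.append(message)
        if message.startswith("DONE"):      # termination condition
            break
    return history

print(round_robin([planner, worker], "translate 'hello world' to Spanish"))
```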

  14. Use Human-in-the-Loop When Needed
    Add user input nodes in graph frameworks or insert human review for critical decisions.

    • (Golden Nugget: Human-in-the-loop improves accuracy, especially for ambiguous tasks.)
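
A minimal human-in-the-loop sketch: pause before a critical action and let a reviewer approve or escalate. The reviewer defaults to auto-approve so the sketch runs unattended; in practice it would be an `input()` prompt, a UI step, or a graph node:

```python
# Gate a critical action behind a review step. The reviewer returns
# (approved, possibly-edited text).
def run_with_review(draft: str, reviewer=lambda text: (True, text)) -> str:
    approved, final = reviewer(draft)
    if not approved:
        return "escalated to human"
    return final

print(run_with_review("Refund $500 to customer #123"))
```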

  15. Deploy or Keep Prototyping
    Use GitHub models for free prototyping. When ready for production, deploy using Azure OpenAI and follow infra instructions in the repo.

    • (Golden Nugget: GitHub models are rate-limited but free—perfect for early dev.)

  16. Compare Framework Tradeoffs Before Choosing One
    Ask: Does it support the LLM I want? Does it allow my preferred flow? Is it actively maintained? Is it intuitive?

    • (Golden Nugget: ā€œDateā€ a framework—spend 1 hour with each before committing.)
