
    Getting Started with Generative AI: Building Your First LLM App

    By Nikhil Nambiar on 2025-08-29
    Tags: AI · LLM · OpenAI · LangChain

    Introduction

    Generative AI and large language models (LLMs) let developers build powerful assistants, summarizers, and agents quickly. This guide walks you through a minimal, practical LLM app using OpenAI for inference and LangChain for orchestration.

    Prerequisites

    - Node.js 18+ and a package manager (npm/yarn/pnpm)
    - An OpenAI API key (set as OPENAI_API_KEY in your environment)
    - Basic TypeScript/JavaScript knowledge

    What you'll build

    A simple function (and example HTTP endpoint) that accepts a topic and returns a concise explanation generated by an LLM. We'll:

    1. Call OpenAI's Chat Completions API
    2. Wrap the call with LangChain to manage prompts and reuse logic
    3. Run locally and discuss deployment

    Quick project setup

    Create a minimal project and install dependencies:

    mkdir llm-app && cd llm-app
    npm init -y
    npm install node-fetch dotenv  # node-fetch is optional on Node 18+, which ships a global fetch
    # optional: langchain and an OpenAI client
    npm install langchain openai

    Create a file named .env with:

    OPENAI_API_KEY=your_api_key_here

    Calling OpenAI directly (minimal)

    A tiny example using fetch to call the Chat Completions endpoint. This keeps the payload explicit so you understand inputs/outputs.

    import fetch from 'node-fetch'
    import dotenv from 'dotenv'
    dotenv.config()
    async function explain(topic: string) {
      const res = await fetch('https://api.openai.com/v1/chat/completions', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          // Template literals need backticks, not single quotes
          'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
        },
        body: JSON.stringify({
          model: 'gpt-4o-mini',
          messages: [
            { role: 'system', content: 'You are a helpful assistant.' },
            { role: 'user', content: `Explain ${topic} in simple terms with a short example.` },
          ],
          max_tokens: 300, // cap response length to keep costs predictable
        }),
      })
      if (!res.ok) throw new Error(`OpenAI API error: ${res.status}`)
      const json: any = await res.json()
      return json.choices?.[0]?.message?.content ?? 'No answer.'
    }
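
    A quick manual check (run with tsx or ts-node; the topic here is just an example):

    explain('the Node.js event loop')
      .then(console.log)
      .catch(console.error)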

    Using LangChain for structured prompts

    LangChain helps you encapsulate prompt templates and create reusable chains. The example below shows a small chain that formats a prompt and calls an LLM.

    import dotenv from 'dotenv'
    dotenv.config()
    // Import paths below match classic LangChain JS (pre-0.1); newer releases
    // move these classes into @langchain/openai and @langchain/core.
    import { OpenAI } from 'langchain/llms/openai'
    import { PromptTemplate } from 'langchain/prompts'
    import { LLMChain } from 'langchain/chains'

    const llm = new OpenAI({ openAIApiKey: process.env.OPENAI_API_KEY })

    // Reusable template: {topic} is interpolated at call time
    const template = new PromptTemplate({
      inputVariables: ['topic'],
      template: 'Explain {topic} in ~3 short paragraphs and provide one practical example.',
    })

    const chain = new LLMChain({ llm, prompt: template })

    async function explainWithLangChain(topic: string) {
      const resp = await chain.call({ topic })
      return resp.text
    }
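
    If you're on a newer LangChain release (0.1+), the same idea is usually expressed with the runnable pipe API. A sketch, assuming the @langchain/openai and @langchain/core packages are installed:

    import { ChatOpenAI } from '@langchain/openai'
    import { PromptTemplate } from '@langchain/core/prompts'
    import { StringOutputParser } from '@langchain/core/output_parsers'

    const prompt = PromptTemplate.fromTemplate(
      'Explain {topic} in ~3 short paragraphs and provide one practical example.'
    )
    // pipe() composes prompt -> chat model -> string parser into one runnable chain
    const modernChain = prompt
      .pipe(new ChatOpenAI({ model: 'gpt-4o-mini' }))
      .pipe(new StringOutputParser())

    async function explainModern(topic: string) {
      return modernChain.invoke({ topic })
    }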

    Prompt engineering tips

    - Start with a clear system message describing the assistant role.
    - Show examples (few-shot) when the output format matters.
    - Constrain length with max_tokens and ask for JSON or bullet output for easier parsing (see the sketch below).
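
    A minimal sketch of that last tip: a JSON-constrained system prompt plus a defensive parser. The {summary, example} schema is purely illustrative:

    // Ask the model for machine-parseable output; field names are illustrative.
    const jsonSystemPrompt =
      'You are a concise technical explainer. Respond ONLY with JSON shaped like ' +
      '{"summary": string, "example": string} and no prose outside the JSON.'

    function parseModelJson(raw: string) {
      // Models occasionally wrap JSON in markdown fences or add stray text,
      // so strip fences, parse defensively, and fall back to the raw string.
      try {
        return JSON.parse(raw.replace(/^```(json)?|```$/g, '').trim())
      } catch {
        return { summary: raw, example: '' }
      }
    }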

    Local testing and iteration

    Run a local script, or stand up a small Express/Next endpoint that wires the explain function to an HTTP POST (a sketch follows). Test with several prompts and refine the template.
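
    A minimal sketch of that wiring with Express, assuming you've installed express and exported the explain helper from a module (the ./explain path is hypothetical):

    import express from 'express'
    import { explain } from './explain' // the fetch-based helper from earlier

    const app = express()
    app.use(express.json())

    // POST /explain  { "topic": "closures" }  ->  { "answer": "..." }
    app.post('/explain', async (req, res) => {
      const topic = req.body?.topic
      if (typeof topic !== 'string' || !topic.trim()) {
        return res.status(400).json({ error: 'topic is required' })
      }
      try {
        const answer = await explain(topic)
        res.json({ answer })
      } catch {
        res.status(502).json({ error: 'LLM call failed' })
      }
    })

    app.listen(3000, () => console.log('listening on :3000'))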

    Deployment notes

    - Keep your API key in environment variables or a secrets store (never commit it).
    - For production, add caching for repeated queries and rate limiting (a simple cache is sketched below).
    - Consider a serverless function (Vercel, Netlify Functions, or AWS Lambda) for low-cost hosting.
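
    As a starting point for the caching note, here's a naive in-memory sketch; for real deployments you'd likely reach for Redis or similar, and the TTL value is arbitrary:

    // Naive in-memory cache keyed by topic; fine for a single long-lived
    // process, not for serverless or multi-instance deployments.
    const cache = new Map<string, { answer: string; expires: number }>()
    const TTL_MS = 10 * 60 * 1000 // 10 minutes, an arbitrary choice

    async function explainCached(topic: string): Promise<string> {
      const key = topic.trim().toLowerCase()
      const hit = cache.get(key)
      if (hit && hit.expires > Date.now()) return hit.answer

      const answer = await explain(topic) // the helper defined earlier
      cache.set(key, { answer, expires: Date.now() + TTL_MS })
      return answer
    }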

    Costs, safety, and guardrails

    - Monitor token usage and set reasonable max_tokens to manage cost.
    - Sanitize user input to avoid prompt-injection risks (a small guard is sketched below).
    - Add moderation or content filters if exposing the app publicly.
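
    A small input guard along those lines; the length cap and stripped characters are illustrative, not a complete defense against prompt injection (keep instructions in the system message and treat user text strictly as data):

    const MAX_TOPIC_LENGTH = 200 // arbitrary cap to bound token usage

    function sanitizeTopic(raw: string): string {
      return raw
        .slice(0, MAX_TOPIC_LENGTH)
        // strip control characters that have no place in a topic string
        .replace(/[\u0000-\u001f]/g, ' ')
        .trim()
    }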

    Conclusion

    You now have the building blocks for an LLM-powered app: a raw OpenAI call, a tidy LangChain wrapper, and practical prompts. Next steps: add retrieval with embeddings, build a UI, and instrument usage for cost and safety.

    Enjoyed this post? Bookmark the blog and come back for more insights!