Prompt Engineering

This is a credit-aligned course under the National Credit Framework (NCrF); universities can award credit for the course.

About Prompt Engineering:

Prompt Engineering is rapidly emerging as a foundational discipline in the era of Generative AI. It is the art and science of designing effective inputs for Large Language Models (LLMs) to generate reliable, accurate, and contextually appropriate outputs. As AI becomes central to research, industry, and public administration, the ability to communicate precisely with AI systems has become a critical professional skill. This course takes learners from understanding how LLMs process language all the way through to building advanced prompt pipelines, evaluating outputs, and deploying real-world AI-assisted applications.

Objective:

Develop comprehensive skills in Prompt Engineering — from foundational LLM theory to advanced prompting strategies, coding with AI, data extraction, agentic systems, and responsible AI use — equipping learners to build and deploy prompt-based applications across research, industry, and public sector contexts.

What You’ll Learn:

Understand the architecture and behaviour of Large Language Models (LLMs).

Master core prompting techniques: zero-shot, few-shot, chain-of-thought, and role prompting.

Design, test, and iterate prompts systematically using structured evaluation frameworks.

Build practical applications such as chatbots, data extractors, and AI-assisted coding tools.

Identify and mitigate risks including hallucination, bias, and prompt injection.

Develop professional skills for AI-assisted work in research, industry, and entrepreneurship.

Target Audience:

Students and professionals interested in learning prompt engineering.


Certificate Provider : SWAYAM Plus and IITM Pravartak Technologies Foundation

Duration of the course : 30 Hours

Course Start date : Mid-May 2026.

Mode of study : Fully virtual

Programme Faculty : Experienced academicians from IITs and other leading institutions, data professionals from GITAA, and subject matter experts.

Course Fees : 100 INR + GST

Certification : The exam at the end of the course will be an online proctored exam. The certification fee is 1900 INR + GST.


Course Structure:

Module 1 – Foundations of LLMs and Prompt Engineering

Learning Outcomes

– Explain how transformer-based LLMs generate text
– Distinguish between traditional programming and prompt-based interaction
– Set up and use API/playground environments to run prompts

Key Topics

– Overview of Transformer architecture
– Attention mechanism, tokenization, context windows
– LLM inference parameters: Temperature, Top-K, Top-P, Max Tokens
– The prompting paradigm vs traditional software programming
– Overview of major model families and their differences
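As a toy illustration of the inference parameters listed above (not any vendor's actual decoding code), the interaction of Temperature, Top-K, and Top-P can be sketched over a made-up three-token distribution:

```python
import math
import random

def sample_token(logits, temperature=1.0, top_k=None, top_p=None, rng=None):
    """Pick the next token from a logit distribution, illustrating the
    Temperature / Top-K / Top-P parameters."""
    rng = rng or random.Random(0)  # fixed seed for a reproducible demo
    # Temperature rescales logits: <1 sharpens, >1 flattens the distribution.
    scaled = {tok: logit / temperature for tok, logit in logits.items()}
    # Softmax over the scaled logits.
    z = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / z for tok, v in scaled.items()}
    # Top-K: keep only the K most probable tokens.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    if top_k is not None:
        ranked = ranked[:top_k]
    # Top-P (nucleus): keep the smallest prefix whose cumulative mass >= p.
    if top_p is not None:
        kept, total = [], 0.0
        for tok, p in ranked:
            kept.append((tok, p))
            total += p
            if total >= top_p:
                break
        ranked = kept
    # Renormalise over the surviving tokens and sample one.
    total = sum(p for _, p in ranked)
    r, acc = rng.random() * total, 0.0
    for tok, p in ranked:
        acc += p
        if acc >= r:
            return tok
    return ranked[-1][0]

logits = {"cat": 2.0, "dog": 1.0, "fish": 0.1}
print(sample_token(logits, temperature=0.7, top_k=2))
```

Setting top_k=1 makes decoding greedy (always the most probable token), while a higher temperature spreads probability mass across more tokens.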

Module 2 – Core Prompting Techniques

Learning Outcomes

– Apply zero-shot, one-shot, and few-shot prompting correctly
– Use role prompting and system messages to control model persona
– Structure prompts with clarity using proven templates

Key Topics

– Zero-shot vs. few-shot prompting: when and why to use each
– Designing effective examples for few-shot learning
– Role and persona prompting for consistent outputs
– Structuring prompts: RICE framework (Role, Instructions, Context, Examples)
– Output formatting: JSON, Markdown, structured lists
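The RICE ordering above can be sketched as a small template assembler; the role, instructions, and example pairs here are invented purely for illustration:

```python
def build_prompt(role, instructions, context, examples):
    """Assemble a prompt in RICE order: Role, Instructions, Context, Examples."""
    example_block = "\n".join(
        f"Input: {inp}\nOutput: {out}" for inp, out in examples
    )
    return (
        f"Role: {role}\n\n"
        f"Instructions: {instructions}\n\n"
        f"Context: {context}\n\n"
        f"Examples:\n{example_block}"
    )

prompt = build_prompt(
    role="You are a customer-support classifier.",
    instructions="Label each message as 'billing', 'technical', or 'other'.",
    context="Messages come from a consumer mobile app.",
    examples=[("My payment failed twice", "billing"),
              ("The app crashes on launch", "technical")],
)
print(prompt)
```

Keeping the sections in a fixed order makes prompts easier to version, diff, and reuse across tasks.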

Module 3 – Advanced Prompt Strategies

Learning Outcomes

– Apply Chain-of-Thought (CoT) and Tree-of-Thought prompting
– Design ReAct-style and self-consistency prompts
– Break complex problems into multi-step prompt chains

Key Topics

– Chain-of-Thought: Zero-shot CoT vs. manual CoT
– Tree-of-Thought: branching reasoning paths for complex decisions
– Self-consistency: sampling multiple reasoning paths and aggregating
– ReAct prompting: Reason + Act cycles for interactive tasks
– Decomposition: breaking problems into sub-tasks across prompt chains
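The aggregation step of self-consistency can be sketched in a few lines; the `fake_model` below is a stand-in for an LLM sampled at non-zero temperature, where most reasoning paths converge on the same answer and one goes astray:

```python
from collections import Counter

def self_consistency(sample_fn, n=5):
    """Sample n reasoning paths and return the majority answer --
    the aggregation step of self-consistency prompting."""
    answers = [sample_fn(i) for i in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Stand-in for repeated LLM sampling: four paths reach 42, one reaches 41.
def fake_model(seed):
    return 41 if seed == 2 else 42

print(self_consistency(fake_model, n=5))  # → 42
```

In practice `sample_fn` would call the model with the same Chain-of-Thought prompt at temperature > 0, and the final answer would be parsed out of each completion before voting.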

Module 4 – Prompt Evaluation & Iteration

Learning Outcomes

– Evaluate prompt outputs using quantitative and qualitative metrics
– Systematically iterate and improve prompts using structured frameworks
– Build simple prompt testing pipelines

Key Topics

– Human evaluation vs. automated evaluation (LLM-as-judge)
– Metrics: ROUGE, BERTScore (overview), task-specific rubrics
– Systematic prompt versioning and documentation
– Common failure modes and how to diagnose them
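A prompt testing pipeline of the kind described above can be as simple as a loop over labelled cases with an exact-match metric; the `stub_model` and test cases here are hypothetical stand-ins for a real LLM call and dataset:

```python
def run_eval(prompt_template, model_fn, test_cases):
    """Score a prompt template against labelled test cases (exact match)."""
    results = []
    for case in test_cases:
        output = model_fn(prompt_template.format(**case["inputs"]))
        results.append(output.strip().lower() == case["expected"].lower())
    return sum(results) / len(results)

# Hypothetical stand-in for an LLM call: classifies by keyword.
def stub_model(prompt):
    return "positive" if "love" in prompt else "negative"

cases = [
    {"inputs": {"text": "I love this"}, "expected": "positive"},
    {"inputs": {"text": "This is awful"}, "expected": "negative"},
]
score = run_eval("Classify the sentiment: {text}", stub_model, cases)
print(score)
```

Running the same case set against each new prompt version turns iteration into a measurable, repeatable process rather than eyeballing outputs.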

Module 5 – Coding with AI (Part I) – Generation & Translation

Learning Outcomes

– Generate clean, production-ready code from natural language specifications
– Translate code across languages and paradigms using targeted prompts
– Design prompts that encode style guides, constraints, and project context
– Effectively scaffold full project structures using AI assistance

Key Topics

– Anatomy of a good code generation prompt: language, framework, constraints, style
– Generating boilerplate and scaffolding (APIs, data models, config files, CLIs)
– Cross-language translation: Python to JavaScript, SQL to Pandas, pseudocode to implementation
– Documentation generation: docstrings, README files, inline comments
– Iterative refinement: using follow-up prompts to improve initial output
– Tool overview: GitHub Copilot, Claude, and others

Module 6 – Coding with AI (Part II) – Debugging & Testing

Learning Outcomes

– Use LLMs to diagnose and fix bugs by providing rich contextual prompts
– Generate comprehensive unit and integration tests automatically
– Apply AI-assisted code review for security, performance, and readability
– Build repeatable testing workflows with AI in the loop

Key Topics

– Debugging prompts: structuring error messages, stack traces, and reproduction steps
– Chain-of-thought for root cause analysis
– Generating unit tests with frameworks: pytest, unittest, Jest
– Edge case and boundary condition generation
– Code review prompts: security vulnerabilities, performance bottlenecks, readability
– Prompt patterns for refactoring legacy code safely

Module 7 – Data & Analysis – Unstructured to Structured

Learning Outcomes

– Extract structured data from free-form text, documents, and reports using prompts
– Design prompts that produce consistent, schema-conformant outputs (JSON, CSV, tables)
– Apply LLMs to classify, label, and annotate datasets at scale
– Build lightweight text analysis pipelines using prompt chaining

Key Topics

– Extraction prompts: pulling entities, dates, figures, and relationships from raw text
– Schema-driven extraction: enforcing output structure with JSON schemas and format instructions
– Classification and tagging: sentiment, intent, category labelling with few-shot examples
– Summarisation strategies: extractive vs. abstractive, document-level vs. section-level
– Handling messy inputs: OCR text, inconsistent formatting, multilingual content
– Prompt pipelines for tabular output: turning reports and emails into structured datasets
– Validation techniques: checking AI-extracted data for completeness and consistency
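The validation step above can be sketched as a schema check on the model's raw JSON output; the field names and types here are an invented example, not a fixed standard:

```python
import json

# Hypothetical target schema: every extraction must contain these fields.
SCHEMA = {"name": str, "date": str, "amount": float}

def validate_extraction(raw_output, schema=SCHEMA):
    """Check that a model's JSON output matches the expected schema:
    all keys present, all values of the right type."""
    data = json.loads(raw_output)
    missing = [k for k in schema if k not in data]
    wrong_type = [k for k, t in schema.items()
                  if k in data and not isinstance(data[k], t)]
    return (not missing and not wrong_type), missing, wrong_type

ok, missing, wrong = validate_extraction(
    '{"name": "Acme Ltd", "date": "2026-01-15", "amount": 1250.0}'
)
print(ok)
```

Failed validations can be fed back to the model in a follow-up prompt ("the field `date` is missing; re-emit the JSON"), closing the loop between extraction and checking.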

Module 8 – Agentic AI, RAG and Tool Use

Learning Outcomes

– Understand how LLMs function as agents with tools and memory
– Design prompts for Retrieval-Augmented Generation (RAG) pipelines
– Build a simple agent loop using function calling / tool use

Key Topics

– Agent architectures: ReAct, Plan-and-Execute, AutoGPT-style
– Memory types: in-context, external vector stores, episodic memory
– RAG pipeline: chunking, embedding, retrieval, prompt augmentation
– OpenAI / Anthropic function/tool calling schemas
– Frameworks: LangChain, LlamaIndex, CrewAI (overview)
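A minimal agent loop of the ReAct / tool-use kind can be sketched as below. The `fake_llm` is a scripted stand-in for a real tool-calling model (it does not reflect any vendor's actual API schema): it first requests a tool call, then answers using the tool's result.

```python
# Toy tool registry: real agents would expose search, code execution, etc.
TOOLS = {"add": lambda a, b: a + b}

def fake_llm(history):
    """Scripted stand-in for a tool-calling LLM."""
    if not any(m["role"] == "tool" for m in history):
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    result = [m for m in history if m["role"] == "tool"][-1]["content"]
    return {"answer": f"The sum is {result}"}

def agent_loop(question, max_steps=5):
    """Dispatch tool calls until the model returns a final answer."""
    history = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        reply = fake_llm(history)
        if "answer" in reply:
            return reply["answer"]
        result = TOOLS[reply["tool"]](**reply["args"])
        history.append({"role": "tool", "content": result})
    return "step limit reached"

print(agent_loop("What is 2 + 3?"))
```

Frameworks such as LangChain and CrewAI automate exactly this loop, adding structured tool schemas, memory, and error handling around it.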

Module 9 – Safety, Ethics & Limitations

Learning Outcomes

– Identify and mitigate hallucinations, biases, and security vulnerabilities
– Apply ethical guidelines for responsible AI prompt design
– Understand the limitations of LLMs and when not to use them

Key Topics

– Types of hallucination: factual, reasoning, source fabrication
– Grounding techniques: citation prompting, RAG, self-check prompts
– Prompt injection attacks and defensive prompting
– Fairness, privacy, and copyright considerations
– AI governance frameworks: EU AI Act overview, responsible use guidelines

Module 10 – Capstone Project

Learning Outcomes

– Synthesize all course learning into an independent end-to-end project
– Present and defend design choices with evaluation evidence
– Demonstrate practical prompt engineering competency

Dr. V. Sankaran
IIT Madras Pravartak Technologies

Dr. V. Sankaran received his Ph.D. from Anna University in 1986 in the area of Computer Networks. From 1986 to 1995 he worked as a Professor in the Department of Computer Science and Engineering at colleges including Sri Venkateswara, SRM, and Sathyabama in Chennai. He also served as Dean at VIT for a year in 2014, and is a guest faculty member at IIIT Design and Development, Chennai.

He moved to the IT industry in 1995 and has worked as Project Manager, Delivery Manager, Program Manager, and Director at companies including L-Cube Innovative Solutions, HCL Technologies, Packet Island, and BroadSoft.

He has over 30 years of IT industry experience, and his strengths lie in the areas of Computer Networks, Security, UI Technology, Linux Kernel Programming, Device Drivers, and Data Engineering.

Registration:

Link will be posted soon


Assistance:

  Digital Skills Academy
  Mail Id: swayamplus@iitmpravartak.net
  Contact No: 9498341969