This is a credit aligned course as per the NCrF credit framework and the universities can provide credit for the course
Prompt Engineering is rapidly emerging as a foundational discipline in the era of Generative AI. It is the art and science of designing effective inputs for Large Language Models (LLMs) to generate reliable, accurate, and contextually appropriate outputs. As AI becomes central to research, industry, and public administration, the ability to communicate precisely with AI systems has become a critical professional skill. This course takes learners from understanding how LLMs process language all the way through to building advanced prompt pipelines, evaluating outputs, and deploying real-world AI-assisted applications.
Develop comprehensive skills in Prompt Engineering — from foundational LLM theory to advanced prompting strategies, coding with AI, data extraction, agentic systems, and responsible AI use — equipping learners to build and deploy prompt-based applications across research, industry, and public sector contexts.
Understand the architecture and behaviour of Large Language Models (LLMs).
Master core prompting techniques: zero-shot, few-shot, chain-of-thought, and role prompting.
Design, test, and iterate prompts systematically using structured evaluation frameworks.
Build practical applications such as chatbots, data extractors, and AI-assisted coding tools.
Identify and mitigate risks including hallucination, bias, and prompt injection.
Develop professional skills for AI-assisted work in research, industry, and entrepreneurship.
Students and professionals interested to learn prompt engineering.
Certificate Provider : SWAYAM Plus and IITM Pravartak Technologies Foundation
Duration of the course : 30 Hours
Course Start date : Mid-May 2026.
Mode of study : Fully virtual
Programme Faculty : Experienced academicians from IITs and other leading institutions, and Data Professionals from GITAA and subject matter experts.
Course Fees : 100 INR + GST
Certification :The exam at the end of the course will be an online proctored exam. The certification fee is 1900 INR + GST.
– Explain how transformer-based LLMs generate text
– Distinguish between traditional programming and prompt-based interaction
– Set up and use API/playground environments to run prompts
– Overview of Transformer architecture
– Attention mechanism, tokenization, context windows
– LLM inference parameters: Temperature, Top-K, Top-P, Max Tokens
– The prompting paradigm vs traditional software programming
– Overview of major model families and their differences
– Apply zero-shot, one-shot, and few-shot prompting correctly
– Use role prompting and system messages to control model persona
– Structure prompts with clarity using proven templates
– Zero-shot vs. few-shot prompting: when and why to use each
– Designing effective examples for few-shot learning
– Role and persona prompting for consistent outputs
– Structuring prompts: RICE framework (Role, Instructions, Context, Examples)
– Output formatting: JSON, Markdown, structured lists
– Apply Chain-of-Thought (CoT) and Tree-of-Thought prompting
– Design ReAct-style and self-consistency prompts
– Break complex problems into multi-step prompt chains
– Chain-of-Thought: Zero-shot CoT vs. manual CoT
– Tree-of-Thought: branching reasoning paths for complex decisions
– Self-consistency: sampling multiple reasoning paths and aggregating
– ReAct prompting: Reason + Act cycles for interactive tasks
– Decomposition: breaking problems into sub-tasks across prompt chains
– Evaluate prompt outputs using quantitative and qualitative metrics
– Systematically iterate and improve prompts using structured frameworks
– Build simple prompt testing pipelines
– Human evaluation vs. automated evaluation (LLM-as-judge)
– Metrics: ROUGE, BERTScore (overview), task-specific rubrics
– Systematic prompt versioning and documentation
– Common failure modes and how to diagnose them
– Generate clean, production-ready code from natural language specifications
– Translate code across languages and paradigms using targeted prompts
– Design prompts that encode style guides, constraints, and project context
– Effectively scaffold full project structures using AI assistance
– Anatomy of a good code generation prompt: language, framework, constraints, style
– Generating boilerplate and scaffolding (APIs, data models, config files, CLIs)
– Cross-language translation: Python to JavaScript, SQL to Pandas, pseudocode to implementation
– Documentation generation: docstrings, README files, inline comments
– Iterative refinement: using follow-up prompts to improve initial output
– Tool overview: GitHub Copilot, Claude, and others
– Use LLMs to diagnose and fix bugs by providing rich contextual prompts
– Generate comprehensive unit and integration tests automatically
– Apply AI-assisted code review for security, performance, and readability
– Build repeatable testing workflows with AI in the loop
– Debugging prompts: structuring error messages, stack traces, and reproduction steps
– Chain-of-thought for root cause analysis
– Generating unit tests with frameworks: pytest, unittest, Jest
– Edge case and boundary condition generation
– Code review prompts: security vulnerabilities, performance bottlenecks, readability
– Prompt patterns for refactoring legacy code safely
– Extract structured data from free-form text, documents, and reports using prompts
– Design prompts that produce consistent, schema-conformant outputs (JSON, CSV, tables)
– Apply LLMs to classify, label, and annotate datasets at scale
– Build lightweight text analysis pipelines using prompt chaining
– Extraction prompts: pulling entities, dates, figures, and relationships from raw text
– Schema-driven extraction: enforcing output structure with JSON schemas and format instructions
– Classification and tagging: sentiment, intent, category labelling with few-shot examples
– Summarisation strategies: extractive vs. abstractive, document-level vs. section-level
– Handling messy inputs: OCR text, inconsistent formatting, multilingual content
– Prompt pipelines for tabular output: turning reports and emails into structured datasets
– Validation techniques: checking AI-extracted data for completeness and consistency
– Understand how LLMs function as agents with tools and memory
– Design prompts for Retrieval-Augmented Generation (RAG) pipelines
– Build a simple agent loop using function calling / tool use
– Agent architectures: ReAct, Plan-and-Execute, AutoGPT-style
– Memory types: in-context, external vector stores, episodic memory
– RAG pipeline: chunking, embedding, retrieval, prompt augmentation
– OpenAI / Anthropic function/tool calling schemas
– Frameworks: LangChain, LlamaIndex, CrewAI (overview)
– Identify and mitigate hallucinations, biases, and security vulnerabilities
– Apply ethical guidelines for responsible AI prompt design
– Understand the limitations of LLMs and when not to use them
– Types of hallucination: factual, reasoning, source fabrication
– Grounding techniques: citation prompting, RAG, self-check prompts
– Prompt injection attacks and defensive prompting
– Fairness, privacy, and copyright considerations
– AI governance frameworks: EU AI Act overview, responsible use guidelines
– Synthesize all course learning into an independent end-to-end project
– Present and defend design choices with evaluation evidence
– Demonstrate practical prompt engineering competency
Dr. V. Sankaran received his Ph.D. Degree from Anna University in 1986 in the areas of Computer Networks. He worked as Professor from 1986 to 1995 in the Department of Computer Science and Engineering in a few colleges like Sri Venkateswara, SRM and Sathyabama, Chennai. Also, he worked as Dean at VIT for an year in the year 2014. He is a guest faculty for IIIT Design and Development, Chennai.
He moved to IT industry in 1995 and worked as Project Manager, Delivery Manager, Program Manager and Director in companies like L-Cube Innovative Solutions, HCL Technologies, Packet Island and BroadSoft.
His IT industry experience is over 30+ years and his strength are in the areas of Computer Networks, Security, UI Technology, Linux Kernel Programming, Device Drivers and Data Engineering.
Manual is framed as self-practice by students with necessary wiring diagrams.
Manual starts with a Quick Overview of the Trainer System, describing the various functional blocks, and their locations.
Sufficient background Theory is given for better understanding.
Measurement tables are given with theoretical values to verify practical results.
Plots are given for quick analysis as well as the conclusion.
Wherever required, experiments are provided with numerical problems to understand the theory well.
Specifications, Characteristics, and functions of each component are discussed in detail.
Self-Assessment questions are given to gain confidence.
Appendix is given to recollect basics and apply in relevant experiments.
The range of topics covered, start from electricity basics, electrical circuits, test and measuring instruments, electronic components, semiconductor devices, transistor circuits, Analog Circuits, digital electronics, Solar photovoltaic as a renewable energy source, Input-Output devices, sensors and actuators till the embedded controller using Arduino UNO. Last two topics are devoted to the application-based exercises as relevant for use in our day-to-day life.
List of Dos and Don’ts and Safety Practice are given.
Needs only minimum guidance and provides great confidence.
Manual is prepared with the motive of imparting the required Knowledge and Skill competencies so as to nurture the students to build and develop their own electronic circuitry and systems in the future.
Electronics and Embedded controller experiments with Universal Trainer kit.
Advanced Embedded and Remote controlled IoT projects.
Proto type project design with schematic and PCB Design.
Link will be posted soon
  Digital Skills Academy
  Mail Id: swayamplus@iitmpravartak.net
  Contact No: 9498341969