Instructor

Ajay Nayak

Lead Instructor and CTO at UI5CN

Ajay Nayak is the Lead Instructor and CTO of UI5CN, with about 15 years of experience in enterprise technologies and innovation. He has previously worked with reputed names such as SAP®, Capgemini®, Skybuffer, and Statoil® as a developer, consultant, architect, and subject matter expert respectively. An interesting fact about Ajay: he started developing software at the early age of 15. He believes learning should be interactive and engaging, and that keeping an element of fun in it can make even difficult concepts simple to understand.
For the Latest and Best Offer, Check the Offer Page
Ongoing Course

Learn the Essentials of AI: LLMs and Agents, Running LLMs Locally, and Practical Hands-on Usage of MCP, RAG, and More

Key Highlights of the Course

  • Understand AI Agents and LLMs — Types, Sizes, and Parameter-Based Categorization
  • Learn Cloud vs Local Agents and Billion-Parameter Model Classification
  • Compare Safetensors vs GGUF Model Formats
  • Use Ollama and HuggingFace to Select the Right Model for Your Requirement
  • Hands-on: Running GPT-OSS-20B on an NVIDIA GPU Using llama.cpp
  • Learn MCP (Model Context Protocol) and RAG Concepts
  • Plan Inference Requirements and Hardware Optimization
  • Upcoming: CAPM + MCP + UI5 Development with Cloud and Local Agents

What We Cover in This Course – Topic-Wise

1. Getting Started with AI Agents and LLMs

  • Usage of AI Agents / LLM for Development
  • Types of AI Agents Based on Size and Architecture
  • Hardware Requirements to Run AI Models
  • Privacy Concerns and Inferencing Speed

2. Planning Which AI LLM to Use

  • Using Ollama for Agent Search and Quantization
  • Using Hugging Face – Safetensors vs GGUF Models
  • Filtering by License, Model Type, and Layers
  • Using Unsloth Models and Inference Planning
  • Example Walkthrough: Qwen 3 Model
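
As a rough illustration of the inference-planning step above, here is a back-of-the-envelope memory estimate for a quantized GGUF model. The bits-per-weight figures and the 1.2× overhead factor are loose assumptions for the sketch, not exact numbers:

```python
def estimate_gguf_memory_gb(params_billions: float,
                            bits_per_weight: float,
                            overhead_factor: float = 1.2) -> float:
    """Rough VRAM estimate for a quantized GGUF model.

    params_billions : model size, e.g. 20 for GPT-OSS-20B
    bits_per_weight : roughly 4.5 for Q4_K_M, 16 for FP16
    overhead_factor : headroom for KV cache and runtime buffers
                      (a loose assumption, not an exact figure)
    """
    weight_gb = params_billions * 1e9 * bits_per_weight / 8 / 1e9
    return round(weight_gb * overhead_factor, 1)

# A 20B model at ~4.5 bits/weight needs on the order of 13-14 GB
print(estimate_gguf_memory_gb(20, 4.5))   # → 13.5
print(estimate_gguf_memory_gb(24, 4.5))   # a Devstral-class 24B model
```

Estimates like this are what let you decide whether a model fits your GPU before downloading tens of gigabytes of weights.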

3. Running LLM on Local Hardware – Best Practices

  • Models Used: GPT-OSS-20B and Devstral-2-Small-24B
  • Hands-on: Running GPT-OSS-20B on an NVIDIA GPU Using llama.cpp (Parts 1, 2, 3)
  • Inspecting SAP® CAPM MCP Server and Tools
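
Once GPT-OSS-20B is running under llama.cpp's `llama-server` (which exposes an OpenAI-compatible HTTP API), a local client can talk to it roughly as sketched below. The port, request settings, and model file name are assumptions for illustration, not the course's exact setup:

```python
import json
import urllib.request

# Assumed local endpoint: llama-server serves an OpenAI-compatible
# API, here assumed to be listening on port 8080.
SERVER_URL = "http://localhost:8080/v1/chat/completions"

def build_chat_request(prompt: str, temperature: float = 0.7) -> dict:
    """Build an OpenAI-style chat-completion payload."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": 256,
    }

def ask_local_llm(prompt: str) -> str:
    """Send the prompt to the local llama-server and return the reply text."""
    data = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        SERVER_URL,
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The server itself would be started beforehand with something like `llama-server -m gpt-oss-20b.gguf -ngl 99`, where `-ngl` offloads that many layers to the GPU (the GGUF file name here is a placeholder).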

4. Understanding RAG, MCP and Model Training

  • Using RAG to Improve Agent Capabilities
  • Using MCP to Improve Agent Capabilities
  • Training Models to Enhance Performance
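
To make the RAG idea concrete, here is a minimal toy sketch of the retrieve-then-augment flow. It ranks documents by simple word overlap; a real pipeline would use embedding similarity and a vector store, but the shape of the flow is the same:

```python
def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Toy retrieval step: rank documents by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_rag_prompt(query: str, documents: list[str]) -> str:
    """Augment the user query with retrieved context before calling the LLM."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "GGUF is a quantized model file format used by llama.cpp.",
    "CAPM is SAP's Cloud Application Programming Model.",
    "MCP lets agents call external tools through a standard protocol.",
]
print(retrieve("What file format does llama.cpp use?", docs)[0])
```

The augmented prompt from `build_rag_prompt` is what actually reaches the LLM, which is why retrieval quality directly bounds answer quality.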

5. CAPM Development with Agents

  • Using CAPM Based MCP for Development
  • Cloud Agents vs Local Agents Integration
  • Using Cline with Development Workflow

6. UI5 Development with Agents (Coming Soon)

  • Using MCP in UI5 Projects
  • Cloud Agent + Local Agent Hybrid Development

7. Deep-Dive into AI and LLM Architecture (Coming Soon)

  • Understanding LLM Layers and Architecture
  • Model Internals and Optimization Techniques
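
As a preview of the architecture material, the familiar "7B / 20B" parameter counts can be approximated from layer count and hidden size. The 12·d_model² per-layer figure below is a standard rough heuristic for decoder-only transformers, not an exact count:

```python
def approx_transformer_params(n_layers: int, d_model: int,
                              vocab_size: int = 32000) -> float:
    """Back-of-the-envelope parameter count for a decoder-only transformer.

    Each layer holds roughly 12 * d_model^2 weights (attention + MLP);
    token embeddings add vocab_size * d_model. A loose approximation only.
    Returns the total in billions of parameters.
    """
    per_layer = 12 * d_model ** 2
    embeddings = vocab_size * d_model
    return (n_layers * per_layer + embeddings) / 1e9

# A 32-layer model with d_model=4096 lands around 6-7B parameters,
# which is the "7B" class of open models.
print(round(approx_transformer_params(32, 4096), 1))   # → 6.6
```

Working backwards from a parameter count like this is also how the earlier memory estimates connect to the model's actual layer structure.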

Join This Advanced AI Development Course for SAP® Developers and Master LLMs, Agents, MCP, RAG, and Local Model Deployment, from Fundamentals to Advanced Level

Course curriculum

  • Section 1: Getting Started

      Duration: 19 min
    • Prerequisites for the Course

    • Usage of AI Agents/LLM for Development

    • Types of AI Agents/LLMs Based on Size and Architecture

    • Hardware Requirements to Run AI Agents/LLMs

    • Privacy Concerns When Running AI Agents, and Inference Speed

  • Section 2: Planning which AI LLM to Use

      Duration: 30 min
    • Using Ollama.com for Agent/LLM Search and Model Quantization

    • Using the Hugging Face Site: Safetensors vs GGUF Model Types

    • Using Filters for License, Model Type, and Safetensors Model Layers

    • Using Unsloth GGUF Models and Calculating Inference Requirements

  • Section 3: Running LLM on Local Hardware with Best Practices

      Duration: 30 min
    • The Models We Will Use: GPT-OSS-20B and Devstral-2-Small-24B

    • Hands-on: Running GPT-OSS-20B on an NVIDIA GPU Using llama.cpp - Part 1

    • Hands-on: Running GPT-OSS-20B on an NVIDIA GPU Using llama.cpp - Part 2

    • Hands-on: Running GPT-OSS-20B on an NVIDIA GPU Using llama.cpp - Part 3

    • Inspecting the SAP® CAPM MCP Server and the Tools It Provides

  • Section 4: Understanding RAG, MCP, and Model Training

      Duration: 17 min
    • Using RAG to Improve Agent Capabilities

    • Using MCP to Improve Agent Capabilities

    • Training a Model to Improve Agent Capabilities

  • Section 5: CAPM Development with Agents

      Duration: 74 min
    • Requirements Passed to LLM

    • Adding Cline to VS Code and Setting Up the AI LLM

    • Executing a CAPM Requirement via Cline Without MCP

    • Testing the Generated Project and Debugging Errors

    • Setting Up MCP on the Local System

    • Using MCP with Cline to Generate the CAPM Project

    • Fixing the Project Generated via MCP and Cline Debugging Part 1

    • Fixing the Project Generated via MCP and Cline Debugging Part 2

    • Comparing the Generated Project With and Without MCP

    • Using Codex for CAPM App Generation

    • Debugging the CAPM App with Codex and Adding MCP Capabilities

    • Side by Side Comparison of the Codex Generated App With and Without MCP

    • Section Summary

  • Section 6: CAPM App Development with a Local LLM

      Duration: 47 min
    • Requirements Passed to LLM

    • Running LLaMA Server and Configuring Cline on Local Hardware

    • Starting Greenfield CAPM Implementation with GPT OSS 20B in Cline

    • Greenfield CAPM Implementation with GPT OSS 20B in Cline Using MCP

    • Fixing Errors in CAPM Project Schema CDS File

    • Checking Project Status by Running the Application

    • Fixing Schema and Data Related Errors Using Cline

    • Fix Schema Prompt

    • Fix Data Prompt

    • Using the Devstral 24B Q4_K_M Model with Cline for the CAPM Project

    • Project Summary and Outcome Comparison

  • Section 7: UI5 Development with Agents - Coming Soon

    • Coming Soon Section

  • Section 8: Deep-Dive into AI and LLMs - Coming Soon

    • Coming Soon Section