Exploring Adversarial Machine Learning
In this course, which is designed to be accessible to both data scientists and security practitioners, you'll explore the security risks and vulnerabilities that adopting machine learning might expose you to. You will also explore the latest techniques and tools being used by attackers and build some of your own attacks.
Fundamentals of Accelerated Computing with CUDA Python
Explore how to use Numba—the just-in-time, type-specializing Python function compiler—to create and launch CUDA kernels to accelerate Python programs on massively parallel NVIDIA GPUs.
Generative AI with Diffusion Models
Take a deeper dive into denoising diffusion models, which are a popular choice for text-to-image pipelines, with applications in creative content generation, data augmentation, simulation and planning, anomaly detection, drug discovery, personalized recommendations, and more.
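For background, the forward noising process that these models learn to invert is usually written as (standard diffusion-model notation, not quoted from the course itself):

```latex
q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t I\right)
```

where \(\beta_t\) is a small per-step noise schedule; the trained network reverses this chain, denoising step by step from pure noise back to an image.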
Getting Started with Accelerated Computing in CUDA C/C++
Learn how to accelerate and optimize existing C/C++ CPU-only applications using the most essential CUDA tools and techniques. You’ll also learn an iterative style of CUDA development that will allow you to ship accelerated applications fast.
Getting Started with Deep Learning
Learn how deep learning works through hands-on exercises in computer vision and natural language processing.
Introduction to Deploying RAG Pipelines for Production at Scale
This course teaches production-level deployment of LLM applications, with a focus on enterprise-grade RAG pipelines. It covers the end-to-end deployment process using Helm and NVIDIA NIMs.
Introduction to Transformer-Based Natural Language Processing
Learn how Transformers are used as the building blocks of modern large language models (LLMs). You’ll then use these models for various NLP tasks, including text classification, named-entity recognition (NER), author attribution, and question answering.
Sizing LLM Inference Systems
This course teaches AI practitioners to optimize and deploy large language models using NVIDIA Inference Microservices. It covers techniques like streaming, prefill, decoding, tensor parallelism, and in-flight batching. Students learn to benchmark models, select inference hyperparameters, and ensure efficient scaling for real-world applications.
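Sizing exercises like these often start from back-of-envelope arithmetic. As a hedged illustration (the formula is the standard KV-cache estimate; the model configuration below is a generic 7B-class assumption, not a figure from the course):

```python
def kv_cache_bytes(num_layers: int, num_kv_heads: int, head_dim: int,
                   seq_len: int, dtype_bytes: int = 2) -> int:
    """Estimate per-request KV-cache memory for one sequence.

    The factor of 2 accounts for the separate key and value tensors
    stored per layer; dtype_bytes=2 assumes FP16/BF16 activations.
    """
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * dtype_bytes


# Illustrative 7B-class configuration (assumed values for the sketch):
per_request = kv_cache_bytes(num_layers=32, num_kv_heads=32,
                             head_dim=128, seq_len=4096)
per_request_gib = per_request / 2**30  # 2.0 GiB for this configuration
```

Estimates like this feed directly into the batching and parallelism decisions the course covers: dividing available GPU memory by the per-request footprint bounds how many sequences can be batched in flight.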
Techniques for Improving the Effectiveness of RAG Systems
Learn techniques that can take your RAG system from an interesting proof-of-concept to a serious asset.