<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Neural Networks Math on RAVR Lab</title>
    <link>http://ravrlab.ru/en/courses/nnmath/</link>
    <description>Recent content in Neural Networks Math on RAVR Lab</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Thu, 01 Jan 2026 00:00:00 +0000</lastBuildDate>
    <atom:link href="http://ravrlab.ru/en/courses/nnmath/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>NN Domain map</title>
      <link>http://ravrlab.ru/en/courses/nnmath/nnmap/</link>
      <pubDate>Thu, 01 Jan 2026 00:00:00 +0000</pubDate>
      <guid>http://ravrlab.ru/en/courses/nnmath/nnmap/</guid>
      <description>This is a bird&#39;s-eye overview of the NN domain, without going deep into specific applications.&#xA;Overview Aspects of building NN&#xA;Initialization methods Forward/Backpropagation Activation functions (sigmoid, tanh, ReLU, &amp;hellip;) Layer kinds Cost functions Training/Optimization algorithms Batching/Epochs approaches To sort: dropout, regularization&#xA;Levels of usage&#xA;understand the math (derive forward/back-prop by hand) program the base math (with nothing more than numpy and similar) build an NN in a framework (pytorch and similar) finetune an existing model prompts lol Main domains</description>
    </item>
    <item>
      <title>Datasets popular in ML domain</title>
      <link>http://ravrlab.ru/en/courses/nnmath/nn_datasets/</link>
      <pubDate>Tue, 30 Dec 2025 00:00:00 +0000</pubDate>
      <guid>http://ravrlab.ru/en/courses/nnmath/nn_datasets/</guid>
      <description>MNIST&#xA;(Modified National Institute of Standards and Technology database)&#xA;Contains grayscale 28x28 images of handwritten digits (from 0 to 9) Contains 60&#39;000 training images and 10&#39;000 testing images. CIFAR-10&#xA;(Canadian Institute For Advanced Research)&#xA;Contains 60&#39;000 32x32 color images in 10 different classes There are 6&#39;000 images of each class (airplane, car, bird, cat, deer, dog, frog, horse, ship, truck) ImageNet&#xA;Full original dataset (ImageNet-21K or ImageNet-22K):&#xA;Over 14&#39;000&#39;000 images in over 20&#39;000 categories Images are colored, but sizes vary (from ~50x50 to ~4000x3000) There are various popular subsets like ImageNet-1K</description>
    </item>
    <item>
      <title>01 Fully-connected network</title>
      <link>http://ravrlab.ru/en/courses/nnmath/fcn_backprop/</link>
      <pubDate>Sat, 20 Dec 2025 00:00:00 +0000</pubDate>
      <guid>http://ravrlab.ru/en/courses/nnmath/fcn_backprop/</guid>
      <description>Table of Content&#xA;Model description Forward propagation Backward propagation Math theory Deriving backward propagation formulas Output layer Hidden-2 Layer Hidden-1 Layer Updating weights and biases Optimizing on batch Model description Let&amp;rsquo;s say we have a simple forward-propagation neural network with 4 layers:&#xA;Cost function: C = (a₀ - y₀)² + (a₁ - y₁)²&#xA;Output Layer: 2 neurons, 2*3 weights + 2 biases, Sigmoid activation&#xA;Hidden-2 Layer: 3 neurons, 3*4 weights + 3 biases, Sigmoid activation&#xA;Hidden-1 Layer: 4 neurons, 4*2 weights + 4 biases, Sigmoid activation&#xA;Input Layer: 2 neurons, 2 input values (say 0.</description>
    </item>
    <item>
      <title>02 Convolutional NN</title>
      <link>http://ravrlab.ru/en/courses/nnmath/cnn_base/</link>
      <pubDate>Fri, 19 Dec 2025 00:00:00 +0000</pubDate>
      <guid>http://ravrlab.ru/en/courses/nnmath/cnn_base/</guid>
      <description>Table of Content&#xA;Convolution 1D-convolution Definitions (calculation examples) Bias Stride Padding 2D-convolution 3D-convolution Pooling Simple CNN Convolution 1D-convolution Definitions (calculation examples) Non-full convolution (referred to as &amp;ldquo;valid&amp;rdquo; in numpy):&#xA;Output size = Input size - Kernel size + 1&#xA;$$ Conv_{valid}\Big( \begin{pmatrix} a \\ b \\ c \\ d \\ e \end{pmatrix}, \begin{pmatrix} k_1 \\ k_2 \end{pmatrix}\Big) = \begin{pmatrix} k_1 &amp;amp; k_2 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 \\ 0 &amp;amp; k_1 &amp;amp; k_2 &amp;amp; 0 &amp;amp; 0 \\ 0 &amp;amp; 0 &amp;amp; k_1 &amp;amp; k_2 &amp;amp; 0 \\ 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; k_1 &amp;amp; k_2 \end{pmatrix} * \begin{pmatrix} a \\ b \\ c \\ d \\ e \end{pmatrix} = \begin{pmatrix} a * k_1 + b * k_2 \\ b * k_1 + c * k_2 \\ c * k_1 + d * k_2 \\ d * k_1 + e * k_2 \end{pmatrix} $$</description>
    </item>
    <item>
      <title>03 Recurrent NN</title>
      <link>http://ravrlab.ru/en/courses/nnmath/rnn_base/</link>
      <pubDate>Thu, 18 Dec 2025 00:00:00 +0000</pubDate>
      <guid>http://ravrlab.ru/en/courses/nnmath/rnn_base/</guid>
      <description>Table of Content</description>
    </item>
  </channel>
</rss>
