Artem Kirsanov
  • 32
  • 4,936,879
The Most Important Algorithm in Machine Learning
Shortform link:
shortform.com/artem
In this video we will talk about backpropagation - an algorithm powering the entire field of machine learning - and try to derive it from first principles.
OUTLINE:
00:00 Introduction
01:28 Historical background
02:50 Curve Fitting problem
06:26 Random vs guided adjustments
09:43 Derivatives
14:34 Gradient Descent
16:23 Higher dimensions
21:36 Chain Rule Intuition
27:01 Computational Graph and Autodiff
36:24 Summary
38:16 Shortform
39:20 Outro
USEFUL RESOURCES:
Andrej Karpathy's playlist: ua-cam.com/play/PLAqhIrjkxbuWI23v9cThsA9GvCAUhRvKZ.html&si=zBUZW5kufVPLVy9E
Jürgen Schmidhuber's blog on the history of backprop:
people.idsia.ch/~juergen/who-invented-backpropagation.html
CREDITS:
Icons by www.freepik.com/
Views: 189,850

Videos

My Strategy To Consume Information Effectively
Views: 31K • 8 months ago
For a FREE trial and 20% discount to Shortform go to shortform.com/artem And to download the Shortform AI browser extension, visit bit.ly/45DCpuM My name is Artem, I'm a neuroscience PhD student at NYU. In this video I talk about how I use summaries to consume knowledge from books and video lectures. Patreon: www.patreon.com/artemkirsanov Twitter: ArtemKRSV Github: github.com/ArtemK...
How I make science animations
Views: 658K • 9 months ago
To try everything Brilliant has to offer, free, for a full 30 days, visit brilliant.org/ArtemKirsanov/ The first 200 of you will get 20% off Brilliant’s annual premium subscription. My name is Artem, I'm a neuroscience PhD student at NYU. In this long-requested video I share the creative process behind my videos - what software I use and break down some of my previous animations. Patreon: www.pat...
Building Blocks of Memory in the Brain
Views: 227K • 10 months ago
To try everything Brilliant has to offer, free, for a full 30 days, visit brilliant.org/ArtemKirsanov/ The first 200 of you will get 20% off Brilliant’s annual premium subscription. My name is Artem, I'm a computational neuroscience student and researcher. In this video we discuss engrams - fundamental units of memory in the brain. We explore what engrams are, how memory is allocated, where it is...
Can We Build an Artificial Hippocampus?
Views: 192K • 1 year ago
To try everything Brilliant has to offer, free, for a full 30 days, visit brilliant.org/ArtemKirsanov/ The first 200 of you will get 20% off Brilliant’s annual premium subscription. My name is Artem, I'm a computational neuroscience student and researcher. In this video we discuss the Tolman-Eichenbaum Machine - a computational model of the hippocampal formation, which unifies memory and spatial na...
How Your Brain Organizes Information
Views: 511K • 1 year ago
To try everything Brilliant has to offer, free, for a full 30 days, visit brilliant.org/ArtemKirsanov/ The first 200 of you will get 20% off Brilliant’s annual premium subscription. My name is Artem, I'm a computational neuroscience student and researcher. In this video we talk about cognitive maps - internal models of the outside world that the brain uses to generate flexible behavior that is generalized...
Brain Criticality - Optimizing Neural Computations
Views: 205K • 1 year ago
To try everything Brilliant has to offer, free, for a full 30 days, visit brilliant.org/ArtemKirsanov/. The first 200 of you will get 20% off Brilliant’s annual premium subscription. My name is Artem, I'm a computational neuroscience student and researcher. In this video we talk about the concept of a critical point - how the brain might optimize information processing by hovering near a phase tran...
Dendrites: Why Biological Neurons Are Deep Neural Networks
Views: 214K • 1 year ago
Keep exploring at brilliant.org/ArtemKirsanov/ Get started for free, and hurry: the first 200 people get 20% off an annual premium subscription. My name is Artem, I'm a computational neuroscience student and researcher. In this video we will see why individual neurons essentially function like deep convolutional neural networks, equipped with insane information processing capabilities as well as...
A Map of Social Space in Your Brain
Views: 30K • 1 year ago
Shortform link: shortform.com/artem My name is Artem, I'm a computational neuroscience student and researcher. In this video we talk about how the hippocampus serves as a "social map", representing information about conspecific individuals at different levels of abstraction. Patreon: www.patreon.com/artemkirsanov Twitter: ArtemKRSV OUTLINE: 00:00 Introduction 03:30 Overview of physical pla...
Theta rhythm: A Memory Clock
Views: 100K • 1 year ago
Shortform link: shortform.com/artem My name is Artem, I'm a computational neuroscience student and researcher. In this video we talk about theta rhythm - a rhythmic pattern of brain activity (4-12 Hz), which is essential for memory encoding and retrieval. REFERENCES (in no particular order): 1. Hummos A, Nair SS. An integrative model of the intrinsic hippocampal theta rhythm. Lytton WW, editor....
Wavelets: a mathematical microscope
Views: 593K • 1 year ago
The wavelet transform is an invaluable tool in signal processing, with applications in a variety of fields - from hydrodynamics to neuroscience. This revolutionary method allows us to uncover structures that are present in the signal but hidden behind the noise. The key feature of the wavelet transform is that it performs function decomposition in both time and frequency domains. In this vid...
Self-study computational neuroscience | Coding, Textbooks, Math
Views: 117K • 1 year ago
Shortform link: shortform.com/artem This video is based on the article medium.com/neurotechx/how-to-get-started-in-computational-neuroscience-dde4b1817ccd My name is Artem, I'm a computational neuroscience student and researcher. In this video I share my experience of getting started with computational neuroscience. We will talk about programming languages, learning to code, recommended textboo...
Logarithmic nature of the brain 💡
Views: 223K • 2 years ago
Shortform link: shortform.com/artem My name is Artem, I'm a computational neuroscience student and researcher. In this video we will talk about the fundamental role of the lognormal distribution in neuroscience. First, we will derive it through the Central Limit Theorem, and then explore how it supports brain operations on many scales - from cells to perception. REFERENCES: 1.Buzsáki, G. & Mizuseki, K. ...
How to overcome study procrastination | 3 powerful tips
Views: 13K • 2 years ago
Shortform link: shortform.com/artem In this video I'll talk about 3 amazing tips that help you overcome study procrastination. My name is Artem, I'm a computational neuroscience student and researcher. Socials: Twitter: ArtemKRSV OUTLINE: 00:00 Introduction 00:53 Shortform message 01:55 Why we procrastinate 03:10 Yin Yang technique 06:10 Time-based mindset 08:05 Spacing out 10:...
Your brain is moving along the surface of the torus 🤯
Views: 116K • 2 years ago
Shortform link: shortform.com/artem In this video we will explore a very interesting paper published in Nature in 2022, which describes the hidden torus in the neuronal activity of cells in the entorhinal cortex, known as grid cells. Place cell video: ua-cam.com/video/iV-EMA5g288/v-deo.html Neural manifolds video: ua-cam.com/video/QHj9uVmwA_0/v-deo.html My name is Artem, I'm a computational neu...
My university note-taking | Zettelkasten & more
Views: 49K • 2 years ago
How to REMEMBER what you read 🧠
Views: 21K • 2 years ago
Memory Consolidation: Time Machine of the Brain
Views: 38K • 2 years ago
Zettelkasten workflow for research papers | Zotero & Obsidian link
Views: 116K • 2 years ago
How to Effectively Teach Yourself ANYTHING
Views: 34K • 2 years ago
Place cells: How your brain creates maps of abstract spaces
Views: 52K • 2 years ago
My simple note-taking setup | Zettelkasten in Obsidian | Step-by-step guide
Views: 644K • 2 years ago
How to read papers effectively | Research reading technique
Views: 33K • 2 years ago
Mind mapping tutorial for students | Tips & Software
Views: 29K • 2 years ago
How to choose a note-taking app | Zettelkasten | Notion vs Roam vs Obsidian
Views: 47K • 2 years ago
Neural manifolds - The Geometry of Behaviour
Views: 264K • 2 years ago
How to focus when studying from home
Views: 13K • 2 years ago
Your brain CAN'T Multitask - Here's why
Views: 14K • 2 years ago
Interleaving vs Spaced repetition | Study hacks
Views: 19K • 2 years ago
Understanding note-taking | Zettelkasten
Views: 107K • 2 years ago

COMMENTS

  • @TonyFarley-pv3nk
    @TonyFarley-pv3nk 6 hours ago

    So I'm curious about 1984.

  • @rpcalee
    @rpcalee 14 hours ago

    Thanks!

  • @The-Martian73
    @The-Martian73 19 hours ago

    If you couldn't understand this explanation, with this visualization and clarity… then nothing else will work for you, I swear

  • @r_mclovin
    @r_mclovin 1 day ago

    Insane... Your knowledge and skills are insane...

  • @pajeetsingh
    @pajeetsingh 1 day ago

    Using OpenGL

  • @RohitKumar-pu4nm
    @RohitKumar-pu4nm 2 days ago

    Thank you, this is the best channel for these setups, everything works, I'm going to try it.

  • @Daniel-Six
    @Daniel-Six 2 days ago

    Amazing. I believe you actually managed to upstage 3Blue1Brown with this vid, and I didn't think _anyone_ could do that! P.S. Loved the fingerprint smears on the monitor... nice "touch". 😂

  • @yqisq6966
    @yqisq6966 2 days ago

    Never realized you could interface matplotlib with Adobe After Effects. It's a game changer.

  • @dermacon5172
    @dermacon5172 2 days ago

    Absolutely breathtaking. Thank you for this.

  • @fuckingSickOfCreepyG
    @fuckingSickOfCreepyG 2 days ago

    It's not true that backpropagation is the foundation of all ML; it's only fundamental for MLPs and similar ANN-based algorithms, which for many years were considered of minor importance

  • @rajdeepdutta1655
    @rajdeepdutta1655 3 days ago

    Code + media gives you super powerful leverage, Artem. Thanks for this video - God bless you

  • @jayp6955
    @jayp6955 3 days ago

    I really like the idea of doing a topological sort on the network and visualizing the avalanche from left to right -- but as you said, it comes with the inability to allow for circular relationships. Not a neuroscientist, but I imagine there are some regions of the brain that are structured like this to a first approximation, for example the entorhinal cortex and the hippocampus subsystem.
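
For illustration, a minimal sketch of the topological sort mentioned above (Kahn's algorithm on a hypothetical toy graph, not code from the video): it orders a directed graph so values can be propagated left to right, and it fails exactly when the graph contains a cycle, i.e. a circular relationship.

```python
from collections import deque

def topological_sort(edges):
    """edges maps each node to its list of downstream nodes; returns a valid order."""
    nodes = set(edges) | {v for vs in edges.values() for v in vs}
    indegree = {n: 0 for n in nodes}
    for vs in edges.values():
        for v in vs:
            indegree[v] += 1
    queue = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for v in edges.get(n, []):
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    if len(order) != len(nodes):
        raise ValueError("graph contains a cycle; no left-to-right ordering exists")
    return order

# Hypothetical graph: inputs x and k feed a multiply node, which feeds the loss.
print(topological_sort({"x": ["mul"], "k": ["mul"], "mul": ["loss"]}))
# A cycle (a -> b -> a) has no valid ordering:
# topological_sort({"a": ["b"], "b": ["a"]})  # raises ValueError
```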

  • @jayp6955
    @jayp6955 3 days ago

    It seems "obvious" why the brain state must exist at the critical point in this toy model. If you judge the brain by its ability to make a variety of patterns that can be mapped to sensory input, then it seems the only successful brain would operate at the critical point. In the lens of statistical mechanics -- at super low temperature, the entropy available to the system is basically 0, so nothing useful can be stored. At extremely high temperature, you destroy the possibility for any non-trivial correlation length -- this renders the edges (synaptic connections) useless, so again nothing useful can be stored. At criticality, you end up with a wide enough variety of patterns to become a useful sensory input mapping machine.
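
As a concrete companion to this argument, a minimal branching-process sketch (a standard toy model of neural avalanches, not the specific model from the video; the branching ratio sigma plays the role the comment assigns to temperature): subcritical dynamics (sigma < 1) give only tiny avalanches, supercritical dynamics (sigma > 1) blow up, and the widest spread of avalanche sizes appears near the critical point sigma = 1.

```python
import numpy as np

rng = np.random.default_rng(1)

def avalanche_size(sigma, max_size=10_000):
    """Total activations triggered by one initial spike (capped at max_size)."""
    active, total = 1, 1
    while active > 0 and total < max_size:
        active = rng.poisson(sigma * active)  # units recruited by the current wave
        total += active
    return min(total, max_size)

for sigma in (0.7, 1.0, 1.3):  # subcritical, critical, supercritical
    sizes = [avalanche_size(sigma) for _ in range(2000)]
    print(f"sigma={sigma}: mean size={np.mean(sizes):8.1f}  max size={max(sizes)}")
```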

  • @jayp6955
    @jayp6955 3 days ago

    I was sick today and binged some of your videos. So far, they're all brilliant and I love the aesthetic and craftsmanship you put into them. I thought of the Ising model as you were talking about phase transitions, and then you bring it up -- truly comprehensive and love that you are bringing physics into your videos! Super interested in similar systems, like Kuramoto oscillators which can possibly describe large scale brain oscillations, and which have mathematical similarities to Bose-Einstein condensates.

  • @jayp6955
    @jayp6955 3 days ago

    The video was mostly an explanation of inner products over Hilbert spaces. Would be interested to see a follow-up video about how you apply functional analysis to neuroscience and AI.

  • @AgustinNicolasKILGELMANN
    @AgustinNicolasKILGELMANN 3 days ago

    Can you talk about BCIs? I am in a research project on that topic, and it would be great if you made a video explaining what a brain-computer interface is, how it obtains the EEG signal, how to process it, the methods, etc. Great vid btw!

  • @acajoom
    @acajoom 3 days ago

    Why don't you put the mic a bit higher and closer to the camera?

  • @jayp6955
    @jayp6955 3 days ago

    Were the place cells learning the frequency space representation, or were they actually learning the joystick angle? For example, if the rat was deaf, it could still develop an association between the joystick angle and the reward. I wonder if frequency is the relevant variable here? My guess is that both variables are learned because they are correlated. If you were to do this experiment, then put ear plugs in the rat & reset the joystick, I assume the rat would still be able move the joystick back to get the reward. The study would have to randomize/reset the joystick-to-frequency mapping between trials to control for this. I'm sure the study considers that, but I think it's an important distinction.

  • @Jeff-66
    @Jeff-66 3 days ago

    New to Obsidian, and Admonition is awesome! Many thanks

  • @MCSTNDTCAFAG
    @MCSTNDTCAFAG 3 days ago

    Excellent!

  • @karthikrajeshwaran1997
    @karthikrajeshwaran1997 4 days ago

    Superb explanation.

  • @MrSofazocker
    @MrSofazocker 5 days ago

    Sooo... it's just Newton's method? Like the fast inverse square root in Doom? AI is basically running on Doom.

  • @EDM179
    @EDM179 5 days ago

    Great job Artem

  • @joebucket1471
    @joebucket1471 5 days ago

    Lol the torus police joke was golden

  • @user-yb2sx4zz4y
    @user-yb2sx4zz4y 5 days ago

    The world needs more of you bro

  • @RaptorT1V
    @RaptorT1V 6 days ago

    That was a fascinating journey! Too bad I stopped understanding anything halfway through the video and just enjoyed the visuals =D But a like and a subscription are guaranteed after something like this

  • @RaptorT1V
    @RaptorT1V 6 days ago

    Thank you so much for the video! I can watch it even without translation, thanks to your accent!))

  • @georgeorr1042
    @georgeorr1042 6 days ago

    Sometimes it takes years to realize something completely original is genius. I actually prefer the raw band version from the Anthology with the heavy rhythm guitar. It never gets old.

  • @danielgsfb
    @danielgsfb 6 days ago

    What an amazing video. I hope one day they come up with some world prize for 'free education heroes'. 173k views for a video like this is simply disgusting. This guy deserves maybe 2 billion views. God damn it, that makes me mad.

  • @HiCARTIER
    @HiCARTIER 6 days ago

    I posted this as a reply already but I'm posting it as a comment. I have a theory that Black Holes are 5D Hyper Toruses, being perceived by our brain in the 4D, but being seen with our eyes in the 3D. Learning that the internal thought processes of our brain also resemble a Torus field has allowed me to make an interesting discovery on the connection between Black Holes and the human brain in general. As if a singularity is infinite, or the feeling of 5D, unity, or God itself, then that has some equivalence to the singularity at the center of the torus field inside our own head as being represented as the "self" or the sense of "infinite" awareness of our being and connection within the Cosmos. A 5D hyper torus is essentially what it would look like to see the top of the Torus Field, and all the Sides of a Torus Field, if it were one. Just like a Black Hole. When you look at a Black Hole with your eyes, you're seeing the entire 3D shape of it, as light orbits around and picks up the back and sides of it, and your brain is perceiving it as 4 Dimensional, being XYZ + Time. It helps to watch a video of moving around a Black Hole as you're thinking about this. It's all math, but when you compare the two together, it makes it easier to visualize and connect the dots. This is why I believe the Universe we're existing in is inside this 5D Hyper Torus, in a continuous cycle of 3D matter and energy being recycled through 4D perceived Black Holes and White Holes on an atomic level (which is what spaghettification is, as you access the singularity, or 5D, or the all. I think all black holes lead to the same place, and all white holes lead to the same place. It's a giant loop of infinity. ∞ The singularity, or "all", being the point in the middle of this. At the singularity, or in between point, or 5D. You literally get spaghettified, torn back into atoms, which are also torus fields by the way, back into the "all", or 5D that you originally were, as space dust, and shot out the other end). Or something like that. General Relativity and a bunch of other stuff talks about that and got me thinking. At least, that's to the extent of my own research and understanding. I have to do more. Either way, it sure is interesting. Hopefully my insights inspire some new thoughts and ideas. I think that's why Black Holes fill me with dread. It's like I'm staring into the eye of God. 5D. If I get too close, that's where I'll end up too. But at the same time, I feel as though your "awareness" would remain, and you would just spread back out into 'infinity' as your 3D atoms making up your physical form disconnect back to their singular versions. Or perhaps your awareness stays in the 5D, or the singularity, as your 3D atoms get shot out the other end. I think that's why psychs make it easier to process the unseen. It's connecting you with the 4D and 5D and allowing you to make connections. I just took mushrooms a few days ago, but I really want to study the visual mathematics of a black hole on psychedelics. In accordance to Simulation Theory, this probably means we are the coder of our own hyper realistic VR video game which we call Life, where we implanted our consciousness, and we're using a shape of a Torus as the "stationary treadmill" that our awareness operates on. I think this means consciousness itself is a reflection of a 5D Hyper Toroid, which looks like what a Black Hole looks like. I'm not for sure.. 
But this sense of mathematics is fractalized into everything in this existence, allowing for beautiful morphations of math that form all the unique physical 3D objects in our reality. Since we are a fractal of the 'all', this probably means looking into a black hole is basically like looking into the center of you. This probably means we're in a simulation, in a simulation, in a simulation, because we can only mirror the math that already exists, and if this simulation is a replication of the math in the reality of whoever created it, then that means that it had to model the physics of the simulation that it is in too. So it's an infinite loop. So the creator of our simulation, us, probably doesn't consciously know who created the simulation outside of the simulation we're currently living in. But if we were smart enough to build this simulation, then we probably know. But if we're a fractal of infinite consciousness, then we should probably know in our heart, as it is inherently connected to the all, even if our 4D brains can't perceive infinite simulations like the back of our hand. So i think it is YOUR consciousness all the way down. Every simulated reality to the next. THAT'S the greatest paradox. And that's the thing that will keep you up at night. Who or what the fuck created it at the very very beginning of everything we know? And who or what is outside of that? *brain explodes*

  • @HiCARTIER
    @HiCARTIER 6 days ago

    Dude this is crazy. Recently when I've been reading, or paying attention to specific peoples thought processes, I always visualize it as a Torus Field inside my head subconsciously. But like inside of it, just like the example at 30 seconds. So I googled it and found this video. What in the world.

  • @quicksilver2923
    @quicksilver2923 6 days ago

    Great video

  • @lucaferlisi2486
    @lucaferlisi2486 6 days ago

    This Is Weird. Need more videos about it

  • @prashantsinghrathore
    @prashantsinghrathore 6 days ago

    That room just lost me.

  • @gilman2056
    @gilman2056 6 days ago

    Do you have a channel in Russian? You seem to be Russian, but I have to watch through the Yandex translator

  • @naveen_malla
    @naveen_malla 7 days ago

    Dude, this is the most beautiful ML video I've ever seen. Highly informative, yes, but also beautifully made. Thank you for your work.

  • @jamesguan5225
    @jamesguan5225 7 days ago

    This video is awesome!!

  • @dprophecyguy
    @dprophecyguy 8 days ago

    # Back Propagation Algorithm

    ## Overview
    - Back propagation is the foundation of nearly all machine learning systems
    - It enables artificial networks to learn, but makes them fundamentally different from biological brains

    ## History
    - Concepts traced back to Leibniz in 17th century
    - First modern formulation by Seppo Linnainmaa in 1970
    - Rumelhart, Hinton, and Williams applied it to neural networks in 1986, enabling them to solve problems and develop meaningful representations

    ## Curve Fitting Problem
    - Goal: Find curve that best fits a set of data points
    - Assume the curve is a 5th degree polynomial: $y = k_0 + k_1x + k_2x^2 + ... + k_5x^5$
    - Objective: Find optimal coefficients $k_0$ through $k_5$ that minimize the loss function (squared distance between data points and curve)

    ## Loss Function
    - Maps curve parameters to a single number quantifying the quality of fit
    - Depends only on the coefficients $k_0$ through $k_5$
    - Goal is to find the configuration of coefficients that minimizes the loss

    ## Differentiability
    - Most computations, including curve fitting, are differentiable
    - Allows for efficient computation of optimal parameters using derivatives

    ## Derivatives
    - Derivative $\frac{dy}{dx}$ measures the instantaneous rate of change (slope) of a function $y$ with respect to its input $x$
    - Tells you which direction to nudge input to decrease/increase output
    - For functions of multiple variables, partial derivatives measure change with respect to each input variable independently

    ## Gradient Descent
    - Iterative optimization algorithm to find minimum of differentiable function
    - Computes gradient (vector of partial derivatives) at current point
    - Takes small step in direction opposite to gradient to decrease function value
    - Repeats until convergence to local minimum

    ## Chain Rule
    - Allows computation of derivatives of complex functions by breaking them down into simpler building blocks
    - If $y = f(g(x))$, then $\frac{dy}{dx} = \frac{dy}{dg} \cdot \frac{dg}{dx}$
    - Enables sequential application of chain rule to compute gradients in neural networks and other ML models

    ## Backpropagation Algorithm
    1. Forward pass: Compute loss function value for current parameters
    2. Backward pass: Compute gradient of loss with respect to each parameter using chain rule
    3. Update parameters by taking small step in direction opposite to gradient
    4. Repeat until convergence

    ## Applicability
    - Works for any model architecture decomposable into differentiable operations
    - Includes neural networks, which can approximate any function
    - Enables solving wide range of problems like image classification, text generation, etc.
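
Read as pseudocode, the summary above maps directly onto a few lines of NumPy. A minimal sketch of the curve-fitting loop it describes (the data, learning rate, and step count are made-up placeholders, not taken from the video): fit a 5th-degree polynomial by repeatedly running a forward pass, computing the gradient of the squared-error loss via the chain rule, and stepping against the gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 50)
y = 0.5 - 1.2 * x + 0.8 * x**3 + 0.05 * rng.standard_normal(x.size)  # noisy data points

X = np.vander(x, 6, increasing=True)   # columns x^0, x^1, ..., x^5
k = np.zeros(6)                        # coefficients k0..k5, start at zero
lr = 0.1                               # learning rate (step size)

for step in range(5000):
    y_hat = X @ k                         # forward pass: evaluate the polynomial
    residual = y_hat - y
    loss = np.mean(residual**2)           # squared-distance loss
    grad = 2.0 * X.T @ residual / x.size  # backward pass: dL/dk via the chain rule
    k -= lr * grad                        # small step opposite to the gradient

print("fitted coefficients:", np.round(k, 3))
print("loss after training:", float(loss))
```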

  • @terragame.4068
    @terragame.4068 8 days ago

    English very good

  • @arnoudh6203
    @arnoudh6203 8 days ago

    This is so goddamn cool

  • @ac695
    @ac695 9 days ago

    Amazing video. Underrated channel.

  • @RolandoLopezNieto
    @RolandoLopezNieto 9 days ago

    Great video sir, thanks. Please continue with more videos on AI.

  • @raspberrynode8802
    @raspberrynode8802 9 days ago

    Exceptional explanation. You'll find yourself asking 'did he just make that sound simple...?' playground.tensorflow should link to this video to explain what is happening.

  • @andreaisabelsantoscavero6772
    @andreaisabelsantoscavero6772 10 days ago

    I wonder how this can work for studying history - is it possible, or is there a better technique for learning history?

  • @upsydaysy3042
    @upsydaysy3042 10 days ago

    @DezziexLollygag you were absolutely right, it's the first time I really understand how to set up my Zettelkasten in or out of Obsidian, ESPECIALLY the nature, function and use of the index cards! This video is fire, thanks Artem! EDIT: Subscribed. Your whole channel is amazing...

  • @maxvell77
    @maxvell77 10 days ago

    Thanks

  • @smanticus
    @smanticus 10 days ago

    FINALLY. A video that successfully conveyed to my simple brain what the FFT is doing.

  • @korigamik
    @korigamik 10 days ago

    This is a really good explanation. Can you tell us what you use to create the animations and how you edit the video? Would you share the source code for this video as well? :)

  • @yoverale
    @yoverale 11 days ago

    18:51 Does it imply that a more accurate model of how neurons work would actually be a base-3 (ternary) numeral system? Not 1 or 0, but 0, 1, 2 states, equivalent to under the current threshold, over the current threshold, and over the saturation threshold 🧠

  • @detemegandy
    @detemegandy 11 days ago

    abstraction is often frowned upon?