CodeEmporium
Hyperparameters - EXPLAINED!
Let's talk about hyperparameters and how they are used in neural networks and deep learning!
ABOUT ME
⭕ Subscribe: ua-cam.com/users/CodeEmporium
📚 Medium Blog: medium.com/@dataemporium
💻 Github: github.com/ajhalthor
👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/
RESOURCES
[1] Code for Deep Learning 101 playlist: github.com/ajhalthor/deep-learning-101
PLAYLISTS FROM MY CHANNEL
⭕ Deep Learning 101: ua-cam.com/play/PLTl9hO2Oobd_NwyY_PeSYrYfsvHZnHGPU.html
⭕ Natural Language Processing 101: ua-cam.com/play/PLTl9hO2Oobd_bzXUpzKMKA3liq2kj6LfE.html
⭕ Reinforcement Learning 101: ua-cam.com/play/PLTl9hO2Oobd9kS--NgVz0EPNyEmygV1Ha.html&si=AuThDZJwG19cgTA8
⭕ Transformers from Scratch: ua-cam.com/play/PLTl9hO2Oobd_bzXUpzKMKA3liq2kj6LfE.html
⭕ ChatGPT Playlist: ua-cam.com/play/PLTl9hO2Oobd9coYT6XsTraTBo4pL1j4HJ.html
MATH COURSES (7 day free trial)
📕 Mathematics for Machine Learning: imp.i384100.net/MathML
📕 Calculus: imp.i384100.net/Calculus
📕 Statistics for Data Science: imp.i384100.net/AdvancedStatistics
📕 Bayesian Statistics: imp.i384100.net/BayesianStatistics
📕 Linear Algebra: imp.i384100.net/LinearAlgebra
📕 Probability: imp.i384100.net/Probability
OTHER RELATED COURSES (7 day free trial)
📕 ⭐ Deep Learning Specialization: imp.i384100.net/Deep-Learning
📕 Python for Everybody: imp.i384100.net/python
📕 MLOps Course: imp.i384100.net/MLOps
📕 Natural Language Processing (NLP): imp.i384100.net/NLP
📕 Machine Learning in Production: imp.i384100.net/MLProduction
📕 Data Science Specialization: imp.i384100.net/DataScience
📕 Tensorflow: imp.i384100.net/Tensorflow
Views: 1,136

Videos

How much training data does a neural network need?
2.2K views · 1 month ago
Let's answer the question: "How much training data does a neural network need?" ABOUT ME ⭕ Subscribe: ua-cam.com/users/CodeEmporium 📚 Medium Blog: medium.com/@dataemporium 💻 Github: github.com/ajhalthor 👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/ RESOURCES [1] Code for Deep Learning 101 playlist: github.com/ajhalthor/deep-learning-101 PLAYLISTS FROM MY CHANNEL ⭕ Deep Learning 101: ua...
NLP with Neural Networks | ngram to LLMs
2.1K views · 2 months ago
Let's talk about NLP with neural networks and highlight ngrams to Large Language Models (LLMs) ABOUT ME ⭕ Subscribe: ua-cam.com/users/CodeEmporium 📚 Medium Blog: medium.com/@dataemporium 💻 Github: github.com/ajhalthor 👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/ RESOURCES [1] Code for this video: github.com/ajhalthor/deep-learning-101/blob/main/embeddings.ipynb PLAYLISTS FROM MY CHANN...
Transfer Learning - EXPLAINED!
2.8K views · 2 months ago
Let's talk about a neural network concept called transfer learning. We use this in BERT, GPT and the large language models today. ABOUT ME ⭕ Subscribe: ua-cam.com/users/CodeEmporium 📚 Medium Blog: medium.com/@dataemporium 💻 Github: github.com/ajhalthor 👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/ RESOURCES [1] Code for this video: github.com/huggingface/notebooks/blob/main/examples/qu...
Embeddings - EXPLAINED!
4.2K views · 3 months ago
Let's talk about embeddings in neural networks ABOUT ME ⭕ Subscribe: ua-cam.com/users/CodeEmporium 📚 Medium Blog: medium.com/@dataemporium 💻 Github: github.com/ajhalthor 👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/ RESOURCES [1] Code for this video: github.com/ajhalthor/deep-learning-101/blob/main/embeddings.ipynb PLAYLISTS FROM MY CHANNEL ⭕ Deep Learning 101: ua-cam.com/play/PLTl9hO2...
Batch Normalization in neural networks - EXPLAINED!
2.3K views · 3 months ago
Let's talk batch normalization in neural networks ABOUT ME ⭕ Subscribe: ua-cam.com/users/CodeEmporium 📚 Medium Blog: medium.com/@dataemporium 💻 Github: github.com/ajhalthor 👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/ RESOURCES [1] Code for this video: github.com/ajhalthor/deep-learning-101/tree/main [2] Batch Normalization main paper: arxiv.org/pdf/1502.03167.pdf PLAYLISTS FROM MY CH...
Loss functions in Neural Networks - EXPLAINED!
4.7K views · 3 months ago
Let's talk about Loss Functions in neural networks ABOUT ME ⭕ Subscribe: ua-cam.com/users/CodeEmporium 📚 Medium Blog: medium.com/@dataemporium 💻 Github: github.com/ajhalthor 👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/ RESOURCES [1] Code to build first neural network: github.com/ajhalthor/deep-learning-101/tree/main PLAYLISTS FROM MY CHANNEL ⭕ Reinforcement Learning 101: ua-cam.com/pl...
Optimizers in Neural Networks - EXPLAINED!
2K views · 3 months ago
Let's talk about optimizers in neural networks. ABOUT ME ⭕ Subscribe: ua-cam.com/users/CodeEmporium 📚 Medium Blog: medium.com/@dataemporium 💻 Github: github.com/ajhalthor 👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/ RESOURCES [1] Code to build first neural network: github.com/ajhalthor/deep-learning-101/tree/main [2] More details on Activation functions: ua-cam.com/video/s-V7gKrsels/v...
Activation functions in neural networks
2.4K views · 4 months ago
Greetings fellow learners! This is the next video in a playlist of videos where we are going to talk about the fundamentals of building neural networks! In this video, we talk about Activation functions and how they are used in neural networks. ABOUT ME ⭕ Subscribe: ua-cam.com/users/CodeEmporium 📚 Medium Blog: medium.com/@dataemporium 💻 Github: github.com/ajhalthor 👔 LinkedIn: www.linkedin.com...
Backpropagation in Neural Networks - EXPLAINED!
2.7K views · 4 months ago
Greetings fellow learners! This is the 2nd video in a playlist of videos where we are going to talk about the fundamentals of building neural networks! Here, we cover backpropagation ABOUT ME ⭕ Subscribe: ua-cam.com/users/CodeEmporium 📚 Medium Blog: medium.com/@dataemporium 💻 Github: github.com/ajhalthor 👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/ RESOURCES [1] Code to build first ...
Building your first Neural Network
3.5K views · 4 months ago
This is the first in a playlist of videos where we are going to talk about the fundamentals of building neural networks! ABOUT ME ⭕ Subscribe: ua-cam.com/users/CodeEmporium 📚 Medium Blog: medium.com/@dataemporium 💻 Github: github.com/ajhalthor 👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/ RESOURCES [1] Code to build first neural network: github.com/ajhalthor/deep-learning-101/tree/mai...
Reinforcement Learning through Human Feedback - EXPLAINED! | RLHF
10K views · 5 months ago
Proximal Policy Optimization | ChatGPT uses this
9K views · 5 months ago
Deep Q-Networks Explained!
15K views · 5 months ago
Monte Carlo in Reinforcement Learning
7K views · 5 months ago
Reinforcement Learning: on-policy vs off-policy algorithms
5K views · 6 months ago
Q-learning - Explained!
11K views · 6 months ago
Foundation of Q-learning | Temporal Difference Learning explained!
10K views · 6 months ago
Bellman Equation - Explained!
12K views · 6 months ago
How to solve problems with Reinforcement Learning | Markov Decision Process
8K views · 7 months ago
Multi Armed Bandits - Reinforcement Learning Explained!
6K views · 7 months ago
Elements of Reinforcement Learning
6K views · 7 months ago
ChatGPT: Zero to Hero
3.8K views · 7 months ago
[ 100k Special ] Transformers: Zero to Hero
36K views · 7 months ago
20 papers to master Language modeling?
8K views · 8 months ago
Llama - EXPLAINED!
24K views · 9 months ago
How AI (like ChatGPT) understands word sequences.
2.9K views · 9 months ago
Convolution in NLP
4.3K views · 9 months ago
Word2Vec, GloVe, FastText- EXPLAINED!
16K views · 11 months ago
Word Embeddings - EXPLAINED!
12K views · 11 months ago

COMMENTS

  • @user-pb6yt8qh3w · 5 hours ago

    Thank you so much!!!!!!!!!!!!

  • @yvettewang7221 · 21 hours ago

    This is the best explanation of the likelihood function. Thank you so much for the video.

  • @blairnicolle2218 · 1 day ago

    Tremendous!

  • @blairnicolle2218 · 1 day ago

    Excellent videos! Great graphing for the intuition of L1 regularization, where parameters become exactly zero (9:45), as compared with the behavior of L2 regularization.
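
A minimal sketch of the L1-vs-L2 behavior this comment highlights, assuming scikit-learn and a synthetic dataset (none of this is from the video): Lasso (L1) drives irrelevant coefficients exactly to zero, while Ridge (L2) only shrinks them.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
# Only the first two features actually matter.
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=200)

lasso = Lasso(alpha=0.1).fit(X, y)   # L1 penalty
ridge = Ridge(alpha=0.1).fit(X, y)   # L2 penalty

print("L1 (Lasso):", np.round(lasso.coef_, 3))  # irrelevant coefs are exactly 0.0
print("L2 (Ridge):", np.round(ridge.coef_, 3))  # irrelevant coefs are small but nonzero
```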

  • @saileshshiwakoti2160 · 2 days ago

    I am starting from the 1st video, will skip the topics I remember, and finish all of his course during the holidays.

  • @saileshshiwakoti2160 · 2 days ago

    nice video

  • @kennethroark917 · 3 days ago

    corny sounds shit

  • @user-qu4is5uk3p · 4 days ago

    thank you so much that was so helpful

  • @abdulrahmankerim2377 · 5 days ago

    Excellent! Thanks :)

  • @harshsonar9346 · 5 days ago

    ✨Quiiizz Timmmeeee✨

  • @sloth_in_socks · 5 days ago

    Great video! It's funny you mentioned unsupervised learning at the start but didn't mention LLMs

  • @vijaysen9739 · 6 days ago

    Nice work! How much data (parallel corpora) is sufficient, or at least required, for machine translation?

  • @user-md9wl6bu1e · 6 days ago

    great explanation, could you make a video about Direct Preference Optimization (DPO)?

  • @gayatri8728 · 7 days ago

    Amazing explanation 🎉🎉🎉🎉

  • @ian-haggerty · 8 days ago

    I swear the quiz time backing track is from Hedgewars 😄

  • @Nediler · 8 days ago

    I did all the Calcs in engineering school - partial differential equations (PDEs) being the most brutal - but it's crazy how people thought of this shit. Blows my mind....

  • @ian-haggerty · 8 days ago

    I love the downplayed "Quiz time" :D

    • @ian-haggerty · 8 days ago

      Currently getting my fingers very dirty with all this stuff! It's been a tough road firing this up on a GPU with WSL2 and AtlasOS.

  • @KulkarniPrashant · 8 days ago

    Thanks!

  • @KulkarniPrashant · 8 days ago

    Amazing video! Thank you.

  • @anshumansinha5874 · 9 days ago

    I think your code is not correct: LayerNorm should not be taken over the batch. Think about what it means to normalize over the batch dimension. In layer norm, each element of the batch is independent, and each should be normalized across the layer's features (essentially everything is an MLP).
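
For what it's worth, a minimal sketch (assuming PyTorch; this is not the video's code) of the distinction the comment draws: BatchNorm normalizes each feature across the batch, while LayerNorm normalizes each sample across its features.

```python
import torch
import torch.nn as nn

x = torch.randn(4, 8)           # (batch=4, features=8)

batch_norm = nn.BatchNorm1d(8)  # statistics taken over the batch dimension
layer_norm = nn.LayerNorm(8)    # statistics taken over the feature dimension

# LayerNorm: each sample is normalized independently of the rest of the batch.
y = layer_norm(x)
print(y.mean(dim=1))            # ~0 for each of the 4 samples

# BatchNorm: each feature is normalized across the batch.
z = batch_norm(x)
print(z.mean(dim=0))            # ~0 for each of the 8 features
```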

  • @khabibownsmysoul7836 · 9 days ago

    I may be wrong, I am not an expert, but isn't the Bellman equation supposed to add the reward of S1, not S2?
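
For reference, the standard Bellman expectation equation (in Sutton & Barto's convention, where the reward $r$ is the one received on the transition out of state $s$; whether that corresponds to the video's $S_1$ or $S_2$ depends on its indexing):

$$v_\pi(s) = \sum_{a} \pi(a \mid s) \sum_{s',\,r} p(s', r \mid s, a)\,\bigl[\, r + \gamma\, v_\pi(s') \,\bigr]$$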

  • @prathameshdinkar2966 · 11 days ago

    Nicely explained! I got a better understanding of this. Could you also include some examples that give a feel for the calculations?

  • @paultvshow · 11 days ago

    It should have been plain vanilla RNNs that originated in the 1990s, not LSTMs, which were developed on top of RNNs later.

  • @ayeshariaz3382 · 11 days ago

    Where do I get your slides?

  • @kanehooper00 · 11 days ago

    Excellent job. There is way too much "mysticism" around neural networks. This shows clearly that, for a classification problem, all the neural net is doing is creating a boundary function. Of course it gets complicated in multiple dimensions, but your explanations and use of graphs are excellent.

  • @s8x. · 11 days ago

    What makes these different from tokenizers?

  • @Trubripes · 11 days ago

    Dense but informative.

  • @sidbhattnoida · 11 days ago

    I think the start token, padding token, and end token should have names other than just the empty string in the vocabulary; otherwise, when initializing a language_to_index dictionary from the vocabulary, the last index whose value is the empty string overwrites all the previous empty-string entries.
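
A minimal sketch of the collision this comment describes (hypothetical vocabulary; the name language_to_index follows the comment): in a Python dict comprehension, later keys overwrite earlier ones, so special tokens that all equal the empty string collapse into a single entry.

```python
# All the special tokens are the empty string, so they collide on one key.
vocab = ['', 'a', 'b', '']  # e.g. start token, 'a', 'b', end token
language_to_index = {ch: i for i, ch in enumerate(vocab)}
print(language_to_index)    # {'': 3, 'a': 1, 'b': 2} -- index 0 was overwritten

# Distinct names for the special tokens avoid the collision.
vocab = ['<START>', 'a', 'b', '<END>', '<PAD>']
language_to_index = {ch: i for i, ch in enumerate(vocab)}
print(language_to_index)    # every token keeps its own index
```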

  • @hoomanrs3804 · 12 days ago

    👏👏👏❤️

  • @igorg4129 · 12 days ago

    oh, this is much better, update your weights:)

  • @vtrandal · 12 days ago

    Thanks! Excellent videos!

  • @igorg4129 · 12 days ago

    I like your channel less because of an obvious overuse of terminology in place of explanation, which looks to me like an intentional, supercilious raising of the entry threshold.

    • @CodeEmporium · 12 days ago

      Yea. This is a really old video. Hope the more recent videos in the deep learning 101 playlist are better for understanding. Thanks for checking the channel out!

    • @igorg4129 · 12 days ago

      @@CodeEmporium Good luck. For instance, at 3:25 one sees such an "explanation" and rushes to Josh Starmer, whose greatness is in saying the same things using words, rectangles, and arrows instead of those .... BTW, I bet the reason math is thought to be hard is the TIMING of the use of ugly notations. Such notations should be used only after you have explained or understood a topic. The conspiracy theory would claim it is done intentionally.

  • @alfinal5787 · 12 days ago

    A bigger issue is "Arabic numerals" actually being Hindu in origin, along with the concept of zero.

  • @sharjeel_mazhar · 12 days ago

    Can you please make a video that showcases how we can generate custom word embeddings on a custom dataset from scratch, without using anything pre-built? Say, the IMDb dataset? And then later load them to train a classification model?

  • @mollysullivan6414 · 12 days ago

    greek. yogurt.

  • @SarvaniChinthapalli · 13 days ago

    mind BLOWING..lucky enough to find your lectures

  • @SarvaniChinthapalli · 14 days ago

    Great lecture..Thank you so much for this video.. Great resource..

  • @sameepshah3835 · 15 days ago

    Your videos deserve more recognition. Thank you for helping out, sir. Looking forward to continuing to learn from you.

  • @pablocalvache-nr5wr · 15 days ago

    Amazing work! congratz!

  • @codywan3816 · 16 days ago

    Hochreiter, S., & Schmidhuber, Jürgen (1997). Long short-term memory. Neural Computation, 9(8), 1735-1780.

  • @shivam_dxrk · 16 days ago

    The Best Creator for DS I've found, Thanks a lot!

  • @kesun852 · 16 days ago

    Good course, but I don't get how the data is collected for Zi, because Zi = 1 when the sample (is in the treatment group AND converts) OR (is in the control group AND does not convert). Shouldn't "persuadable" be the AND between those relationships?
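
If the video uses the standard class-variable transformation for uplift modeling (an assumption on my part; cf. Jaskowski & Jaroszewicz, 2012), the label is defined exactly by that OR. A sketch:

```python
# Class-variable transformation (assumed): z = 1 iff
# (treatment AND converted) OR (control AND not converted),
# i.e. z = 1 exactly when treated == converted.
def transformed_label(treated: bool, converted: bool) -> int:
    return int(treated == converted)

print(transformed_label(treated=True,  converted=True))   # 1: treated and converted
print(transformed_label(treated=False, converted=False))  # 1: control and not converted
print(transformed_label(treated=True,  converted=False))  # 0
# z = 1 is the union of two disjoint cases; "persuadable" is only one of
# the behaviors consistent with z = 1, not an AND of both cases.
```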

  • @thanhtuantran7926 · 17 days ago

    I literally understand all of it, thank you so much.

  • @mansoorsoomro8585 · 17 days ago

    Thank you for providing these papers

  • @devinbrown9925 · 17 days ago

    For Quiz Time 1 at 3:47, shouldn't the answer be B: 0.5 sq. units? I think the entire premise is that you know the area of one region, you know the ratio of balls dropped in the two regions, and the ratio of balls dropped equals the ratio of areas. Therefore you can use this information to determine the unknown area.
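
A minimal sketch of that reasoning (illustrative region only, not the quiz's actual setup): the fraction of uniformly dropped points that land in a region estimates the ratio of its area to the known total area.

```python
import random

random.seed(0)
total_area = 1.0                # known: the unit square
n = 100_000
hits = 0
for _ in range(n):
    x, y = random.random(), random.random()
    if x + y < 1.0:             # "unknown" region: triangle below x + y = 1
        hits += 1

estimated_area = total_area * hits / n
print(f"estimated area ≈ {estimated_area:.3f} (true value: 0.5)")
```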

  • @SunnySingh-tp6nt · 18 days ago

    can I get these slides?

  • @tariqislam9388 · 18 days ago

    Thank you for this fantastic tutorial.

  • @ishapandey40 · 18 days ago

    Are you an Indian?

  • @sameertupe6094 · 19 days ago

    Very well explained by you, sir. It helped a lot.

  • @WeeHooTM · 19 days ago

    8:09 I stopped watching when he thinks 1.5 is greater than 2.1 lmao