- 332
- 6 989 445
CodeEmporium
United States
Joined May 27, 2016
Everything new and interesting in Machine Learning, Deep Learning, Data Science, and Artificial Intelligence. Hoping to build a community of data science geeks and talk about future tech! Project demos and more! Subscribe for awesome videos :)
Hyperparameters - EXPLAINED!
Let's talk about hyperparameters and how they are used in neural networks and deep learning!
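As a rough illustration of the topic (not code from the video itself), hyperparameters are settings chosen before training rather than learned from data, and a grid search simply enumerates their combinations. The specific names and values below are assumptions for the sketch:

```python
import itertools

# Hypothetical hyperparameter grid: values fixed before training, not learned.
grid = {
    "learning_rate": [0.1, 0.01],
    "batch_size": [32, 64],
}

# Enumerate every combination, as a simple grid search would.
combos = [dict(zip(grid, values)) for values in itertools.product(*grid.values())]
for c in combos:
    print(c)
```

Each combination would then be used to train and evaluate a model, keeping the best-scoring one.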
ABOUT ME
⭕ Subscribe: ua-cam.com/users/CodeEmporium
📚 Medium Blog: medium.com/@dataemporium
💻 Github: github.com/ajhalthor
👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/
RESOURCES
[1] Code for Deep Learning 101 playlist: github.com/ajhalthor/deep-learning-101
PLAYLISTS FROM MY CHANNEL
⭕ Deep Learning 101: ua-cam.com/play/PLTl9hO2Oobd_NwyY_PeSYrYfsvHZnHGPU.html
⭕ Natural Language Processing 101: ua-cam.com/play/PLTl9hO2Oobd_bzXUpzKMKA3liq2kj6LfE.html
⭕ Reinforcement Learning 101: ua-cam.com/play/PLTl9hO2Oobd9kS--NgVz0EPNyEmygV1Ha.html&si=AuThDZJwG19cgTA8
⭕ Transformers from Scratch: ua-cam.com/play/PLTl9hO2Oobd_bzXUpzKMKA3liq2kj6LfE.html
⭕ ChatGPT Playlist: ua-cam.com/play/PLTl9hO2Oobd9coYT6XsTraTBo4pL1j4HJ.html
MATH COURSES (7 day free trial)
📕 Mathematics for Machine Learning: imp.i384100.net/MathML
📕 Calculus: imp.i384100.net/Calculus
📕 Statistics for Data Science: imp.i384100.net/AdvancedStatistics
📕 Bayesian Statistics: imp.i384100.net/BayesianStatistics
📕 Linear Algebra: imp.i384100.net/LinearAlgebra
📕 Probability: imp.i384100.net/Probability
OTHER RELATED COURSES (7 day free trial)
📕 ⭐ Deep Learning Specialization: imp.i384100.net/Deep-Learning
📕 Python for Everybody: imp.i384100.net/python
📕 MLOps Course: imp.i384100.net/MLOps
📕 Natural Language Processing (NLP): imp.i384100.net/NLP
📕 Machine Learning in Production: imp.i384100.net/MLProduction
📕 Data Science Specialization: imp.i384100.net/DataScience
📕 Tensorflow: imp.i384100.net/Tensorflow
Views: 1,136
Videos
How much training data does a neural network need?
2.2K views · 1 month ago
Let's answer the question: "how much training data does a neural network need"? ABOUT ME ⭕ Subscribe: ua-cam.com/users/CodeEmporium 📚 Medium Blog: medium.com/@dataemporium 💻 Github: github.com/ajhalthor 👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/ RESOURCES [1] Code for Deep Learning 101 playlist: github.com/ajhalthor/deep-learning-101 PLAYLISTS FROM MY CHANNEL ⭕ Deep Learning 101: ua...
NLP with Neural Networks | ngram to LLMs
2.1K views · 2 months ago
Let's talk about NLP with neural networks and highlight ngrams to Large Language Models (LLMs) ABOUT ME ⭕ Subscribe: ua-cam.com/users/CodeEmporium 📚 Medium Blog: medium.com/@dataemporium 💻 Github: github.com/ajhalthor 👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/ RESOURCES [1] Code for this video: github.com/ajhalthor/deep-learning-101/blob/main/embeddings.ipynb PLAYLISTS FROM MY CHANN...
Transfer Learning - EXPLAINED!
2.8K views · 2 months ago
Let's talk about a neural network concept called transfer learning. We use this in BERT, GPT and the large language models today. ABOUT ME ⭕ Subscribe: ua-cam.com/users/CodeEmporium 📚 Medium Blog: medium.com/@dataemporium 💻 Github: github.com/ajhalthor 👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/ RESOURCES [1] Code for this video: github.com/huggingface/notebooks/blob/main/examples/qu...
Embeddings - EXPLAINED!
4.2K views · 3 months ago
Let's talk about embeddings in neural networks ABOUT ME ⭕ Subscribe: ua-cam.com/users/CodeEmporium 📚 Medium Blog: medium.com/@dataemporium 💻 Github: github.com/ajhalthor 👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/ RESOURCES [1] Code for this video: github.com/ajhalthor/deep-learning-101/blob/main/embeddings.ipynb PLAYLISTS FROM MY CHANNEL ⭕ Deep Learning 101: ua-cam.com/play/PLTl9hO2...
Batch Normalization in neural networks - EXPLAINED!
2.3K views · 3 months ago
Let's talk batch normalization in neural networks ABOUT ME ⭕ Subscribe: ua-cam.com/users/CodeEmporium 📚 Medium Blog: medium.com/@dataemporium 💻 Github: github.com/ajhalthor 👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/ RESOURCES [1] Code for this video: github.com/ajhalthor/deep-learning-101/tree/main [2] Batch Normalization main paper: arxiv.org/pdf/1502.03167.pdf PLAYLISTS FROM MY CH...
Loss functions in Neural Networks - EXPLAINED!
4.7K views · 3 months ago
Let's talk about Loss Functions in neural networks ABOUT ME ⭕ Subscribe: ua-cam.com/users/CodeEmporium 📚 Medium Blog: medium.com/@dataemporium 💻 Github: github.com/ajhalthor 👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/ RESOURCES [1] Code to build first neural network: github.com/ajhalthor/deep-learning-101/tree/main PLAYLISTS FROM MY CHANNEL ⭕ Reinforcement Learning 101: ua-cam.com/pl...
Optimizers in Neural Networks - EXPLAINED!
2K views · 3 months ago
Let's talk about optimizers in neural networks. ABOUT ME ⭕ Subscribe: ua-cam.com/users/CodeEmporium 📚 Medium Blog: medium.com/@dataemporium 💻 Github: github.com/ajhalthor 👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/ RESOURCES [1] Code to build first neural network: github.com/ajhalthor/deep-learning-101/tree/main [2] More details on Activation functions: ua-cam.com/video/s-V7gKrsels/v...
Activation functions in neural networks
2.4K views · 4 months ago
Greetings fellow learners! This is the next video in a playlist of videos where we are going to talk about the fundamentals of building neural networks! In this video, we talk about activation functions and how they are used in neural networks. ABOUT ME ⭕ Subscribe: ua-cam.com/users/CodeEmporium 📚 Medium Blog: medium.com/@dataemporium 💻 Github: github.com/ajhalthor 👔 LinkedIn: www.linkedin.com...
Backpropagation in Neural Networks - EXPLAINED!
2.7K views · 4 months ago
Greetings fellow learners! This is the 2nd video in a playlist of videos where we are going to talk about the fundamentals of building neural networks! Here, we cover backpropagation ABOUT ME ⭕ Subscribe: ua-cam.com/users/CodeEmporium 📚 Medium Blog: medium.com/@dataemporium 💻 Github: github.com/ajhalthor 👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/ RESOURCES [1] Code to build first ...
Building your first Neural Network
3.5K views · 4 months ago
This is the first in a playlist of videos where we are going to talk about the fundamentals of building neural networks! ABOUT ME ⭕ Subscribe: ua-cam.com/users/CodeEmporium 📚 Medium Blog: medium.com/@dataemporium 💻 Github: github.com/ajhalthor 👔 LinkedIn: www.linkedin.com/in/ajay-halthor-477974bb/ RESOURCES [1] Code to build first neural network: github.com/ajhalthor/deep-learning-101/tree/mai...
Reinforcement Learning through Human Feedback - EXPLAINED! | RLHF
10K views · 5 months ago
Proximal Policy Optimization | ChatGPT uses this
9K views · 5 months ago
Monte Carlo in Reinforcement Learning
7K views · 5 months ago
Reinforcement Learning: on-policy vs off-policy algorithms
5K views · 6 months ago
Foundation of Q-learning | Temporal Difference Learning explained!
10K views · 6 months ago
How to solve problems with Reinforcement Learning | Markov Decision Process
8K views · 7 months ago
Multi Armed Bandits - Reinforcement Learning Explained!
6K views · 7 months ago
[ 100k Special ] Transformers: Zero to Hero
36K views · 7 months ago
20 papers to master Language modeling?
8K views · 8 months ago
How AI (like ChatGPT) understands word sequences.
2.9K views · 9 months ago
Word2Vec, GloVe, FastText- EXPLAINED!
16K views · 11 months ago
Thank you so much!!!!!!!!!!!!
This is the best explanation of likelihood function. thank you so much for the video.
Tremendous!
Excellent videos! Great graphing for intuition of L1 regularization where parameters become exactly zero (9:45) as compared with behavior of L2 regularization.
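The behavior praised in the comment above can be sketched numerically: under L1 regularization (via its proximal operator, soft-thresholding), small weights are driven exactly to zero, while L2 regularization only shrinks them toward zero. A minimal sketch, not the video's actual code:

```python
def l1_prox(w, lam):
    # Soft-thresholding: the proximal operator of the L1 penalty.
    # Any weight with |w| <= lam is set exactly to zero.
    if w > lam:
        return w - lam
    if w < -lam:
        return w + lam
    return 0.0

def l2_shrink(w, lam):
    # L2 (ridge) shrinkage only rescales the weight; it never
    # produces an exact zero from a nonzero input.
    return w / (1.0 + lam)

w = 0.05
print(l1_prox(w, 0.1))    # exactly 0.0
print(l2_shrink(w, 0.1))  # smaller, but still nonzero
```

This is why L1-regularized models end up sparse while L2-regularized models merely have small weights.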
I am starting from the 1st video, will skip topics I remember, and finish the whole course during the holidays.
nice video
corny sounds shit
thank you so much that was so helpful
Excellent! Thanks :)
✨Quiiizz Timmmeeee✨
Great video! It's funny you mentioned unsupervised learning at the start but didn't mention LLMs
Nice work! How much data (parallel corpora) is sufficient, or at least required, for machine translation?
great explanation, could you make a video about Direct Preference Optimization (DPO)?
yah, I appreciate it.
Amazing explanation 🎉🎉🎉🎉
I swear the quiz time backing track is from Hedgewars 😄
I did all the Calcs in engineering school - partial differential equations (PDEs) being the most brutal - but it's crazy how people thought of this shit. Blows my mind....
I love the downplayed "Quiz time" :D
Currently getting my fingers very dirty with all this stuff! It's been a tough road firing this up on a GPU with WSL2 and AtlasOS.
Thanks!
Amazing video! Thank you.
I think your code is not correct: LayerNorm does not normalize over the batch. Think about what it means to take the normalization over the batch. With layer norm, each entry of the batch is treated independently, and each should be normalized across the layer of the MLP (essentially everything is an MLP).
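The distinction raised in the comment above can be checked directly: LayerNorm-style normalization runs over each sample's feature dimension, while BatchNorm-style normalization runs over the batch dimension for each feature. A small NumPy sketch (not the video's code, and omitting the learnable scale/shift parameters real layers have):

```python
import numpy as np

x = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])  # shape (batch=2, features=3)

# BatchNorm-style: normalize each feature column across the batch (axis=0),
# so statistics mix information between batch entries.
bn = (x - x.mean(axis=0)) / x.std(axis=0)

# LayerNorm-style: normalize each sample row across its features (axis=1),
# so every batch entry is normalized independently of the others.
ln = (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)

print(bn)
print(ln)
```

After LayerNorm each row has zero mean; after BatchNorm each column does, which is exactly the difference the comment points at.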
I may be wrong, I am not an expert, but isn't the Bellman equation supposed to add the reward of S1, not S2?
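For reference, one common form of the Bellman expectation equation; indexing conventions for the reward term vary between texts, which often causes exactly this confusion about which state's reward is added:

```latex
V(s_t) = \mathbb{E}\left[\, r_{t+1} + \gamma \, V(s_{t+1}) \,\right]
```

Here $r_{t+1}$ is the reward received on the transition out of $s_t$, and $\gamma$ is the discount factor; some books write the same quantity as $r_t$.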
Nicely explained! I got better understanding of this, could you also include some examples which give some feel about the calculations...
It should have been plain vanilla RNNs that originated in the 1990s, not LSTM, which was developed on top of RNNs later.
Where can I get your slides?
Excellent job. There is way too much "mysticism" around neural networks. This shows clearly that for a classification problem, all the neural net is doing is creating a boundary function. Of course it gets complicated in multiple dimensions, but your explanations and use of graphs are excellent.
what makes these different than tokenizers
Dense but informative.
I think the start token, padding token and end token should have some name other than just the empty string in the vocabulary; otherwise, while initialising a language_to_index dictionary from the vocabulary, the last index whose value is the empty string overwrites all the previous entries for the empty string.
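The collision described in the comment above is easy to reproduce. A hypothetical minimal sketch (the variable and token names are illustrative, not necessarily the repo's actual ones):

```python
# If several special tokens are all the empty string, building a
# token -> index dict silently keeps only the LAST index for "".
vocabulary = ["", "a", "b", ""]  # "" used for both start and padding tokens
language_to_index = {token: i for i, token in enumerate(vocabulary)}
print(language_to_index)  # "" maps to 3; the entry for index 0 is lost

# Distinct sentinel strings avoid the collision entirely.
vocabulary_fixed = ["<START>", "a", "b", "<PAD>"]
language_to_index_fixed = {token: i for i, token in enumerate(vocabulary_fixed)}
print(language_to_index_fixed)  # all four tokens keep their own index
```

Since dict keys must be unique, any repeated token string can only keep one index, which is why distinct sentinels like `<START>`, `<PAD>` and `<END>` are the usual convention.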
👏👏👏❤️
oh, this is much better, update your weights:)
Thanks! Excellent videos!
I like your channel less since there is an obvious overuse of terminology instead of explanation, which looks to me like an intentional, supercilious raising of the entrance threshold.
Yea. This is a really old video. Hope the more recent videos in the deep learning 101 playlist are better for understanding. Thanks for checking the channel out!
@@CodeEmporium Good luck. For instance, at 3:25 one sees such an "explanation" and rushes to Josh Starmer, whose greatness is saying the same things using words, rectangles and arrows instead of those .... BTW, I bet the reason math is thought to be hard is the TIMING of the ugly notation: such notation can be used well after you have explained or understood a topic. The conspiracy theory would also claim it is done intentionally.
A bigger issue is “Arabic Numbers” being actually Hindu. With the concept of zero.
Can you please make a video that showcases how we can generate custom word embedding on a custom dataset from scratch? Without using anything pre-built? Say IMDb dataset? and then later load them to train a classification model?
greek. yogurt.
mind BLOWING..lucky enough to find your lectures
Great lecture..Thank you so much for this video.. Great resource..
Your videos deserve more recognition. Thank you for helping out, sir. Looking forward to keep learning from you.
Amazing work! congratz!
Hochreiter, S., & Schmidhuber, Jürgen. (1997). Long short-term memory. Neural Computation, 9(8), 1735-1780.
The Best Creator for DS I've found, Thanks a lot!
Good course, but I don't get how the data is collected for Zi, because Zi = 1 when the sample (is in the treatment group and converts) OR (is in the control group and does not convert). Persuadable should be the AND of those conditions, shouldn't it?
i literally understand all of it, thank you so much
Thank you for providing these papers
For Quiz Time 1 at 3:47, shouldn't the answer be B: 0.5 sq units? I think the entire premise is that you know the area of one region, the ratio of balls dropped in the two regions equals the ratio of their areas, and so you can use that ratio to determine the unknown area.
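The reasoning in the comment above is the standard Monte Carlo area argument: if points land uniformly over a region of known area, the fraction hitting a sub-region estimates that sub-region's area. A quick sketch, assuming a unit square and a triangular target of true area 0.5 (not necessarily the video's exact setup):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

known_area = 1.0          # the enclosing unit square
n = 100_000
inside = 0
for _ in range(n):
    x, y = random.random(), random.random()
    if x + y < 1.0:       # target region: triangle below the line x + y = 1
        inside += 1

# Area estimate = known area * fraction of points that hit the target.
estimate = known_area * inside / n
print(estimate)
```

With 100,000 samples the estimate lands close to the true value 0.5, and the error shrinks roughly as 1/sqrt(n).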
can I get these slides?
Thank you for this fantastic tutorial.
Are you an Indian?
Very well explained by you, sir. It helped a lot.
8:09 I stopped watching when he thinks 1.5 is greater than 2.1 lmao