When there’s a new technology, there’s always a slew of people who want to educate you about it. Some want to teach you to use their tools, some want you to pay for training, and others will use free training to entice you to buy further training. Since AI is the new hot buzzword, there are plenty of free classes from reputable sources. The nice thing about a free class is that if you find it isn’t doing it for you, there’s no penalty to just quit.
We noticed NVIDIA, one of the companies that has profited most from the AI boom, has some courses (not all free, though). Generative AI Explained and Augment your LLM Using Retrieval Augmented Generation caught our eye. There's also Building a Brain in 10 Minutes and Introduction to Physics-Informed Machine Learning with Modulus. These are all quite short, though.
Surprisingly, Google hasn't been as successful with AI, but it does have some longer and possibly more technical topics in its Google Cloud Skills Boost program. There are actually several "paths" with AI content, such as "Generative AI for Developers." If you prefer Amazon, it has a training center with many free courses, including AI topics like Foundations of Prompt Engineering. Of course, you can expect these offerings to center on what a great idea Google's or Amazon's systems are. Can't blame them for that.
They are all, of course, playing catch-up to OpenAI. If you'd rather see what classes it offers, you can check out its partner DeepLearning.ai. Many other vendors offer training there as well.
If you want something more rigorous, edX has a plethora of AI classes ranging from Harvard’s CS50 introduction class that uses Python (see the almost 12-hour video below) to offerings from IBM, Google, and others. These are typically free, but you have to pay if you want grading and a certificate or college credit. Microsoft also offers a comprehensive 12-week study program.
Naturally, there are more. The good news is you have choices. The bad news is that it is probably easy to make the wrong choice. Do you have any you’ve taken that you’d recommend or not recommend? Leave us a comment!
We are always amazed at how much you can learn online if you are structured and disciplined about it. There is no shortage of materials from very reputable schools available.
I tried a few courses, but I always abandoned them at the point where they finish teaching linear regression and other basic stuff and tell you to import one of the example image data sets.
That's just disillusioning to me. Maybe I'm just dumb, but it makes no sense to me how everything works. What would make a lot more sense is giving me a very, very simple time series data set and doing something with it, instead of images.
Anyway, I can’t really comment because I don’t know jack shit about AI.
It might be that they're trying to keep it simple and assume you can build on a single-image example yourself by iterating through your own set of images or video at a later stage. But I agree, it does not inspire confidence (excuse the pun) in the system if you don't see a time series example working at an early stage, and it makes you want to quit.
A time series is just one example; what I mean is a small, one-dimensional data set instead of images, which are large matrices.
I'm not sure if I remember correctly from my college years, when we were playing with neural networks several years before any big public LLM, but a very basic approach is to treat the 2D data of a picture as 1D. You just take the first pixel line, then append the second, then the third, etc. The neural network doesn't care. You can do the same thing even with 3D data.
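In plain Python, that row-by-row trick is just concatenating the pixel lines into one vector (a toy 2×3 "image" with made-up values, not from any course):

```python
# A 2x3 "image": two rows of three pixel intensities (made-up values).
image = [[0, 10, 20],
         [30, 40, 50]]

# Take the first pixel line, then the second, etc. -- one 1-D vector.
flat = [pixel for row in image for pixel in row]

print(flat)  # [0, 10, 20, 30, 40, 50]
```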
Note: in this context, AI = neural network. I always hated how everybody calls it AI, when it is just a glorified XXL neural network with a huge amount of training data.
@shinsukke I'm completing the DLS series at DeepLearning.ai now, and I have to say Prof. Ng is an excellent teacher. I can't speak to where everyone is on their journey, but you do need at least *some* experience with the requisite maths, as well as Python, since those aren't taught. Having experience with 'traditional' ML (yes, regression, but also say kNN, SVM, etc.) really helps you grasp the concepts better.
They even have an MLS course that covers some of that machine learning, but I haven't taken it, as I did my ML elsewhere. Crucially, though, they don't just 'assume' you know a good part of linear algebra, or have experience chaining derivatives when doing backprop: he explains it, then just gives you the formula.
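As a toy illustration of that derivative chaining (my own sketch, not the course's code): a single weight feeding a sigmoid, with the chain-rule result checked against a numerical derivative.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Forward pass: y = sigmoid(w * x)
w, x = 0.5, 2.0
z = w * x
y = sigmoid(z)

# Chain rule (what backprop automates): dy/dw = sigmoid'(z) * dz/dw
dy_dw = y * (1.0 - y) * x

# Sanity-check against a centered numerical derivative.
eps = 1e-6
numeric = (sigmoid((w + eps) * x) - sigmoid((w - eps) * x)) / (2 * eps)
print(abs(dy_dw - numeric) < 1e-6)  # True
```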
None of which is to say formal knowledge isn't useful, both for making sure you *really* understand in the end what you are learning and for developing your own novel models, but there it is not 'entirely necessary'.
Though yes, I'd agree about other courses: they either use proprietary libraries (much respect, but Fast.AI comes to mind), lean entirely on frameworks such as TensorFlow or PyTorch, or conversely make the curriculum so simple that it feels like you are not learning anything.
P.S. I haven't gone through it yet, but if your specific interest is in LLMs, one Al did not mention that I hear is really good is Karpathy's (formerly Stanford and, until very recently, OpenAI) 'Zero to Hero' series. It will show you, from scratch, how to build your own GPT-2 level model: https://karpathy.ai/zero-to-hero.html
… TensorFlow has a time-series (weather) tutorial: https://www.tensorflow.org/tutorials/structured_data/time_series
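Tutorials like that one frame forecasting as supervised learning by slicing the series into windows. A minimal sketch of the idea in plain Python (toy numbers of my own, no TensorFlow):

```python
# Toy 1-D series standing in for e.g. hourly temperature readings.
series = [3, 1, 4, 1, 5, 9, 2, 6]
window = 3

# Each example pairs the previous `window` values with the next value.
pairs = [(series[i:i + window], series[i + window])
         for i in range(len(series) - window)]

print(pairs[0])  # ([3, 1, 4], 1)
```

A model then learns to map each window to its next value, which is exactly the one-dimensional setup the earlier commenter was asking for.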
Personally for time-series I’ve also been interested in checking out Meta’s model ‘Prophet’:
https://facebook.github.io/prophet/
But in the end this field is so huge and there are so many particulars.
I still like to call it ML, not ‘AI’, and there is so much more to it than just transformers and diffusion…
Thanks Nicholas, I have no idea how I missed this! It's going to be a few nights after work! Many thanks.
For those who care less about cookbook steps and are more mathematically inclined, 3Blue1Brown has just released more videos in his Deep Learning series. They're specifically about the math of the transformers used in things like ChatGPT.
Part 5: https://www.youtube.com/watch?v=wjZofJX0v4M
Part 6: https://www.youtube.com/watch?v=eMlx5fFNoYc
heh i am not particularly interested in AI but i developed a vague curiosity about this whole ‘opencl’ thing. i figured that since integrated gpu/apu sort of things are so popular, probably my PCs had amazing opencl performance i wasn’t even aware of. who knows what possibilities i might imagine, once i know the tool exists?
so i found a little reference material and made what i thought was a decent 'hello world', and i ran it on my celeron N4000 laptop and it was much slower than the regular CPU. not sure if it wasn't being accelerated, or if my test case just measured overhead. so, casting about for another thing to try, i ran the same example on my AMD ryzen 3 2200G. and it failed (unkillable zombie) and hard-locked the 'opencl' part of my computer! after running my test, clinfo and radeontop also became unkillable zombies instead of working.
i find i don’t really understand how any of the kernel interfaces to this stuff works. sometimes i have a hard time telling if an opengl thing is running accelerated or just happens to be fast enough on modern CPUs. it is all a mystery. but apparently the kernel interface is a PoS and full of bugs and probably a huge security vulnerability surface as well. so that’s reassuring.
a little off topic but kind of a bummer about what you find when you start poking around. i guess that’s one of the reasons people use so many libraries-on-top-of-libraries for this sort of stuff.
I'm promoting my first post on Instagram. I read through some of the comments here about the AI logistics, and it seems like what I'd expect with a new thing that's not well understood by outsiders. I think better-understood architectures, and maybe graphical model user interfaces with drop-downs of some sort for portraying how the logic works in the architectural layouts of the CPUs or logic units, could be better; they are well understood at this point, for myself, after doing a single layout object in Rhino 3D with Grasshopper. I used an advanced parameter and put in the parameters that I had after laying it out in the computer, and it still didn't recognize it, probably because it's just going to a different strategy area, or a different graphic user interface from the particular area where the logic is called from. I wasn't sure; I thought maybe if I designed something to a certain stage and then used the logic that's in the computer, it would organically apprehend that logic and set it into something else. It didn't work like that.
Here is a summarized version of a very long response, Al. It's really timely, and sort of cool that your initials match the subject matter. This is my favorite website without question, second only perhaps to GitHub, but in combination that's how they're best used. And that comes from a lifelong journey in computer hardware and computing itself, starting with BASIC and my own BBC as a teenager.
Being a little more handy than my purely computer-driven roommates, I ran a phone line through the walls of the dorm we were living in, tapped into the telephone switching device of our dorm, and gave him access from his Apple II to the various BBSs that were common in his home of New York City.
Of course I'm not going to speak about the 800 extenders or the black and blue boxes that were being used at that particular moment in time. But to connect from one state to the other, staying online for that amount of time, and with the database that we had available to us, well, there was only one way to not incur the wrath of the university. Being minors, and kind of out of the reach of the technology of the day, we avoided the potential negative backlash until they found my wire. Actually, they never found my wire; they found him using his computer, somehow connected to the phone system late at night, and bam.
I didn't get in trouble, owing to the purely educational purpose and the fact that what we were doing was so cutting-edge compared to our similarly aged schoolmates.
But I found this GitHub repository to be very helpful. I've also had graduate-level biostatistics and some data science, as well as experience with SAS, SPSS, and R, plus GIS, health data, and other tools available for epidemiology from the US CDC, etc.
At the moment my focus is on creating open-source, use-case-specific fusion models, both compact for use on the edge and full-strength for use in post-processing and other types of stealth applications. Switzerland really would be a bad environment in which to use commercially provided solutions, especially when I hear that employees are doing so under the radar to improve their work; that exposes their company's potentially very sensitive IP to a company and servers outside Swiss soil, which is both a criminal and a civil offense, as we have private criminal actions here, and those would most assuredly fall under that category. So not only would you lose your job, you could also be sued for the loss of the intellectual property, and you could spend time in prison. It is my opinion that in most cases Swiss companies should only be using open-source models, adapting those models using their own commercial-level GPU clusters to train them on very reliable and specific IP as well as their internal IP, and having separate yet combined RAG models to go out and update their knowledge bases. But all of that should be put under a microscope, both by specifically trained algorithms and with a bit of a human touch, looking for things meant to evade detection and potentially cause weakness in the corporate security layers.
But I have lived a unique life which I would never have imagined. I started in philosophy, probably the most helpful area for deep machine learning, and you would understand that if you had a degree in philosophy along with another degree. I have now become a cybersecurity expert who could probably pass most of the exams, and that was out of necessity, having been the target of a one-year sustained zero-day attack meant to absolutely cripple my ability to do work. In retrospect, I should have gone back to a self-correcting typewriter, a fax machine, and the use of Federal Express.
We take for granted the speed at which we can work, which necessitates high-bandwidth, low-latency data transfer; you can replicate that to some extent low-tech, but never to the full potential you'd have unrestricted.
This is my favorite website without question, and it has helped me understand more about the things that interest me than any other website I have used. I have to thank you for creating such wonderful content; you're all absolutely good at being journalists, as well as scientists yourselves, but also very good science communicators.
Your entire site promotes and helps to advance STEM, and it helps keep some potentially misguided younger people from falling into bad situations, ones they would find very hard to get out of, or at least perceive that way, and that in some cases could be very dangerous to them, because people are using their knowledge to make money. I have learned the very hard way what people will do to other human beings for mere money. So I try to remain positive about human nature, and my experiences over the last year have helped bolster those hopes; I tentatively have hope for my future, and I tentatively believe in the progress I have made, which has been a night-and-day difference. Even while compromised, I did one of the most granular pieces of health economics done to date, all alone, and my work was taken by the National Association of Attorneys General and applied to the entire United States: I am the author of the methodology which gave rise to the damage-award figure used in the Purdue Pharma case, and I only learned that when I called back to Kentucky looking for a friend and they called me "the guy."
I was a little shocked when I took my work to the University of Kentucky biostatistics department. I said I had done my best, and that I'd had some training as a research assistant in infectious diseases working on an NIH sub-report agreement in Malaysia, but that I was still just a novice, and the director of the department looked me in the eye and said, "You're no novice!" Mind you, at this moment in my life I was extremely compromised by severe complex post-traumatic stress disorder, and, in the words of a former Marine test pilot, I should add recovering from combat-related injuries to my skill set: something I could never say about myself, but something which a man with his level of achievement, as well as actual honor, could award me, based on his intimate knowledge of my situation.
He is not wrong, as I've survived two assassination attempts and torture abroad, one of the events being so extraordinarily violent that I'm on surgery five of six, with one more left to go, which is very scary to me as it's the fourth entry into my shoulder.
They tried to break my neck from behind, but I went down on my knees by instinct and stood up into a triangle of very skilled fighters, who all took extremely powerful shots at my head. Only one connected; another ripped my lip traumatically from my mouth to my nose, and another put me on the street, not the sidewalk, the difference between my shoulder and my head. I'm 2 m tall, so imagine the skill level necessary to knock me out at 2 m, given that I have done MMA for over 5 years and not once, despite several very close calls, have I been knocked out, and I've hit my head on absolutely everything in the world, because nothing is made for someone 2 meters tall.
I don't have male-pattern baldness; I have a collection of scar tissue on my scalp from scraping my head on absolutely every doorway my entire life. I used to have to thin my hair, and as you know, hair doesn't grow on scar tissue.
So check out this repository. I have about five projects which would be commercially viable, but people judge me based on my CV without even interviewing me, and I've only just now been given the authorization to start looking for work again. My own medical team believes that I could be more productive and successful now than before, which I find unlikely, because I was much more successful than I even realized. I've had two people tell me I was the second-smartest lawyer in my field, and I was very happy with that; I would have agreed to being less smart than my client and best friend, who is no longer with us, for geopolitical reasons, as he pushed back.
https://github.com/academic/awesome-datascience
Thanks a lot, Al. Your articles, along with Dan's and, I assume, your brother's and other relatives', as well as your entire team's, are absolutely fantastic and have been an integral part of my recovery. Thank you very much.